The Red Cross was using internal Web site “pinging” software to check the availability of its sites every 15 minutes from Chicago, where its data center is located. The IT team often got reports from users experiencing latency issues, but because it was monitoring only from Chicago, the same location the data was being served from, the internal software wasn’t picking up any of those problems.
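A single-vantage check of this kind can be sketched in a few lines. The Python snippet below is illustrative only: the URLs, interval, and up/down logic are assumptions, not the Red Cross’s actual tooling. It shows why such a probe reports availability but says nothing about how slowly a page loads for users in other regions.

```python
import time
import urllib.request

# Hypothetical site list -- placeholders, not the Red Cross's actual endpoints.
SITES = [
    "https://intranet.example.org/status",
    "https://chapters.example.org/",
]

def is_up(url, timeout=10):
    """Return True if the site answers with HTTP 200 from this single location."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.getcode() == 200
    except Exception:
        return False

while True:
    for url in SITES:
        print(f"{url}: {'UP' if is_up(url) else 'DOWN'}")
    # An up/down probe from one data center reveals nothing about the latency
    # experienced by users connecting from other parts of the world.
    time.sleep(15 * 60)  # check every 15 minutes, as described above
```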
In October 2009, following the devastating earthquakes, typhoon, and tsunami in Indonesia, the Philippines, American Samoa, and Samoa, the Red Cross needed to know more than whether its sites were up. It needed to know how responsive each individual site component was and whether data was deployed and immediately available. This is crucial when disaster strikes: workers can’t be held up by slow-performing intranet sites. The Red Cross also needed to know how its sites were performing from around the globe, since the organization’s information-sharing network extends worldwide.
For instance, access to these sites from the country where a natural disaster takes place should be no slower than access from New York or Washington, D.C. When lives are at stake and time is of the essence, the Red Cross didn’t want to rely on volunteers and employees reporting problems to its IT department as its line of defense against performance issues. It needed a way to monitor site performance in real time, with immediate alerts when an issue occurred so the IT team could react right away.
The Red Cross needed geographically dispersed testing to monitor and measure site performance from wherever its volunteers were located. It also wanted to trace problems to their source and get real-time reports on uptime, availability, and other metrics. Lastly, the solution had to be quick and easy to deploy.
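As a rough illustration of what geographically dispersed testing adds over a simple availability ping, the sketch below times each request and raises an alert when a site is unreachable or slower than a threshold. The vantage-point names, URLs, threshold, and alert hook are all hypothetical; in a real deployment the same probe logic would run on agents distributed around the world rather than in one loop.

```python
import time
import urllib.request

# All names below are assumptions for illustration. In practice, each "vantage
# point" would be a separate probe agent running near that location.
VANTAGE_POINTS = ["chicago", "manila", "jakarta", "pago-pago"]
SITES = ["https://intranet.example.org/status"]
LATENCY_ALERT_SECONDS = 3.0  # assumed threshold for a "slow" response

def timed_fetch(url, timeout=15):
    """Fetch a URL and return (ok, elapsed_seconds) as measured by this probe."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()
            ok = resp.getcode() == 200
    except Exception:
        ok = False
    return ok, time.monotonic() - start

def alert(location, url, message):
    """Placeholder alert hook -- a real deployment would notify the IT team."""
    print(f"ALERT [{location}] {url}: {message}")

for location in VANTAGE_POINTS:
    for url in SITES:
        ok, elapsed = timed_fetch(url)
        if not ok:
            alert(location, url, "site unreachable")
        elif elapsed > LATENCY_ALERT_SECONDS:
            alert(location, url, f"slow response: {elapsed:.1f}s")
```

The key design point is that the measurement and the alerting happen per location, so a site that responds quickly in Chicago but crawls when reached from the disaster region is flagged immediately instead of waiting for a volunteer to report it.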