Evaluating Performance of Your Web Applications
Performance testing is a huge topic, so rather than trying to cover everything it encompasses (a nearly impossible feat that would make today's blog post very long), I'd like to stick to three topics that you may find add value to your testing: single-virtual-user tests, manual pagination and server monitors.
Single-virtual-user testing is often not given the attention it deserves. The most common use of a single-user test is as a validation mechanism for the scenario, or script, that has been created for your test. The premise is that if a single-user test does not work, the scenario is flawed and needs further work, sometimes because correlations are incomplete or incorrect. At any rate, it's essential that a single-user test be the first step after recording a scenario.
Beyond their use in performing initial validation of scenarios, single-user tests can be extremely valuable in other important ways:
- Due to their low-impact nature, they can be executed without worrying about affecting customers on your site, so they can be used to monitor system behavior and performance without skewing the results.
- They can be used as a model of various user types.
- When fine-tuning your scenarios, it is essential to keep the noise to a minimum. With a single virtual user, it's easier to test and tweak your scenario.
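To make the validation use concrete, here is a minimal sketch of a single-virtual-user run. The scenario steps and the `send` callable are placeholders for whatever your load-testing tool records and replays; the point is that every step must succeed before any multi-user run is worth starting.

```python
import time

def run_single_user_test(scenario, send):
    """Replay a recorded scenario as one virtual user.

    `scenario` is a list of (name, request) pairs and `send` is a
    callable that issues a request and returns an HTTP-style status
    code -- both are hypothetical stand-ins for your tool's API.
    Returns per-step response times if every step succeeds.
    """
    timings = []
    for name, request in scenario:
        start = time.perf_counter()
        status = send(request)
        elapsed = time.perf_counter() - start
        if status >= 400:
            # A failing step means the scenario (often an incomplete
            # correlation) needs more work before scaling up the load.
            raise RuntimeError(f"step {name!r} failed with status {status}")
        timings.append((name, elapsed))
    return timings
```

Because the run is a single user, any failure points at the scenario itself rather than at load-related effects.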
There will be times when the default grouping of requests does not exactly fit your use model. This will be rare on plain HTML sites, but it will be the norm on many Rich Internet Application (RIA) sites. Because RIAs do not refresh the entire page, you could have one long scenario that ends up as a single “page.” In that case, you'll want to manually paginate your scenario, grouping requests by user action.
An example: You push a button that requests some information from the server and updates a region of the screen. Giving this transaction a name enables you to look at its response time more granularly.
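One simple way to express this manual pagination in code is a context manager that groups every request issued inside it under a named transaction and records the elapsed time. This is a sketch under the assumption that your tool lets you wrap arbitrary request blocks; the name `transaction` and the `transaction_times` store are illustrative, not a real tool's API.

```python
import time
from collections import defaultdict
from contextlib import contextmanager

# Response times grouped by manually assigned "page" (transaction) name.
transaction_times = defaultdict(list)

@contextmanager
def transaction(name):
    """Group all requests issued inside this block under one named page."""
    start = time.perf_counter()
    try:
        yield
    finally:
        transaction_times[name].append(time.perf_counter() - start)
```

Usage follows the button-press example: wrap the requests that the button triggers in `with transaction("Update region"):`, and the report can then break response times down by user action instead of lumping the whole RIA session into one page.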
In addition to monitoring the performance of responses coming back from your application(s), you should also monitor the servers (web, application, database and others) that the application uses. With this data (memory, disk I/O, CPU, database locks, etc.) available and included in the report, it's much easier to make correlations when bottlenecks are encountered.
An example: When running a test for a few hours, at some point the application slows and becomes unresponsive. By monitoring the application server, you see that memory use is growing over time due to a memory leak in the application.
Yet another example: During a stress test, the application begins responding very slowly as the load increases. Because many factors could cause this, it's difficult to know exactly what is happening. By monitoring the application server, you see that the CPU is at 100 percent and determine that additional servers are necessary.