We've seen it happen in many organizations: the blissfully unaware load tester discovers that the current approach to performance testing is not meeting the organization's needs, and production issues surface when the site comes under heavy load. Slow response times, broken links, failed redirects, and timeouts all result in poor user experiences, yet the organization may not have a handle on where the bottlenecks are occurring.
Load testing can be a complex activity, from defining the tests accurately to running them effectively. Most conversations about proper load testing focus on those activities, but an often overlooked part of the discussion is how effective your test reports are. Being able to accurately interpret the results of your load test may help prevent production problems later.
The real question here is this: How would you rate your ability to draw meaningful conclusions about your system's performance based on the information in the test reports you currently use?
Performance reporting should follow the same basic rules of reporting that we learned in elementary school:
- A concise opening statement of the theme
- Concrete arguments to back up that statement
In essence, a performance reporting system should be able to deliver a high-level overview of how a website is performing under load, but it should also offer detailed visibility into the internal structure of the site and its infrastructure. Reports should illustrate the context and highlights of performance so clearly that anyone could interpret them without being an expert on the underlying data and metrics.
The system should also have some degree of customizability, allowing the information to be recombined and contrasted as needed. Often, the most compelling analysis involves the interaction of only two or three key elements of the data gathered, and the report should be able to be reformatted to showcase them. The timeliness of this information is paramount: near-instant availability of the metrics, and the ability to concentrate on different testing timeframes, allow the proper context of an issue to be stressed.
It is also important that the load test reporting system be able to store and reproduce the results of multiple passes of a given test, as a way to baseline results over time and thus gauge improvement.
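Baselining across test passes can be sketched in a few lines. The following is an illustrative example only (not any product's API), using invented response-time samples and a simple nearest-rank percentile to compare a current run against a stored baseline:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the value at rank ceil(pct/100 * N)."""
    ordered = sorted(samples)
    k = math.ceil(pct / 100 * len(ordered)) - 1
    return ordered[k]

# Hypothetical per-request response times (ms) from two passes of the same test.
baseline_run = [120, 135, 150, 160, 180, 210, 240, 300, 420, 500]
current_run  = [110, 125, 140, 150, 170, 190, 220, 260, 380, 450]

delta = percentile(current_run, 95) - percentile(baseline_run, 95)
print(f"p95 baseline: {percentile(baseline_run, 95)} ms, "
      f"current: {percentile(current_run, 95)} ms, delta: {delta:+d} ms")
# prints: p95 baseline: 500 ms, current: 450 ms, delta: -50 ms
```

Tracking a percentile rather than the average keeps the baseline sensitive to the slow tail of requests, which is usually what users actually notice.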
A good load test report should be able to show the what, the when, the where, and the why of your site's performance under load.
**What?**
A good reporting system identifies and reports on the key performance indicators (KPIs) of the website you're testing. The reports should frame these KPIs as success criteria. For example:
- “Fulfillment numbers are not where we would like.”
- “Shopping cart abandonment is too high.”
- “Customers are complaining of slowness.”
**When?**
A good report should also be able to address a discrete time span, whether an instantaneous test of a single component or a duration test of multiple scenarios running on a website over a weekend or longer. As stated previously, regular testing of a website can also be compiled into a series of referenceable reports, to baseline the performance of the site and its infrastructure.
**Where?**
There are two "where" components to consider: the physical "where," referring to the infrastructure and hosts of the website, and the virtual "where," indicating at which step of the application the issues are being seen.
- With a product like SmartBear’s LoadUIWeb Pro, distributed load testing can be deployed using LoadUIWeb Pro Remote Agent Services at multiple locations on your network, both inside and outside your firewall, to contrast load testing results and physically pinpoint where bottlenecks are occurring.
- By setting alerts on parameters such as Quality of Service or Content Validation, analysis of the application's steps can be represented graphically in the report as errors, allowing quick identification of where in the application logic the issue is being seen.
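The alerting idea can be sketched generically. This is not LoadUIWeb's alert API; the step names and the Quality of Service limit below are invented for illustration. The principle is simply to flag any application step whose timing breaks a threshold, so the report can mark where in the flow the problem occurs:

```python
# Generic sketch of a QoS alert check, not tied to any product's API.
QOS_LIMIT_MS = 1500  # hypothetical Quality of Service threshold

# Hypothetical per-step timings from one virtual-user pass through the app.
steps = [
    ("login",        420),
    ("search",       980),
    ("add_to_cart", 1720),
    ("checkout",    2340),
]

alerts = [(name, ms) for name, ms in steps if ms > QOS_LIMIT_MS]
for name, ms in alerts:
    print(f"ALERT: step '{name}' took {ms} ms (QoS limit {QOS_LIMIT_MS} ms)")
```

Mapped onto the report, each alert marks a specific step in the user journey rather than a raw server metric, which is what makes the virtual "where" answerable.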
**Why?**
(More correctly, “Why Not?”) Reporting should have some method to plainly identify why the KPI objectives are not being met, by indicating an issue on an individual object or subsystem. Hierarchical views, such as a compiled list of the poorest performers or a graphical representation of the performance of each request made to the website, can quickly suggest a course of action for remediation. SmartBear’s LoadUIWeb offers both a “Top 10” list of poor performers and a waterfall representation of each of these Top 10, for ease of diagnosis.
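A "top poor performers" list is straightforward to derive from raw request timings. A minimal sketch, with URLs and times invented for illustration (kept to a top three here for brevity):

```python
# Hypothetical (url, response_time_ms) pairs pulled from a test log.
requests = [
    ("/img/logo.png",    45), ("/api/cart",      1320),
    ("/css/site.css",    60), ("/api/checkout",  2810),
    ("/index.html",     380), ("/api/search",     940),
    ("/api/login",      610), ("/js/app.js",      150),
]

# Sort slowest-first and keep only the worst offenders.
top_offenders = sorted(requests, key=lambda r: r[1], reverse=True)[:3]
for url, ms in top_offenders:
    print(f"{ms:>5} ms  {url}")
```

Pairing such a ranked list with a per-request waterfall, as LoadUIWeb does, shows both which requests are slow and what inside each request is slow.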
With the proper reporting, the overall status and health of the website and its infrastructure can be quickly comprehended by all stakeholders. Such a concise view can suggest possible areas to address, but more importantly it gives the Performance Manager the insight into the system needed to base decisions on. Equally important is the ease of generating a concise report, pre-formatted with a summary section and a compilation of the relevant data. In LoadUIWeb, this step is automated: a report is produced at the end of every test run, using the data contained in the test’s unique log file.