Forest from the Trees: Export Your Load Testing Data!
Load tests, unlike front-end performance monitoring, are snapshots of performance at a particular time under specific conditions. Like front-end performance, many factors go into the execution and useful analysis of load testing results. But performance data under load and front-end browser results are totally different animals, and we need tools that give us what we need to make sense of each area.
Generate the load, then what?
When you run a load test, you get a lot of data back, but what does it all mean afterwards? In LoadComplete, there are built-in reports such as the ‘Top 10’ slowest performing pages, but what if we have a very specific analysis we want to perform? How about the 90th percentile of slowly performing requests? What if we want to see which pages have lots of assets (like third-party scripts) that over-saturate our server with connection requests?
If you’re new to load testing, you may be asking why you should care about percentiles and why it matters how many third-party scripts are referenced. This is a good starting point, and I’d suggest you read a few high-level overviews on how experts like Scott Barber and Steve Souders use analysis methods to narrow in on the root cause of performance issues.
However, as seasoned performance engineers know, each performance problem dictates its own solution. There is no “best” analysis method, but rather a set of patterns and practices that help to expose the underlying source of the problem, even though many performance issues often boil down to a similar set of common problems.
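To make the percentile idea concrete, here is a minimal Python sketch that computes the 90th percentile of response times from an exported CSV. The nearest-rank method and the `ResponseTime` column name are my own assumptions for illustration; check the header row of your actual export.

```python
import csv
import math

def percentile(values, pct):
    """Return the pct-th percentile of a list of numbers (nearest-rank method)."""
    ordered = sorted(values)
    # Nearest-rank: the smallest value at or below which pct% of samples fall.
    rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

def p90_from_csv(path, column="ResponseTime"):
    """Read one numeric column from an exported CSV and return its 90th percentile.

    The column name is a guess -- substitute whatever your export actually uses.
    """
    with open(path, newline="") as f:
        times = [float(row[column]) for row in csv.DictReader(f)]
    return percentile(times, 90)
```

The 90th percentile tells you the experience of the slowest 10% of requests, which a plain average hides.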
Check Your Red Herrings at the Door!
Before we try to interpret load testing results, we want to make sure our tests don’t include errors or warnings. Once clear of those, we also may need to normalize our data, removing outliers or known issues; this is on a per-case basis, and reading on this subject abounds online.
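For instance, one common rule of thumb for trimming outliers is Tukey’s 1.5×IQR fence. Whether it is appropriate depends on your data, so treat this sketch as one option among many rather than a prescription:

```python
def remove_outliers(values, k=1.5):
    """Drop values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's rule of thumb).

    Quartiles here are taken by simple index for brevity; a statistics
    library would interpolate more carefully.
    """
    ordered = sorted(values)
    n = len(ordered)
    q1 = ordered[n // 4]
    q3 = ordered[(3 * n) // 4]
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]
```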
Other known infrastructure bottlenecks and SPOFs (single points of failure) should also be considered when running load tests. In most cases, load tests are meant to test the throughput of infrastructure and code, not the browser rendering times or public-internet latencies that RUM often focuses on. But resolving front-end performance issues in your application using RUM concepts and tools first isn’t a bad idea; in fact, if you aren’t already doing it, you may want to consider the impact this has on your business.
Once you know there are no obvious performance issues in your testing infrastructure or front-end performance, your load tests will be much more efficient and you can go home earlier (maybe).
What load metrics should I focus on?
Usually you run a load test because you’re looking to resolve a specific performance issue (or check for performance issues recently introduced). In most cases, knowing a bit about the server-side performance also helps, and LoadUIWeb Pro enables you to see server metrics along with the client-side “perceived” metrics like “page load time,” “time to first/last byte,” and “response transfer speed.” For some assets like third-party stylesheets and scripts, there’s just no way to monitor the servers behind these assets, so client perceived metrics will have to do for those.
But really, it’s the combination of metrics that helps to narrow in on the specific characteristics of your performance problems. Contrasting client-side, server-side, and the interplay between these metrics helps to explore your data and pinpoint what’s really going on. Simply plotting client-perceived time-to-last-byte and number of calls to the same resource shows you how much of your traffic is due to specific resource-hogs like large stylesheets and scripts.
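As a sketch of that kind of cross-tabulation, the snippet below totals call counts and client-perceived time-to-last-byte per resource from exported records. The field names `Url` and `TimeToLastByte` are hypothetical stand-ins for whatever columns your export actually contains:

```python
from collections import defaultdict

def traffic_by_resource(records):
    """Aggregate call count and total time-to-last-byte per resource URL.

    `records` is a list of dicts with 'Url' and 'TimeToLastByte' keys
    (assumed names). Returns resources sorted heaviest-first by total time,
    which surfaces resource hogs like large stylesheets and scripts.
    """
    totals = defaultdict(lambda: {"calls": 0, "total_ttlb": 0.0})
    for rec in records:
        entry = totals[rec["Url"]]
        entry["calls"] += 1
        entry["total_ttlb"] += float(rec["TimeToLastByte"])
    return sorted(totals.items(), key=lambda kv: kv[1]["total_ttlb"], reverse=True)
```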
Another example of the value of contrasting metrics is a lack of proper caching support, which can cause visitors to re-download so much static content that your server-side disks consistently report overuse. In this case, you’d see the server-side “% disk read time” get worse as the amount of load increases (shown by Virtual Users or Passed Requests).
Similarly, if there is a poorly written database query, you will see the average wait lock time on your database server rise under increasing amounts of load…but only if you compare client-perceived metrics with information from your database server.
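One way to make that comparison quantitative is to sample both metrics on the same intervals and compute their correlation: a lock-wait time (or “% disk read time”) that climbs in step with the virtual-user count will show a coefficient near 1.0. A minimal sketch, assuming two already-aligned, non-constant numeric series:

```python
def correlation(xs, ys):
    """Pearson correlation between two equal-length metric series, e.g.
    virtual users vs. database lock-wait time sampled at the same intervals.

    Assumes neither series is constant (which would make the denominator zero).
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```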
No pulling teeth to get raw data!
When we added the ability to export your data from LoadUIWeb Pro, we wanted to make it as incredibly easy to use as all the other great features, like browser-based recording and playback.
From the results of a load test, simply right-click on any area in the messages log, and select either ‘Export to CSV’ or ‘Export to XML’. You can also automate the export of test results via command line, so that this data can be saved out to an archive or data warehouse for later analysis.
This gives you visibility into all the request/response data that was recorded during the test: complete transparency of test results.
Likewise, if you want to automate this process, you can use our command-line syntax to do it. The batch file below dynamically names the results based on date and time, then imports them into a SQL database:

```
For /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set mydate=%%c-%%a-%%b)
For /f "tokens=1-2 delims=/:" %%a in ('time /t') do (set mytime=%%a%%b)

"C:\Program Files (x86)\SmartBear\LoadUIWeb 2 Pro\Bin\LoadUIWeb.exe" "ScheduledTest.ltp" /run /test:PlaceOrder /exportlog:"PlaceOrder_%mydate%_%mytime%.csv" /exit /SilentMode
if %ERRORLEVEL% NEQ 0 goto err_handler

REM -- Insert new data into the database
osql -UMyUser -PMyPassword -SMyServer -dMyDB -Q"BULK INSERT CSVTest FROM 'PlaceOrder_%mydate%_%mytime%.csv' WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')"
goto exit_handler

:err_handler
echo Failed with exit code %ERRORLEVEL%

:exit_handler
echo Script ended.
Pause
```

Now that we’re here, what can we do?
With an external copy of your test data, you can turn to the world’s most famous analysis tool…Excel…and load the raw data up. Then you can build pivot tables and charts against the raw data to start making sense of what’s there. Earlier I mentioned extracting the 90th percentile, and here’s an Excel graph of that information based on a sample load test I ran against the website www.wunderground.com:
The graph shows that, on average, all users experience poor performance on some of the assets in the ‘Select Activities’ step of my walkthrough of this website. The next most notable issue is that there are a few assets used by each of the pages on which more than 99% of our virtual users experienced load times of over 20 seconds…as it turns out, third-party assets like ad servers, icons, and tracking scripts inflict major performance degradations on a majority of the virtual user population used during the load test.
To pull data from LoadUIWeb Pro into Excel, simply click the "Other Sources" option under the ‘Data’ ribbon tab, and select the "From XML Data Import" option to select your exported results. Likewise, you can also set up a connection to a CSV file if you want to overwrite the same test results with new ones from subsequent load tests.
This gets the data into a data table on a sheet (I renamed mine to ‘Data’) that you can then reference with pivot tables and charts. From there you can group, aggregate, and summarize until your heart is warm and fuzzy or your eyes begin to glaze over, whichever comes first.
I [insert expletive] dislike Excel, what else is there?
Well, the industry for data visualization and analysis is chock full of excellent tools…take for instance Tableau!
You can load that same performance data up in Tableau and produce visually intuitive reports and dashboards to help your human brain focus on areas of concern rather than cells in a spreadsheet. You can also easily explore those contrasting data points we referenced earlier, things like latency over frequency, connections per page, and slow response times by domain. Here are a few examples of using Tableau with data exported from LoadUIWeb Pro.
The Pudding Never Lies
While this is just an example, the analysis confirmed that the number of third-party assets loaded on each page seriously impacts the way we load test and how we interpret results. If we were just looking for inefficient code on the primary target site (wunderground.com), we would exclude all the third-party hosts from our scenario. But leaving them in shows the exact same issue that RUM tools report about this site: too many external calls to slow content providers.
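A back-of-the-envelope version of that host-level analysis can be run directly against the exported data. The field names (`Url`, `ResponseTime`), the 5-second “slow” threshold, and the suffix test for “third-party” below are all illustrative assumptions, not LoadUIWeb Pro’s schema:

```python
from collections import defaultdict
from urllib.parse import urlparse

def slow_hosts(records, primary="wunderground.com", threshold=5.0):
    """Count slow responses per host and flag third-party domains.

    Returns {hostname: (slow_count, is_third_party)}. A host counts as
    third-party when it does not end with the primary site's domain.
    """
    counts = defaultdict(int)
    for rec in records:
        if float(rec["ResponseTime"]) >= threshold:
            counts[urlparse(rec["Url"]).hostname] += 1
    return {host: (n, not host.endswith(primary)) for host, n in counts.items()}
```

Sorting that dictionary by slow-response count gives you roughly the “Slow Responses by Host” view shown in the Tableau example.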
(This is my favorite 5-minute Tableau graph based on LoadUIWeb Pro data so far…)
Slow Responses by Host
While wunderground.com has its share of internal poorly performing scripts, optimizing the site might really benefit the most from starting with a reprioritization of exactly how much content is loaded from third-party external resources.
It’s not rocket science, but it really takes knowing what to look for first. The good thing is that once you start playing with your load testing data, you can pretty quickly build your set of tools and patterns to easily ingest the data coming from LoadUIWeb Pro and present you with your own perspectives on the results of load tests.
[Attached here are the example analysis files I used. You should definitely try this kind of thing out on your own data to really see how different perspectives on the same data give you more visibility into the performance characteristics of your web sites and apps.]