Running Parallel Requests in Performance Tests
Test and Monitor | Posted February 04, 2013

One of the most important goals in performance testing is to simulate, as closely as possible, actual traffic to your site. There are a number of ways to create a highly realistic testing experience, using tools such as parameterization, random think time and parallel requests.

Each of these tools helps simulate traffic in its own unique way:

  • Parameterization lets each virtual user submit its own unique data
  • Random think time simulates the pauses real users take to read and comprehend a page
  • Parallel requests closely mimic the behavior of actual browsers
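The first two techniques are easy to picture in code. Here's a minimal Python sketch, not tied to any particular load-testing tool; the user names, credentials, and delay values are all hypothetical, and the think-time range is shortened so the demo runs quickly:

```python
import random
import time

# Parameterization: each virtual user carries its own data instead of
# every user replaying one shared, hard-coded value. (Hypothetical data.)
USER_DATA = [
    {"username": "alice", "password": "secret1"},
    {"username": "bob", "password": "secret2"},
]

def pick_think_time(min_s=0.5, max_s=2.0):
    """Random think time: how long this user pauses to read the page."""
    return random.uniform(min_s, max_s)

def run_virtual_user(user):
    """One virtual user: log in with its own data, pause, then continue."""
    # A real test would send an HTTP login request here; we just record it.
    steps = [f"login as {user['username']}"]
    time.sleep(pick_think_time(0.01, 0.05))  # shortened range for the demo
    steps.append("browse catalog")
    return steps

for user in USER_DATA:
    print(run_virtual_user(user))
```

Because each user draws a different think time, requests from concurrent virtual users naturally spread out instead of hitting the server in lockstep, which is closer to real traffic.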

Just to clarify, by parallel requests I mean running simultaneous requests for each virtual user. When you fetch a page, your browser sends out multiple requests simultaneously, which bring back images, JavaScript, and page content. These parallel requests help the browser display pages faster. However, some performance testing tools send replay traffic sequentially; that is, requesting the elements of a page one-by-one, only requesting the next item after the previous one has been received.

As you can imagine, performance testing using sequential requests will show pages taking longer to respond. Other tools let you configure a fixed number of requests to run in parallel. This speeds content delivery, but it does not really emulate the browser experience, because each browser handles parallel requests in a slightly different way. If you record what the browser does and use that information to run your test, you will come as close as possible to getting results that are indicative of your actual user experience.
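The timing difference between the two replay styles is easy to demonstrate. This Python sketch uses simulated download delays instead of real HTTP requests (all delay values are made up), and compares fetching a page's resources one-by-one against fetching them concurrently:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Simulated download times (seconds) for a page's resources:
# the HTML document, a stylesheet, two images, a script. Hypothetical values.
RESOURCE_DELAYS = [0.05, 0.04, 0.06, 0.05, 0.04]

def fetch(delay):
    """Stand-in for an HTTP request; sleeps for the simulated delay."""
    time.sleep(delay)
    return delay

def fetch_sequential(delays):
    """Request each resource only after the previous one has arrived."""
    start = time.perf_counter()
    for d in delays:
        fetch(d)
    return time.perf_counter() - start

def fetch_parallel(delays, max_workers=6):
    """Request all resources at once, as a browser would."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        list(pool.map(fetch, delays))
    return time.perf_counter() - start

seq = fetch_sequential(RESOURCE_DELAYS)
par = fetch_parallel(RESOURCE_DELAYS)
# Sequential time is roughly the sum of the delays; parallel time is
# roughly the single longest delay.
print(f"sequential: {seq:.2f}s  parallel: {par:.2f}s")
```

Sequential replay reports a page time near the sum of all resource times, while parallel replay reports something near the slowest single resource, which is much closer to what a user's browser actually experiences.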

All of this information is invaluable when you are trying to determine scalability of your current application and environment. And you'll have the added benefit of being able to record the same scenario with different browsers, and then compare the timings for each browser.

LoadComplete version 2.7 implements the concept of parallel requests. Not only does it run requests in parallel, it does it exactly as the browser did during recording of the scenario. This enables the tester to get a much more accurate picture of real world performance under load.

Here's an example of a performance test that benefits from parallel requests:

A tester wants to determine how a new application will perform under normal expected load. Because of an existing Service Level Agreement (SLA), no page is allowed to take longer than four seconds under normal traffic conditions. Under this scenario, it's imperative that the test results accurately report performance as experienced by end-users. Because the test replays parallel requests, its results will much more accurately reflect actual performance and will be a good indicator of whether the application will meet the performance requirements defined in the SLA.
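A pass/fail check against an SLA threshold like this one boils down to a simple assertion over the page timings a test collects. A sketch in Python, with entirely hypothetical page names and load times:

```python
SLA_MAX_SECONDS = 4.0

# Hypothetical page load times (seconds) collected during a load test.
page_timings = {
    "/home": 1.8,
    "/search": 3.2,
    "/checkout": 2.9,
}

def sla_violations(timings, limit=SLA_MAX_SECONDS):
    """Return the pages whose measured load time exceeds the SLA limit."""
    return {page: t for page, t in timings.items() if t > limit}

violations = sla_violations(page_timings)
assert not violations, f"SLA exceeded: {violations}"
```

The point of replaying parallel requests is that the numbers fed into a check like this reflect what a browser would actually report, so an SLA pass in the test is meaningful for real users.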

If an accurate representation of user experience is important to you, your tests will benefit from parallel requests. When combined with other features such as parameterization and think time, parallel requests will take you one giant step closer to simulating the actual user experience.


