Performance testing is an important part of the software development lifecycle. By simulating real users, test engineers can see how the software behaves in a variety of scenarios. Unit tests ensure that code produces a desirable outcome and integration tests ensure that the pieces fit together, but neither ensures that the software will perform under stress.
For instance, an airline might be planning a promotion that will result in a significant increase in traffic. Load tests can help ensure that the booking engine can handle high levels of concurrent users without experiencing errors. After all, the last thing the airline wants is to spend money on a promotion only to disappoint would-be buyers with a blank page!
Load testing identifies bottlenecks before deployment and reduces the risk of downtime during peak times. For example, developers may notice that expensive database queries cause availability issues at a certain threshold. These defects are hard to notice without load testing and may only become apparent when heavy traffic occurs in production.
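This kind of threshold effect is easy to model. Below is a minimal sketch, not taken from any real system: every virtual user needs a connection from a fixed-size database pool, and a request that cannot get a connection immediately fails. The pool size, query time, and user counts are all illustrative values.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Illustrative constants: a 5-connection pool and a 250 ms "query".
POOL_SIZE = 5
QUERY_TIME_S = 0.25

def run_cycle(concurrent_users):
    """Run one test cycle and return its error rate."""
    pool = threading.Semaphore(POOL_SIZE)
    barrier = threading.Barrier(concurrent_users)  # start all users at once

    def virtual_user():
        barrier.wait()
        if not pool.acquire(blocking=False):
            return False  # pool exhausted: the request errors out
        try:
            threading.Event().wait(QUERY_TIME_S)  # stand-in for the query
            return True
        finally:
            pool.release()

    with ThreadPoolExecutor(max_workers=concurrent_users) as executor:
        futures = [executor.submit(virtual_user) for _ in range(concurrent_users)]
        failures = sum(1 for f in futures if not f.result())
    return failures / concurrent_users

print(run_cycle(3))   # below the pool size: no errors
print(run_cycle(20))  # above the pool size: errors appear
```

At 3 concurrent users the service looks flawless; at 20 it starts returning errors, which is exactly the kind of defect that stays invisible until a load test pushes concurrency past the threshold.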
Read about browser-based load testing in our article Why You Should Load Test with Real Browsers.
Track these performance metrics to ensure that your application is performant before deploying to production.
- Average Response Time is the mean round-trip time from a client request to the server's response.
- Peak Response Time is the longest response time that occurred within a given performance test cycle.
- Error Rate is the percentage of requests that result in errors out of all requests made during a given performance test cycle.
- Concurrent Users is the number of virtual users that are active at any given point in time during a performance test cycle.
- Throughput is the number of kilobytes per second transmitted during the performance test cycle, which illustrates the amount of data flowing back and forth.
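Most load testing tools report these metrics for you, but they are simple to derive from raw request data. The sketch below assumes a hypothetical record format of (latency in ms, response size in bytes, success flag) per request; the sample values are made up for illustration.

```python
def summarize(records, duration_s):
    """Compute the metrics above from a list of
    (latency_ms, response_bytes, ok) request records."""
    latencies = [r[0] for r in records]
    total_kb = sum(r[1] for r in records) / 1024
    errors = sum(1 for r in records if not r[2])
    return {
        "avg_response_ms": sum(latencies) / len(latencies),   # average response time
        "peak_response_ms": max(latencies),                   # peak response time
        "error_rate": errors / len(records),                  # failed / total requests
        "throughput_kbps": total_kb / duration_s,             # KB per second
    }

# Example: four requests captured over a 2-second test cycle.
sample = [
    (120, 2048, True),
    (250, 4096, True),
    (900, 1024, False),  # one failed request
    (180, 2048, True),
]
stats = summarize(sample, duration_s=2)
print(stats)
```

For this sample the average response time is 362.5 ms, the peak is 900 ms, the error rate is 25%, and throughput is 4.5 KB/s.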
Ready to give load testing a try? Sign up for your free 14-day LoadNinja trial today.