Agile Testing Challenges - Performance Bottlenecks

This is the final installment in the Top 5 Agile Testing Challenges blog series.

You can view the prior blogs or download a more detailed white paper here:

1. Agile Testing Challenges - Web Services Testing Issues

2. Agile Testing Challenges - Finding Defects Early

3. Agile Testing Challenges - Broken Builds

4. Agile Testing Challenges - Inadequate Test Coverage

5. Five Challenges for Agile Testing Teams: Solutions to Improve Agile Testing Results (white paper)

Agile development is a faster, more efficient and cost-effective method of delivering high-quality software. However, agile presents testing challenges beyond those of waterfall development. That’s because agile requirements are more lightweight, and agile builds happen more frequently to sustain rapid sprints. Agile testing requires a flexible and streamlined approach that complements the speed of agile.

The Challenge - Performance Bottlenecks

In a perfect world, adding new features in your current release would not cause any performance issues. But we all know that as software matures with the addition of new features, the possibility of performance issues increases substantially. Don’t wait until your customers complain before you begin testing performance. That’s a formula for very unhappy customers.

System slowdowns can be introduced in multiple places, including your user interface, batch processes, and APIs, so create processes that ensure performance is monitored in all of them and issues are mitigated. Last but by no means least, you should also implement automated production monitoring to check your systems for speed, which provides valuable statistics that enable you to improve performance.

Getting Started with Application Load Testing

Your user interface is the most visible place for performance issues to crop up. Users are very aware when they are waiting “too long” for a new record to be added. When you’re ready for load testing, it is important to set a performance baseline for your application, website, or API (a small scripted sketch of this idea follows the list below):

  • The response time for major features (e.g., adding and modifying items, running reports)
  • The maximum number of concurrent users the software can handle
  • Whether the application fails or generates errors when “too many” visitors are using it
  • Compliance with specific quality-of-service goals
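To make the baseline idea concrete, here is a minimal sketch in Python of what such a measurement could look like. It assumes the third-party requests library is installed, and the URL, routes, and user count are hypothetical; a dedicated tool adds ramp-up profiles, reporting, and far higher concurrency than a simple script like this can generate.

import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party HTTP client, assumed to be installed

BASE_URL = "https://example.com/app"  # hypothetical application under test
CONCURRENT_USERS = 50                 # hypothetical concurrency target

def timed_request(path):
    """Issue one GET request and return (elapsed_seconds, succeeded)."""
    start = time.perf_counter()
    try:
        response = requests.get(BASE_URL + path, timeout=10)
        succeeded = response.status_code < 400
    except requests.RequestException:
        succeeded = False
    return time.perf_counter() - start, succeeded

def baseline_feature(path):
    """Hit one feature with CONCURRENT_USERS simultaneous requests and report."""
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(timed_request, [path] * CONCURRENT_USERS))
    times = [elapsed for elapsed, _ in results]
    errors = sum(1 for _, ok in results if not ok)
    print(f"{path}: median {statistics.median(times):.2f}s, "
          f"max {max(times):.2f}s, errors {errors}/{len(results)}")

if __name__ == "__main__":
    # Major features to baseline (hypothetical routes).
    for feature_path in ["/items/add", "/items/edit", "/reports/run"]:
        baseline_feature(feature_path)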

It’s very difficult to set a baseline without a tool. You could manually record the response time of every feature with a stopwatch and a spreadsheet, but simulating hundreds or thousands of concurrent users is impossible without the right tool; you simply won’t have enough people connecting at the same time to generate those statistics. When you’re looking at tools, consider SmartBear's load testing tool, LoadComplete. It’s easy to learn, inexpensive, and can handle all the tasks needed to create your baseline.

Once you establish a baseline, you need to run the same tests after each new software release to ensure it didn’t degrade performance. If performance suffers, you need to know which functions degraded so the technical team can address them. Once you create your baseline with LoadComplete, you can run those same load tests on each release without any additional work. That enables you to collect statistics to determine if a new release has adversely affected performance.
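To illustrate the comparison step, here is a hedged sketch, assuming the per-feature median response times from each run have been saved to JSON files (the file names and the 20% regression threshold are made up for the example); LoadComplete produces equivalent trend reports for you.

import json

TOLERANCE = 1.20  # flag anything more than 20% slower than the baseline (assumed threshold)

def report_regressions(baseline_file, current_file):
    """Print features whose median response time regressed beyond TOLERANCE."""
    with open(baseline_file) as f:
        baseline = json.load(f)  # e.g. {"/items/add": 0.42, "/reports/run": 1.8}
    with open(current_file) as f:
        current = json.load(f)

    for feature, base_seconds in baseline.items():
        new_seconds = current.get(feature)
        if new_seconds is None:
            print(f"{feature}: missing from the current run")
        elif new_seconds > base_seconds * TOLERANCE:
            print(f"{feature}: regression, {base_seconds:.2f}s -> {new_seconds:.2f}s")

if __name__ == "__main__":
    report_regressions("baseline_release.json", "current_release.json")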

Getting Started with API Load Testing

Another place performance issues can surface is within your Web services API. If your user interface calls your API, a slow API hurts not only the API’s own performance but also the overall user interface experience for your customers. Similar to application load testing, you need to create a baseline so you know what to expect in terms of average response time and learn what happens when a large number of users are calling your API.

Use the same approach as with application load testing to set baselines and compare them against each code iteration. You can try to set the baseline manually, but it requires a lot of difficult work: you’d have to write harnesses that call your API simultaneously, plus logic to record performance statistics. A low-cost Web services load testing tool like SmartBear’s loadUI Pro saves you all that work.
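For a sense of what that hand-rolled harness involves, here is a minimal sketch that fires concurrent calls at a hypothetical endpoint and records average response time and throughput; it again assumes the requests library, and a tool like loadUI Pro replaces this while adding realistic load profiles, assertions, and reporting.

import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party HTTP client, assumed to be installed

API_URL = "https://api.example.com/v1/orders"  # hypothetical Web service endpoint

def call_api(_):
    """Make one API call; return elapsed seconds, or None if it failed."""
    start = time.perf_counter()
    try:
        requests.get(API_URL, timeout=10).raise_for_status()
    except requests.RequestException:
        return None
    return time.perf_counter() - start

def run_load_test(concurrent_calls=100):
    """Issue concurrent_calls simultaneous requests and print basic statistics."""
    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_calls) as pool:
        results = list(pool.map(call_api, range(concurrent_calls)))
    wall_elapsed = time.perf_counter() - wall_start

    times = [t for t in results if t is not None]
    failures = len(results) - len(times)
    if times:
        print(f"average response: {statistics.mean(times):.3f}s, "
              f"throughput: {len(results) / wall_elapsed:.1f} calls/s, "
              f"failures: {failures}/{len(results)}")
    else:
        print(f"all {failures} calls failed")

if __name__ == "__main__":
    run_load_test()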

What Is Production Monitoring?

Once you’ve shipped your software to production, do you know how well it’s performing? Is your application running fast or does it slow down during specific times of the day? Do your customers in Asia get similar performance as those in North America? How does your website’s performance compare to your competitors’? How soon do you learn if your application has crashed?

These are just some of the thorny questions about software performance that can be difficult to answer. But not knowing the answer can adversely affect how your customers respond to your application… and your company.

Fortunately, you can address all of these critical questions by implementing production-monitoring tools (a bare-bones scripted example follows the list below). With them you can:

  • Assess website performance
  • Receive an automatic e-mail or other notification if your website crashes
  • Detect API, e-mail, and FTP issues
  • Compare your website’s performance to your competitors’ sites
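As a bare-bones illustration of the first two bullets, the sketch below polls a site and sends an e-mail alert when it stops responding; the URL, recipient, and mail relay are placeholders, and it uses only Python’s standard library. A product such as AlertSite does this from multiple geographic locations and covers the API, e-mail, FTP, and competitor checks as well.

import smtplib
import time
import urllib.request
from email.message import EmailMessage

SITE_URL = "https://example.com"      # hypothetical site to watch
ALERT_TO = "oncall@example.com"       # hypothetical on-call address
SMTP_HOST = "smtp.example.com"        # hypothetical mail relay
CHECK_INTERVAL_SECONDS = 60

def site_is_up():
    """Return True if the site answers successfully within 10 seconds."""
    try:
        with urllib.request.urlopen(SITE_URL, timeout=10) as response:
            return response.status < 400
    except OSError:  # covers URLError, HTTPError, and timeouts
        return False

def send_alert():
    """E-mail the on-call address that the site is down."""
    message = EmailMessage()
    message["Subject"] = f"ALERT: {SITE_URL} is not responding"
    message["From"] = "monitor@example.com"
    message["To"] = ALERT_TO
    message.set_content("Automated availability check failed; please investigate.")
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(message)

if __name__ == "__main__":
    while True:
        if not site_is_up():
            send_alert()
        time.sleep(CHECK_INTERVAL_SECONDS)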

When searching for the right Web performance monitoring tool, consider SmartBear’s AlertSite products, one of the top three Web performance monitoring solutions for the Internet Retailer Top 500.

What Are Some Metrics to Watch?

Performance metrics need to cover both load testing statistics and production monitoring statistics. Here are some to consider:

Load Testing Metrics

  • Basic Quality: Shows the effect of ramping up the number of users and what happens with the additional load.
  • Load Time: Identifies how long your pages take to load.
  • Throughput: Measures how many requests your application handles per unit of time under increasing load (see the calculation sketch after this list).
  • Server Side Metrics: Isolates the time your server takes to respond to requests.
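To make these numbers concrete, here is a small sketch that derives load time, throughput, and error rate from raw timing samples; the sample values and run duration are made up, and a load testing tool computes and charts these for you.

import statistics

# Hypothetical samples from one load test run:
# each entry is (elapsed_seconds, succeeded) for a single request.
samples = [(0.41, True), (0.38, True), (0.95, True), (1.72, False), (0.52, True)]
run_duration_seconds = 2.0  # wall-clock length of the run (hypothetical)

successful_times = [elapsed for elapsed, ok in samples if ok]
error_count = sum(1 for _, ok in samples if not ok)

print(f"load time (median): {statistics.median(successful_times):.2f}s")
print(f"throughput: {len(samples) / run_duration_seconds:.1f} requests/s")
print(f"error rate: {error_count / len(samples):.0%}")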

Production Monitoring Metrics

  • Response Time Summary: Shows the response time your clients are receiving from your website. Also separates the DNS, redirect, first byte, and content download times so that you can better understand where time is being spent.
  • Waterfall: Shows the total response time with detailed information by asset (images, pages, etc.).
  • Throughput: Tracks how many requests or transactions your site serves per unit of time.
  • Click Errors: Captures the errors your clients see when they click specific links on your web pages, making it easier to identify when a user goes down a broken path.

What Can You Do Every Day to Ensure You’re Working Optimally?

Each day, testing teams should:

  • Review Application Load Testing Metrics: Examine your key metrics and create defects for any performance issues.
  • Review API Load Testing Metrics: If your APIs aren’t performing as they should, create defects for resolution.
  • Review Production Monitoring Metrics: Review your key metrics and create defects for issues surfaced by monitoring.
