Why API Performance Matters (And What to Do About It)

Ryan Pinkham
January 27, 2016

When I worked with API teams on testing, almost all of my time went to checking whether the API was functional.

Ideally, programmers cover the basic create, read, update, and delete (CRUD) operations when developing new endpoints. Testers spend their time on more complex scenarios, trying to discover problems a user might come across. And then the whole API is wrapped up in a bow with continuous integration systems and automated checks.
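As a rough sketch of what one of those automated checks might look like, here is a CRUD round trip against a REST endpoint. The URL, payload, and response shape here are hypothetical; a real check would follow your own API's contract.

```python
import requests

BASE = "https://api.example.com/v1/patients"  # hypothetical endpoint

# Create a record and confirm the API reports success.
created = requests.post(BASE, json={"name": "Test Patient"})
assert created.status_code == 201
patient_id = created.json()["id"]  # assumes the API returns the new id

# Read it back and verify the data round-trips.
fetched = requests.get(f"{BASE}/{patient_id}")
assert fetched.status_code == 200
assert fetched.json()["name"] == "Test Patient"

# Update the record, then delete it to leave the system clean.
updated = requests.put(f"{BASE}/{patient_id}", json={"name": "Updated Patient"})
assert updated.status_code == 200

deleted = requests.delete(f"{BASE}/{patient_id}")
assert deleted.status_code in (200, 204)
```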

All the functionality in the world won't make a difference, however, if the software doesn't deal well with multiple concurrent users.

Customers who would otherwise be loyal are quick to flee to a different product if the software fails under load. For example, 57 percent of consumers will abandon a web page that takes more than three seconds to load.

With consumer attention spans rapidly dwindling, API performance is more important than ever.

Performance, an often overlooked area of software testing, is just as important as function. If the performance is poor, the system doesn't function.

The shape of functionality

When I test a new feature, I tend to start with functional testing.

The best shops I have worked with have developers using design practices like test-driven development and behavior-driven development; one of the benefits of these practices is knowing when a feature is ready for testing. Throughout the development effort, testers are looking at the product to find important things that might go wrong for the customer.

On one of those projects, we had a screen that displayed patient information for doctors. The change we were working on added a drop-down list with five or six choices to each patient on that page, to make it easier for the practitioner to select a case type. Everything seemed fine in the test environment. I could make selections or leave the default option, make updates, and then see the data saved in the database.

We pushed to production, ran a set of automated smoke tests in the browser, and everything was bad, very bad. The data set we ran the tests against had 50 or more patients on that page. Rendering that drop-down list for 50 patients added so much data to the page that it became slow to the point of being unusable.

The feature worked fine if it was just one user on a small data set, but when things scaled up to a realistic point, functionality failed fast.
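One way to catch this kind of failure before production is to hit the endpoint with a realistic number of concurrent requests against a realistic data set. Here is a minimal sketch using Python's standard library plus the requests package; the URL, page size, and user count are assumptions for illustration, not details from the project:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://test.example.com/api/patients?page_size=50"  # hypothetical
CONCURRENT_USERS = 25  # assumed realistic load; tune to your traffic

def timed_get(_):
    # Time a single request the way one user would experience it.
    start = time.perf_counter()
    response = requests.get(URL)
    return time.perf_counter() - start, response.status_code

# Fire all requests at once to simulate simultaneous users.
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(timed_get, range(CONCURRENT_USERS)))

times = sorted(elapsed for elapsed, _ in results)
errors = sum(1 for _, status in results if status >= 400)
print(f"median: {times[len(times) // 2]:.2f}s  "
      f"worst: {times[-1]:.2f}s  errors: {errors}/{len(results)}")
```

A run like this against the patient page's data set would have surfaced the unusable response times long before a customer saw them.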

Being functional doesn’t matter when a page won’t load.

There is a real cost to slowness and, even worse, to downtime. When Amazon.com went down for about 30 minutes, each minute of the outage cost roughly 66 thousand dollars in lost sales opportunities. A Facebook outage in September cost stockholders a 4 percent drop in value. Every minute a website is down costs money.

Keeping customers

Speaking of page loads, here's a mission-critical situation for me: checking in to airline flights on time.

I was out of town visiting my family a few years ago. We were out having pizza the day before my wife and I were supposed to fly back. I opened the Southwest app on my phone, entered my name and confirmation number, hit submit, and got a spinner that never went away. The form just would not submit. It took the four of us about 15 minutes to check in to the flight, and we ended up late in the B group. I deleted the app from my phone after that.

In a survey from CIO.com, 48 percent of respondents said they would uninstall an app after it performed poorly. That may have meant a lot of lost customers that day for Southwest.

The Southwest app is much better now, by the way. I downloaded it again and use it regularly. Still, I have to wonder how many customers stuck around after that specific issue.

Monitoring and optimization

Software monitoring and performance testing are things I have usually seen done at the release level, every two weeks or so. Some companies are closer to continuous deployment and push new software several times a day, but right now they are the outliers.

In some cases this leaves problem discovery to the customers. New functionality is delivered quickly and regularly, but testing can't move fast enough to find the important problems, and monitoring only tells us about them once the customer is already experiencing them and it's too late.

Shifting monitoring solutions earlier in the delivery chain can help. Monitoring a development environment can show developers where data is getting a little too big on one page, or where API response times are slowing down on another. In testing environments, monitoring can help testers find relevant logs much faster.
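Even a lightweight script can act as that early warning in a development environment. This sketch actively probes a few endpoints and flags anything over a response-time budget; the endpoint list and threshold are assumptions made up to illustrate the idea:

```python
import time

import requests

# Hypothetical endpoints to watch in a development environment.
ENDPOINTS = [
    "https://dev.example.com/api/patients",
    "https://dev.example.com/api/cases",
]
THRESHOLD_SECONDS = 1.0  # assumed response-time budget; tune per endpoint

for url in ENDPOINTS:
    start = time.perf_counter()
    response = requests.get(url)
    elapsed = time.perf_counter() - start
    flag = "SLOW" if elapsed > THRESHOLD_SECONDS else "ok"
    print(f"{flag:>4}  {elapsed:.3f}s  {response.status_code}  {url}")
```

Run on every build, a check like this tells developers the moment a change pushes an endpoint over budget, rather than weeks later at release time.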

Most mobile users will abandon a page that takes more than four seconds to load.

Monitoring tools can alert us when there are system slowdowns. With an API, though, each individual part can be monitored for load time and error rate, as well as higher-level metrics like which endpoints are most popular and how endpoints are chained together.
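Those per-endpoint metrics can often be pulled straight out of existing access logs. As a minimal sketch, assuming a simple log format of `method path status latency_ms` per line (an illustrative assumption, not any particular server's format), popularity, error rate, and average latency per endpoint fall out of one pass over the file:

```python
import re
from collections import defaultdict

# Assumed log line format: "<method> <path> <status> <latency_ms>".
LINE = re.compile(r"(\w+) (\S+) (\d{3}) (\d+)")

stats = defaultdict(lambda: {"hits": 0, "errors": 0, "total_ms": 0})

with open("access.log") as log:  # hypothetical log file
    for line in log:
        match = LINE.match(line)
        if not match:
            continue
        method, path, status, latency_ms = match.groups()
        entry = stats[f"{method} {path}"]
        entry["hits"] += 1
        entry["total_ms"] += int(latency_ms)
        if int(status) >= 500:
            entry["errors"] += 1

# Most popular endpoints first, with error rate and mean latency.
for endpoint, s in sorted(stats.items(), key=lambda kv: -kv[1]["hits"]):
    print(f"{endpoint}: {s['hits']} hits, "
          f"{s['errors'] / s['hits']:.1%} errors, "
          f"{s['total_ms'] / s['hits']:.0f} ms avg")
```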

Going one step further and capturing real customer configurations and data can make the monitoring data even more relevant.

Going back to the anesthesiologist software project, we worked with a customer to capture parts of their data set. HIPAA regulations restrict how and what kind of clinical data can be shared, so we ended up writing a script to scrub identifying information, which gave us a good data set without violating any laws or patient trust. This data set helped us see the effects of design changes quickly, and also understand how response time changed when an API endpoint was being called by many different users at the same time.
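Here is a minimal sketch of that kind of scrubbing, assuming a CSV export and illustrative field names (the actual script and data format were specific to that project). Each identifying value is replaced with a stable, meaningless token, so records stay distinguishable without being traceable to a patient:

```python
import csv
import hashlib

# Illustrative identifying fields in a hypothetical CSV export.
IDENTIFYING_FIELDS = ["name", "ssn", "phone", "address"]

def scrub(field: str, value: str) -> str:
    # Hash the value into a stable token: the same input always maps to
    # the same output, so relationships in the data are preserved.
    digest = hashlib.sha256(f"{field}:{value}".encode()).hexdigest()[:12]
    return f"{field}-{digest}"

with open("patients.csv", newline="") as src, \
     open("patients_scrubbed.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        for field in IDENTIFYING_FIELDS:
            if row.get(field):
                row[field] = scrub(field, row[field])
        writer.writerow(row)
```

Note that a bare hash of low-entropy values like phone numbers is not strong de-identification on its own; a real HIPAA-safe pipeline needs more care than this sketch shows.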

Combining function and performance

Software performance and functionality are equally important. A product that does exactly what the customer needs but takes too long might never get used. On the other hand, software that responds instantly but doesn't satisfy any needs or help people work isn't very useful either.

Balance the two and you end up with a product people will enjoy using.

Looking for additional API testing resources? Visit SmartBear’s API Testing Resource Center.
