Understanding the Difference Between Performance and Functional Testers

  December 28, 2015

I remember when I was introduced to the idea of performance testing, and even the performance tester as a role.

Our company had one person whose sole focus was discovering performance and load-related information. My work, on the other hand, was with a team of seven testers.

My cube was right across from our manager's, and if he didn't keep his voice down, I'd get an early scoop on new initiatives. The company wanted a person from our group to do performance testing, and that person wasn't going to be me.

In hindsight, they chose the right person. He did a great job. But I always wondered why it wasn't me.

Let’s take a look at some of the differences in mindset and skill between performance and functional testers.

1. People vs. events

When I test software, I like to use personas: short but authentic descriptions of someone who might use my product. For example, knowing that Sarah is a regional sales manager is important. Knowing that Sarah graduated from high school but did not go on to a university, has a five-year-old mobile phone, and spends 300 days a year on the road is a game changer.

All of these details guide how we test and the information we look for. We have a better idea of what she values, and of what could make a long sales road trip more tedious.

People are central to performance testing too, of course, but the persona isn't the guiding light. Performance testers look for the extremes in life: the highest of highs and the lowest of lows.

Instead of the persona, performance is about events. Instead of Sarah, performance testers at Amazon think about Black Friday, one of the largest online shopping days of the year. They think about the flood of traffic happening online, and the millions upon millions of people hitting Amazon.com in the 24 hours after Thanksgiving to do their Christmas shopping.

This is what performance testers live for: Black Friday, the Super Bowl, major news events, or a celebrity tweet so viral it causes a server failure. Each of these represents a scenario that, if not tested for, can take a site down and send millions of dollars in sales to your competitor.

2. Focus on the task

General testing work is extremely varied.

On some days usability is important, and I spend time reviewing user studies with someone who is good at design and experience. Other days I think the world is one big input-combinations exercise, and I spend all day exploring the data I should and shouldn't be able to submit in a field, learning about the consequences of 'bad data'.

And on other days I dive into a programming language and figure out how to build change detection scripts for an API that can be run every time a code change is committed to the source code repository.
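
To make that concrete, here is a minimal sketch of what a change detection script like that might look like. It's in Python with the requests library, and the endpoint and baseline file are hypothetical stand-ins, not from any particular project:

```python
# Minimal API change-detection sketch (hypothetical endpoint and
# baseline file). Run it from a CI job on every commit; a non-zero
# exit code fails the build and flags the change for review.
import json
import sys

import requests

BASELINE_FILE = "baseline_patients.json"         # captured from a known-good build
ENDPOINT = "http://localhost:8080/api/patients"  # hypothetical endpoint under test


def main():
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)

    response = requests.get(ENDPOINT, timeout=10)
    response.raise_for_status()

    if response.json() != baseline:
        print("API response changed since the baseline; review before merging.")
        sys.exit(1)
    print("No change detected.")


if __name__ == "__main__":
    main()
```

The point isn't sophistication; it's that a script this small, wired into the commit pipeline, surfaces unexpected API changes the same day they happen.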

Performance testing, on the other hand, or at least the role of the dedicated performance person, is a highly focused job. There are a few long-lasting themes that run through a performance tester's work.

These include benchmarking, finding the combination of data and concurrent users that brings your product to a tipping point of slowness, and the never-ending question of whether an observed performance difference is acceptable.
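
That middle theme, hunting for the tipping point, can be probed with something as crude as a stepped-concurrency script: ramp up the number of simulated users and watch for the step where response times jump. Here is a rough Python sketch, not any particular tool's approach; the URL and step sizes are placeholders:

```python
# Rough "tipping point" probe: step up concurrency and watch the
# latency percentiles. The URL and step sizes are placeholders.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:8080/health"  # placeholder endpoint


def timed_get(_):
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    return time.perf_counter() - start


for users in (10, 25, 50, 100, 200):
    with ThreadPoolExecutor(max_workers=users) as pool:
        # each simulated user issues five requests
        latencies = sorted(pool.map(timed_get, range(users * 5)))
    p95 = latencies[int(len(latencies) * 0.95)]  # rough 95th percentile
    median = statistics.median(latencies)
    print(f"{users:>4} users: median {median:.3f}s, p95 {p95:.3f}s")
```

The step where the 95th percentile stops creeping and starts leaping is where the interesting questions begin.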

When new information about a feature starts coming in slower and slower, I like to either change my focus to something other than functionality or move on to a different task altogether.

Performance testers remind me of developers. The bigger focus is on adding value to a project, but the work is split: it is half “testerly” (information discovery) and half “developerly” (creating tooling that will help other people).

3. Working environment

The agile teams I have worked on usually have developers, product people, and testers all mixed up into a team.

Being together like that can make information sharing easier. Instead of getting on Skype to talk to a developer about something I notice in the product, I just lean over this way and demonstrate the part of the software in question. And when the developer tells me he isn't sure how it should work, I lean that way to talk to a product person.

The performance tester will, more often than not, lean in the direction of the developer. The people writing production code usually provide the most guidance about where to look for problems next, and are involved again later to see how their new database indexes, or reduced number of HTTP calls, or smaller data load, affected the performance profile.

The performance tester takes those answers and talks to operations about the impact, how to measure it, and what experiments to conduct if things work as they should.

The performance tester I talked about earlier was officially part of the test team. He had a cube with us, and when it was time for yearly reviews he talked with the same manager we did. About half of the week though, his cube was like the cube of a good sales person — completely empty because he was talking to people in other roles. Over time he had become an honorary member of the development team because of how closely he worked with the staff and management.

Eventually he moved in with his developer brethren and switched managers. Of course, all of this happened when teams were more often thought of as separate family units, each with its own special housing and parent. It might be hard to notice the shift visually today; no one would pack up their computer and move to a new part of the room, a different floor, or even a different building.

Spend some time with the group though, and I bet you can tell who is who.

4. Technical leanings

Even in a co-located, cross-functional team, people have their specialties, and most performance people lean to the developer side of things.

The last time I did load testing, the name of the game was seeing how many users could simultaneously complete cases on an anesthesia documentation platform. The webpage with the case workflow functionality was a little complicated. It took a popular load testing tool, and some JavaScript trickery, to get the scenario running, and that was just to collect data. After that, there was the matter of reviewing HTTP calls, load times, and latency.
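
For readers who haven't written one, here is roughly what a scripted scenario like that looks like in Locust, one popular Python load testing tool. I'm not claiming it's the tool I used, and the case-workflow endpoints below are hypothetical stand-ins:

```python
# A minimal Locust sketch of a "complete a case" scenario.
# The endpoints are hypothetical; a real script would log in,
# carry session state, and parameterize the case data.
from locust import HttpUser, between, task


class CaseWorkflowUser(HttpUser):
    wait_time = between(1, 5)  # think time between user actions

    @task
    def complete_case(self):
        # open the case list, then mark one case complete
        self.client.get("/cases/open")
        self.client.post("/cases/42/complete", json={"status": "closed"})
```

Point a run at a test environment with something like `locust -f locustfile.py --host https://staging.example.com -u 100 -r 10 --headless`, and you have the raw HTTP calls, load times, and latency numbers to start reviewing.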

People with a development background, or at least some varied technical experience, will have an advantage. Most testers aren't from a programming-heavy background. Often they come from a liberal arts program at a university (maybe English literature, history, or philosophy), work in an unrelated field for some time, then make a career change that lands them on a software support team or in product management.

There are of course variations, but that pattern isn't unheard of, and it usually turns out quite a few good software testers. Testers who go the liberal arts route have a fantastically varied background and perspective on software; that is part of why they make such good testers.

Unfortunately, they don't usually have the full-immersion technical experience of a programmer.

Someone with a year or two of programming experience might not have to go home every night and spend hours reading about HTTP, practicing a new programming language, and figuring out how to use a (sometimes) cumbersome tool all at once. If they don't already know the basics, they'll at least have intuition from previous experience about where to look for information.

That doesn't mean "non-technical performance tester" is an oxymoron. But it will be challenging to work in a role that is focused on strategy and that also requires communicating with people from operations, database, programming, and other roles to figure out what needs to be tested, who will test it, and how to get the data into a format the non-technical performance tester can analyze. It can be done, but more often than not this is a consulting role: teaching the team how to analyze, then transitioning to another team (at a larger company) or to a different company entirely.

Mindset is a subtle thing. It can be hard to say what it means for different roles on a software team, partly because there is overlap and partly because it is built into how we interact socially. But I can always see the little personality differences between a tester, a programmer, and someone in between the two, like a performance person.

Have you noticed other differences between the members on your team? We’d like to hear from you. Share your thoughts in the comments below.