Introduction to Agile Testing Responsibilities

The engines were cranking away on creating and using agile development practices well before the manifesto was written and signed, but testing has been slower to catch up. For a long time, if you asked about the role of an agile tester, the response would be 'collaboration'. Collaborating with the development team is part of the job, sure, but there is a lot more to it than that. Let's take a look at the role of an agile tester: what do they actually do, and what are some of their responsibilities?

Why Collaboration?

The pre-agile release cycle ran anywhere from three months to an entire year. Near-complete separation of teams helped make this possible. Product people would be on site or on the phone with customers, trying to understand what problems they were having and how software could make those problems go away. They did this alone, for months at a time, focusing on a release that might happen next year, while the development group worked on the release due in a few months and the test team prepped for the release due in a few weeks. That release cycle is a big accordion, and distinct groups were one of the things that made it possible. It was a lot like communism, and it failed in some of the same ways too.

Agile collapsed the release cycle down to about two weeks for most groups; at a minimum, people are striving to release more often than they were. At the same time, the teams came crashing together. Completely separate teams (product, live development, support development, test, and database) don't work when a group wants to release every couple of weeks. Each time work changed hands, from product to development to test and sometimes back again, some amount of time slipped away and the schedule got a little worse. Working as a small, tight-knit team shrinks feedback loops and release timelines.

Agile has created a new era of collaboration, so where does testing fit in all this?

Driving a Team Toward Production

The new role of testing is to help drive development teams toward production: to sharpen the team, provide fast feedback, and generally crank up the speed of delivering software to customers.

Coaching and Pairing

In the most extreme cases, development teams are pushing code to production several times a day. Most testers will look at this and wonder how anything could possibly be tested in that time span, let alone how to deal with pre-release testing and the find/fix/retest cycle. There is no way this would work with distinct groups, but a coaching and pairing model can help. Some developers on agile teams use tools like test-driven development (TDD) and behavior-driven development (BDD) to get better code the first time around. These are testing strategies of a sort, but they are shallow: the developers are more concerned with proving their code is right than with considering how it might fail for a customer.
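The developer-side flavor of this can be sketched in a few lines. This is a hypothetical example, not from any specific product: in TDD, the checks are drafted first and just enough code is written to make them pass.

```python
def count_words(text: str) -> int:
    """Count whitespace-separated words; written after the checks below were drafted."""
    return len(text.split())

# Developer-style checks: prove the happy path works.
assert count_words("hello world") == 2
assert count_words("") == 0
print("checks pass")
```

Notice how narrow these checks are; they confirm the code does what the developer intended, which is exactly the shallowness described above.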

Add a tester to the mix and something special happens. Imagine a tester and a developer sitting together, pairing on a new feature in a wiki product that reports word count. While the programmer writes code, the tester builds test data and jots down test ideas. As soon as part of the code is functional, the programmer can ask the tester to take a look in the development environment. When that tester finds a bug, reporting time usually goes away altogether: the programmer is there to see it happen, gets an instant demonstration, and can pivot to fixing the problem quickly.

While a programmer might write automated checks for simple scenarios, like verifying that there are five words in a document or what happens when the document body is empty, a tester will go deeper. Does punctuation count as words? Should the count match what comparable products like Microsoft Word return? What happens in very long documents? What happens with special and multi-byte characters?
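Those deeper questions can be turned into concrete checks. A minimal sketch, assuming a hypothetical `count_words` helper that splits on whitespace; the expected values below encode one possible answer to each question, not the only defensible one.

```python
def count_words(text: str) -> int:
    # Hypothetical rule under test: whitespace-separated tokens count as words.
    return len(text.split())

# Punctuation: under this rule a free-standing dash counts as a "word".
assert count_words("well - spoken") == 3

# Empty document body.
assert count_words("") == 0

# Very long documents: the count should scale, not crash or overflow.
assert count_words("word " * 100_000) == 100_000

# Multi-byte and accented characters: non-ASCII words still count.
assert count_words("naïve café 日本語") == 3
```

Writing the expectations down this way also surfaces disagreements early: if the team decides punctuation should not count, the first check fails and the conversation happens before a customer ever sees the feature.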

Coaching and pairing cover the same ground separate groups would, but in a much tighter feedback loop.

Building Infrastructure

The result of agile transformations is usually fewer testers and more infrastructure. Imagine a scenario where almost no feature leaves a programmer's hands without some amount of automation. Maybe that automation is at the service layer exercising an API, maybe it sits snugly next to the code, but it is there, and it is extra code. Code that never gets run isn't very valuable, so there needs to be a system to run it, and run it often. Today, those systems come in the shape of continuous integration.

Working on infrastructure is a special way for testers to advocate for quality. Some teams have thousands of lines of test code sitting around and slowly rotting. While the ops person might be too busy worrying about the next release and emergency patches for the current release, a tester could get server access and unlock some of that value. CI systems are glorified schedulers; they run on triggers. Each time new code is checked in, a trigger fires to merge it into the product and create a new build. The savvy tester could do a little configuration to add the libraries the automated checks need, and add another trigger to run those checks after the build is created.
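The trigger chain described above can be sketched in a few lines. Everything here is hypothetical (the function names, the build-id format, the check list); real CI systems like Jenkins or GitLab CI express the same idea declaratively in their own config formats, but the shape is the same: check-in fires a trigger, the build runs, then the checks run.

```python
def run_checks(checks):
    """Run each check function, returning a dict of name -> passed."""
    return {check.__name__: check() for check in checks}

def on_checkin(commit_id, checks):
    """Fires on every check-in: merge/build first, then run the automated checks."""
    build_id = f"build-{commit_id}"   # stand-in for the real merge + build step
    results = run_checks(checks)
    return build_id, all(results.values()), results

def check_word_count():
    # One of the tester's automated checks, wired in as a post-build trigger.
    return len("one two three".split()) == 3

build_id, passed, results = on_checkin("abc123", [check_word_count])
print(build_id, passed)
```

The configuration work the tester does is essentially the last two arguments: telling the scheduler where the checks live and that they belong after the build step.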

Having automation run with every build is a one-two punch of change detection and safety net. Sometimes a check will fail because the product changed, and that's OK; the failing check can be updated and the team can move on. Other times, a programmer will have made a change that affected unexpected parts of the product. If those checks aren't running, no one will know either way. Developing technical ability as a tester is like getting a new superpower. That person can contribute to the team in new ways: tooling, code reviews, infrastructure management, and technical test design.

Shifting Left

The last part of this isn't so much a responsibility as a pattern. Without it, testers stay separated and the time sink of handoffs creeps back into the release cycle. The fully shifted tester asks the product manager questions about why a customer wants a feature, sits in on design sessions to see how someone would actually use the feature, and attends review meetings with the programmers to start developing test ideas early. All of this is in the name of getting a fuller picture of the product, and of asking questions from a point of view other roles don't have.

There are many more responsibilities for testers on agile teams (working with product managers on defining features, working with customers on feedback, working with designers on usage), but these are the most distinct. When agile is in full swing and teams shrink as small as they can go, collaboration is the name of the game. Winning that game means focusing almost exclusively on driving toward customer value, and releasing new software as fast as the team reasonably can.

Additional Resources