A typical software testing team might have a business expert, a toolsmith, a few technical investigators, and perhaps a manager.
A savvy manager knows one of the investigators is interested in mobile, another in APIs, and tries to ‘steer’ the work toward the right person. This brings up some questions. What happens when the workload doesn't allow this — the mobile expert goes on leave, or the team members complain they are "pigeonholed"?
What's a smart manager to do? Let's talk about it.
1. Identifying your “tester types”
We used to talk about team makeup in terms of ratios: "How many testers do we have per programmer?"
Everyone had a different opinion: some said one tester to one developer, some said one tester to ten developers, and there was everything in between. Agile testing settled the ratio for most people. We get one or two testers, maybe zero, for each small group of developers.
Having more than one slot makes the organizational problem a little easier to cope with. In the past, when I had two open spaces, I shot for one person with technical skills and another who knew the ins and outs of testing. More than that is gravy, and I try to fill the extra slots with as many different perspectives and points of view as possible.
Robert Heinlein might not be happy about this, but people like to specialize, and testers are no exception. Some have serious technical chops and play the role of the toolsmith, creating code that is worthy of any production environment. Others are interested in the origins of testing in philosophy and social science and spend their time learning about measurement, problem solving, and how people think and work. On top of that, there are people who are experts in the business domain and the product, experts in usability, and people who specialize in making projects work. There is something for everyone, and fitting them all into teams is a hard game.
One tactic in this situation is to focus on each person's strengths.
Imagine embedding a toolsmith in a small development team. The toolsmith could work side by side with developers, building automated checks in parallel with the new features. While a new feature is being developed, the toolsmith is stubbing tests and building infrastructure. At the end of a sprint, there is a new set of checks to monitor code quality when things around that new code change.
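To make that concrete, here is a minimal sketch of what those early stubs might look like, assuming a JavaScript codebase with Jest; the feature name and check descriptions are hypothetical.

```javascript
// pricing.test.js - hypothetical placeholder checks drafted while the
// discount feature is still being built. test.todo registers a named,
// pending test, so the intent is recorded before any code exists.
describe('applyDiscount (feature in progress)', () => {
  test.todo('applies a percentage discount to the cart total');
  test.todo('rejects discounts greater than 100 percent');
  test.todo('leaves the total unchanged when the discount is zero');
});
```

Once the feature lands, the stubs become real assertions, and the suite runs on every change to flag regressions around that new code.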
Alternately, imagine someone like me, with just enough technical ability to get by. I can write code and work on build systems, but I'm slow at it, and it usually isn't where I add the most value. My strength is in testing through the API and database, or testing a mostly finished product. When I join a development team, I focus on finding ways the product might fail for a user and helping developers find the questions they forgot to ask. This brings better code quality too, but in the form of better first builds instead of code-change detection tools.
Or maybe you can focus on turning those weaknesses into strengths.
2. Skill up together
A good long-term strategy might look a little like that: lead with strengths, with some skill improvement mixed in for both the tester and the developer.
On new teams, my first big problem is usually figuring out how to be useful before the new code is in a build and officially ready to look at.
Pairing almost always helps. With front-end developers, we walk through JavaScript and talk about how data is being sanitized, by cutting off leading or trailing white space (or not), before being passed to the database. Being immersed in JavaScript for a while is a good way to keep up with the new libraries that seem to come out continuously. It has also taught me how to describe problems in a way that helps developers find the offending code faster.
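As a rough illustration of the kind of code we'd walk through, here is a hedged sketch in JavaScript with Jest; the function name and its rules are made up for the example.

```javascript
// Hypothetical sanitizer: trim leading and trailing whitespace before the
// value is passed along to the database, and fall back to an empty string
// for anything that isn't a string at all.
function sanitizeName(input) {
  return typeof input === 'string' ? input.trim() : '';
}

// The kinds of questions a tester asks out loud while pairing, as checks:
test('trims leading and trailing whitespace', () => {
  expect(sanitizeName('  Ada Lovelace  ')).toBe('Ada Lovelace');
});

test('collapses a whitespace-only value to an empty string', () => {
  expect(sanitizeName('   ')).toBe('');
});
```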
On the flip side, the developer would start to remember the questions I had asked and where I would look first for problems. Each time we paired on a feature was a lesson in test design for both of us. While asking "what if..." questions out loud, I'd also be performing the actions in the software that would answer those questions. We identified equivalence-class boundaries on variables, discovered workflow problems, and walked through testing them together.
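For example, a quantity field documented as accepting 1 through 10 gives obvious boundaries to probe. The sketch below is hypothetical and assumes Jest, but it shows how a "what if..." question turns into a concrete check.

```javascript
// Hypothetical validation rule: quantities must be whole numbers from 1 to 10.
function isValidQuantity(qty) {
  return Number.isInteger(qty) && qty >= 1 && qty <= 10;
}

// One check per boundary, plus the values just outside it.
test.each([
  [0, false],  // just below the lower boundary
  [1, true],   // lower boundary
  [10, true],  // upper boundary
  [11, false], // just above the upper boundary
])('isValidQuantity(%i) returns %s', (qty, expected) => {
  expect(isValidQuantity(qty)).toBe(expected);
});
```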
Most testers probably won't get to the point of writing production code, and most developers won't turn into testing experts. Unless you have a nice long career, there just isn't enough time in the day. But, there is nothing wrong with getting just a little bit better.
3. Move toward coaching
Some teams have swung as far as they can go and ended up with very few, if any, dedicated testers.
It's hard to pick a place to start embedding testers on teams when you have far more teams than testers. You could have that overworked tester jump between teams, always on the losing end of the flow of work, trying to handle each feature as it comes. Or you could start from the other end of the equation.
One company I worked with had several development teams, each with a handful of developers, and only two testers to go around. I was one; the other was very junior. We worked features as they came, but there were usually too many, so one or two would be neglected. My strategy there was to slowly seed testing ideas in the development teams through the occasional lunch and learn, demonstrating problems and explaining how I found them, and generally talking about testing. As a result, the quality of the code improved before we saw it, and we could test less, with less back and forth, while still maintaining confidence in the work.
Pivotal went all in on the 'tester as a coach' model. Everyone at Pivotal is officially a developer and contributes to production code, but a couple of people are testing specialists who share their knowledge. Those testers travel between teams and teach testing through exercises, games, and pairing on testing problems. Over time, the testers have become more technically competent, and they have also been able to improve testing across teams. A rising tide can float all boats.
4. Deal with bad fits
This way of organizing teams is tough; it requires people who are dedicated to improvement all around and willing to deal with change over long periods of time. That won't work for everyone, and some might be a bad fit despite being good people. Here are a few strategies to consider.
- The Shuffle: If a tester doesn't fit the needs of one team, maybe they will work out on another. Let's say you have a team that needs someone technical who can help with test-driven development (TDD) or write tests at the service layer, but after a few weeks the tester just isn't able to pick it up. Another team that isn't so automation-focused might be a better fit. A non-technical person can add value there without the technical learning curve, and maybe even work on those rusty tech skills when time is available.
- Whole Team Testing: The old saying 'anyone can test' is true, but you'd better make sure you have the right people. Organizing is important even when there is no test team. Product managers will usually be the subject matter experts; they know the customer and the business domain, and they should be able to find workflow and business-logic problems. Salespeople are great at finding problems in the core parts of the product, anything they demo regularly. They are like walking, talking smoke testers.
- Testability: This is how we talk about how easy, or hard, your product is to test. Do you have good logging? Do you have ways for people to test the product without going through the user interface? Is it easy to figure out how to get around and use the product? Making it easier to get information about your product will help testers find their value; a rough sketch of one such below-the-UI check follows this list.
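Here is that below-the-UI sketch: a check that exercises a hypothetical orders API directly, assuming a service running locally and Node's built-in fetch in a Jest test. The endpoint, payload, and status codes are assumptions for illustration only.

```javascript
// Hypothetical API-level check: create an order over HTTP, then read it back.
// No browser or UI involved, so the check stays fast and easy to automate.
test('created orders can be read back through the API', async () => {
  const created = await fetch('http://localhost:3000/api/orders', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ sku: 'ABC-123', quantity: 2 }),
  });
  expect(created.status).toBe(201);

  const { id } = await created.json();
  const fetched = await fetch(`http://localhost:3000/api/orders/${id}`);
  expect(fetched.status).toBe(200);
});
```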
Teams are shrinking; we have small groups of developers and only so many slots for testing specialists. Fitting the specialist into the right team, or figuring out how to build up a skill set so they can contribute, will help.
How is your testing skill set organized? Share your ideas in the comments below.
Looking to improve the productivity of your testing team?
One of the best ways to improve productivity is to utilize a tool that enables you to easily reuse tests. Download our newest eBook, How to Become an Effective Tester by Reusing Tests.
