A few years ago, at a test design course, I overheard someone complaining that the course did not offer advice for "requirements traceability."
When I asked her what that meant, she pulled out an Excel spreadsheet with the test cases as columns and the requirements as rows. If a test covered a requirement, she'd put the number "1" in the box; a big part of her job was to make sure that each requirement was tested at least once.
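That spreadsheet amounts to a simple data structure, and the "make sure each requirement is tested" chore is a query over it. A minimal sketch, using hypothetical requirement and test-case names:

```python
# The spreadsheet-style traceability matrix: requirements as rows, test
# cases as columns, a 1 marking coverage. All names here are made up.
matrix = {
    "REQ-1": {"TC-1": 1, "TC-2": 0, "TC-3": 1},
    "REQ-2": {"TC-1": 0, "TC-2": 1, "TC-3": 0},
    "REQ-3": {"TC-1": 0, "TC-2": 0, "TC-3": 0},
}

def untested_requirements(matrix):
    """Return the requirements no test case touches -- the gaps her job
    was to hunt down before release."""
    return [req for req, tests in matrix.items() if not any(tests.values())]

print(untested_requirements(matrix))  # ['REQ-3'] -- the uncovered row
```

A real matrix lives in a spreadsheet or a test management tool, but the check is the same: scan each row for at least one mark.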
The idea that each requirement needs to be tested (likely for each release) is especially common in medical, hardware, and financial software. We lost track of it, a bit, when the Agile movement took hold.
Many teams have story-tests that connect to a story, and that story could be part of an epic or a release. Some teams have examples that are automated and run before every release, which may or may not cover the logical combinations. For that matter, most 'requirements' don't cover all the combinations themselves, and the implied boundaries in those requirements often miss the real boundaries.
How do we untangle this mess of traceability and use it for good?
The extreme opposite of traceability is "we played with it, and it seemed fine." That can work ... if you trust your testers and are willing to live with the consequences. For most teams "we played with it" won't work.
So we need some way to break the work down into chunks and look at the amount of testing in each chunk. Management can look at the coverage in each "chunk" and decide whether we need more, or, perhaps, that this time we can get away with less. A traceability matrix provides a guarantee that we at least did something in that area. The idea of coverage, expressed as a percentage or a number from one to ten, provides more insight into larger "chunks."
You can think of test coverage and traceability, then, as strategies for talking about the things we can see — what has been tested, what has not been tested, and how those things tie back to the requirements.
Most companies make those judgments in terms of test cases and a specification or user story. Every release cycle, teams document test ideas, and sometimes detailed test cases to perform when some part of the product is ready to test. When you have a build, the testers start working through the list of test ideas one by one, checking off the ones that don’t find any problem, and doing more documentation for the ideas that found a bug.
For many teams, test cases, defects, and requirements are managed in a simple Excel spreadsheet. While this can be enough to keep some teams organized, it also has limitations when it comes to traceability. In that case, a robust test management tool can provide the end-to-end traceability teams need to track testing efforts across manual, Selenium, API, and automated functional tests.
Making traceability a partnership
Managers have a very real need for traceability, but there are also ways to improve traceability within your QA team. My preference now is to avoid worrying about traceability and coverage as much as possible until I have some software in my hands. But more importantly, I try to develop a deeper understanding of traceability and coverage by layering techniques.
Here are a few techniques that can help with traceability:
A simple kanban board will give you most of the basics on coverage. There are a couple of important principles in play here. With a physical board, there is only so much space for cards. With less work in progress, we can easily see what is in test and just talk about that. Once a card passes from the test column into done, we know that change is tested and ready to go. The other thing to note here is that kanban encourages teams to break work down into small pieces. So, instead of having a feature shoved my way, testing the whole thing all at once, and later trying to make sense of what we did, I work on one piece at a time that usually ties back to a line in the user story. The kanban board tells us high-level information about product coverage and how it ties back to the user story.
For testing performed by a person that doesn't eventually get automated, I like using mind maps. A mind map is a fast visual tool for getting test ideas, product risks, product areas, or anything that makes sense as a tree, into a format that can be shared and edited. For example, if I am working on a page with a new date of birth field, that might look something like this.
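Because a mind map is just a tree, you can also capture it outside a drawing tool as a nested structure. A minimal sketch, with hypothetical test ideas for a date-of-birth field (not the exact branches from the map above):

```python
# A mind map as a nested dict: branches are dict keys, leaf test ideas
# are lists of strings. The ideas below are common date-field checks.
date_of_birth_map = {
    "Date of birth field": {
        "Format": ["MM/DD/YYYY", "DD/MM/YYYY", "free text rejected"],
        "Boundaries": ["today", "tomorrow (future date)", "120+ years ago"],
        "Special dates": ["Feb 29 on a leap year", "Feb 29 on a non-leap year"],
        "Empty": ["required-field message shown"],
    }
}

def print_map(node, depth=0):
    """Walk the tree and print it indented, one branch or idea per line."""
    for branch, children in node.items():
        print("  " * depth + branch)
        if isinstance(children, dict):
            print_map(children, depth + 1)
        else:
            for leaf in children:
                print("  " * (depth + 1) + leaf)

print_map(date_of_birth_map)
```

The point is not the format; it is that the tree is cheap to build while testing and cheap to share afterward.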
In just a few minutes I captured some of the testing that happened for this field. Doing this in real time while the testing is happening is a fun exercise, and a good communication tool for when a developer comes by with the question "Hey, did you test this?"
Test management tool
Some small agile teams, maybe 3 to 5 people, write automated checks as part of the feature development work. Once a feature, or even a part of a feature, is committed and in a build, there is also a set of checks that run with every build. But we don't want to automate all testing. A test management tool can make the process of iterating an existing manual test through a variety of data sets really easy, thereby increasing the coverage of existing manual tests.
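The "one test procedure, many data sets" idea is easy to sketch in code. The validation rule and the data below are hypothetical, standing in for whatever the test management tool would iterate:

```python
# One manual test idea ("enter an age, check acceptance") run across
# several data sets, the way a tool iterates a single test procedure.
def age_is_acceptable(age):
    """Hypothetical system-under-test rule: accept ages 18-120 inclusive."""
    return 18 <= age <= 120

data_sets = [
    (17, False),   # just below the lower boundary
    (18, True),    # lower boundary
    (120, True),   # upper boundary
    (121, False),  # just above the upper boundary
]

results = {age: age_is_acceptable(age) == expected for age, expected in data_sets}
print(results)  # True for each age means the outcome matched expectations
```

Each new row of data raises coverage without anyone writing a new test procedure, which is exactly the leverage the paragraph above describes.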
By providing a single view across requirements/user stories, tests, and defects, a test management tool gives managers a way to plan for optimal test coverage and account for risk. Test managers can thereby cut down on unnecessary tests, while ensuring ample coverage exists not just for a particular operating platform, but also across multiple environments.
When there is visual data, the deploy train becomes a partnership. I sometimes see brief panic in the last days before a release. People worry about the testing that was done, and even more about the things they may be forgetting. It is easy to ask the question "This is what I have so far; is something important missing?" when there is a constantly updated picture to share with the team. At that point, you have moved from CYA to partnership. It's a good place to be.
What about Requirements?
You can’t talk about traceability without mentioning software requirements.
Decades of trying to distill ideas down into the most perfect paper representation of software in strict waterfall environments are more like a definition of insanity (doing the same thing over and over again and expecting a different result) than an improvement. Think about it like a game of telephone. A customer gets an idea for new product functionality, and then some time passes. Eventually that idea is passed on to someone at your company. That product owner or product manager tries to make some sense of the idea and will either write it up into a user story or specification, or just have a conversation with the programmer who will build the thing.
How many opportunities are there in that sequence for someone to leave an important detail out, or just plain get it wrong? A lot. Requirements can be nice guides and helpers, but it is important to remember that they are always incomplete and have an expiration date. Use them carefully, or you might get a bad surprise.
Test management in an agile world
Traceability is just one of the benefits of implementing a test management strategy for your organization. By improving efficiency and reducing waste in the testing process, test management helps to better prioritize while simultaneously reducing the time teams spend on problems after the software is delivered.
Learn more about implementing a test management strategy in our eBook: Test Management in an Agile World: Implementing a Robust Test Management Strategy in Excel and Beyond.
Get your copy.