Test Management Pitfalls

  October 30, 2015

How we plan, store, organize, execute, and report test results (in other words, test management) doesn't start off hard. We start off with a blank slate. If all our projects are small, released in a few months and rarely touched, then it is possible that the test management process stays easy. Long-standing projects and software that is under revision for an extended period, however, lead to challenges. After months and years of creating, updating, and deleting documents, web pages, emails, and 3-ring binders, not to mention staff turnover and projects that are very different (desktop applications rewritten as web applications rewritten as iPhone applications), we end up with a mess of files, instructions, institutional memory, documents, and planning software. Over time, team members and managers come and go, each adding their own set of desires and style to the system.

If we keep a few things in mind, test management can flip from being a time sink to a value center.

What Is Test Management

Start any job at a large software organization and you'll likely be handed a great deal of ... stuff. Test strategies, test plans, and even "test cases", perhaps tied to acceptance criteria tied to a feature tied to a sprint tied to a release. If you are very lucky, this big, complicated map will be a single map per product. If you are unlucky, then some of the test documents are in system A up until a specific date, after which they are stored in system B. Or, perhaps, the "functional" tests are in system A and the security tests in system B, with the web services tests in system C and performance data in system D. Either way, it's a lot to drill through.

The goal of test management is to turn this storage room from a cobweb-filled closet into a neatly organized, easy-to-sort-through room.

The simplest way to deal with this is to "set it and forget it", by which I mean, actually forget about any test documentation. Your team buys and sets up a tool, you get a few accounts, and all of a sudden people act like they are in the wild west. Testers create and reuse their own test documentation. That documentation gets updated and rewritten by different people at different times, hopefully by people who understand the original intent, and eventually you have gigabytes of data. Whether any of it is still useful is an important question to ask.

Test management isn't the job of one person, but of everyone on the team. Any time a new test artifact (anything left behind after a test is performed) is created, I think about what value it might add to the project.

How Much Detail

The traditional idea of how much detail should go into a test case looks something like this:

1 - Navigate to Amazon.com

  • Verify you are now at Amazon.com

2 - Click Login button

  • Verify you landed on Login page
  • Verify User Name field is present
  • Verify Password field is present

3 - Enter User Name

4 - Enter Password

And on and on. Tests were supposed to detail every little thing that might be even slightly important during a test. On top of that, we were supposed to document as many different tests as we could think of, or at least as many as we could get down before a build was ready to test. There was a lot of overhead, some rework to make the documentation match the software that was actually produced, and a lot of bored testers writing down things they weren't really interested in.

My philosophy for detail in tests is simple: as little as you can possibly get away with.

If you work in a regulated industry, under FDA or SEC guidelines, there may be requirements for what and how you document test work. For the rest of us, that level of detail might not be worth the time. Something more high level might serve better: a few test ideas -- check for boundary problems in the bid field, submit fails frequently on the buy now page -- or even charters that describe a theme and mission for testing a feature.

Test ideas and charters guide my testing and also give a history of what I've worked on in the past, without sucking up time or catering to an unneeded documentation standard.
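As a sketch of how lightweight that record can be, here is one hypothetical way to hold charters as data rather than step-by-step scripts; the field names and sample charters are invented for illustration, not taken from any real tool:

```python
from dataclasses import dataclass, field

@dataclass
class Charter:
    """A lightweight test charter: a mission and some notes, not a script."""
    mission: str                                # theme of the session
    areas: list = field(default_factory=list)   # features or risks in scope
    notes: list = field(default_factory=list)   # findings, questions, problems

charters = [
    Charter(mission="Check for boundary problems in the bid field",
            areas=["bidding"]),
    Charter(mission="Explore why submit fails on the buy now page",
            areas=["checkout"]),
]

# After a session, the notes become the testing history.
charters[0].notes.append("Bid of 0.001 was accepted; is that intended?")

for c in charters:
    print(c.mission, "-", c.areas)
```

A record this small is cheap to write before a session and cheap to update after one, which is the point: the mission carries the intent, and the notes carry the story.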

To Repeat or Not To Repeat

Imagine this.

You've been testing a new feature -- talking to developers, asking questions, discovering problems, and getting fixed code back. At the end of the cycle, right before you want to release, you want to do one last pass to see if any new issues have been introduced by all that recent change.

This very scenario is the most popular reason for having a large suite of documented test ideas. When you have one, the decision for what to do before a release seems simple -- run all the tests one last time, just in case. From the tester's perspective, though, repeating the same (or nearly the same) tests after having already performed them feels like drudgery. Most people don't want to do it.

Maybe we don't have to run all the tests every time we release new software; complete repetition isn't always needed.

My colleague, Matt Heusser, likes to refer to regression testing as a dial, like a volume control, that we can spin up or down depending on the need. Every time I help release software, I like to get an idea of the parts of the software that have changed. With that list, I can go to a few in-the-know programmers, and talk to them about their experiences with that part of the code and the concerns they might have. This helps me to think about risk -- what is important in this release, and what areas of the software might be in danger.

With that information, I can select specific sets of test charters and themes to look at before the release. I might not perform the tests exactly the same way as the person who wrote the documentation -- chances are I'll do things differently on purpose -- but they can still be a valuable reference. By carefully planning pre-release testing based on risk instead of a 'run all the tests' strategy, we immediately shorten the testing cycle.
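The selection step described here (take the list of changed areas from the programmers, then pick the charters that touch them) can be sketched as a simple filter; the charter names and area tags below are made up for illustration:

```python
# Hypothetical charter list, each tagged with the product areas it covers.
charters = {
    "boundary checks on the bid field": {"bidding"},
    "submit failures on the buy now page": {"checkout", "payments"},
    "login with expired sessions": {"auth"},
    "search result paging": {"search"},
}

def charters_for_release(changed_areas):
    """Pick only the charters whose areas overlap what changed this release."""
    changed = set(changed_areas)
    return [name for name, areas in charters.items() if areas & changed]

# Developers tell us payments and auth changed, so the dial turns toward those.
selected = charters_for_release(["payments", "auth"])
print(selected)
```

The "dial" is just how wide you make `changed_areas`: pass every area for a full regression pass, or only the risky ones for a short cycle.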

Tend Your Garden

I worked on one project that leaned heavily on detailed specifications. A product manager would talk with customers to find out what they needed, then create a specification document detailing every bit of minutiae that went into the new feature, down to the size and location of buttons. This spec was signed off on like a contract, so any deviation was a big deal and took time to revise and get accepted before development could get going again.

Even after all that, the specification would still sometimes have mistakes. After a few more iterations on the feature, it was just plain out of date and wasn't useful any more. And so goes our test documentation.

Unless you are working on an older legacy software product that is mostly in maintenance mode, software changes. Every time the software changes, we are pushed a little further away from the test documentation we created. Eventually, you'll open up a test description and have absolutely no clue where to start. You might make this a little easier to deal with by spending a little time each release updating your docs.

My personal preference is to avoid this altogether. Every specific navigation step, 'expected' outcome, and reference to a field label increases the amount of time you will spend fussing over the docs in the future. What I do instead is focus on describing the value of the feature to the user, along with some details about testing themes -- what data should and should not be usable, what error conditions to look for, what questions I have, and what problems I found.

Focusing on the test idea, and telling the story of the testing I performed, creates information that will be much longer lived and more useful than highly detailed test scripts.

It Doesn't Have To Be A Script

What comes to mind when you hear the term "test management?" For me, it's pictures of lists and instructions, like a flashback to something I tried to repress. Test documentation really doesn't have to look like that, though. One of the more popular ways to describe test ideas I am seeing at conferences now is through mindmaps. Instead of writing steps that detail inputs and outputs, in a mindmap I could create a node for each different aspect of the test work. For example, if I wanted to test the bid field on eBay, I could create nodes for data types, internationalization, data length, and workflow. Off of each of those, I'd create a few more nodes as reminders of things that might be interesting to test.

Mindmaps are a great way to categorize and share your ideas in a very light way. These, in addition to your test story, can make for powerful and useful documentation.
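The eBay bid-field example could be captured as a small nested structure, one node per branch; the four top-level nodes come from the example, while the leaf reminders under them are hypothetical:

```python
# A mindmap is just a tree: each node's children are reminders, not steps.
bid_field_map = {
    "bid field": {
        "data types": ["letters", "negative numbers", "decimals"],
        "internationalization": ["comma vs. period separators", "currency symbols"],
        "data length": ["empty", "one digit", "very long input"],
        "workflow": ["bid then retract", "outbid notifications"],
    }
}

def leaves(node):
    """Count the test-idea reminders hanging off the map."""
    if isinstance(node, list):
        return len(node)
    return sum(leaves(child) for child in node.values())

print(leaves(bid_field_map))  # each leaf is a test idea to explore
```

Whether you keep the map in a drawing tool or as plain data like this, the value is the same: branches to explore, not steps to obey.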

Managing test documentation can be a messy afterthought, or part of keeping your test machine running smoothly. If you absolutely must have test documentation, a little strategy and forethought will go a long way. Developing ideas on what useful test documentation looks like certainly won't hurt either.