How to Write Test Cases in Software Development

Software testing starts off simply. You write a tiny piece of software, and then you see whether it does what you set out for it to do. That, distilled to its core essence, is a test case.

To visualize it in concrete terms, imagine something simple. You write a piece of software designed to open a message box that says, "hello world." Then you run the software, looking for that message box. If you see it, the test case passes. If you do not see it, the test case fails.
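Here's what that might look like in code. This is a minimal sketch in Python (pytest style), with a hypothetical greet() function standing in for the software under test; for simplicity, the message box becomes a return value:

```python
# A minimal sketch of the "hello world" test case (pytest style).
# greet() is a hypothetical stand-in for the software under test.

def greet():
    return "hello world"

def test_greet_says_hello_world():
    # The expected behavior: the software produces the "hello world" message.
    # If the assertion holds, the test case passes; if not, it fails.
    assert greet() == "hello world"
```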

Of course, software only starts at "hello world"; from there, its complexity grows at a rapid clip. Your testing efforts must keep pace, and that's what makes the straightforward notion of testing so complicated.

The Compelling Factors for Test Cases

What, specifically, drives this complexity? Obviously, more software entails more testing. But that relationship alone would suggest only a linear increase in complexity, and there are many more nuances to consider.

To begin, you have to understand the complicating factor of conditional complexity. To picture this, imagine a couple of trees. The first, a sad sight, has only a trunk. The second stretches up toward the heavens, branching out in every imaginable direction toward a canopy of leaves. Now imagine tests as traversals from the trunk to each leaf. With the trunk-only tree, you just work your way up once. But to 'test' the leafy tree, you have to work your way exhaustively along every branch to every leaf. Software grows like that canopy tree: every decision it makes as it runs adds branches and sub-branches, and each one makes testing more difficult.
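In code, that branching looks something like this sketch, built around a hypothetical classify_age() function. Each branch is a leaf, and each leaf needs its own test case:

```python
import pytest

# A hypothetical function with three branches. Covering it means writing
# one test case per branch -- one traversal per "leaf."

def classify_age(age):
    if age < 0:
        raise ValueError("age cannot be negative")
    elif age < 18:
        return "minor"
    else:
        return "adult"

def test_negative_age_is_rejected():
    with pytest.raises(ValueError):
        classify_age(-1)

def test_age_under_18_is_minor():
    assert classify_age(12) == "minor"

def test_age_18_or_over_is_adult():
    assert classify_age(30) == "adult"
```

Three branches already mean three test cases; nest a few more conditionals inside each one, and the count multiplies.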

Of course, test cases draw additional complexity from other factors as well. "Hello world" as a program doesn't deal with users. Users add decision points and thus create many conceptual branches. But they also add chaos. You need test cases that cover reasonable success scenarios, such as entering your birth date. But they must also cover unreasonable scenarios ("your birth date can't be in the future") and nonsensical scenarios ("you can't type 'dog' and a smiley face for your birth date").
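Here's a sketch of all three scenario types as test cases, assuming a hypothetical validate_birth_date() function:

```python
import pytest
from datetime import date

def validate_birth_date(raw):
    # Hypothetical validator under test: expects an ISO date (YYYY-MM-DD)
    # that isn't in the future.
    parsed = date.fromisoformat(raw)  # raises ValueError on nonsense input
    if parsed > date.today():
        raise ValueError("birth date cannot be in the future")
    return parsed

def test_reasonable_birth_date_is_accepted():
    assert validate_birth_date("1990-06-15") == date(1990, 6, 15)

def test_future_birth_date_is_rejected():
    with pytest.raises(ValueError):
        validate_birth_date("2999-01-01")

def test_nonsensical_input_is_rejected():
    with pytest.raises(ValueError):
        validate_birth_date("dog :)")
```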

And, speaking of chaos, you must also test for unexpected or weird conditions. What happens if someone unplugs the server? Does the user receive an explanatory message? Does the data wind up in a coherent state?
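Even that scenario can be sketched as a test case. Here, a hypothetical save_profile() function is exercised against a stub client that simulates the unplugged server by raising Python's built-in ConnectionError:

```python
# A stub client that simulates the "unplugged server" by always failing.
class DeadServerClient:
    def post(self, path, payload):
        raise ConnectionError("server unreachable")

# Hypothetical code under test: it must fail gracefully, not crash.
def save_profile(client, payload):
    try:
        client.post("/profile", payload)
        return "saved"
    except ConnectionError:
        return "error: could not reach the server; please try again"

def test_server_outage_produces_explanatory_message():
    result = save_profile(DeadServerClient(), {"name": "Ada"})
    assert "could not reach the server" in result
```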

What Is a Test Case?

So far, we've had a look at the general concept of testing, with the vague idea that the atomic units of that activity are called test cases. But let's now get a little more specific about what we actually mean by a test case.

Software Engineering Fundamentals has a standard definition for a test case:

A test case is a set of conditions or variables under which a tester will determine whether a system under test satisfies requirements or works correctly.

The above definition makes its point, but it feels a little... formal. Let's say instead that a test case is a means of verifying an expected behavior of the software. To do that, a test case needs to describe certain other essentials, such as how to set up the application to elicit that behavior and what result to expect.

But, more philosophically, a test case represents a "slice" of the software. To understand this, think of an entire piece of software, such as your browser. If you were assigned a task like "test the browser," you'd be overwhelmed by the request and want to divide it into smaller tasks. Test cases help you do this. They slice the command "test the browser" into smaller pieces like "verify that when you open a new tab, the back button is disabled."

The Anatomy of a Test Case

What do these slices look like? What core components do they have? And what qualifies them as well-formed?

Well, let's take a look at a quick list of properties. This isn't an exhaustive list of every piece of information you could include in a test case; rather, it gives you an idea of the information you'll need to assemble these into an effective testing strategy. For each test case, you'll want the following (a sketch of what this looks like in code follows the list):

  • A method of uniquely identifying the test case. You wouldn't necessarily think of this immediately, but as your test plan grows you'll need this to prevent chaos.
  • A means of relating the test case to a business or non-functional requirement. Otherwise, you risk testing for behaviors that do not hold any significance.
  • A concise, readable description. Someone familiar with the software should be able to glance at this and understand quickly.
  • Detailed setup information. To verify a behavior, you need to know exactly how to get the software in a state where the behavior is possible.
  • Execution steps. Once set up, how exactly do you drive the behavior under test?
  • Repeatability. A well-written test case contains enough end-to-end detail that you can execute it again and expect the same result.
  • Independence. Test cases should be able to stand alone and not depend on a certain sequence of execution with other test cases.
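As promised, here's a sketch of how those properties can map onto an automated test. The identifiers (TC-042, REQ-17) and the toy Browser class are hypothetical; the structure is the point:

```python
# Test case TC-042, linked to requirement REQ-17: "a new tab starts
# with an empty history." (Both identifiers are hypothetical.)

class Browser:
    # Toy stand-in for the system under test.
    def __init__(self):
        self.tabs = []

    def open_tab(self):
        tab = {"history": []}
        self.tabs.append(tab)
        return tab

    def back_enabled(self, tab):
        return len(tab["history"]) > 0

def test_tc042_new_tab_disables_back_button():
    """TC-042 / REQ-17: opening a new tab leaves the back button disabled."""
    # Setup: a freshly launched browser with no open tabs.
    browser = Browser()
    # Execution step: open a new tab.
    tab = browser.open_tab()
    # Expected result: no history yet, so "back" must be disabled.
    assert browser.back_enabled(tab) is False
    # Repeatable and independent: each run builds its own Browser and
    # shares no state with other test cases.
```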

Executing Test Cases

After defining a test case, the next step is to execute it. But definition and execution have a subtle, conceptual relationship: the test case defines a template of sorts, and each execution produces a result against that template.

In that sense, the test case has a life cycle. When new requirements arrive, you define test cases that describe their successful implementation. These test cases first verify that the requirements are satisfied, then continue to serve in a regression-testing capacity with each subsequent release. They remain live until the requirement itself becomes defunct.

You should execute test cases and record the results on a continuous basis, which lets you track the health and behavior of the software over time. The best way to do this is to use a good test case management tool that lets you continuously run, track, and link passed or failed test cases, and to pair it with an automated testing tool.

Pitfalls and Pain

All of this sounds pretty promising for having a robust quality assurance process. And, to be sure, test cases have stood guard for countless successful applications. But this strategy is not without pitfalls.

Management staffs everywhere see scarcity of knowledge as the bane of their existence. Good software developers and good testers are hard to find, and they're expensive. And modern management theory places a high premium on identifying repetitive tasks and delegating or automating them.

You can probably imagine the temptation managers face when confronted with a detailed binder full of test cases. "Let's make these so clear and so simple that ANYONE can do this, even without knowledge or training." Then they start chasing a dragon. They hire people with little testing knowledge and even less domain knowledge, and watch them struggle. They demand ever more detailed steps, drilling down into the most minute details and expectations. They create gigantic matrices so these hired testers can run countless variants of near-identical processes.

And eventually, something like this happens: "How could you not mention that clicking that button crashed the ENTIRE application!?"

"The test case just said, 'verify that no error message appears' and none appeared, so the test passed."

The Detail Paradox and Inattentional Blindness

The more detailed you make a suite of test cases, the more mind-numbing their execution becomes. And the more mind-numbing the execution becomes, the more errors creep into it.

In the brief, imaginary dialog from a moment ago, you see one such error vector: inattentional blindness. The test executors become so focused on mundane details that they miss important, major events. Their sense of context goes out the window, and automaton-like adherence to the script trumps all.

On top of that, you have the more mundane problem that mind-numbing work invites mistakes. Even the most well-intentioned people cannot perform rote, repetitive knowledge tasks for hours without their attention slipping. As they work through that gigantic matrix, they will inevitably forget which row and column they just executed, or execute the same one twice.

Seeing the Forest Again

To borrow from an old cliche, productivity-seeking management can easily fall into the trap of encouraging people to look at trees without ever seeing the forest. You look around one day and see that you have tens of thousands of insanely detailed test cases being executed by dozens of temp-to-hire workers and interns. Is this really what testing the software should involve?

If you think of testing in common-sense terms, you realize that you've gotten into something different. When you test things in your day-to-day life, you experiment: "My phone isn't charging, so I'm going to try a different charger to test whether the phone's charger is defective." That's an intelligent deduction that indicates actual understanding of the problem at hand.

You want your testing strategy to get back there. You want your testers seeing forests, even if they do zoom in for closer looks at the trees.

Automate and Trust

So how do you get back there? Well, you can start by recognizing the labor-savings fallacy for what it is. When you make the instructions simple enough for an unskilled temp to execute, you're turning knowledge work into manual labor, and computers are much better suited to that kind of labor. Well-managed teams leverage automation, turning all of that detail into its blueprint. This saves money in the long run, and it frees your staff for the tasks that genuinely require intelligence.
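To sketch what that blueprint can look like: a single parametrized test can replace an entire manual matrix of near-identical cases. The discount() function here is a hypothetical stand-in for whatever behavior the matrix exercised:

```python
import pytest

def discount(customer_type, order_total):
    # Hypothetical rule under test.
    if customer_type == "member":
        return 0.10 if order_total >= 100 else 0.05
    return 0.0

# Each tuple is one row of the old manual matrix; the machine now walks
# the rows identically and tirelessly on every run.
@pytest.mark.parametrize("customer_type, order_total, expected", [
    ("member", 150, 0.10),
    ("member",  50, 0.05),
    ("guest",  150, 0.00),
    ("guest",   50, 0.00),
])
def test_discount_matrix(customer_type, order_total, expected):
    assert discount(customer_type, order_total) == expected
```

Adding a row to the matrix becomes a one-line change, and no human has to remember which row they just finished.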

If well-trained testers are freed up for this intelligent work, they can make a much bigger impact. Let them acquire domain and product knowledge and actually manage the test suite. Let them do exploratory testing, wherein they leverage their domain knowledge into unscripted, context-based exploration of the software's behavior. When they encounter the unexpected, they can then add to the test suite.

Consider the hello world example from the very beginning of the post. Software starts with that, and branches out in complexity at an incredible rate. You're not going to keep up using brute force and cheap labor. You need to find smart people, encourage them to automate, and trust them to do the right thing.