Improve Software Quality With Requirements-Based Testing
The prime directive for software testing and QA professionals is to ferret out software defects. But what if the defects are in the software requirements themselves? Requirements-based testing (RBT) might help.
Faulty requirements can pop up anywhere. For example, imagine you're reserving a vacation suite. The requirements might include the length of stay, number of beds, kitchen, WiFi, proximity to attractions, and so on. If the booking clerk gets just one of these requirements wrong, the experience could be quite different from what the vacationer intended.
This scenario contains three major potential points of failure: describing, interpreting, and recording the requirements. That is, the vacationer might fail to include one or more requirements or might describe them inaccurately; the booking clerk might misunderstand what he was told or record something incorrectly; and the recording system might fail or malfunction.
To draw the parallel with a software project: the vacationer represents the application’s user, the booking clerk is the developer or business analyst, and the reservation system is anything from SmartBear’s DevComplete to the back of an envelope.
Adding to these potential points of failure is change: changing business needs, changing user needs, or the discovery that a particular requirement might be too costly, complex, or is just no longer needed.
For as long as there have been IT projects, there have been IT-project failures, and bad requirements are often the chief culprit. The most recent study (PDF) I could find was published in 2011 by Project Management Solutions; it indicates that requirements were the primary cause of failure for more than 20,800 projects across the 134 companies involved in the survey.
What's a tester to do? Well, it should seem fairly obvious that the first step should be to correct the requirements. According to Bender RBT Inc., a self-proclaimed expert in requirements-based testing, that's exactly what needs to be done. The consultancy is built around the process that first "ensures that the [requirement] specifications are correct, complete, unambiguous and logically consistent," and then sets out to develop a manageable collection of test cases that allow testers to arrive at "the right answer for the right reason."
"That's what I do all the time," said Michele Kennedy, a software developer with TSI, which develops mapping and geographic information systems. Although she has never followed a formal RBT process, per se, her usual modus operandi includes fixing up or clarifying requirements before getting started with a new project. "Just today I was working to solve a specific problem that required a particular data set as a result," she said, but an anomalous result was coming back. "I was expecting my data to look one way and when it came in it was missing something."
Two weeks into the project, someone told her that the software would have to cope with not only that problem, but several others. In other words, the original requirement was incomplete. "The user has to give me all the cases and the data I need to look for. This time they didn't include every single exception I might encounter and how to handle them," Kennedy explained. Obviously, this requirement hole was a showstopper, and it had to be fixed before the application could be considered complete.
Of course, it's best to document requirements as completely as possible and to build tests into the requirements. According to Kennedy, this is particularly important when engaging contract developers. "When you have the requirements on paper and it's all documented, we don't have to spend as much time at the customer's site. That makes it more cost effective." It also allows errors to be found and corrected early, when the cost of doing so is low. To this end, Bender describes an eight-step process for test development and execution, and explains how RBT integrates directly with steps 1, 2, and 6.
1. Define Test Completion Criteria
What's the end game for your testing? It's important, according to Bender, to define exactly what it means to be finished with testing. Your plan should define goals for the number and types of tests to be developed and for the quality those tests are meant to achieve. For example, "Testing is complete when all functional variations, fully sensitized for the detection of defects, and 100% of all statements and branch vectors have executed successfully in single run or set of runs with no code changes in between."
2. Design Test Cases
Bender describes five characteristics to be captured by each logical test case:
● System state prior to testing
● Data in the database
● Inputs
● Expected outputs
● Final system state
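As a concrete sketch, the characteristics of a logical test case could be captured in a simple data structure. The field names and booking-themed values below are mine, not Bender's notation:

```python
from dataclasses import dataclass

@dataclass
class LogicalTestCase:
    """One logical test case, recording the characteristics Bender
    says each should capture. Field names are illustrative."""
    initial_state: dict         # system state prior to testing
    database_rows: list         # data that must exist in the database
    inputs: dict                # inputs supplied during the test
    expected_outputs: dict      # outputs the requirements say must appear
    expected_final_state: dict  # system state after the test completes

case = LogicalTestCase(
    initial_state={"user_logged_in": True},
    database_rows=[{"suite_id": 101, "beds": 2, "wifi": True}],
    inputs={"action": "reserve", "suite_id": 101, "nights": 5},
    expected_outputs={"confirmation": True},
    expected_final_state={"suite_101_available": False},
)
```

Writing cases down this way keeps the "expected" side explicit, which matters later when results are verified against the requirements rather than against the code.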
3. Build Test Cases
Bender defines two parts to building test cases from their logical descriptions: creating the necessary data, and building the components needed to support testing. Such components might include the logic necessary to navigate to the portion of the program under test.
4. Execute Tests
Once you've built the tests, execute them against the software under test and record the results.
5. Verify Test Results
Once test results are in, they should be compared with the results expected according to application requirements.
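In code, that comparison might look like the following sketch; the dictionaries of expected and actual outputs are illustrative:

```python
def verify_results(expected: dict, actual: dict) -> list:
    """Return a list of mismatches between expected and actual outputs.
    An empty list means the test passed for the right reason."""
    mismatches = []
    for key, want in expected.items():
        got = actual.get(key, "<missing>")
        if got != want:
            mismatches.append(f"{key}: expected {want!r}, got {got!r}")
    return mismatches

print(verify_results({"confirmation": True, "nights": 5},
                     {"confirmation": True, "nights": 4}))
# ['nights: expected 5, got 4']
```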
6. Verify Test Coverage
Be sure to exercise all functions and execution paths of the application under test so that no code goes uncovered. Testers should keep careful records of the functional and code coverage achieved by successful tests.
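One way to keep such records is a small coverage ledger mapping passing tests to the requirement-derived functions they exercised. The function names and test IDs below are invented for illustration:

```python
class CoverageLedger:
    """Track which requirement-derived functions each passing test exercised."""

    def __init__(self, required_functions):
        self.required = set(required_functions)
        self.covered = set()
        self.by_test = {}  # test ID -> functions that test exercised

    def record_pass(self, test_id, functions_exercised):
        """Record a successful test and the functions it touched."""
        self.by_test[test_id] = set(functions_exercised)
        self.covered |= set(functions_exercised)

    def uncovered(self):
        """Functions no successful test has exercised yet."""
        return self.required - self.covered

ledger = CoverageLedger({"search", "reserve", "cancel", "invoice"})
ledger.record_pass("TC-01", {"search", "reserve"})
ledger.record_pass("TC-02", {"invoice"})
print(sorted(ledger.uncovered()))  # ['cancel']
```

In practice a coverage tool would feed this ledger automatically; the sketch just shows what "verify test coverage" means as a record-keeping step.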
7. Manage and Track Defects
A defect-tracking tool is recommended; it can help ensure that defects are tracked to resolution, and it can also provide statistics and trends.
8. Manage the Test Library
This involves keeping track of the test cases and the programs under test: which tests were executed, and whether they passed or failed.
But before most of that can take place, Bender's 12-step process for requirements-based testing should be well understood, because the first 11 steps take place before any tests are executed against program code.
- Validate requirements against objectives
- Apply scenarios against requirements
- Perform initial ambiguity review
- Perform domain expert reviews
- Create cause-effect graph
- Check logical consistency
- Review of test cases by specification writers
- Review of test cases by users
- Review of test cases by developers
- Walk test cases through design
- Walk test cases through code
- Execute test cases against code
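The cause-effect graphing step models requirements as Boolean relations between causes (input conditions) and effects (outputs); enumerating the cause combinations then yields candidate test cases. A minimal sketch, using a made-up booking rule rather than Bender's graphing notation:

```python
from itertools import product

# Hypothetical requirement, expressed as cause-effect logic:
# a reservation is confirmed only if the suite is available AND the
# requested stay length is allowed; a waitlist offer is made whenever
# the suite is unavailable.
def effects(suite_available, stay_allowed):
    return {
        "confirm": suite_available and stay_allowed,
        "offer_waitlist": not suite_available,
    }

# Each combination of causes is a candidate test case that sensitizes
# a distinct path through the requirement logic.
for causes in product([True, False], repeat=2):
    print(causes, "->", effects(*causes))
```

Real cause-effect graphs prune this enumeration down to a manageable set; the point here is only that the requirements themselves, not the code, drive which cases get written.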
Of course, requirements-based testing is not the be-all and end-all of software testing. As career software developer and tester Matt Heusser illustrates, a program might very well pass all of its requirements-based tests and still contain plenty of defects.
He described a Windows-based time and billing application he once tested that had a dialog box with four buttons. After successfully testing the functionality behind each of the buttons, he tried resizing the dialog box by dragging one of its corners; the buttons didn't look right. "That problem would never have been found with RBT." That's one of the problems with requirements-based testing, said Heusser. "There's no guarantee requirements will cover all cases and find all bugs."
RBT is best when used in combination with other techniques. That’s what Nathan Jakubiak does. A developer of 10 years' experience, he is now with test-tools maker Parasoft. "We typically have a set of tasks to implement requirements, and defined test cases that link back to those requirements," Jakubiak said. At Parasoft, developers (who also serve as testers) break requirements into sets of functionality, and then go to work on test cases. "When we implement requirements, we have to have test cases. Each has to have test cases defined – manual or automated – to run every night, and those are linked back to the requirements. So we can track functions back through requirements."
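What Jakubiak describes amounts to a traceability matrix linking requirements to the tests that verify them. A minimal sketch of the idea, with invented requirement and test IDs:

```python
# Hypothetical traceability map: each requirement ID lists the test
# cases that verify it, so failures can be traced back to requirements.
traceability = {
    "REQ-101": ["TC-01", "TC-02"],  # reserve a suite
    "REQ-102": ["TC-03"],           # cancel a reservation
    "REQ-103": [],                  # invoice generation: no tests yet!
}

def untested_requirements(matrix: dict) -> list:
    """Requirements with no linked test case: gaps a nightly run can't catch."""
    return [req for req, tests in matrix.items() if not tests]

print(untested_requirements(traceability))  # ['REQ-103']
```

Commercial test-management tools maintain this linkage for you, but even a flat map like this makes the nightly run's blind spots visible.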
Jakubiak agrees that RBT is not designed to catch everything, but says it is useful for establishing a baseline of functionality. "It serves as a double check for what we're developing. We find a lot of things [using RBT], but we have other tests that we go through too." One such technique is exploratory testing, which Jakubiak said is similar to what Heusser employed in his unscripted tests of that time and billing app.
In Heusser's experience, RBT also can help organizations build better requirements. "The problem with many projects is that it's assumed that requirements mean something. [Once past] the symbolic, we eventually stopped writing what was needed and wrote down what we agreed on. What RBT pushed was to have good requirements, without which you end up with namby-pamby confirmatory tests."
Unless you're building apps for avionics, medical devices, or another regulated industry in which requirements are mandated by law, this can happen to any project, he said. "Building better requirements costs time and money." The bottom line, says Heusser, is that development and test teams must understand the program requirements for anything beyond the trivial. If there's any level of deep business logic, you need to understand what the software is trying to accomplish before you can possibly know whether it has succeeded.