A simplistic view of a testing project looks something like this: feature tests, plus the accumulated set of every test ever created in the history of the product, used for pre-release testing. Run the feature tests, run through the pre-release tests, and then boom. It's time to release.
Real testing, however, is all over the place. One minute you're working the script step by step, the next minute you find a bug that takes you in a completely new direction.
Software testing is something that could go on until the sun burns out. How do we build a software testing strategy that gets the most benefit out of testing, knowing that we will have to stop before the work is completely done?
Selection and Combinations
Most products aren't single-serve anymore. We write one web app and then see it used in 10 different web browsers and untold numbers of mobile platforms. Consider an e-commerce project built for the web and mobile. On the web, customers access the product in any number of browsers -- Internet Explorer all the way back to version 7, Chrome, Firefox, Safari, and a few other outliers. Then, there is a whole battery of mobile device platform and operating system combinations to consider.
One change to the software needs to be tested in about 20 different environments to get full environment coverage, and that doesn't even begin to cover what happens once you are inside those environments.
That problem is what you get when you consider every possible combination of tests you can run and multiply them out. In tester land, we call that the combinatorial explosion.
If each change has to be tested in 20 different environments, how can you possibly get through it all before release day? You can use a two-pronged approach here:
- All-pairs testing tools to visualize the work
- Sampling to find what is important
Combinatorial tools do a little math in the background to help you see exactly how many unique tests are possible for a group of variables. Let's say you have two radio button sets to test on a webpage, one for high school graduation status and the other for over/under 21 years of age. If you test those radio buttons in only one browser, there are 4 possible combinations, 5 if you include having no value selected at all. That seems simple enough.
But, in modern software, you have that list of browsers and platforms looming over you. Let's get more realistic and say there are 10 different browser and platform combinations that have to be tested before those new radio buttons can be pushed to production. Those 5 tests just exploded to 50, and we certainly aren't getting more time.
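The arithmetic behind that explosion is easy to sketch. In this rough illustration, the five radio-button cases and the ten environment names are made-up stand-ins, not real data:

```python
from itertools import product

# Five test cases for the radio-button example: four selected
# combinations plus one "nothing selected" check.
radio_tests = [
    ("graduate", "over-21"),
    ("graduate", "under-21"),
    ("non-graduate", "over-21"),
    ("non-graduate", "under-21"),
    ("unselected", "unselected"),
]

# Ten hypothetical browser/platform combinations.
environments = [f"browser-{n}" for n in range(1, 11)]

# Cross every test with every environment: 5 x 10 = 50 runs.
full_matrix = list(product(radio_tests, environments))
print(len(full_matrix))  # 50
```

Each new variable multiplies the total, which is why the count grows so much faster than the feature itself.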
What do you do?
The first step is taking a look at what browsers your customers are actually using.
Your list of 10 browser and environment combinations might have 50 percent of your users on Internet Explorer 10, 20 percent on IE 9, another 10 percent on Chrome, and then small clusters of users scattered on the other devices. Those numbers immediately point to where you will want to start based on customer impact.
Spend most of your time on the three most heavily used environments, then do cursory testing on the remaining environments if all things are equal. Sometimes, they aren't.
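Ranking environments by usage share makes the split mechanical. The first three numbers below come from the example above; the remaining environments and their small shares are made-up placeholders:

```python
# Share of customers on each environment (first three from the text,
# the rest are hypothetical filler).
usage_share = {
    "Internet Explorer 10": 0.50,
    "Internet Explorer 9": 0.20,
    "Chrome": 0.10,
    "Safari": 0.06,
    "Firefox": 0.05,
    "Android browser": 0.05,
    "Windows Mobile": 0.04,
}

# Rank by share, deep-test the top three, smoke-test the rest.
ranked = sorted(usage_share, key=usage_share.get, reverse=True)
deep, smoke = ranked[:3], ranked[3:]
print(deep)   # ['Internet Explorer 10', 'Internet Explorer 9', 'Chrome']
print(smoke)
```

The same sort works with revenue per environment instead of usage share; only the metric changes, not the approach.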
Another approach is doing test selection based on financial impact to the company.
It is pretty easy to figure out where to set focus first if customers make $500,000 in purchases each week using iOS 9 on an iPad, and $20,000 each week on Windows Mobile.
The other side of the test selection coin is picking and performing the tests that will teach you something important and not wasting time on everything else. There are an infinite number of inputs that could go into a new age field on your user profile page. Which ones are important?
To start, there is the range of values that might be actual ages, 1 to 105 or so. There are things that aren't numbers like 'asdf!@#$'. There are things that aren't typically ages like -1, or 1.5. But which values do you try? All of them? Probably not. There are a few categories here -- valid values, boundaries, and things that are not ages. Start by testing at the boundaries -- 0, 1, 105, and 106 -- to see how the system handles good values and things outside of the valid range. After that, choose one or two input values that aren't ages, -1 and 'asdf!@#$' for example.
These should both teach you how the system handles bad data. Careful test selection helps reduce the number of tests from way too many to less than 10. This gets you closer to a reasonable amount of work on new changes, but what about pre-release testing?
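Those boundary and bad-data checks translate directly into code. The validator below is hypothetical, written only to match the 1-to-105 rule described above:

```python
# A made-up validator for the age field: accepts whole-number ages
# from 1 to 105 inclusive, rejects everything else.
def is_valid_age(value):
    if not isinstance(value, int) or isinstance(value, bool):
        return False
    return 1 <= value <= 105

# Boundaries: just outside, on the edge, just outside again.
assert is_valid_age(0) is False
assert is_valid_age(1) is True
assert is_valid_age(105) is True
assert is_valid_age(106) is False

# A couple of values that are not ages at all.
assert is_valid_age(-1) is False
assert is_valid_age(1.5) is False
assert is_valid_age("asdf!@#$") is False

print("boundary checks passed")
```

Seven targeted checks cover the interesting behavior that an infinite input space could otherwise hide.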
In a waterfall environment, a regression testing strategy usually amounts to taking all of the test cases over the past few releases and cramming them into a new folder. Over the next few days (or weeks), the team works through the list test by test, passing some, failing and documenting bugs for others, and ticking off items in a pre-release checklist.
Test cases were documented so that anyone in the company could magically become a tester in the last few days before a release, when it became clear that there was too much work and too few people.
That strategy didn't work well for waterfall shops, and it certainly won't work for companies releasing every other week or more.
Pre-release testing helps discover new problems introduced by the changes that happened over the course of the last sprint. Think of this approach as something that you can zoom in or out on, depending on what changed in the release. If the release is mostly new features, do two things:
- First, talk with the developers about the integration points, places where the new code interacts with older parts of the application.
- Next, look at the source code repository commit messages for the release branch to see if anything you didn't know about snuck in.
These two pieces of information will change the conversation from 'test all the things' to 'these areas of the software changed, so we are going to focus on testing X, Y, and Z'. Maybe initially, this gets you from 4 days of testing to 3, but that is still a full day you get back.
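One way to turn commit information into a focused test list is to map changed paths to product areas. The path prefixes, area names, and changed-file list below are all invented for the sketch:

```python
# Hypothetical mapping from source paths to testable product areas.
area_map = {
    "src/checkout/": "checkout flow",
    "src/search/": "search",
    "src/profile/": "user profile",
}

def areas_to_test(changed_files):
    """Translate changed file paths into the areas worth focused testing."""
    areas = set()
    for path in changed_files:
        for prefix, area in area_map.items():
            if path.startswith(prefix):
                areas.add(area)
    return sorted(areas)

# Example: files touched on the release branch (made up).
changed = ["src/checkout/cart.py", "src/checkout/tax.py", "src/search/index.py"]
print(areas_to_test(changed))  # ['checkout flow', 'search']
```

The output is the short "we are going to focus on X, Y, and Z" list instead of the entire regression folder.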
It is tempting to get a little nervous when larger architectural changes are introduced, upgrading your version of Elasticsearch for example. But the same reductionist strategy can be used. The larger changes probably affect more areas in your product (everywhere search is used, in the Elasticsearch example), but you might be able to reduce testing to one of those places instead of retesting every search field individually.
For all of its faults, pre-release testing is one area where I think the agile world's fixation on tooling and automation can be very useful. If developers are writing unit tests along with new feature code that get checked in and run with CI, other engineers are writing checks for services and APIs at a higher level, and testers are writing a few checks in the UI while performing other tests without code, then you have layers of safety built in, like the net below a circus trapeze.
Every time a change is made, each of these layers of checks can be run against the new build, alerting the technical staff to anything that no longer works because of the change. It is a long road to build that safety net, but little by little the release procedure gets simpler and faster.
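The lowest layer of that net is just small, fast checks that run on every commit. Here is a minimal sketch; the discount function and its rules are hypothetical, standing in for any piece of new feature code:

```python
# A made-up piece of feature code: apply a percentage discount to a total.
def apply_discount(total, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (1 - percent / 100), 2)

# The unit-level check that CI runs against every new build.
def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(100.0, 0) == 100.0
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass  # out-of-range percent is correctly rejected
    else:
        raise AssertionError("expected ValueError for out-of-range percent")

test_apply_discount()
print("unit layer passed")
```

Service, API, and UI checks stack the same idea at higher levels; any layer failing flags the change before it reaches release testing.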
It is easy to look at testing work and say that there is just too much to do and not enough time to do it in. Taking a closer look at the tests you perform and questioning what each one will tell you is a step in the right direction.
Test Management in an Agile World
Today, a majority of teams are still using Excel spreadsheets to organize and manage test cases. And while this old-school method of test management still makes sense for some, a growing number of teams are being held back by their reliance on Excel.
In our eBook, Test Management in an Agile World: Implementing a Robust Test Management Strategy in Excel and Beyond, we look at how to implement a successful test management strategy. Find out how to manage your growing list of testing responsibilities with a test management tool.
Get your copy.