How to Increase Test Coverage Over Time with Automation

  March 05, 2018

When it comes to software quality, we want to test as much code as humanly (or mechanically) possible, right? Actually, yes and no. For each test cycle, it’s important to consider multiple strategies for measuring test coverage and to put a system in place that maximizes coverage over the long term as well.

Test coverage is one of the measurements of test quality that tells us how much of the application under test has been tested. You can think about it like sweeping the floors of a house. Imagine if I only included sweeping the bedrooms in my sweeping coverage criteria. With those criteria, if I swept 100% of the bedrooms, would that mean that the whole house is clean? No, because I completely missed the kitchen, dining room, bathrooms… You get the point! Therefore, we must always be careful with test coverage and recognize that it has its limitations.

Test coverage is useful for defining which parts of the software to cover with tests. It also tells us when we have tested sufficiently, gives us ideas of what else to test (thus expanding coverage), and lets us quantify the extent of our testing. It’s a great measuring stick, but even with 100% test coverage, we’re not guaranteed that our application is 100% bug-free.

Even if you only manage to achieve 20% coverage, that isn’t necessarily a bad thing. The ideal amount of test coverage to aim for should be based on your priorities and your analysis of risk.

There are many ways to consider test coverage. Here we’ll examine code coverage, data-oriented coverage, and the plethora of other techniques at a tester’s disposal.

Code Coverage

Code coverage is the most popular metric for measuring test coverage. It measures how many lines are covered by the test cases, reporting the total number of lines in the code and the number of lines executed by the tests. Essentially, it’s the degree to which the source code of a program is executed when a test suite runs. The higher the code coverage, the lower the chance of undetected bugs making it to production. This measurement can also be broken down into different levels: not only lines of code, but also branches, decisions inside logical constructs, etc.
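For example, here is a minimal Python sketch (the function, file names, and test are invented for illustration) of why line coverage alone can be misleading:

```python
# discount.py -- hypothetical module under test
def apply_discount(price, is_member):
    if is_member:
        price = price * 0.9  # members get a 10% discount
    return price

# test_discount.py -- hypothetical test
def test_member_discount():
    assert apply_discount(100, True) == 90
```

This single test executes every line, so a line-coverage report (for instance from coverage.py) shows 100%, yet the non-member path (is_member=False) is never exercised; branch coverage, enabled in coverage.py with the --branch option, would expose that gap.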

Data-Oriented Coverage

With data-oriented coverage, you have input and output parameters, each of them with its own domain (the spectrum of possible values it can take). If you try to cover all the possibilities, you end up with the Cartesian product of the domains, because you would be testing every possible combination.

On the other hand, you can test less and go with “each choice” coverage, which means that you cover each possible value at least once. There is also all-pairs coverage, in which every pair of values appears together in at least one test case; it is empirically said to have the best cost-benefit relationship, being the best middle ground between each-choice and all combinations.
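To make the difference in test effort concrete, here is a minimal Python sketch; the parameter names and domains are invented for illustration, and all-pairs would fall between the two counts (it is usually generated with a dedicated tool such as PICT or the allpairspy library).

```python
from itertools import product

# Hypothetical input parameters for a checkout flow and their domains.
domains = {
    "payment":  ["credit card", "debit card", "voucher"],
    "shipping": ["standard", "express"],
    "customer": ["guest", "registered", "premium"],
}

# All-combinations coverage: the Cartesian product of every domain.
all_combinations = list(product(*domains.values()))
print(len(all_combinations))  # 3 * 2 * 3 = 18 test cases

# Each-choice coverage: every value appears in at least one test case,
# so the number of cases equals the size of the largest domain.
largest = max(len(values) for values in domains.values())
each_choice = [
    tuple(values[i % len(values)] for values in domains.values())
    for i in range(largest)
]
print(len(each_choice))  # 3 test cases
```

Even with only three parameters the gap is 18 cases versus 3; with realistic parameter counts the Cartesian product quickly becomes impractical, which is why each-choice and all-pairs are so useful.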

Other Kinds of Coverage

In addition to those previously mentioned, there are several more ways to cover the product that you are testing, such as state machines, decision tables, decision trees, equivalence partitioning and boundary values, etc. It’s very interesting to see that each technique is supported by an “error theory”, which takes into account the typical errors that programmers commit. For example, equivalence partitioning and boundary values target errors such as using a “<” instead of a “<=”, misunderstanding the business logic, etc.
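As a small illustration of that error theory, here is a hypothetical Python example; the business rule, threshold, and function name are invented, but the boundary-value cases are exactly the ones that would catch a “>” written where “>=” was intended:

```python
# Hypothetical business rule: orders of 100 or more get free shipping.
FREE_SHIPPING_THRESHOLD = 100

def qualifies_for_free_shipping(order_total):
    # A typical off-by-one bug would be writing ">" here instead of ">=".
    return order_total >= FREE_SHIPPING_THRESHOLD

# Boundary-value test cases: just below, exactly on, and just above the limit.
def test_shipping_boundaries():
    assert qualifies_for_free_shipping(99.99) is False   # below the boundary
    assert qualifies_for_free_shipping(100) is True      # on the boundary: catches ">" vs ">="
    assert qualifies_for_free_shipping(100.01) is True   # above the boundary
```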

Additionally, there are other kinds of test coverage that are not related to lines of code or inputting test data. One thing we must cover is mobile fragmentation: are we covering the main mobile devices, operating systems, and screen sizes? When it comes to browsers and operating systems, we must consider how our web system will behave in any combination of operating systems and browsers and how many combinations we should test. Lastly, we must think about the test environment, context, etc.

Laying Out a Plan to Optimize Coverage in the Long-Term

What happens when you never have enough time to reach certain criteria for your test cycles? In this case, you might want to consider the following method for improving test coverage over multiple test cycles.

Imagine we have different features to test on different browsers and have organized the test cases into different test suites, each one with its own priority. We need to execute the most critical suites against all browsers, but the rest we can distribute, executing each one on a different browser. In the following test cycles, we can rotate the suite/browser pairings. That way, we do not have perfect coverage in any single test cycle, but over multiple test cycles we improve it. We can never be sure that we are done with testing, but when time is scarce, we have to use it wisely and do our best to reduce risk.

Here’s an example of how to plan good test coverage over many test cycles:

[Image: a plan mapping test suites to browsers across test cycles (coverage over time)]

Where it says “date 1”, it could also say “sprint 1”, “iteration 1”, “day 1”, “version 1”, etc. The goal here is to decide which test cases you will execute in each iteration in each environment. For some of them (probably the most critical ones), it’s mandatory to execute the test every time on all browsers. Others can be divided into groups and executed on only one browser each, but this has to be done in a clever way so that each one has been executed on each browser by the 4th round.
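Here is a minimal Python sketch of that rotation; the suite and browser names are invented, and the scheduling rule is the point: the critical suites run on every browser in every cycle, while each remaining suite shifts to a different browser each cycle so that the full suite/browser matrix is covered by the fourth round.

```python
browsers = ["Chrome", "Firefox", "Safari", "Edge"]
critical_suites = ["Login", "Checkout"]                      # run on every browser, every cycle
rotating_suites = ["Search", "Profile", "Reports", "Admin"]  # one browser per cycle, rotated

def plan_cycle(cycle):
    """Return the (suite, browser) pairs to execute in a given cycle (1-based)."""
    plan = [(suite, browser) for suite in critical_suites for browser in browsers]
    for i, suite in enumerate(rotating_suites):
        # Shift each rotating suite by one browser per cycle, so that after
        # len(browsers) cycles every suite has run on every browser.
        plan.append((suite, browsers[(i + cycle - 1) % len(browsers)]))
    return plan

for cycle in range(1, 5):
    print(f"Cycle {cycle}:", plan_cycle(cycle))
```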

Here is another example applied to mobile testing in order to reduce risk related to device fragmentation:

[Image: a test plan distributing suites across devices to address mobile fragmentation]

After the third execution, you’d have this coverage:

[Image: the resulting test coverage after the third execution]

Conclusion

Test coverage criteria are very useful, but they don’t guarantee anything. Some criteria are linked to others: when one is neglected, so are the rest, and vice versa. We need to use the ones that best suit our needs, consider the priorities of each module, and define the coverage to aim for in each one according to its priority and complexity. Finally, we can apply long-term coverage criteria to optimize test coverage over time.

Join us for a webinar on August 7 at 2 PM EDT to learn more about how to increase test coverage over time with automation.

Register Now

About the Author: Federico Toledo is the co-founder and director of the software testing company Abstracta and holds a Ph.D. in Computer Science from UCLM, Spain. With over 10 years of experience in quality engineering, he’s helped many companies successfully improve their digital products. Dedicated to testing education, he’s written one of the first books in Spanish on testing and formed Abstracta Academy. He is also a co-organizer of TestingUY, the biggest testing conference in Latin America. Follow him on LinkedIn or Twitter.