How Manual Testing Feeds Into Automation

We testers like to slice things up into neat categories.

We have white-box and black-box testing; we have specific test techniques, each designed to draw out a particular type of information from a software product; and then we split the work into "manual" and "automated" testing. If I'm manually testing a product, that means few or no tools are being used — just hands on keyboard and mouse, clicking and typing away. Automated testing usually means unassisted test execution, with the human only pressing a "run" button and waiting for results.

There might be some value in naming and talking about these activities as if they live on their own, but in practice I'm not so sure. Hands-on testing and using tools braid together, each feeding the other. Let's take a closer look at how that happens.

Is there really a division?

There are too many different ways of using tools to help testing for me to list them all. Let's start with just three to simplify the conversation and create some examples around: unit testing, writing code to drive a web browser, and testing an API. Each of these approaches has a different purpose, but all three are very popular and fall into the category of what people like to call test automation.

I see this slicing in job advertisements more than anywhere else. Go on Dice, or even a more highbrow tech recruiting site like Stack Overflow, and you'll see calls to hire specific types of people. Programmers build the product code, and if they are standouts they might write some unit tests. Manual testers take a mostly finished product and explore to find problems that might make someone regret paying for it. Automated testers are the new black. These people have enough programming knowledge to write code on their own (but maybe not enough to write production code) and enough skill in test design to find software problems that matter.

Those job titles do a disservice to the people filling the roles. Programmers, whether they realize it or not, test almost constantly. Manual testers are never entirely 'manual', and automated testing involves a manual component as well. (Consider: in order to create the "automated test", the automated tester needs to test manually. And maybe file some bugs. The automated test won't even run until everything is fixed — at which point the run itself has no value; it only has value for the next build.)

Let's take a closer look and I'll give you a couple of examples showing how the division between testing and using tools gets blurry.

In the beginning

Automation projects start from a few different ideas. One school of thought says that making changes to code is dangerous, and we should do something to reduce that risk. Adding a field to a marketplace like Etsy to let merchants apply discounts could be problematic on its own — what happens if the merchant enters a discount greater than the product cost? What is the maximum number length? What about non-number values? That change can also introduce unanticipated problems in other parts of the software. Having checks that run when new code is committed to the source repository might alert us that what was working yesterday is broken today because of that change.
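Those "what if" questions can be written down as small checks that run on every commit. Here is a minimal sketch; the `validate_discount` function and its exact rules are invented for illustration, since a real marketplace would have its own validation layer:

```python
# Hypothetical commit-time checks for a discount field.
# validate_discount and its rules (10-character cap, discount must be
# positive and no more than the product cost) are assumptions made up
# for this example.

def validate_discount(raw, product_cost):
    """Accept a discount only if it parses as a number, fits the field,
    is positive, and does not exceed the product's cost."""
    try:
        value = float(raw)
    except ValueError:
        return False              # non-number values are rejected
    if len(raw) > 10:
        return False              # cap the field length
    return 0 < value <= product_cost

# Each question from the paragraph above becomes one check.
assert not validate_discount("75.00", 50.00)          # greater than cost
assert not validate_discount("abc", 50.00)            # non-number value
assert not validate_discount("123456789012", 50.00)   # too long
assert validate_discount("10.00", 50.00)              # happy path still works
```

Wired into the build, checks like these are what turn "what was working yesterday is broken today" from a surprise into an email.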

Another school of thought says that testers are too slow. If I want to send a new version of our product off to production land every two weeks, having a two-week regression cycle just won't cut it. One way to break that bottleneck open and help us release software more often is to script away some of the work that would otherwise take too long, usually by driving a browser.

None of that tooling magically implements itself. Someone with a mixed bag of software testing and programming skills has to be there to light the way.

Creating these scripted checks is an exploratory process. Think about a request to build a few new automated checks for a feature that is being developed this sprint. If it was me working on this, and a lot of the time it is, I would start by talking with people. The developer on the project will usually offer a demo of the feature by stepping through the workflow and giving commentary. Nearly every time that walkthrough happens, there will be an "Oops, that's not supposed to happen" moment.

Even if the guts of the test seem straightforward — click log in, check that the user name is displayed, type 50.00, click submit — someone still has to open up a browser. The script isn't the test; it is just a sometimes convenient place to start. While building the test, I might also want to know what happens if I enter 50, or -50, or 50.000, and sometimes those questions uncover a new bug hiding right around the corner. The dirty secret of automating a browser is that things have to be in pretty good shape for the test to run at all. What this usually looks like is a start-and-stop process: find a bug, zap it, and start again.
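Once exploration settles what the product should do with those inputs, the questions can be pinned down as a table of cases. A sketch, assuming a hypothetical `parse_amount` helper whose rules (no negatives, at most two decimal places) are invented here rather than taken from any real product:

```python
# Hypothetical amount parser for the 50 / -50 / 50.000 questions above.
# The rules are assumptions for illustration, not an actual spec.

def parse_amount(text):
    """Return the amount as a float if the input is a non-negative number
    with at most two decimal places, otherwise None."""
    try:
        value = float(text)
    except ValueError:
        return None
    if value < 0:
        return None               # negatives rejected
    if "." in text and len(text.split(".", 1)[1]) > 2:
        return None               # 50.000 has too many decimal places
    return value

# The questions asked during exploration, written down as checks.
cases = {
    "50": 50.0,       # plain integer entry
    "50.00": 50.0,    # the documented happy path
    "-50": None,      # negative amount
    "50.000": None,   # too many decimal places
}
for text, expected in cases.items():
    assert parse_amount(text) == expected, text
```

The checks are the residue of the exploration; the interesting work was deciding what belonged in that table.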

Programmers are always knee deep in tooling and automation, but they look at the problem from a different perspective. Practices like test-driven development (TDD) and behavior-driven development (BDD) are sometimes marginalized as mere design helpers. A programmer writes some 'test' code and then immediately after writes the production code that should make that test pass. The test code is there to show that the programmer is on the right track and also to shorten the feedback loop. Instead of discovering in the next build that my change to a name field allows more characters than the database column, I can run tests on my machine now. Just like our humble tester building checks for a web browser, these checks that sit closer to the code are written in an act of exploration. Each check written is a theory about the software that can be proven right or wrong, and that information guides the next experiment.
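The name-field example has the classic TDD shape: write the check first, watch it fail, then write just enough production code to make it pass. A minimal sketch, assuming an invented `clean_name` function and a 50-character database column (both names and the limit are made up for illustration):

```python
import unittest

# Hypothetical TDD-style check. clean_name and MAX_NAME_LENGTH are
# invented to mirror the name-field example; pretend the constant
# matches the width of the database column.

MAX_NAME_LENGTH = 50

def clean_name(name):
    """Trim surrounding whitespace and refuse names the column can't hold."""
    name = name.strip()
    if len(name) > MAX_NAME_LENGTH:
        raise ValueError("name longer than %d characters" % MAX_NAME_LENGTH)
    return name

class CleanNameTest(unittest.TestCase):
    # In TDD these tests exist before clean_name enforces anything;
    # they fail until the production code catches up.
    def test_rejects_names_longer_than_the_column(self):
        with self.assertRaises(ValueError):
            clean_name("x" * (MAX_NAME_LENGTH + 1))

    def test_keeps_a_name_that_fits(self):
        self.assertEqual(clean_name("  Ada Lovelace  "), "Ada Lovelace")

# Run with: python -m unittest <this_file>
```

The point isn't the assertions themselves; it's that the failing test told the programmer, on their own machine and within seconds, that the column limit wasn't enforced yet.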


The end result of all this work is something that might run in a few minutes after a build, or in a few hours overnight. Run the build, run the checks, and then get an email telling you which checks 'failed'.

Once I get that email with a red bar letting me know something bad happened, the game is afoot. Now the mission is to find out what happened. Is there a new bug in the software? Is there a bug in the automation code? Or has the product changed in a way that made the automation no longer work? Whatever the answer, I need to know. Sometimes rerunning the offending checks will help sort things out. When I run a check on its own, I can watch what is happening and take advantage of fresh logs instead of wading through hours of transactions hoping to find something relevant. Sometimes the test only points in a general direction and I have to jump into the product and walk through the scenario myself, or recruit any number of other tools, like HTTP watchers and data generators, to figure out where things went bad.

Tools and automation fit nicely into a testing strategy. But actually running them is just a small part of the value, and that value is always wrapped up in a blanket of testing.

Software testing rarely happens without some amount of help from tools, and we never use tools without testing. The two approaches feed into each other: one makes the other more useful, and when they are balanced just right, the tester learns more about the product, faster.