The Manual Testing Spectrum

The term manual testing conjures images of a new or low-skill tester sitting in a cube, grinding through a never-ending list of test cases. Look at job advertisements for testers, especially in Silicon Valley, and it is hard to tell the difference between a tester and a programmer. Rumors of the death of the manual tester have been circulating for more than a decade now. Companies on the leading edge are focusing more on tools to improve code quality, improve the build-and-deploy pipeline, and reduce defect exposure. The rest of the world is some ways off, but following suit.

What is the relevance of manual testing in a world that is clearly getting more technical?

The Manual Tester

Most people imagine manual testing as the lone tester at a keyboard. This tester does not read code, and they do not dream of writing code. Probably the most technical this tester gets is opening the JavaScript console to grab an error message, or maybe digging into the database. That is the myth of the manual tester, at least.

The manual tester opens up a new browser and sees a few things: new functions, text fields, buttons, workflows and navigation, and different usage scenarios. The most popular testing technique, domain testing, is where most people jump in to start finding problems. If a website has a special page where merchants create discount codes, discount values, and valid date ranges for a discount, each field will have ranges of valid and invalid values.

Looking only at the date range fields, someone could test ranges entirely in the future, ranges where the start or end date is in the past, very wide ranges lasting hundreds of years, and things that are not dates at all, like a start date of abc!@#. Each value is typed in and the submit button is clicked. Once a result, or an error, is displayed, someone can decide whether or not that scenario is a problem. Each test performed helps create new ideas about what to do next. When one line of thought isn't shaking out bugs, it is easy to pivot and move in a new direction.
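Those domain-test ideas can be written down as data. The sketch below is a hypothetical example, not the site's actual validation logic: `validate_range` is an assumed rule (dates only, start before end, not already expired) standing in for whatever the discount page really does, and the `cases` list captures the boundary and invalid values described above.

```python
from datetime import date

def validate_range(start, end):
    """Hypothetical validator for a discount's date range.
    Accepts only a well-formed range that has not already ended."""
    if not (isinstance(start, date) and isinstance(end, date)):
        return False  # non-date input, like "abc!@#"
    if end < start:
        return False  # inverted range
    if end < date.today():
        return False  # discount already expired
    return True

# A sampling of domain-test values for the two date fields
cases = [
    (date(2030, 1, 1), date(2030, 1, 31)),   # entirely in the future
    (date(2000, 1, 1), date(2000, 1, 31)),   # entirely in the past
    (date(2030, 1, 31), date(2030, 1, 1)),   # end date before start date
    (date(2000, 1, 1), date(2999, 12, 31)),  # range lasting centuries
    ("abc!@#", date(2030, 1, 1)),            # not a date at all
]
for start, end in cases:
    verdict = "accepted" if validate_range(start, end) else "rejected"
    print(start, "to", end, "->", verdict)
```

Running the list and eyeballing each verdict mirrors the type-submit-evaluate loop, and any surprising acceptance or rejection becomes the next test idea.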

This scenario only covers the values entered into two fields; there is plenty more to learn about the new feature. Someone still needs to see whether the discount codes can actually be applied to a purchase, whether the configured date ranges are honored, and then check that last hidden assumption: that nothing else bad happens.

Could some of this testing be more efficient, or more powerful, if a few tools were kicked into the mix?

Tester Plus Tools

Once there is a plan, domain testing can sometimes feel tedious. Some of it can be performed faster with an automated testing tool, like TestComplete. Test automation tools can change the game from enter value -> click submit -> evaluate result (rinse and repeat), to create a CSV file with the values and run them through a POST. After all the values have been POSTed, the tester can evaluate the results. A person is still there performing the test, but chances are a tool can enter the values and submit them faster than a person.

Some of the powerful features that come with a testing tool like TestComplete include:

  • Support for multiple scripting languages
  • Record robust automated tests without knowing scripting
  • Write regression tests that don’t fail when UI changes
  • Perform Data Driven testing
  • Create custom plugins and extensions

When tools are involved, is that still manual testing? 

Is it automated testing now? Does it even matter?

It might be more useful to talk about testing and how much or how little a team wants to involve tools, instead of talking about automation. Take behavior-driven development (BDD) for example. The simplest way of talking about BDD is to say that it is a code design tool. Developers write a small test in a given format, then they write some production code to satisfy the test. The developer goes back and forth between the test and the production code, making changes and rerunning the test to make sure things are still OK. There is much more to it than a test and some production code, though.

Before any code is written, the developer needs to have a few conversations. A product manager wants a new option added to the checkout procedure that offers free shipping when someone orders two copies of The Mythical Man-Month. To get a little clarity on the feature, they set up a meeting with the developer and tester. That meeting is all testing without tools. What happens if a customer orders a copy of the book from two different vendors? What happens if someone adds several copies to their cart at first, but then removes all but one? Is everyone eligible for free shipping, even people overseas? These are the questions that fuel given/when/then scenarios. They also help define the feature so that the right thing is built the first time around.
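The questions from that meeting turn directly into given/when/then scenarios. A minimal sketch, assuming the team settled on "two or more copies qualify, regardless of vendor": `free_shipping` is a hypothetical implementation, and the scenarios are written as plain comments over assertions.

```python
def free_shipping(cart):
    """Hypothetical rule from the meeting: two or more copies of the
    book qualify for free shipping, regardless of vendor.
    cart is a list of (title, vendor) tuples."""
    copies = [item for item in cart if item[0] == "The Mythical Man-Month"]
    return len(copies) >= 2

# Given a cart with two copies from the same vendor
# When shipping is calculated
# Then shipping is free
assert free_shipping([("The Mythical Man-Month", "A"),
                      ("The Mythical Man-Month", "A")])

# Given two copies from two different vendors (a question raised in
# the meeting; here we assume they still qualify)
# Then shipping is still free
assert free_shipping([("The Mythical Man-Month", "A"),
                      ("The Mythical Man-Month", "B")])

# Given a cart where all but one copy was removed
# Then shipping is no longer free
assert not free_shipping([("The Mythical Man-Month", "A")])

print("all scenarios pass")
```

In a real BDD tool the comments would be the executable Given/When/Then lines; the point is that the scenarios existed as conversation before any of this code did.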

There is a little sliver in the middle that is the automated check running against every new code check-in through a continuous integration (CI) system.

But what happens when one of those checks fails? Someone has to look at the failure logs, maybe rerun the test, maybe open the product and perform the scenario to get a better idea of what is happening, and make a judgment about whether the failure is caused by a reasonable product change, whether the check code needs to be updated, or whether a bug has been exposed. It is not possible to do 'test automation' without real testing by a person. The automation is really just sandwiched between the testing activities.

When is a test no longer manual? 

The real secret is that all testing is manual. Tools help do some work faster, like finding problems introduced from one build to the next, but a person still has to investigate the results and make the decisions. The automated-versus-manual-tester division probably isn't as important as being skilled enough to decide when, where, and how much to use tools. Manual testing is a skill that all good testers, and even developers for that matter, use to find out how valuable the software is for the customer.