On the Reusability of Test Scripts
Paul Bruce
  July 25, 2014

Automating your testing efforts is paramount to agile and expedient software delivery. However, the implementation details of automating tests diverge very quickly down paths defined by what level of testing you’re performing. Developers have unit tests in IDEs, QA testers have their own tools and scripts, and operations folks have monitoring – all to make visible what is either broken or performing poorly.

This is not wrong. Your testing strategy must embrace technical diversity.

We do the best we can, but no one is an expert at everything. Each of the more traditional levels of testing (i.e. unit/integration/interface/system) exposes issues with different aspects of how software behaves at certain points in the delivery cycle or under certain conditions. Managing test scripts can be a tough gig in and of itself. Feature changes that break scripts, code coverage and analysis, compositionality of tests, and scalability of scripts all affect how much our automated tests actually simplify our day-to-day software quality efforts.

So we’d expect that the implementation details of each testing level differ from each other, right?

Alas, even experienced testers often fall prey to the “this should be simpler” notion, which might be a great long-term direction, but is counter to the very nature of why we do testing to begin with. The statement usually goes:

“Why can’t I use this [unit/functional] test as a [load/performance] test? Shouldn’t it be as simple as that?”

We test things because that which is complicated (like software) needs to be checked thoroughly so that it remains as simple as possible. While simplicity in a user’s experience or in management process is a good goal, all notions of simplicity should be checked at the door when a tester puts their testing hat and gloves on.

In other words, simplicity is a perspective on outcome, not a technical approach to determining quality.

Take for instance the use of functional, interface-based web tests (like those created in TestComplete, Selenium, etc.) to make sure a feature still works before release. Interface tests by their very nature imply the need for an interactive browser or window session, where concepts like “click” and “type” apply. In contrast, a load test is typically a representative set of traffic between computers, having no concept of “click”, just a representation of what happened due to the click. The overhead of carrying the “interface” as part of the test dramatically increases the cost of each instance of the test you run.
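To make that contrast concrete, here is a minimal sketch of the same “log in” step written both ways. Everything in it is hypothetical: the URL, element IDs, and credentials are invented, and Selenium WebDriver plus plain HTTP via the Python requests library is just one possible pairing of tools.

```python
# Interface-level functional test: drives a real browser, so "click" and
# "type" exist, but each run pays for a full browser session.
from selenium import webdriver
from selenium.webdriver.common.by import By

def functional_login_test():
    driver = webdriver.Chrome()  # one browser process per test run
    try:
        driver.get("https://example.com/login")          # hypothetical URL
        driver.find_element(By.ID, "username").send_keys("qa_user")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
        assert "Welcome" in driver.page_source           # functional check
    finally:
        driver.quit()

# Protocol-level load test step: only the traffic the click would have
# produced -- no browser, no "click", just the request and its timing.
import requests

def load_login_step(session: requests.Session):
    response = session.post(
        "https://example.com/login",                     # same hypothetical URL
        data={"username": "qa_user", "password": "secret"},
        timeout=10,
    )
    return response.status_code, response.elapsed.total_seconds()
```

The first version pays for a full browser process on every run; the second is only the traffic the click would have produced, which is why it can be repeated by thousands of virtual users at a fraction of the cost.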

Functional testing also often requires validation based on data retrieved from a back-end system, for proof that the app completed a logical process *accurately*. However, in a load test, the simple act of retrieving a database value for comparison to the app’s results introduces both additional chatter (side traffic to your QA data, etc.) and increased latency (query plus comparison times).
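As a rough illustration of that back-end check, the sketch below compares the app’s displayed total against what the database actually stored, and times the round trip. The database file, table, and column names are made up for the example, with SQLite standing in for whatever QA data store you actually use.

```python
import sqlite3
import time

def validate_order_against_db(order_id, displayed_total):
    """Functional-style check: compare the app's displayed total
    against what the back-end actually stored."""
    start = time.perf_counter()
    conn = sqlite3.connect("qa_copy.db")   # side traffic to your QA data
    try:
        row = conn.execute(
            "SELECT total FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
    finally:
        conn.close()
    elapsed = time.perf_counter() - start
    # Under load, this query-plus-comparison time is paid on every virtual
    # user iteration and gets tangled into the latency you are measuring.
    return (row is not None and row[0] == displayed_total), elapsed
```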

I can’t help but mention here how unpleasant, and how much of a dark art, it is to quantify the performance impact of custom scripts, since they themselves take time to execute and that time is hard to separate from the other performance metrics you’re collecting. As a 15+ year developer, I know that when you give people the capability to do something, they *will* do that thing. The more custom script you use to run a test, the more you risk tying your own shoelaces together. This is why the testing industry typically divides functional testing from load testing, in strategy and in tool sets.
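One way to keep that risk visible, sketched below under the assumption that you can wrap each piece of an iteration in its own timer, is to measure the custom-script portion separately from the request it drives. The endpoint and payload here are purely illustrative.

```python
import time
import requests

def timed_iteration(session: requests.Session):
    # Custom pre-processing script (e.g. building a payload) -- its cost
    # belongs to the test harness, not to the system under test.
    script_start = time.perf_counter()
    payload = {"items": [{"sku": f"SKU-{i}", "qty": 1} for i in range(50)]}
    script_time = time.perf_counter() - script_start

    # The actual request -- the number you really want to report.
    request_start = time.perf_counter()
    session.post("https://example.com/api/orders", json=payload, timeout=10)
    request_time = time.perf_counter() - request_start

    return {"script_s": script_time, "request_s": request_time}
```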

There are very good reasons why there is no “one test to rule them all”, and vendors who try either hit scalability problems or provide inaccurate results. Mitigating cost and scalability issues by having separate scripts for different levels of testing is reasonable in comparison to the alternative: unwieldy or incomprehensible scripts and test results. We at SmartBear know the minefield that is unified testing and have tools that can safely navigate you to success.

We look forward to hearing about your strategies and testing efforts.
