What To Do When You're Set Up For Failure
Test and Monitor | Posted November 20, 2005

Your management may be asking you to fail. They may even be demanding it.


As
the new year comes creeping up on us, many companies look to improve
the way they make software, and what easier way to do that than to set
an unreachable goal for their hard-working test team? "100% of testing
must be automated by the end of the first quarter." They might give
your team six months or a year, but the goal is usually the same:
automate it all, or it's total failure. Not much room for creativity in that
situation, is there?


Your team has anywhere from 1 to 100 people
doing manual testing right now. They have their Word docs of test
cases, maybe they're using Excel spreadsheets to keep things organized
a little better. If you're really invested, you might have a database
with shared test cases, though probably everyone has their own area of
expertise, their own set of test cases and their own method of storing
them.


Automating has to be better than the chaos you encounter
every time you test a new release, right? Not necessarily; at least,
not right away. No matter how haphazard your manual testing looks to
an outside eye, your team knows how to use the system they have now.
They're comfortable with the tools and habits they've
cultivated over many projects.


What will happen to them if their
familiar tools are taken away and they're thrown into the great wide
open of automated testing? If they're like most of us, they'll probably
hate it.


A good test team does the best they can with the tools
at hand to shake the bugs out and ship a quality product. Anything that
gets in the way of that goal will frustrate them. A new tool takes time
to learn and use well. However, that automated testing ultimatum
I mentioned usually doesn't come with any extra time. And the coders
don't stop shipping new versions, new features and bug fixes while the
test team learns how to get automated testing up and running. The test
team is expected to continue to perform all the same manual testing of
new releases *and* implement automated testing.


It's not fair,
but management usually doesn't know what else to do. Manual testing is
inefficient and expensive. Managers see smart people performing dumb
repetitive actions and they know there has to be a better way. The
testers know this as well, but they don't have time to change it.
They've got to get through a 100-page Word document full of test
cases so the latest changes can be shipped by the end of the week. How
can they break out of the cycle?


Even good managers do dumb
things. They know the company and test team will be better off with
automated testing, so they issue the ultimatum. All testing will be
automated by (insert date here). That's the right direction, but it's
like saying you've got to lose 40 pounds by tomorrow afternoon. The
only way to do it is to chop off a few limbs. Goal reached, but to no one's benefit.


How
can you avoid screwing up your current system when the ultimatum comes
down? Just ordering some software and asking your team to make it work
is a sure path to unhappiness.

You've got to talk to your manager
about the team's real goals and find a solution that fits your team and
your company's priorities.


The team's basic goals are usually along the lines of:

    - Test new builds for bugs
        - Do regression tests to prevent releasing previously found bugs and side effects
        - Do exploratory testing to find new defects
        - Review releases for usability issues and standards conformance
    - Increase test coverage
    - Increase testing efficiency: stop wasting time and money on redundant testing


Automated
testing is all about coverage and efficiency, so many people start with
that in mind. They set out to build a set of automated tests that cover
a big chunk of their manual tests, and are optimized to add more. They
build cathedrals of scripts aimed at eliminating all the problems of
manual testing. However, these scripts can't be introduced into daily
usage until they are complete. As it grows, the big
automated test project becomes complex and fragile, breaking with every
major revision of the tested application. Eventually, the test
builder starts relying on older, more stable versions of the tested
application so they can keep the tests working. The weeks go by,
and finally everyone starts wondering why automated testing isn't
working. Is the problem the tester or the tool?


The problem isn't the tester or
the tool. The problem is that the original goals and priorities were
unattainable. Everyone was asking the question, "How can we automate
all testing?" That's the last thing most teams should be thinking about
when they start automating. The first question to ask is,
"What's the simplest thing we can do with automated testing to improve
our software and save money?" Get your manager to buy into that goal
and you'll make life a lot easier for everyone.


Doing the
simplest thing first will take you a long way down the path to
automation. The hardest thing about doing automated testing is usually
just getting started and making it part of your daily life. You'll need
to set up a foundation for
automated testing, and that in itself can involve multiple PCs,
several compilers, installation programs, and changes to the way you build
your applications. Many people start creating automated tests before
they've established even a basic process for
using them. Setting up a standard workflow
of coding, check-in, automated build, and automated testing is a huge
leap in quality. Start with the foundation: automate your build and
make automated testing a part of every build.
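That workflow can start out embarrassingly small. Here's a minimal sketch of the build-then-test step in Python; the `make` and test-runner commands in the usage comment are placeholders, not your project's actual commands, so swap in whatever your build really uses:

```python
"""A minimal check-in -> build -> test step: build first, and only run
the tests if the build succeeded. Commands shown are illustrative."""
import subprocess


def run_step(name, cmd):
    """Run one pipeline step and report whether it succeeded."""
    code = subprocess.call(cmd)
    status = "ok" if code == 0 else "FAILED (exit %d)" % code
    print(name + ": " + status)
    return code == 0


def build_and_test(build_cmd, test_cmd):
    """Return True only if both the build and the tests pass."""
    if not run_step("build", build_cmd):
        return False  # a broken build never reaches the testers
    return run_step("tests", test_cmd)


# Example (hypothetical commands -- substitute your own):
#   build_and_test(["make", "all"], ["python", "tests/run_all.py"])
```

Even this much means a broken build is caught at check-in time instead of during a manual test pass.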


Once you've established your flow, start the automated testing with a test that's so simple it's laughable:

Test #1: Does the application start and exit cleanly?


That's
a test that anyone can automate and it's a high value test. Everyone
has a horror story about shipping a build of their application after a
small, quick code change that broke the build. Preventing one of those
could pay for automated testing by itself.
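Test #1 really can be a few lines. Here's one way it might look in Python; the application name and the `--version` flag in the usage comment are assumptions standing in for whatever your application's quickest clean start-and-exit path is:

```python
"""A sketch of Test #1: launch the application and verify it starts,
exits on its own, and exits cleanly (exit code 0)."""
import subprocess


def starts_and_exits_cleanly(cmd, timeout=30):
    """Return True if cmd launches, finishes within timeout, and exits 0."""
    try:
        result = subprocess.run(cmd, capture_output=True, timeout=timeout)
    except (OSError, subprocess.TimeoutExpired):
        return False  # couldn't launch at all, or it hung
    return result.returncode == 0


# Example (hypothetical app and flag):
#   assert starts_and_exits_cleanly(["myapp", "--version"])
```

Laughably simple, but run against every build it catches exactly the "small, quick change that broke the build" disaster described above.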


When you've got one
automated test, add the next tests incrementally. Pick small, high
value tests to start and create one per week. Create more each week if
you have the resources, but make your goals very easy to reach. Spread
the test creation around so that your team learns how to work with this
new system.

If you are able to have a dedicated resource for automated test
creation, have them work each week with a different tester. The more
familiar everyone is with your tools, the better. They've probably
had months, maybe years to get comfortable with manual testing. You've
got to expect that it will take time for them to see the value in
automated testing, as well. They'll be talking about how fantastic it
is only when they understand how the tool works and find that it gives
them time to do more exploratory testing and catch more bugs.


There's
another reason to start with small and simple automated tests: your
first tests are disposable. They'll be more than worth the effort, but
unlikely to last. That's not a problem, though. Most software
developers know that the first version of any piece of software is
almost like a prototype. The lessons learned creating version 1.0 help
make version 2.0 much better. Your automated tests will be the same.
Don't spend time on v1.0 looking for perfect code or complete coverage.
Keeping your tests small and simple until automated testing becomes second
nature for your team won't reduce their value. Any good,
regular automated testing will catch problems that manual testing would
have missed, and you're likely to end up with more tests of higher
quality if you develop them incrementally and add to them regularly.


When
the v1.0 tests break because the tested application has undergone major
revisions, your entire team will be ready to start a new batch of
incremental test creation. But this time they'll be familiar with
automated testing and looking forward to applying their experience to
the new version.


So when management sets you up for failure by
pronouncing that all tests must be automated by (insert date here),
show them the value of incremental automated test development. You can
have practical, useful tests running quickly that show concrete
benefits without a complete conversion. Make small improvements
regularly, share the experience throughout the team and share your
successes with the company. After a few months, management will really
understand how an incremental approach to implementing automated
testing contributes to the success of a strong testing team. They'll be
confident that the next cycle will continue to make your software even
better without forcing the team to commit to an unreachable goal of
"100% automation".
