A Test Design Q&A with @joshin4colours

If Lorinda Brandon is following you on Twitter, watch out because she might read one of your posts and convince you to do a 45-minute webinar about it. Her first victim was Josh Grant, a test automation expert from CaseWare in Toronto, Ontario. On March 5, Josh tweeted:

Josh was a great sport, of course, and a couple of weeks later he presented a thought-provoking webinar called “Automated Test Design: Single Use vs. Reusable Tests.” Watch the On-Demand Webinar.

Over 1,300 people registered for Josh’s webinar debut and many of the attendees chimed in to ask questions at the end. Due to the volume, we weren’t able to answer all of the test design questions during the webinar. So, we’re recapping some of the best and most popular ones here.

Q: "Can you give an example of a one-time test?" - Michael P.

A: Two examples come to mind. One is looking for memory leaks in applications. A single memory leak can be a massive blocker issue, but automated solutions can help. You could purchase a tool to search for a specific memory leak. You may only use it once, but it's still an effective use of automation.
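For instance, here is a minimal sketch of that idea in Python, using the standard library's tracemalloc module to check whether a suspect code path keeps holding memory after repeated runs. The suspect_operation function and the iteration counts are placeholders for whatever you would actually exercise.

```python
# One-off memory growth check using Python's standard tracemalloc module.
# "suspect_operation" is a placeholder for the code path you think leaks.
import tracemalloc

def suspect_operation():
    # Stand-in for the real workload under suspicion.
    return [object() for _ in range(1000)]

tracemalloc.start()
baseline, _ = tracemalloc.get_traced_memory()

results = []
for _ in range(100):
    results.append(suspect_operation())
results.clear()  # release references; allocations that survive this suggest a leak

current, peak = tracemalloc.get_traced_memory()
print(f"baseline={baseline} bytes, current={current} bytes, peak={peak} bytes")
tracemalloc.stop()
```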

Another example of a one-time test is code generation. Sometimes you need to generate or modify many lines of code but you only need to do it once (e.g., renaming methods to suit a new naming convention). I've done this and it's an excellent use of automation. Remember: automation is a tool like any other. Sometimes automating a task can be helpful, even if done only once.
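As a rough illustration of that kind of one-off script, the sketch below walks a hypothetical src directory and rewrites camelCase method definitions to snake_case. The paths and regex are illustrative, and it only touches definitions, not call sites, which is exactly why you would run it once, review the diff in version control, and throw it away.

```python
# One-off rename script: converts camelCase method definitions to snake_case
# across a directory of Python files. "src" and the pattern are illustrative.
import pathlib
import re

CAMEL = re.compile(r"def ([a-z]+(?:[A-Z][a-z0-9]*)+)\(")

def to_snake(name: str) -> str:
    # getValue -> get_value
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

for path in pathlib.Path("src").rglob("*.py"):
    text = path.read_text()
    new_text = CAMEL.sub(lambda m: f"def {to_snake(m.group(1))}(", text)
    if new_text != text:
        path.write_text(new_text)  # call sites would need the same treatment
        print(f"updated {path}")
```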

Q: "GUI automation – best practices for less brittle tests?" - Chuck D.

A: Some ideas for keeping tests from being brittle:

  1. Use the Page Object pattern (see the sketch after this list)
  2. Separate test logic from application logic
  3. Keep tests focused and simple
  4. Develop your app with hooks/locators that uniquely identify GUI controls in an automation-friendly way
  5. Write fewer automated GUI tests.
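
To make the first point concrete, here is a minimal Page Object sketch using Selenium's Python bindings. The URL, locators, and page structure are hypothetical; the idea is that tests express intent while the page object owns the brittle details of finding and driving controls.

```python
# Minimal Page Object sketch using Selenium WebDriver (Python bindings).
# The URL, locators, and page structure are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Encapsulates how the login screen is located and driven,
    so tests only express intent ("log in as X")."""

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get("https://example.com/login")  # placeholder URL
        return self

    def log_in(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()
        return self

def test_valid_login():
    driver = webdriver.Chrome()
    try:
        LoginPage(driver).open().log_in("demo_user", "demo_pass")
        assert "Dashboard" in driver.title  # the assertion lives in the test, not the page object
    finally:
        driver.quit()
```

When the login screen changes, only LoginPage needs updating, not every test that logs in.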

Q: "Could you define fragility, please?" - Gunjan D.

A: To me, a fragile test is one that fails constantly, independent of the app under test. Another way to think of it is as a non-deterministic test: the result doesn't reflect the actual condition of the underlying app. One way to gauge fragility is to ask yourself, “Do I actually believe this test result?” If the answer is often “no,” it could be a fragile test. There may be a better definition out there, but this is how I’ve experienced fragility in testing.
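One common source of that kind of non-determinism is timing assumptions. The sketch below (Selenium Python bindings, hypothetical locators) contrasts a check that depends on a fixed sleep with one that waits for the condition it actually cares about.

```python
# Illustration of one common source of fragility: timing assumptions.
# Locators are hypothetical; both functions assume a Selenium WebDriver instance.
import time
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def fragile_check(driver):
    driver.find_element(By.ID, "refresh").click()
    time.sleep(2)  # passes or fails depending on machine/network speed,
                   # not on whether the app actually works
    return driver.find_element(By.ID, "status").text == "Done"

def sturdier_check(driver):
    driver.find_element(By.ID, "refresh").click()
    WebDriverWait(driver, timeout=10).until(
        EC.text_to_be_present_in_element((By.ID, "status"), "Done")
    )
    return True
```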

Q: "How do you get Development to 'build in' testability to the system under test?" - Jim H.

A: Testability is important in automation and elsewhere, and it can improve the overall quality of your application. Talk with Development about how you plan to test the app. If you want to automate, ask for APIs or other hooks that automation can access. If you want to automate the UI, ask for locators that uniquely identify controls. Most importantly, communicate with your Development team and discuss how you can make testing and automation a built-in feature. Good testing and automation help Development as well as improving quality, and that’s the selling point.
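As one example of what such a hook can buy you, the sketch below sets up test data through an API that Development exposes, so the UI test only exercises the behavior under test. The endpoint, payload, and base URL are all hypothetical.

```python
# Sketch of using an API hook provided by Development for test setup,
# instead of scripting the UI for every precondition.
# The endpoint, payload, and base URL are hypothetical.
import requests

BASE_URL = "https://staging.example.com"  # placeholder environment

def create_test_account(name: str) -> str:
    """Ask the app, via a supported API, to create an account the UI test can use."""
    response = requests.post(
        f"{BASE_URL}/api/test-hooks/accounts",
        json={"name": name, "plan": "trial"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["account_id"]

def test_dashboard_shows_new_account():
    account_id = create_test_account("webinar-demo")
    # ...then drive only the UI behavior under test, e.g. open the dashboard
    # and assert the new account appears; cleanup can go through the same API.
    assert account_id
```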

Q: "What are your views on automated scripts to create small test modules as opposed to one big script?" – Pankaj G.

A: Keeping scripts small and modular is usually preferable to one large, single-file script. Modular scripts are easier to maintain, easier to manipulate, and easier for developers and testers to understand. Using automation in a variety of situations, such as a continuous integration setup, also depends on keeping things compact. Larger scripts are easier to write at first but can become more difficult to maintain over time.
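As a rough illustration, here is one way a pytest-style layout might split a monolithic script into small, focused modules with shared steps defined once. The file names, fixture, and endpoints are all hypothetical.

```python
# Hypothetical layout: shared steps live in one place, each test module stays small.
#
#   tests/
#     conftest.py        <- pytest fixtures (e.g. an "app_client" for the app under test)
#     helpers.py         <- shared steps such as login
#     test_invoices.py   <- one focused, independently runnable module per feature
#     test_reports.py

# tests/helpers.py
def login(app_client, username, password):
    """Defined once and imported everywhere, instead of copy-pasted into one big script."""
    response = app_client.post("/login", data={"user": username, "pass": password})
    assert response.status_code == 200

# tests/test_invoices.py
from helpers import login  # shared step

def test_invoice_list_loads(app_client):  # "app_client" comes from a conftest.py fixture
    login(app_client, "demo_user", "demo_pass")
    assert app_client.get("/invoices").status_code == 200
```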

Q: "Have you found a way to measure or determine when a reusable test is becoming too complex or hard to maintain?"  - Matthew C.

A: This stumped me during the live webinar because it’s a very tough question. Measuring test quality is a challenging problem overall. I've found that, over time, more specialized automated tests can lose value because they don't tend to find many bugs. These tests often still require maintenance, and so might not be worthwhile in the long run. A test that rarely runs (say, less than once every six months) can also be a sign that it isn't needed anymore.
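If you do want a rough signal, one low-tech approach is to scan exported run history for tests that haven't run in a long time or that run constantly without ever finding a bug. The sketch below assumes a CSV export with columns test_name, last_run (ISO date), runs, and bugs_found; the column names and thresholds are illustrative, not a standard.

```python
# Rough heuristic for spotting low-value reusable tests from exported run history.
# The CSV format, column names, and thresholds are all assumptions for illustration.
import csv
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=180)  # "not run in six months"

def flag_candidates(path: str):
    now = datetime.now()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            last_run = datetime.fromisoformat(row["last_run"])
            stale = now - last_run > STALE_AFTER
            low_yield = int(row["runs"]) > 50 and int(row["bugs_found"]) == 0
            if stale or low_yield:
                yield row["test_name"], "stale" if stale else "low yield"

for name, reason in flag_candidates("test_history.csv"):
    print(f"{name}: {reason} - review whether it is worth maintaining")
```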

* When we posed this question to the Twittersphere, Allen Johnson (@allenjfly) weighed in:
