STAREAST Recap: Fostering Long-Term Test Automation Success

  May 10, 2016

This week, we are sharing our favorite interviews from the STAREAST Software Testing Conference, which took place May 2-6 in Orlando, Florida.

In today’s video interview, we caught up with Carl Nagle, principal software developer at SAS Institute. Carl discusses how the open source Software Automation Framework Support (SAFS) framework can be used to run stable and scalable automated tests with tools like TestComplete or Selenium.

Watch the full interview below.

(Miss yesterday’s STAREAST Recap? Watch it here.)

What was the topic of your talk at STAREAST?

I spoke about how, over nineteen years, we have maintained successful, sustainable test automation with an open source test automation framework that SAS provides to the outside world.

Tell me how the process began. What are some of the different things you considered when developing that framework and making it sustainable and scalable?

The most important thing was that the test automation framework had to be able to persist over years and accept the changing tools and technologies that came over time.

Let’s take a trip down memory lane — what are some of the technologies that have come over time?

We were testing VB6 apps and native Windows apps and we were moving to Web apps and Java apps. Then .NET and Flex, and it keeps changing over time.

How did you structure the test automation framework to keep up with these technologies?

By separating out the test designs — or test scripts, if you want to call them that — from the test execution tools that will run them. By separating those out, you can keep the same testing strategy and design over time, while swapping out the technologies and tools over the years.
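
To make that separation concrete, here is a minimal sketch of the idea in Java. This is not SAFS code; the TestStep record and TestEngine interface are hypothetical names, used only to show how a test design can stay plain data while each tool sits behind a common interface.

```java
// Hypothetical sketch (not actual SAFS code): the test design is plain data,
// and an engine interface hides which tool actually performs each step.

// A step describes WHAT to do, independent of any automation tool.
record TestStep(String window, String component, String action, String... params) {}

// Each plugged-in tool (Selenium, TestComplete, ...) sits behind this interface.
interface TestEngine {
    // Can this engine handle the given step?
    boolean supports(TestStep step);

    // Drive the step with whatever underlying tool this engine wraps.
    void execute(TestStep step) throws Exception;
}
```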

Can you give me a real-world example of how that would work? Let’s say I am testing a web application like SmartBear.com.

When a new technology comes along — like HTML5, JavaScript, or AngularJS — we decide what kind of testing tool will be best to test it. We then take the tool we chose for that technology, build an engine around it, and make it interpret our tests so that it can be plugged into the framework as one of many different tools that are available.
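
As a rough illustration of the "one of many tools" idea, and reusing the hypothetical TestEngine and TestStep types sketched above, the framework core might simply hand each step to whichever registered engine claims it:

```java
import java.util.List;

// Hypothetical dispatcher: the framework core owns the test flow and
// delegates each step to the first registered engine that supports it.
class EngineDispatcher {
    private final List<TestEngine> engines;

    EngineDispatcher(List<TestEngine> engines) {
        this.engines = engines;
    }

    void run(List<TestStep> steps) throws Exception {
        for (TestStep step : steps) {
            TestEngine engine = engines.stream()
                    .filter(e -> e.supports(step))
                    .findFirst()
                    .orElseThrow(() -> new IllegalStateException(
                            "No engine registered for action: " + step.action()));
            engine.execute(step);
        }
    }
}
```

The point of the indirection is that the test design itself never names Selenium or TestComplete; only the list of registered engines changes as tools come and go.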

Let’s take an example of HTML5 — how would you go about the process of setting that up?

In this particular case, SAS is the primary user of the framework, and our core testers wanted to use Selenium as the testing tool for that. We created the interpreter that lets the framework run our tests through Selenium WebDriver and plugged it in. So, alongside SmartBear’s TestComplete or other tools, they could choose to use Selenium for that part of the test if they wanted to.
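
One way such a Selenium-backed engine might look is sketched below, assuming Selenium 4's Java API and, purely for brevity, treating the component name as a CSS selector; none of this is the actual SAFS implementation.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical Selenium-backed engine (Selenium 4 Java API assumed).
// It translates framework-level actions into WebDriver calls.
class SeleniumEngine implements TestEngine {
    private final WebDriver driver;

    SeleniumEngine(WebDriver driver) {
        this.driver = driver;
    }

    @Override
    public boolean supports(TestStep step) {
        // For illustration, claim only two browser-level actions.
        return step.action().equalsIgnoreCase("Click")
                || step.action().equalsIgnoreCase("InputText");
    }

    @Override
    public void execute(TestStep step) {
        // For brevity, treat the component name as a CSS selector.
        By locator = By.cssSelector(step.component());
        switch (step.action().toLowerCase()) {
            case "click" -> driver.findElement(locator).click();
            case "inputtext" -> driver.findElement(locator).sendKeys(step.params()[0]);
            default -> throw new IllegalArgumentException("Unsupported action: " + step.action());
        }
    }
}
```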

What are some of the things you do to ensure a test is stable, specifically in Selenium?

That’s one of the beauties of our type of testing framework. Users don’t code their tests in Selenium or TestComplete. We define the actions that are available and implement those actions in the underlying testing tool engines, and whenever we find problems, we go in, fix that code, and make it as robust as possible. That fix then works for everybody who is testing any application with that engine. As we find issues (say, Selenium reports it clicked something and it didn’t), we go in, identify the issue, and solve it so that future clicks all work.
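
As a hedged illustration of what "fix it once in the engine" can look like with Selenium, a click might be hardened with an explicit wait and a small retry, roughly as below. Selenium 4's WebDriverWait is assumed, and the helper class name is made up for this example.

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.StaleElementReferenceException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Illustrative hardening of a click inside the engine: every test that
// uses the "Click" action benefits from this fix without being edited.
class RobustActions {
    static void click(WebDriver driver, By locator) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        for (int attempt = 1; attempt <= 3; attempt++) {
            try {
                // Wait until the element is actually clickable before clicking it.
                wait.until(ExpectedConditions.elementToBeClickable(locator)).click();
                return;
            } catch (StaleElementReferenceException e) {
                // The DOM re-rendered between lookup and click; retry a few times.
                if (attempt == 3) {
                    throw e;
                }
            }
        }
    }
}
```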

How do you get insight into the reporting aspect of Selenium? What do you use for reporting?

In our testing framework, we have our own pass/fail logging because, for one thing, a single test isn’t only using Selenium; it may be using Selenium and TestComplete together. So we have a single test log that we write to, which can be in XML format and transferred to any reporting mechanism you use.
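
A minimal sketch of such a tool-agnostic log is below, with an invented XML record format used purely for illustration; the real SAFS log format will differ.

```java
import java.io.IOException;
import java.io.Writer;

// Hypothetical unified test log: every engine reports PASS/FAIL here,
// and the XML output can later be transformed for any reporting tool.
class TestLog {
    private final Writer out;

    TestLog(Writer out) {
        this.out = out;
    }

    // Message text is assumed to be XML-safe; a real logger would escape it.
    void record(String testName, String action, boolean passed, String message) throws IOException {
        out.write(String.format(
                "<result test=\"%s\" action=\"%s\" status=\"%s\">%s</result>%n",
                testName, action, passed ? "PASS" : "FAIL", message));
    }
}
```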

Where can people get more information?

Learn more about the testing framework on GitHub: github.com/safsdev

We’ll be sharing more interviews from the STAREAST Software Testing Conference all this week. You can stay up to date on upcoming shows and all of our latest content by subscribing to the SmartBear Blog.