Better Web Testing with Selenium

  April 11, 2016

Ten years after its introduction, Selenium is arguably the most popular open source testing tool, and for good reason.

Not only is it free and open, Selenium is also fully featured and browser-neutral. Change a single variable in the code, and the same Selenium code that drives Internet Explorer can drive Chrome or Firefox. Switch to a Mac and change the value, and it will run Safari. The Selenium community mostly lives online, mostly without a login required, so a Google search can find answers quickly; the community also holds a conference twice a year, on two continents. If you are having Selenium problems, chances are someone else out there has had them too, asked questions, and probably got answers.

Getting started with Selenium is easy, and so is getting tips to click a hard-to-click object. Creating long-term, sustained success with the tool, including patterns for how to do the work and manage test results, is a bit more of a challenge.

SmartBear Software recently published a new eBook focused on web testing and Selenium in 2016. You can download the eBook for free today.

If you're new to Selenium, this post will give you some actionable advice for getting the most from your Selenium tests.

Get Stable

User interface test suites tend to start out with compromises made just to get running, then fall further behind as the code grows.

Put differently: they start ugly and get uglier. If we ignore that ugliness as "under the hood" and focus on the daily results and new tests, we tend to see something like a mood disorder. One day you get an email claiming all the tests in the nightly run passed and everything is OK. The next day, two-thirds of the nightly tests have failed and it looks like the product you are testing has completely fallen apart. Look under the hood and we see the suite made an assumption about the codebase ... somewhere ... and that assumption has been invalidated ... somewhere.

These mixed results destroy confidence in browser automation as a testing approach and also in the people doing the work.

There are two main points to consider when making your UI checks useful — good test design and stability.

Let's talk about getting some consistent results from your test suite.

I see tests fail a lot in my daily work. Lately, those failures point to a regression in the software I'm testing: something that was working before broke because of a code change the day before. It took a lot of tweaks to the reusable components, and to the tests themselves, to get to that point of stability. Most of those changes were to banish failures related to timing and to finding objects on the page.

Timing:

Timing is something we don't normally notice unless things are really bad. If I am on Amazon.com trying to buy a few books and things are slow, I might grumble about it, or maybe just chalk it up to my internet connection being slow that day. But in the end, I still buy a few books. Your WebDriver scripts won't shrug off the problem by sighing and waiting things out; they simply fail if the page isn't ready at the right moment.

A typical WebDriver test is very procedural:

Navigate to Amazon

Type string The Shape of Actions into search field

Click search

Click book link

Click Add to cart
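In WebDriver's Java bindings, those five steps translate almost line for line. Here is a minimal sketch; the element locators are illustrative guesses, not Amazon's actual IDs:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class AddToCartTest {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();

        // Navigate to Amazon
        driver.get("https://www.amazon.com");

        // Type the title into the search field, then click search
        // (these IDs are placeholders, not the real page's IDs)
        driver.findElement(By.id("searchBox")).sendKeys("The Shape of Actions");
        driver.findElement(By.id("searchButton")).click();

        // Click the book link, then Add to Cart
        driver.findElement(By.linkText("The Shape of Actions")).click();
        driver.findElement(By.id("addToCart")).click();

        driver.quit();
    }
}
```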

The tests run into hiccups because of everything that happens between each of those actions: data is sent back and forth between your computer and the server, images load, elements on the page render and become visible. If my WebDriver script selects the book and then immediately tries to click the Add to Cart button, it's going to fail. The script doesn't magically know that a button is ready to be clicked, or that a field can be typed in now. So, we have to wait.

WebDriver has a few different ways to temporarily pause a script in the middle of a run. The easiest, and worst, is a hard-coded sleep. This is when you tell the script to hang out for some fixed amount of time, maybe 15 seconds, no matter what the page is actually doing. Hard-coded sleeps hide real problems. A lot of the time, we see the wait fail and bump the time up a few more seconds in hopes that it will work next time. Eventually we have padded enough time into the script that the page loads completely before the next step runs. But how long is too long? These fixed pauses can conceal performance problems if we aren't careful.
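For illustration, the hard-coded version looks something like this in Java. Thread.sleep is the usual culprit; note that it throws InterruptedException, which real code has to declare or catch, and the "addToCart" ID is the same placeholder as above:

```java
// Hard-coded sleep: the script always burns the full 15 seconds,
// whether the page needed 2 seconds or 20 to be ready.
Thread.sleep(15000);
driver.findElement(By.id("addToCart")).click();
```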

The smarter way to handle waits is to base them on the specific element you want to use next. WebDriver calls these explicit waits. I have had the most luck improving the stability of a check by stacking explicit waits. After some sort of navigation, like searching for a book or clicking a specific book link, I will generally wait for the Add to Cart link to exist in the page DOM, and then wait for that element to be visible. It might seem like a hack, or just plain overkill, but when I do this, checks fail because of a software problem, not because the page was not ready.
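Here is roughly what that stacking looks like with WebDriverWait and ExpectedConditions in the Java bindings (again, "addToCart" is a placeholder ID):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// First wait, up to 15 seconds, for the element to exist in the DOM...
WebDriverWait wait = new WebDriverWait(driver, 15);
WebElement addToCart = wait.until(
        ExpectedConditions.presenceOfElementLocated(By.id("addToCart")));

// ...then wait again for it to actually become visible before clicking.
wait.until(ExpectedConditions.visibilityOf(addToCart));
addToCart.click();
```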

Objects:

Objects are the other tricky part of stability. At this point, most everyone knows there is a pretty clear progression toward consistency in the ways of finding an object. Clicking or typing based on pixel location is just terrible; if this is all you can do, then you probably should stop now and find some other approach to testing your software. After that, we have XPath, where you tell your WebDriver exactly where to click based on the element's path through the page DOM. This is a little better: your test won't fail because of a browser resize now, which is nice, but moving a button into a new frame will cause problems. The clear winner is searching by an unchanging element ID. An ID search that looks something like driver.findElement(By.id("myButton")).click() will search high and low across a webpage for that element and click consistently, no matter where it is today or tomorrow.
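The progression is easiest to see side by side. In this sketch, the absolute XPath is made up for illustration, and myButton is the same placeholder ID from above:

```java
// Brittle: an absolute XPath encodes the page structure, so any layout
// change (a new wrapping div, a moved frame) breaks the locator.
driver.findElement(By.xpath("/html/body/div[2]/form/button[1]")).click();

// Sturdy: a stable ID finds the element wherever it lives on the page.
driver.findElement(By.id("myButton")).click();
```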

This sounds like a simple solution, but we aren't all working on a pristine new application where the developers consulted others about testability hooks before building a page. Some pages have objects that are built dynamically and just can't be given an ID ahead of time, others have a mix of elements with and without IDs, and others have no IDs at all. Getting IDs onto your page elements is often as much a social challenge, convincing development management and the developers that the work is worth the time, as a technical one.

Once you have tests that consistently report real problems, you'll want to develop a strategy to run them.

Usage Strategy

WebDriver checks need a little room to breathe, unlike unit tests or tests running against an API. Unless you're running headless, which I'll get to in a minute, as soon as you click the run button a new browser instance opens up and the script takes control of it. That can be fine for running individual scripts, maybe you want to see exactly where a failure is happening or look for other things that the script isn't asserting on, but a strategy is important when there are more.

The simplest approach is a nightly run of the entire suite of checks against the latest available build. The project I am working on now uses a CI system that builds on demand throughout the day; at 8pm there is an official build that also includes some data setup. We have one test suite, taking about two hours to run, that is configured to run against three different environments. Starting at 10pm, a test run is kicked off every three hours. By the time I get up and start working at 7am, there is an email with results from each of the three runs to give me a place to start researching. Depending on how well built out your set of checks is, this approach can give you pretty good regression coverage almost daily.

Alternately, maybe you don't need to run the entire suite of checks each night. Think of regression testing as a dial to crank up and down depending on your needs. Sometimes it might make sense to run the whole thing. Other times, maybe all you need is the set of tests that covers checkout, to get quick information about how a recent change affected that area.
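One way to build that dial, if you happen to run your checks with TestNG, is to tag each test with the area it covers. A hypothetical sketch:

```java
import org.testng.annotations.Test;

public class StoreTests {

    // Run only the checkout slice with: mvn test -Dgroups=checkout
    // (via the Maven Surefire plugin), or run everything tagged
    // "regression" for the full nightly pass.
    @Test(groups = {"checkout", "regression"})
    public void addBookToCart() {
        // ... drive the checkout flow with WebDriver ...
    }

    @Test(groups = {"search", "regression"})
    public void searchReturnsResults() {
        // ... drive search with WebDriver ...
    }
}
```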

Larger test sets take longer, of course, and running through a browser can only go so fast. Parallelization, running across multiple machines at the same time, will speed things up a bit. Going headless will cut that time at least in half. That extra speed comes with a trade-off; every approach does. Headless test runs don't open a browser. The test runner runs WebKit quietly in the background while WebDriver simulates clicks and button presses exactly as the script you wrote instructs. The risk here is that you miss out on some of the goodness that comes from using a real browser.
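In 2016, going headless with WebDriver typically means PhantomJS, a WebKit build driven through the GhostDriver bindings. A minimal sketch, assuming the phantomjs binary is on your PATH and the PhantomJSDriver dependency is on your classpath:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.phantomjs.PhantomJSDriver;

// Same WebDriver API, no browser window: PhantomJS runs WebKit
// quietly in the background.
WebDriver driver = new PhantomJSDriver();
driver.get("https://www.amazon.com");
// ... the rest of the script is unchanged ...
driver.quit();
```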

A headless test might simulate a click on a date picker and select a value with no problems, while a script that performs the click in a real browser might find that clicking the date picker throws a JavaScript error. The extra speed comes at the cost of power and bug-finding ability.

A Word of Warning

Selenium can be seductive in that it gives you a lot of power quickly (at first). The problem comes two, six, or nine months in, when the test results are unstable and the basic architecture is ugly.

Too many people try to develop test strategies with UI automation and WebDriver but fail to see long-term results. As a test consultant, it is a sort of condemnation of our role, because by the time the project has failed, the three- to six-month test contractor is long gone.

In other words, working with Selenium can be a bit like the dark side of the Force. Focusing on consistent, stable test results, with a strategy for running the tests that includes full setup and teardown, can keep your project on the light side of the Force instead.

Web Testing and Selenium: Current State and Future Opportunities

Whether you’re looking to get the most out of your Selenium tests or manage the limitations that come with an open source solution, integrating with an automated testing tool like TestComplete can help. In our newest eBook, Web Testing and Selenium: Current State and Future Opportunities, we discuss web testing trends in 2016 and how Selenium is evolving.

We also take a closer look at how you can scale your Selenium tests with an automated testing tool.

Get your copy.
