How Developers and Testers Can Work Together to Overcome the Challenge of Object Recognition
Test and Monitor | Posted March 15, 2016

Webpages are fickle.

Right when I think I know where everything is, a label changes, or a button moves a couple of pixels, or to somewhere completely different on the page. For the last year or so, I have been working full time on a project building automated checks for a user interface.

There are a lot of challenges in UI automation (timing, selecting the right tests, figuring out how to make tests powerful) but handling page objects is always the first one I have to figure out.

Here are a few object recognition challenges I have come across and how I got past them.

The Elusive Object

I have come across a lot of projects trying to script away some testing problems in the user interface, but not a lot of success stories. There are a couple of patterns I notice in these hard lessons. The first one is having one person on the test team spend a week building a proof of concept.

Usually that person will spend a couple of days getting their head around the tool, and then a couple more building or recording a few checks to demonstrate. When I did this, the demo was a nightmare. These one-off scripts would fail intermittently, and in different ways each time. Management understandably lost interest and we took another route.

The other pattern throws caution out and charges forward. The first automation project I worked on was like this. We built a proof of concept that was small and simple enough that it misled the team into thinking things would be easy. Over the next six months we built some shaky infrastructure and a little over a hundred checks. Unfortunately, we rarely had more than half of those passing during the nightly runs. We ended up with an expensive source of information that couldn't be trusted.

Usually when these failed, the script just plain couldn't find the screen element it was supposed to be touching. We would kick off a script; it would navigate to the next page, start the search before the page had finished loading, and then, boom: red light.

This can happen for a few reasons.

Probably the most basic problem you'll ever have is dealing with pixels. The first tool I ever used to figure out this UI automation thing was for automating basic IT jobs, like getting a nightly server backup running. When this tool performed a click, or typed into a text field, it found those fields by their pixel location on the screen. Any time something on the screen changed, like a button changing size, moving to another place on the same screen, or the browser being resized, the script would break. If this is how your tool finds objects, you should probably get a different tool.
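To make that concrete, here is a rough sketch of what coordinate-based automation looks like. I'm using the Python library pyautogui purely as a stand-in for that old tool, and the coordinates and text are invented:

import pyautogui  # a stand-in for the old coordinate-based tool

# Click whatever happens to sit at these screen coordinates (invented
# for illustration). If the button shrinks, moves, or the window is
# resized, this clicks the wrong thing entirely.
pyautogui.click(742, 318)

# Type into whichever field currently has focus; the script has no
# idea whether it's the right one.
pyautogui.write("run-nightly-backup")

Nothing here knows what a button or a field is. The script only knows positions, which is exactly why it breaks.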

My next journey was with a very basic framework; I think it was one of the first UI libraries built for Ruby. This tool used XPath to find everything on the page. XPath is a little bit like specifying a file path. Consider this trivial example:

/body/table/tr[2]/td[2]

That is the second column of the second row of the first table on the page. This is a little less fragile than using exact pixel locations, but it has its own challenges.

The first drawback was how slow the process of building checks was. Each time I wanted to touch a new button or field, I had to stop and open developer tools to find the path, which was tedious. The bigger drawback was fragility. If we added a second table above the first, the locator would break. If the default sort order changed, the locator would report that the text in row 2, column 2 was now "incorrect." If the rows are orders sorted newest first, and someone logs in as the test user and creates another order, or deletes the first one, the automated check would again report a failure, even though nothing is actually broken.
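To see the difference, here is a hedged sketch in Python with Selenium. The page URL and the stable ID are invented; the point is the contrast between a positional locator and an attribute-based one:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/orders")  # hypothetical page

# Positional XPath, like the example above: breaks if a table is
# added, rows are resorted, or an order is created or deleted.
cell = driver.find_element(By.XPATH, "/html/body/table/tr[2]/td[2]")

# Attribute-based locator: survives layout and sort-order changes as
# long as the id stays stable. "order-1042-status" is an invented id.
cell = driver.find_element(By.ID, "order-1042-status")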

Tools have come a long way since those days.

An automated testing tool like TestComplete provides built-in functionality to improve the stability of automated tests. TestComplete works at the object level, which means that when it captures user actions over an application, it records more than just mouse clicks and simple screen coordinate-based actions.

Searching for an object based on its ID in the DOM is one of the most commonly used methods right now. It isn't perfect, but it is much better than the alternatives. Usually when this fails, it's because of a timing issue. For example, I work on a product that is very data dense. Every page has expandable sections with buttons and rows of data. When a test fails from object problems now, it's because the script is trying to interact with the page before everything has loaded and is ready.
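The standard fix is an explicit wait that polls for the element instead of failing on the first miss. A minimal sketch with Selenium in Python, assuming a hypothetical element ID:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/orders")  # hypothetical page

# Poll for up to 10 seconds until the element exists and is clickable,
# instead of racing the page load and failing on the first lookup.
button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "expand-orders-section"))  # invented id
)
button.click()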

Sometimes the ID is generated from an order number, with a different button for each order. This can be very challenging, especially if you want the tool to create an order and then click the "right" radio button for it. A human can see at a glance what to click; teaching software to do the same is harder.
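One workable approach is to capture the generated value during the test and build the locator from it, or fall back to a partial match. A sketch, again with an invented page, ID scheme, and order number:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/orders")  # hypothetical page

# Suppose each order's radio button gets an id like
# "order-approve-<number>". If the test created the order, it can
# capture the number and build the id from it.
order_number = "1042"  # in a real test this comes from the create step
radio = driver.find_element(By.ID, f"order-approve-{order_number}")
radio.click()

# When the exact number isn't known, a prefix match narrows the field:
candidates = driver.find_elements(By.CSS_SELECTOR, "input[id^='order-approve-']")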

Let's talk about how to write code that makes the job of test tooling easier, sometimes called testability requirements.

Develop For Testability

One of the stranger things I've seen in user interface automation isn't related to technology at all. When these projects start, managers decide that testers should build these systems all on their own. The marketing and product demos tend to support this idea; after all, why should test automation require the help of the people writing the production code? When the project starts moving along and the testers inevitably find a hitch, people get understandably frustrated.

The first thing we noticed was that we couldn't just lay the scripts on top of our product and have them run. Instead, the product itself needed to change to add testability features. Our software wasn't originally designed with any sort of UI automation in mind, so developers weren't very careful about adding IDs to page elements. Each time I wanted to work on a new page, or even a new element on an old page, I had to put in a change request to get an ID added for that field, and hope I had remembered all the fields on the first try. That created a one-day lag at minimum before I could get to work, longer if something more important was going on at the time.

The other change was a larger effort. The initial tests we wrote were full of hard-coded wait commands. Each button click or navigation was followed by a 5 (or more) second wait to give the page time to load before moving on to the next step in the script. Those waits made the scripts incredibly brittle, because we never really knew how long a page would take to load. If 5 seconds wasn't long enough, the script would try to click a button or type into a field that wasn't there yet, and fail. To get around this problem, we had to design a polling solution that would tell us when a page was ready to be used. Not just design it, but also convince the development managers that it was a good idea and in their best interest. The end result was a flag in the DOM that would be set to isReady='true' when the page was fully loaded.
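With a flag like that in place, the script can poll for readiness instead of sleeping blindly. A minimal sketch with Selenium; I'm assuming the flag lives on the body element, since which element carried it isn't specified here:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("https://example.com/orders")  # hypothetical page

# Poll for up to 30 seconds for the readiness flag the developers
# added, rather than hard-coding a 5-second sleep and hoping.
WebDriverWait(driver, 30).until(
    lambda d: d.find_element(By.TAG_NAME, "body").get_attribute("isReady") == "true"
)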

Successful UI automation projects take effort from everyone.

The product I am working on now probably has the most successful UI automation work I have seen. The product is mostly legacy; it isn't getting new JavaScript libraries thrown in once a week, and we don't have to worry about responsive design. It is built mostly on top of SQL stored procedures. Changes can be tricky because of that: sometimes a change in one place will break something in an area we had never considered. It's also very hard to unit test.

Every time a new feature is added or an old bit of functionality is changed, the developers do it knowing that I may need to add it to the suite of checks that runs overnight. Every new element in the UI gets a static ID that hopefully won't change, and we try to minimize the number of fields that are added and removed dynamically. When it helps stability, the developers also work with me to write SQL scripts to seed data into the database.
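A seed step can be as simple as running an insert before the checks start, so every run begins from known data. A sketch using Python's sqlite3 for portability; the real product would target its own database, and the table and columns here are invented:

import sqlite3  # standing in for the product's actual database

conn = sqlite3.connect("orders.db")
conn.execute("CREATE TABLE IF NOT EXISTS orders (order_number TEXT, status TEXT)")

# Seed a known order so the UI check has predictable data to find.
conn.execute(
    "INSERT INTO orders (order_number, status) VALUES (?, ?)",
    ("1042", "open"),
)
conn.commit()
conn.close()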

Getting a strategy in place to handle your user interface objects will save a lot of pain and frustration down the road. The usual objection follows the old story of "we are already short on developers, we don't have time for that."

A better question might be: do you have the time, a couple of months from now, to untangle the web you are weaving today? I bet the answer is "no, we'd rather handle it now."

Announcing TestLeft: A functional testing tool fully embedded in standard IDEs

TestLeft is a powerful yet lean functional testing tool for developers who are testing in Agile teams. It embeds fully into standard development IDEs such as Visual Studio, which minimizes context switching and lets developers create test cases in their favorite IDE.

In addition, TestLeft comes with visual tools that help developers solve another common testing challenge: quickly and easily identifying the correct object properties for the application under test. Using TestLeft, developers can generate test code simply by dragging and dropping identifiers over objects on the screen. Furthermore, access to built-in methods and classes is available for code completion and faster scripting. Tests created in TestLeft can even be brought into SmartBear's TestComplete for consumption by testers.

Try TestLeft for free!
