Improving GUI Automated Testing
Develop | Posted August 11, 2011

Customer Guest Blog

Boris Eligulashvili

Systems Software Architect

ViewRay, Inc.

Testing a complex medical device GUI is a daunting challenge. In this paper I describe the practical experience we gained while improving our automated testing with a keyword-driven approach.

Editorial Note: The concepts and structures described in this article
are geared toward advanced users of TestComplete and test automation in
general. It requires some intermediate knowledge of JScript coding
practices and advanced knowledge of TestComplete's programming objects.


After working for some time with developers of Windows GUIs and with the QA testers who exercised those GUIs according to formal verification test protocols, we have learned a few things about building automated tests. A survey of numerous Internet sources shows that automated testing, while solving some problems, introduces new problems of its own. The following are some problematic issues with automated tests that we considered in our solution:

  • Creating and maintaining test reports that clearly reflect what was done.
  • Creating and maintaining test protocols that clearly reflect what should be done.
  • Maintaining the relationship between test reports and protocols.
  • Using multiple external and internal data sets.
  • Automating ambiguous manual test protocols.
  • Restricting recorded user activities for playbacks.
  • The time-consuming learning curve of automated tools.
  • Differing knowledge and skills required for test design and automated test implementation.
  • Treating test code differently than production code.

We decided to create our solution based on the keyword-driven testing methodology. A few reasons behind our decision were:

  • Keyword test protocols are easy to audit, understand, create, and maintain. We use them as direct input to the testing subsystem.
  • Access to UI elements is separated from the execution code. We use multidimensional collections to create a proprietary name mapping mechanism.
  • The reporting mechanism is coupled with the test protocols. We initially copy the test protocols into the test results so that status can be reported granularly for each step.
  • No programming knowledge is needed to create test protocols. We reuse the framework and toolkits by separating tests, toolkit maps, GUI element maps, and execution code, so different people can work on different steps of automated test preparation.

Our solution is built on top of SmartBear Software's TestComplete, one of the most popular automated software testing tools.


The major parts of the solution are:

  • The tested application
  • Data for use in the automated tests
  • Automated tests that describe:
    • Actions applied to GUI elements
    • Checks of GUI elements' states
    • Flow of tests
  • Toolkits, an expandable set of software components that implement predefined actions and checks/verifications for the GUI elements under test
  • Framework, a software component that implements:
    • A sequencer that analyzes the tests' metadata to determine which tests should be performed in the current run
    • A test processor that enumerates the tests' data and determines which toolkit should be involved in processing the current test step

There is a deliberate intent behind using the framework and toolkits together in the same subsystem:

  • The framework provides the structure and requires writing tests in a certain way
  • The toolkits provide building blocks for executing particular actions and checks. They can be used across your company for testing applications independently of the described subsystem. They represent a set of functionality that can be extended with new actions/checks, or with actions/checks for new GUI elements.

The functioning of this testing subsystem is based on the following processes, which require different skill sets and can be performed by different people:

  • Writing or assembling new tests
  • Writing new toolkits
  • Writing new name maps

The solution can be run manually or automatically. The results of testing are copies of the test files with added information regarding the status of each step (Passed/Failed) and remarks. Because the tests may run in loops, the result information reflects the number of iterations that ended with a particular status. TestComplete provides its own log capabilities, which our subsystem utilizes as well.
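Because a single row can run many times inside a loop, the report keeps per-status counts rather than a single pass/fail flag. The following is a minimal plain-JavaScript sketch of that counting idea (the function name and shapes are illustrative assumptions, not the production code):

```javascript
// Sketch: aggregate the per-iteration statuses of one test row into
// counts, as the result files do for rows executed in a loop.
function tallyIterations(statuses) {
  var tally = { Passed: 0, Failed: 0 };
  for (var i = 0; i < statuses.length; i++) {
    tally[statuses[i]]++;               // count each iteration's outcome
  }
  return tally;
}
```

For a row that ran three times with one failure, `tallyIterations(["Passed", "Failed", "Passed"])` yields counts of two passes and one failure.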

Figure 1 gives an overall understanding of the design of the solution.

Figure 1: High Level Design of the Solution

Figure 2 is an overall view of data processing.

Files with tables are read. Each row is analyzed to determine whether another test should be performed, or whether an Activity or a CheckPoint should be executed. For Activities and CheckPoints, additional data is analyzed to determine the GUI elements and supplemental data involved. Processing is performed according to this analysis, and the status of the processing is added to the result file. If the test is over, it ends; if not, processing moves to the next line.

Figure 2: High Level Design of the Data Processing

Writing the tests

Test sets represent multiple test cases. Test cases are executed sequentially. Tests are built of keywords that drive automated tests and deliver optional data to the toolkits.

Tests can be written using Microsoft Excel or Microsoft Word, so they are presented in the easy-to-read formats of sheets or tables respectively. Either format can be used because the respective files are easily accessed from the TestComplete JScript runtime via Automation objects. One or more tests are included in a separate file on disk. Each row in the table/sheet represents a test step. Cells on each row contain information on what to do and where to apply it, and may also specify input data. Tests are reusable, independent, and can be recursive.

Each formatted test may pass control (via PerformStepOn) to another formatted test by specifying its name and an optional filename separated by a semicolon. In addition to a name, each formatted test has a second property, the "application screen name": an imaginary tag for a window. For each window there can be many tests. For easier reuse, tests are written according to the single responsibility principle.
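The "name plus optional filename separated by a semicolon" convention above can be sketched in a few lines of plain JavaScript (the function name is a hypothetical stand-in, not the framework's actual code):

```javascript
// Sketch: resolve a PerformStepOn target of the form "TestName" or
// "TestName;FileName". When no file is given, the target test is
// assumed to live in the file currently being processed.
function resolveTestReference(reference, currentFile) {
  var parts = reference.split(";");
  return {
    test: parts[0],
    file: parts.length > 1 ? parts[1] : currentFile
  };
}
```

For example, `resolveTestReference("InitTable;Initialization.xls", "OurApplication.xls")` jumps to the InitTable test in another file, while a bare test name stays within the current file.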

The content of a test is created by a domain specialist and presents an intelligent mix of meaningful keywords that represent Actions, CheckPoints, Control Structures, and Initializations:

  • Actions simulate user actions that can be applied to Windows controls on the GUI and inject user input. Examples are below. They come in a few variations depending on the Windows controls they apply to; for example, the code for executing a Click action differs between a CheckBox and a Button. Also, Type has a variation, TypeWithIndex, which means the value typed in during sequential actions will, in a loop, be updated according to predefined rules. Each action specifies the name of the Windows control it applies to. Some actions require another parameter that represents a value: the Select action requires an index or the name of an item in the collection where the action is performed, and the Type action needs the value that will be typed into the control.
  • CheckPoints simulate user verification steps that verify a set of object property values. Examples are below. Each check row specifies the name of the Windows control it applies to. Checks can verify conditions to be true or false. When it is necessary to wait until an object changes its state, a variation of the check can be used with a configurable maximum wait time. Windows Forms and WPF use different property models, so the corresponding functionality is selected accordingly.
  • Control Structures are constructs that simulate user options which affect the flow of the test execution. Examples in this version of the testing subsystem are:
    • Conditional Structure: If-IfEnd to execute a block only if a condition is fulfilled
    • Iteration Structure: Loop-EndLoop to repeat a block a certain number of times
    • Jump Statement: PerformStepOn to make an absolute jump to another test in the same or different file
    • TestStart: provides the screen name where the test is exercised
    • TestEnd: terminates the current test with a specific exit code
  • Initializations: SetUp and CleanUp to satisfy pre-requirements for the test cases and prepare the environment to run the applications under test in a loop.
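To make the Action and CheckPoint categories above concrete, here is a plain-JavaScript sketch of one action, one check, and one WaitFor* variation. The control object is a stand-in for a TestComplete-mapped control, and the function names are illustrative, not the production toolkit:

```javascript
// Sketch: an action simulates the user's input on a control.
function clickButton(control) {
  control.Click();                        // delegate to the control's click
}

// Sketch: a checkpoint compares a property value against an expectation.
function checkVisible(control, expected) {
  return control.Visible === expected;
}

// Sketch: the WaitFor* variation polls the property until it matches
// or the configurable maximum wait time elapses.
function waitForVisible(control, maxMs) {
  var deadline = Date.now() + maxMs;
  do {
    if (control.Visible) return true;
  } while (Date.now() < deadline);
  return false;
}
```

In the real subsystem the polling loop would delay between checks (e.g., with TestComplete's delay facilities) rather than spin.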

Test Scripts Examples

The following are examples of the test tables/sheets.

The test execution starts with reading the configuration file. Figure 3 shows an example of the information needed for executing tests. We list the applications that should be started by the names of their executable files, and the application UIs that will be tested by their marketing names. Marketing names are used internally because we need to identify UI elements that share names across different applications. We also need to specify the graphical subsystem, WPF or WinForms, because they may use different methods for the same action.

Figure 3: A Sheet from a Configuration Excel File

Figure 4 is an example of the Sequence file that will be selected when the tests are executed with the configuration file shown in Figure 3. The Sequence file specifies the name of an Excel file and the name of the sheet within it that will be used to run each test. After a test executes, the report file will have a mark in the corresponding column depending on the result of the test.

Figure 4: The SequenceOfTests Sheet from the Application1-UI-TestExecutionSequence Excel File

Figure 5 is an example of a composite test. The name of the test is PrepareForAdmission. It works with the HomePageCONSOLE mapped page, which represents a whole screen or part of a screen. Different colors are used to make the text easier to read. It has columns for Actions and CheckPoints as well as columns for the execution results of each row. The Comments column can be used to document the tests. The Rem[arks] column is used by the software to present clarifications.

Figure 5: The PrepareForAdmission Sheet from the OurApplication Excel File

Figure 6 is an example of an atomic test. It contains detailed step-by-step instructions formatted in each row.

Figure 6: The InitTable Sheet from the Initialization Excel File


The Framework

Implementation of the framework follows the structure of the test files; the test file structure is treated as a leading requirement in the framework design.

The framework also has some preprocessing features concerned with initializing common constants and setting global variables from metadata in the configuration file. This kind of metadata describes one or more applications under test; more than one, because in some cases we tested a set of collaborating applications.

The concept of the framework is very simple. Each test case is listed in a specially formatted sequence file. The content of this file is enumerated by the core code of the framework and executed in the prescribed order. The test case processor consists of an interpreter that reads the test cases row by row. When the processor encounters a keyword, the function that implements that keyword is called and executed. The results are written into a test results report for the corresponding row.
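The interpreter-plus-keyword-function scheme can be sketched in plain JavaScript as follows. The row shape, handler names, and handler bodies are illustrative assumptions, not the production framework:

```javascript
// Sketch: map each keyword to the function that implements it.
var keywordHandlers = {
  Log:   function (row) { return "Passed"; },   // would write row.value to the log
  Sleep: function (row) { return "Passed"; }    // would delay for row.value ms
};

// Sketch: the row-by-row interpreter. Each row's status goes into the
// results report; unknown keywords are reported rather than skipped.
function processTest(rows) {
  var results = [];
  for (var i = 0; i < rows.length; i++) {
    var handler = keywordHandlers[rows[i].keyword];
    results.push(handler ? handler(rows[i]) : "Failed");
  }
  return results;
}
```

The real processor additionally resolves the ApplyTo control, honors control structures such as Loop and If, and writes the status back into the copied test file.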

The framework also has support functionality, including .NET DLL extensions for starting and stopping applications under test, and wrappers for TestComplete-provided functionality for I/O, the MS Excel driver, error handling, array manipulation, sending emails, and exporting logs.


The Toolkits

At the core of the toolkits is a set of software components that implement application-independent, reusable functionality that deals directly with Windows controls. Some of the components enable actions that can be applied to Windows controls, thus simulating the user's input. Other components read Windows controls' property values and compare them against expected values.

In addition to application-independent code, the toolkits include application-dependent maps of the Windows controls that belong to the applications under test. The controls are mapped programmatically to clear custom names that are saved in collections for later use. Creating the maps dynamically during the test runs ensures their existence. The Action and CheckPoint components use the mapped objects extracted from these collections. TestComplete provides Alias functionality to construct the mapped object hierarchy and access Windows objects via wrapped interfaces. These maps encapsulate information about Windows control layouts and thus minimize the maintenance cost of possible UI modifications.
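The two-level collection described above (page name, then control name, then mapped object) can be sketched like this. In the real toolkits the inner values are TestComplete Alias objects; here plain objects stand in, and the function names are hypothetical:

```javascript
// Sketch: outer collection keyed by page ("application screen name"),
// inner collection keyed by the clear custom control name.
var nameMap = {};

function mapControl(page, controlName, aliasObject) {
  if (!nameMap[page]) nameMap[page] = {};  // create the inner collection lazily
  nameMap[page][controlName] = aliasObject;
}

function lookupControl(page, controlName) {
  var controls = nameMap[page];
  return controls ? controls[controlName] : undefined;
}
```

Because only this map knows how a custom name resolves to a concrete control, a UI layout change costs one map update rather than edits across many tests.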

Bridging Framework and Toolkits

In order to reuse application-independent toolkits in application-dependent automated tests, another mapping step is used. It lets us uniquely identify toolkits, and it is expandable because the function pointers that represent toolkits are saved in a collection. The function pointer corresponding to the keyword on the current row is selected, and the corresponding toolkit is invoked for execution.
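A minimal sketch of that bridging collection follows: keywords are the keys, and toolkit functions are the stored "function pointers". The registration/invocation API is an illustrative assumption:

```javascript
// Sketch: the bridging collection of function pointers, keyed by keyword.
var toolkitMap = {};

// New toolkits extend the collection without touching the dispatch code.
function registerToolkit(keyword, fn) {
  toolkitMap[keyword] = fn;
}

// Select the function pointer for the current row's keyword and invoke it.
function invokeToolkit(keyword, control, value) {
  var fn = toolkitMap[keyword];
  if (!fn) {
    return "Failed: no toolkit registered for " + keyword;
  }
  return fn(control, value);
}
```

This is why the toolkit set is expandable: supporting a new keyword means writing one function and registering it, with no change to the interpreter.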


Configuration

Configuring the solution for automated testing includes specifying the location of scripts and tests. It also provides information on the applications that will be tested. A separate step includes setting up various constants.


Shims

Shims are used to provide application-specific adaptations of the testing functionality. Examples of shims are moving some windows to a second monitor so that other windows remain visible, or copying log files to the output directories.

Test Data

For simple tests, the test data can be embedded in the test scripts (the external files). In this case, it is contained in a cell under the Value column for the corresponding Action and/or CheckPoint in the test script files.

For example, for some tests executed in a loop, a value such as a patient's name typed into a Windows control should be unique in each cycle. To accommodate this, we designed a variation of the actions and checkpoints whose names carry a "WithIndex" suffix, which means that each value for the step is built as the concatenation of the original value from the script table and the number of the current cycle in the loop. For example, if a new patient's name is "Smith", successive cycles of the test will use "Smith1", "Smith2", "Smith3", and so on.
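The "WithIndex" value rule is simple enough to sketch directly (function names are illustrative, not the production code):

```javascript
// Sketch: build the value for one cycle of a *WithIndex step by
// concatenating the script-table value with the cycle number.
function buildIndexedValue(baseValue, cycle) {
  return baseValue + cycle;               // e.g. "Smith" + 3 -> "Smith3"
}

// Sketch: the full sequence of values a looped TypeWithIndex step
// would produce over a given number of cycles.
function indexedValues(baseValue, cycles) {
  var out = [];
  for (var i = 1; i <= cycles; i++) out.push(buildIndexedValue(baseValue, i));
  return out;
}
```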

If you want to separate data and script, you need to place your data in separate Excel files, group it on sheets, and craft name/value pairs. In the script you then specify "File;Sheet;Name".
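Resolving a "File;Sheet;Name" reference then amounts to a three-level lookup. In the following sketch a nested object stands in for the name/value pairs that the real subsystem reads from Excel via TestComplete's DDT Excel driver; the function and sample names are hypothetical:

```javascript
// Sketch: look up a value referenced from a script cell as
// "File;Sheet;Name" in a pre-loaded data store.
function resolveDataReference(reference, dataStore) {
  var parts = reference.split(";");       // [file, sheet, name]
  var file = dataStore[parts[0]];
  var sheet = file && file[parts[1]];
  return sheet ? sheet[parts[2]] : undefined;
}
```

For example, with data loaded from a "Data.xls" file, `resolveDataReference("Data.xls;Patients;NewPatientName", store)` returns the value paired with NewPatientName on the Patients sheet.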

Figure 7 shows an example of two data sheets that can be in one Excel file.

Figure 7: Two sheets with data from a Data Excel File

Other kinds of data besides alphanumeric strings can be presented as graphics data, e.g., bitmaps. This data is used for image comparison; the name of the file with the stored image is specified in the script file.

Implementation Details

The following JScript examples are here to illustrate some implementation decisions that were made during development.

Figure 8 illustrates how we enumerate the files, sheets, and values of data that can be used during execution of the tests. Using Dictionary objects, we created easy access to the data at run time; these Dictionary objects provide the bridging mechanism between the test files and the data. TestComplete's ExcelDriver method is used to create a DDT driver for a sheet of an Excel file, and this driver is used to get name-value pairs of data.

Figure 8: Illustration of data mapping

Figure 9 illustrates how we build collections of objects whose properties provide access to the corresponding name mapping items. The outer collection can be enumerated to find the current page (defined by the TestStart action). The inner collection can be enumerated to find the current Windows control under test (defined by the ApplyTo cell value).

Figure 9: Illustration of window controls objects mapping

Figure 10 illustrates the arrays of function pointers that represent the implemented actions. Each action function, as well as each check function, is coded in a separate file for easy maintenance.

Figure 10: Illustration of action function pointers mapping

Figure 11 is a list of Process Control and Support keywords.

  • TestStart
  • TestEnd
  • LoopStart
  • LoopEnd
  • If
  • IfEnd
  • SetUp
  • CleanUp

Figure 11: Process control and support keywords

Figure 12 shows examples of Action keywords.

  • ClickButton
  • ClickCheckBox
  • ClickMenu
  • ClickTab
  • ClickRadioButton
  • Type
  • TypeWithIndex
  • SelectComboBoxByName
  • SelectComboBoxByIndex
  • Sleep
  • Log

Figure 12: Action keywords

Figure 13 is a list of Checkpoint keywords.

  • Visible
  • InVisible
  • Enabled
  • Disabled
  • Checked
  • UnChecked
  • TextSame
  • VisualSame
  • BackColor
  • Background
  • SelectedTab
  • WaitForVisible
  • WaitForEnabled
  • WaitForChecked

Figure 13: Checkpoint keywords

Figure 14 illustrates the implementation of the ClickButton action.

Figure 14: Click Button Action Example

Figure 15 illustrates the implementation of the Visible check.

Figure 15: Visibility Check Example

Test Designer Responsibilities

Good test design is as important as good code design. Test cases should not be hard to automate. The following is a list of things that should be done:

  • Creating new tests and/or updating existing tests with changes related to new screens, updates to existing screens, new flows, or changes to flows.
  • Setting requirements for new toolkits by specifying what toolkits should be implemented for new custom functions.
  • Requesting customized keywords to achieve business-level clarity, maximize efficiency, and improve maintainability. Good keywords are the key to success; they make the test designer's job easier and clearer. Keywords alone, however, are not magic; they need a framework.

    Keywords should be constructed as single compound words that include both a Verb and a Target (a Windows control name or one of its property names). Use the Pascal casing convention. Avoid words specific to the tested application; such words can be used for test names and page names.
  • Preparing files with data that will be entered while executing "Type"-related keywords, and/or bitmaps that will be used while executing "CompareView"-related keywords. This may require running manual tests and collecting and processing the data.
  • Analyzing testing results. It is very important that tests be designed with the understanding that somebody will have to maintain and manage them; however, that topic is not within the scope of this paper.

Automation Code Designer Responsibilities

Automation code creation is the development of a software product. An application development company should recognize that there should be no difference between application development and automation testing solution development. In addition to defect fixes, this kind of work includes:

  • Adding new items to the collection of aliases for new Windows controls and new applications under test. This includes using TestComplete to obtain the mapped object names. Implementation follows a template, so adding them is simple.
  • Adding new toolkits for new keywords. Each toolkit is linked with the corresponding method that is applied to the Windows control. Implementation follows a template, so adding them is easy.
  • Bridging new toolkits, which includes adding an additional "if" block of code with a call to the toolkit's method.
  • Inserting software shims to resolve compatibility issues, multi-application execution, and multi-monitor use, or to resolve inter-departmental differences.

Software Application Designer Responsibilities

A GUI should always be designed with the understanding that somebody will test it; however, that topic is not within the scope of this paper.

Test Results

Test results are shown in copies of the files with the test cases. The last three columns numerically indicate the execution status of each row. Numeric values are used because different rows can be executed in a loop, in which case some iterations may succeed and others may fail. A row is considered passed if it was executed successfully; in addition, CheckPoints must pass their test conditions. The last column is used for remarks: for "failed" rows, the corresponding cell will have an explanation of what happened. Each test creates its own unique directory for all output files. This directory will also contain an exported version of TestComplete's log file and any other files that may be created during the test, such as saved bitmaps. For failed tests, it is important to collect the application's own log files, which developers can use for analysis and debugging; this is done with per-application shims. It is very important to write test cases so that the format of the test results makes sense to non-technical personnel.

Error Handling

Error recovery is very important for automated tests. JScript runtime errors raised by the test engine are trapped with try-catch-finally blocks. Tests report failures and then gracefully exit; this prevents the cascading effect of reported failures if tests were to continue running. Because unexpected errors may always happen, reporting should be as clear as possible; this is important for application debugging as well as for fixes to the solution itself. TestComplete provides mechanisms to handle a number of error conditions, which were used in this solution; examples are errors produced by TestComplete's test objects, methods, and properties. The test running at that time is interrupted. The solution always tries to close the applications it runs, as well as all instances of Microsoft Word or Excel that it started. We stop execution of a failed test but continue to run according to the test sequence.
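The trap-report-cleanup pattern described above can be sketched in plain JavaScript. The step and cleanup callbacks are illustrative assumptions; in the real solution the cleanup closes the applications under test and any Word/Excel instances:

```javascript
// Sketch: run one test's steps inside try-catch-finally so that a
// failure is reported for this test only, cleanup always runs, and
// the rest of the sequence can continue.
function runTestSafely(testName, steps, cleanup) {
  try {
    for (var i = 0; i < steps.length; i++) steps[i]();
    return { test: testName, status: "Passed" };
  } catch (e) {
    // Report clearly and stop this test, without aborting the sequence.
    return { test: testName, status: "Failed", remark: String(e.message || e) };
  } finally {
    cleanup();                            // always release applications and drivers
  }
}
```

The sequence runner then simply calls this for each entry in the sequence file, collecting one status record per test.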


Conclusion

This paper describes the practical experience we gained during the initial implementation of improvements to our GUI automated testing. Our approach allowed us to obtain simple-to-understand, detailed software testing evidence to support our claims. The solution is easily extendable by adding new toolkits for not-yet-implemented operations on Windows controls; this is done using collections of specialized maps. Test cases are built as tables and can be linked together, so test cases are aggregations of smaller, more reusable modular tests. This approach splits test creation between programmers and testers: programmers build the modules, and testers piece those modules together. TestComplete is a versatile tool that drives the automated testing and is the core part of the solution.


Acknowledgments

The author wishes to thank all colleagues in the Software and Quality Control Groups with whom he had fruitful discussions about software testing. First of all, I would like to thank Ketal Patel, Director of Software Engineering, for the support and guidance I received from him. I would also like to extend my gratitude to the people who discussed various aspects of this work during different phases of its development. Special thanks to Jeffrey Adair for suggesting some keywords, and to Sireesha Gogineni and Jason Bradley for sharing their experience working with TestComplete and suggesting ideas for logging and control flow. I would also like to thank Debie Urycki for proofreading and suggested corrections.

About the author

Boris Eligulashvili is a Systems Software Architect at ViewRay, Inc., an image-guided radiation therapy device manufacturer. Before joining the ViewRay software team, Boris developed software solutions for Rockwell Automation, Thermo Electron, Philips, and Hitachi.

In his career he has implemented many innovative software solutions for various stages of software development projects. Coupled with Boris's commitment to software process and quality improvement and to lifelong learning is his desire to share his experience, knowledge, and ideas with the software development community.

In 1982 the Council of the Odessa Institute of Communication (USSR) awarded Boris the degree which is equivalent to U.S. Doctor of Philosophy in Electrical Engineering.


