Advanced project structures, or how to avoid test failures

Here's a new article from Alexey Toryanik:

This article covers advanced design techniques for TestComplete projects and proposes a smart exception handling model. The power and flexibility of the described approaches make automated testing reliable, crash-protected and easy to run.

 

Introduction

Modern approaches to application development place extremely strict requirements on the speed and quality of product testing. Accordingly, automated testing should be an extremely reliable and human-independent process. Once launched, tests should return valid results for the covered area. Test crashes are completely undesirable: they often cause serious schedule slips with all the related consequences. Risks of this kind push testing team leaders to replace the automated process with manual effort, and as a result the testing team's efficiency decreases.

 

The other important requirement for product testing is flexibility and selectivity in the tested areas. Due to tight deadlines or customer preferences, in many cases only a limited scope of functionality is tested, and the list of areas to be covered by automation may vary from one build to another. So a flexible mechanism for testing only the desired areas is required, and it must be simple enough to be used by testers who don't know how to program.

This article proposes an effective approach that resolves both problems within one methodology and makes automated tests reliable and efficient. A skilled scripter isn't required to run the tests and get results, and the results will be obtained no matter what condition the tested application is in.

1. Classic project scheme

First, I propose the following project structure scheme (see Figure 1).

 

Figure 1. Classic project structure scheme

The <Initial section> represents all the initial actions for running tests: reading external sources, initializing global variables, setting TestComplete environment options, clearing or preparing previous test result files and database tables, launching tested applications, etc.

Data-driven methodology encourages developers to store test data in separate sources such as:

- XML, INI or other text files

- Databases

- Predefined local variables storage in TestComplete

- Local and global script variables

A TestComplete <Initial section> example is represented below (see Listing 1). It reads data from an INI file and fills internal TestComplete local variables.

Listing 1. Initial Section example

const INIFile='TestData.ini';

procedure Main;

begin

 //<Initial section>

 Getini (INIFile);

//… other sections

end;//Main

procedure Getini(INIFile: String); //reads variables from ini to local vars
var w, Section: OleVariant;

begin

Log.CreateNode ('Getting test info from INI file: '+INIFile);

  w := Storages.INI(INIFile);

 

  Section := w.GetSubSection('Source settings');//all path parameters reading

 

  Options.LocalVars.lvApplicationLocation:=

   Section.GetOption('PRLOCATION', 'C:\Program Files\XXX\Application');

  Options.LocalVars.lvSERVER:=

   Section.GetOption('SERVER', 'XXXServer');

  //… other sections

Log.CloseNode;//Log.CreateNode ('Getting test info from INI file: '+INIFile); 

end;//Getini

The <Main section> represents essentially all test actions (see Listing 2).

The <Final section> represents all final operations before the tests complete. All "garbage" collected during testing should be removed here. If the tested application (TA) was reconfigured, it should be rolled back so that the tests can be run later with the same initial conditions. Also, test results not analyzed in <Main section> may exist, so their analysis can be completed here. If TAs were not closed in <Main section> (accidentally or intentionally), they are closed here.

This is the classical scheme of the project structure. In simple, ideal situations it will do, but when it comes to complicated testing, this scheme is not efficient.

2. Advanced project schemes

Modern software automation standards demand tests that are more universal and flexible.  Let's assume that our TestComplete project is a kind of black box. This means that when we give it some input data, we get output data according to an unknown rule inside the "black box" (see Figure 2):

 

Figure 2. Data-Driven scripts model

Here:

Iin - Input data/information

Iout - Output data/information

Rta - Tested Application (TA) interaction with the provided Iin

f(I) - Some kind of rule performed inside the "black box" (algorithm of test scripts).

Thus "universal and flexible" means that Iout and Iin are not just constant values. They can vary in a wide range based on the test data sent to the TA.

Now let's get back to our TestComplete project. Tests should not be developed for specific data; they should be insensitive to it. No matter what data is given as input (Iin), the tests will apply this data to the TA and verify the TA's response (Iout). This approach is called data-driven testing.

Let's interpret input data as 3 types:

- Test flow parameters (e.g. TestComplete parameters such as Options.Log.Enabled)

- TA parameters (e.g. the path to the tested project, login information, etc.)

- A list of TA areas to be covered

There should be a way to provide this information to TestComplete scripts that is quick, easy, universal and intuitive. It should fit within the classic scheme but extend it. Processing this information in scripts should also be faultless even if the TA's reaction to the provided input data is negative (see Figure 2); this can be achieved with a solid exception handling mechanism. All these aims are reached by dividing the project into basic units - the set of Test Sections.

3. Test Sections management

Let's divide the TA into a number of independent features TAi (see Figure 3).

 

Figure 3. Test Sections division

Here N is the number of independent functional areas the project is separated into.

Our task is to be able to run the TAi test sections in any order and in any subset.

To that end, we'll implement <Main section> as follows (see Listing 2):

Listing 2. Main Section model

procedure Main;

begin

 //<Initial section>

 …

 //<Main section>

 repeat

 begin

  CurrentTS:=SetNextTS (SourceA);

  case CurrentTS of

  1: TS1Procedure; 

  2: TS2Procedure;

  //3..N-1: test sections

  N: TSNProcedure;

  end;//case CurrentTS of

 end;//repeat

 until FinishTSCycle (CurrentTS);

 

 //<Final section>

 …

end;//Main

There is some source (let's say, SourceA) from which test data can be obtained (it includes the list of Test Sections and information about their order, according to the QA engineer's needs).

In the Listing 2 example, the SetNextTS function returns the unique index of the section to be run next (based on the abstract SourceA information). The FinishTSCycle(CurrentTS) function returns TRUE if CurrentTS is the last section index to be performed during the current run session.

The TSiProcedure routines perform all actions required to test the corresponding Test Section.

So, with the help of such a structure, we can specify only the sections required for the current test run, and their order is entirely up to us.
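Below is a minimal sketch of what these two routines could look like. It assumes that SourceA is the path to an INI file containing a [TestSectionsList] block in the format shown later in Listing 5, and that a hypothetical counter Options.LocalVars.lvTSCursor is initialized to 0 in <Initial section>; the StrToInt conversion is needed because INI values are read as strings.

function SetNextTS(SourceA: String): Integer;
var w, Section: OleVariant;
begin
  w := Storages.INI(SourceA);
  Section := w.GetSubSection('TestSectionsList');
  Options.LocalVars.lvTSCursor := Options.LocalVars.lvTSCursor + 1;
  //'-1' is returned when no entry exists for the current position
  Result := StrToInt(Section.GetOption('TestSectionsIndex' +
    IntToStr(Options.LocalVars.lvTSCursor), '-1'));
end;//SetNextTS

function FinishTSCycle(CurrentTS: Integer): Boolean;
var w, Section: OleVariant;
begin
  w := Storages.INI(INIFile);//assumes the same constant as in Listing 1
  Section := w.GetSubSection('TestSectionsList');
  //the cycle finishes when there is no entry after the current one
  Result := Section.GetOption('TestSectionsIndex' +
    IntToStr(Options.LocalVars.lvTSCursor + 1), '-1') = '-1';
end;//FinishTSCycle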

Most importantly, this approach gives us a flexible way to handle unexpected exceptions. This will be described a bit later, in the Exceptions handling part.

4. Recommendations for Test Sections division

Now some natural questions come up. What is the principle for dividing Test Sections? How can the optimal size of an elementary Test Section be defined?

Unfortunately, all applications are different, so there is no single answer. But some recommendations can be given.

The main and most important issue is the independence of the Test Sections. This helps to avoid inconsistencies when running sections in a random order. So it's acceptable that some application actions are duplicated in multiple Test Sections.

The other problem is the size of a section and the total number of sections. First of all, the division into sections should follow the business rules of the application's features. On the one hand, the smaller the sections, the less functionality will be skipped if a section fails, and the more flexibility you have when defining the list of sections to run. On the other hand, a huge number of sections makes composing the list of Test Sections to run inconvenient and difficult. So the test developer should find a compromise on Test Section size that is most convenient for the person who will run the tests.

Also, there are two approaches to building Test Sections. Both have their advantages and drawbacks and are used in different situations.

Approach 1. TAs are launched from <Initial section>; as a rule, all Test Sections then work with an application that was launched once. In this case all opened TAs should be closed in <Final section>. This approach is often applied to "heavy" TAs (applications that take a long time to launch).

- Advantages:

  1) Rapid test runs (the TA is launched only once).

- Drawbacks:

  1) Lower reliability;

  2) A more complicated exception handling mechanism is required;

  3) Accumulated side effects, i.e. a defect found in the tested application may be caused by the interaction of several Test Sections.

Approach 2. TAs are launched in every Test Section. After the current Test Section is completed, all TAs are closed as a rule.

- Advantages:

  1) High test reliability;

  2) Less chance of missing defects and more objective testing (Test Sections do not influence each other).

- Drawbacks:

  1) Slower test execution.

5. Results representation

It would be very helpful to shed some light on the results representation model. TestComplete provides powerful means to present results to the user in a convenient format. This is achieved by using a node structure, which lets the QA engineer get an overview of the test results at a glance:

 

Figure 4. Results representation model

The user sees which Test Sections passed verification and which did not. Without getting into deep detail, the user can form an overall opinion of the TA's condition.

Thus keeping test results in strict order plays a great role in the speed and quality of the testing process. From Figure 4 it is completely clear that SCENARIO 2 and SCENARIO 3 have passed verification and require no further attention. The problem found in SCENARIO 1 is unclear, and this problematic section requires further investigation.

No matter how deeply Test Sections are nested, we don't have to look through them unless we are interested in a more detailed investigation. So a strict log structure is an obligatory requirement for a well-structured TestComplete project.

 

A good practice is to create a separate node for each function; this makes test debugging and test results analysis easy and transparent.
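For example, a minimal sketch of this convention (LoginToApplication and its parameters are hypothetical names used only for illustration):

procedure LoginToApplication(UserName, Password: String);
begin
  Log.CreateNode('Login as: ' + UserName);
  //... actual interaction with the tested application goes here
  Log.Message('Login form filled and submitted');
  Log.CloseNode;//Log.CreateNode('Login as: ...')
end;//LoginToApplication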

For reasons explained a bit later, we need to know the number of open node levels. There is a very simple way to do this - using TestComplete events. Simply add the ProjectEvents component, then specify handlers for the following events via the Object Inspector (see Listing 3).

Listing 3. Events procedures example

procedure ProjectEvents1_OnLogCreateNode(Sender: OleVariant; LogParams: OleVariant);

begin

 if (Options.LocalVars.lvErrorBlock) then

 begin

  LogParams.Locked:=True;

  Exit;

 end else LogParams.Locked:=False;

 Options.LocalVars.lvLogNodesOpen:=Options.LocalVars.lvLogNodesOpen+1;

end;

procedure ProjectEvents1_OnLogCloseNode(Sender: OleVariant; LogParams: OleVariant);

begin

 if (Options.LocalVars.lvErrorBlock) then

 begin

  LogParams.Locked:=True;

  Exit;

 end else LogParams.Locked:=False;

 Options.LocalVars.lvLogNodesOpen:=Options.LocalVars.lvLogNodesOpen-1;

end;

Note that the Options.LocalVars.lvLogNodesOpen variable should be initialized to 0 somewhere in the code before logging starts. The aim of the if block will be explained later; it is used in the exception handling mechanism.
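For example, a minimal initialization sketch that could be called from <Initial section> before any log node is opened (the procedure name is hypothetical; the lvErrorBlock flag used by the handlers above also needs a starting value):

procedure InitLogCounters;
begin
  Options.LocalVars.lvLogNodesOpen := 0;   //no log nodes are open yet
  Options.LocalVars.lvErrorBlock := False; //the log is not locked
end;//InitLogCounters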

6. Exceptions handling

So, it's high time to discuss the main problem of automated testing - handling unexpected exceptions.

The success of automated testing depends entirely on how exceptions are handled. By the nature of their appearance, all exceptions can be divided into two groups:

1) Predictable exceptions, which can be handled conditionally in scripts (we can predict exactly where an error may appear and handle it accordingly);

2) Unexpected exceptions, which cannot be handled directly in scripts (the exception may occur anywhere in the script and may be caused by a TA bug or an incorrect test flow).

Professionally developed tests cover most (but not all) predictable exceptions. Unexpected exceptions, however, can still introduce instability.

 

So, what is the best way to deal with unexpected exceptions? The first method is a standard TestComplete event - OnUnexpectedWindow. Recommendations for working with unexpected windows can be found in the technical paper by Robert Leahey, "Handling Unexpected Windows in TestComplete 2.0" (http://www.automatedqa.com/downloads/tc02unexpectedwindowspdf.zip). But it works only if the window is recognized as "unexpected" by TestComplete. Here is a fragment from Robert Leahey's paper:

"The important thing to remember is that not all windows are truly unexpected. If your application contains within itself any calls which generate message dialogs or other windows, your test project should contain code which can handle any of these windows. If you know that they might appear, then you can handle them before they do"

But in fact, in many cases it is impossible to predict the exceptional windows created in the application code (especially in black-box testing), and tests may fail when one of them appears.

As a first idea, a try..except block could be used in the Main unit of the scripts. But it only works when the exception happens directly in its body, not in a procedure called from its body.

So, we suggest a non-standard method. As practice has proved, the method is faultless and reliable; thanks to TestComplete's universality, it works regardless of the root of the problem.

 

Now let's get deeper into the exception handling algorithm. Let's call the reason that caused the exception the "Exception Trigger". There are two variants of it:

1) An unexpected window captured by TestComplete;

2) Some other sign of an incorrect test flow.

In general, the scheme of the exception handling mechanism looks like this (see Figure 5):

 

Figure 5. Exception modeling

In the OnUnexpectedWindow event, the captured unexpected window plays the role of the Exception Trigger, and the body of this procedure plays the role of the Exception Handling Routines.

But let's find a more reliable way (i.e. trigger) to detect an exceptional situation.

When a test flow fails, error messages are posted to the log by TestComplete one after another, because the normal test flow cannot continue. So, let's consider a certain number of consecutive error messages in the test log (not interrupted by messages of other types) to be the sign of an unexpected test failure. When such a situation is detected, the error handling mechanism should be activated immediately.

 

It would be appropriate to show one variant of implementing this Trigger mechanism in TestComplete. Let's add two local variables:

Options.LocalVars.lvLogErrorsNum

Options.LocalVars.lvLogMAXErrorsNum

The first variable, Options.LocalVars.lvLogErrorsNum, should be initialized to 0 at the beginning of the tests (in a place in the script where an error is nearly impossible, e.g. in the initial part of the tests or the initial part of a Test Section). Options.LocalVars.lvLogMAXErrorsNum determines the minimum number of errors that activates the Trigger; it should be set as an option by the user. Its recommended value is 3-5.
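One possible initialization sketch for <Initial section> is shown below; the 'Test flow settings' section and the MAXERRORS option name are hypothetical and only illustrate reading the value from the same INI file as in Listing 1:

procedure InitExceptionTrigger;
var w, Section: OleVariant;
begin
  Options.LocalVars.lvLogErrorsNum := 0;     //no consecutive errors yet
  Options.LocalVars.lvErrorHandled := False; //the Trigger is armed
  w := Storages.INI(INIFile);
  Section := w.GetSubSection('Test flow settings');//hypothetical section name
  Options.LocalVars.lvLogMAXErrorsNum :=
    StrToInt(Section.GetOption('MAXERRORS', '3'));//recommended value: 3-5
end;//InitExceptionTrigger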

Let's create event handlers for all types of log events. To avoid Trigger activation due to errors that merely accumulate over the course of testing, the Options.LocalVars.lvLogErrorsNum variable should be reset to 0 every time a message of any type other than Error is posted to the log. In TestComplete (for the Approach 2 Test Sections model described in Recommendations for Test Sections division) this would look like the following (see Listing 4):

Listing 4. Exceptions handling in events

procedure ProjectEvents1_OnLogError(Sender: OleVariant; LogParams: OleVariant);

var p, w: OleVariant;

    i: integer;

begin

 {No error messages should be passed to log while error handling}

 if (Options.LocalVars.lvErrorBlock) then

 begin

  LogParams.Locked:=True;

  Exit;

 end else LogParams.Locked:=False;

 

 {In order to make TA problem more clear when analyzing

  test log screenshoting problematical screen}

 if Options.LocalVars.lvLogErrorsNum = 0 then

  Log.Picture (Sys.Screen, 'Error screenshot');

 

 {Last attempt to resolve the problem to complete

  current Test Section}

 if Options.LocalVars.lvLogErrorsNum = 2  then

 begin

  Log.Warning ('Generating Esc - maybe it can help to continue');

  Sys.Keys ('[Esc]');

 end;

 Options.LocalVars.lvLogErrorsNum:=

  Options.LocalVars.lvLogErrorsNum+1; //errors counter

 

 if (Options.LocalVars.lvLogErrorsNum > Options.LocalVars.lvLogMAXErrorsNum)

  and not (Options.LocalVars.lvErrorHandled) then

 begin

  {the flag that error in section was handled->

   ->there is no sense to continue this section testing}

  Options.LocalVars.lvErrorHandled:=

   True; //helps to avoid recursion of this procedure (turns off the Trigger)

 

  {Closing required for your Test Complete project

   log structure number of nodes}

  for i:=1 to Options.LocalVars.lvLogNodesOpen-3 do Log.CloseNode;

 

  {Exception handling routines - SOME ERROR HANDLING ACTIONS}

  {1 - TA closing}

  ...

  {2 - Actions required to skip not required commands

   in this Test Section in order to start next one}

  Options.LocalVars.lvLogErrorsNum:=0;

  Options.Run.Timeout:=0;//0-timeout in order to skip commands

  Options.LocalVars.lvErrorBlock:=True;//all errors now are…

   //…the product of error handling, they have no sense…

   //…and should be ignored to keep log clean

  Exit;

 end;

end;//procedure ProjectEvents1_OnLogError

procedure ProjectEvents1_OnLogMessage(Sender: OleVariant; LogParams: OleVariant);

begin

 if (Options.LocalVars.lvErrorBlock) then

 begin

  LogParams.Locked:=True;

  Exit;

 end else LogParams.Locked:=False;

 //if normal msg is put into log=>not error situation

 Options.LocalVars.lvLogErrorsNum:=0;

end;

This code listing requires some clarification:

1) The if (Options.LocalVars.lvErrorBlock) then block helps to avoid undesirable messages in the log while the failed Test Section's commands are being skipped. This block is executed in the handler of every OnLog* event;

2) A screenshot is taken on the first error in a sequence to illustrate the problem; it will help clarify its root cause;

3) Key presses (such as Esc) are generated to give one more chance to complete the Test Section without error handling actions and without skipping the Test Section;

4) The if (Options.LocalVars.lvLogErrorsNum > Options.LocalVars.lvLogMAXErrorsNum) and not (Options.LocalVars.lvErrorHandled) then block is executed when it is completely clear that the current Test Section has failed and extra actions are required to avoid the failure of the entire project. The Options.LocalVars.lvErrorHandled flag is used to prevent recursion.

The method is based on the following. When it is obvious that the current Test Section has failed and there is no sense in continuing to execute its script commands, the exception handling routines are started. The tested application is immediately closed. After that, the task is to skip all the actions left in the current section (the ones not yet executed when the exception occurred).

Here a rather artful trick helps us. TestComplete has the Options.Run.Timeout option, which defines the auto-wait timeout. At the beginning of each section it should be set to its default of 10 seconds. But when Options.Run.Timeout = 0, all the remaining actions of the current (failed) Test Section are executed idly and almost immediately, without waiting for TA feedback. The errors and other informational messages have no value now; their flow is suppressed by the LogParams.Locked := True log feature (the Options.LocalVars.lvErrorBlock variable controls log locking). After the failed Test Section ends, the Options.Run.Timeout value is set back to its default and the log is unlocked. The new Test Section starts with clean initial conditions.
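As an illustration, here is a minimal reset sketch that could be executed at the very beginning of every TSiProcedure (or right before the case statement in Listing 2). The procedure name is hypothetical, and 10000 ms stands for the default 10-second auto-wait timeout:

procedure ResetSectionState;
begin
  Options.Run.Timeout := 10000;              //restore the default auto-wait timeout
  Options.LocalVars.lvErrorBlock := False;   //unlock the log
  Options.LocalVars.lvErrorHandled := False; //re-arm the Trigger
  Options.LocalVars.lvLogErrorsNum := 0;     //reset the consecutive errors counter
end;//ResetSectionState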

With the help of this algorithm we get:

1) A strictly ordered log with all the necessary information; all side events are excluded;

2) Test Section independence; when one of them fails, it does not cause the entire test run to fail.

7. Data exchange

The last problem is how test data and test information is provided to TestComplete scripts. Sure, it can be written directly into an .ini or other config file, or even into the script code. But this makes providing test data more complicated and impossible for a person unfamiliar with programming and the data structure.

That's why it would be great to have a tool which could:

1) Set all required test options and parameters;

2) Help the user to specify the actual list and order of Test Sections to be executed.

The most difficult part is setting the Test Sections list. A good practice is to specify the list in the source file in the following format:

TestSectionsIndex<i>=<j>

Here <i> are numbers in strict order (1..N, where N is the total number of sections to be executed) and <j> are the corresponding Test Section indexes (the unique IDs of the Test Sections in the script). An example is shown below (see Listing 5).

Listing 5. INI file fragment defining Test Sections order

[TestSectionsList]

TestSectionsIndex1=102

TestSectionsIndex2=101

TestSectionsIndex3=108

TestSectionsIndex4=104

In this example, the Test Sections are executed in the following order: 102 > 101 > 108 > 104, though they may be located anywhere in the script code (see the structure in Listing 2).

This INI file block can be constructed with the help of a GUI like this:

 

Figure 6. Test Sections list management with external application

A good idea is to create a separate application for test data specification purposes (let's call it TDManager). This application would be adapted to your specific TA needs and give maximum convenience to the end user of the automated tests. It would allow the user to:

a) Specify test data and test options for current testing;

b) Design the ordered list of Test Sections to be tested.

The scheme of interaction between this application and TestComplete may look like this (see Figure 7):

 

Figure 7. Data exchange between TestComplete and the TDManager GUI by means of an interim data storage

To shield the actual test user from TestComplete's complexities, TDManager should be built so that it fully specifies the test information, launches the TestComplete project and runs the tests. After the tests are completed, the user simply examines the ready results in the log.

This can be accomplished by launching TestComplete with parameters from TDManager once the test data is filled in and the shared data storage is ready. The command line would look like this:

"C:Program FilesAutomated QATestCompleteTestComplete.exe" "C:ProjectsMy

My.mds" /ns /Run

 Parameters "/ns /Run" will provide launching Test Complete without splashing screen and run the specified project accordingly.

One more note: the shared data storage should be accessible to both TestComplete and TDManager, and its location should be predefined in both of them. As soon as the data storage is prepared by TDManager, TestComplete starts using it.
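For illustration only, here is a hedged sketch of how TDManager might start the run, assuming TDManager is a Delphi application; the paths are placeholders for your own setup:

program LaunchTC;
{$APPTYPE CONSOLE}
uses Windows, ShellAPI;
begin
  //launch TestComplete without the splash screen and run the specified project
  ShellExecute(0, 'open',
    'C:\Program Files\Automated QA\TestComplete\TestComplete.exe',
    '"C:\Projects\My\My.mds" /ns /Run',
    nil, SW_SHOWNORMAL);
end.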

Conclusions

- Modern standards in the software development industry demand strict stability and predictability from all kinds of testing, including automated testing.

- The classic automated test project scheme is not enough for real projects; test failures are very probable with it.

- A smart exception handling mechanism and a well-thought-out log structure are the guarantee of automated test efficiency and success. Both are built into the advanced project structure by dividing functionality into separate, independent Test Sections.

- Test Sections should be absolutely independent; their size should be chosen according to the particulars of the tested project.

- Node-based results representation is part of the exception handling mechanism; it makes it possible to analyze the results quickly and effectively.

- Not all exceptions can be handled in scripts or with the TestComplete OnUnexpectedWindow functionality. This is a normal situation, but it has dangerous consequences. The proposed exception handling mechanism handles any type of exception occurring in the scripts or in the tested application, which makes it possible to complete the entire test package even if the test code is inconsistent and the tested application is unstable.

- To make specifying test data, parameters and covered areas quick and easy, it may be reasonable to create a separate application for data specification purposes.

Author: Alexey V. Toryanik
E-mail: altory@list.ru
Company: Eclipse SP

