Integrating TestComplete and HP Quality Center

Customer Guest Blog

Glen Accardo

Test Automation Lead, Houston Technology Center

Schlumberger



When Alex initially approached me to write this article, he had several scenarios he wanted to cover along with implementation details:

  • starting TestComplete™ from HP® Quality Center
  • arranging regression runs from HP Quality Center
  • uploading data from TestComplete to HP Quality Center
  • viewing HP Quality Center data from TestComplete.

The reason for integrating the two products is that many organizations use HP Quality Center for tracking test results.  Posting results directly from TestComplete saves a manual step and increases the visibility of the automated test results. But I don’t do the first two items—not because it isn’t possible, but because in my experience, it is better to run automated tests without being so tightly coupled to HP Quality Center.  Let me explain.

Automated tests have numerous dependencies. One of the main dependencies in most test suites is the order of test case execution—installation has to run before configuration, login has to happen before data access, and so on. To launch TestComplete from HP Quality Center requires that the list of tests to run and the order in which to run them be stored outside of TestComplete, separate from the source control used for the TestComplete project suite.  By keeping the list of test cases as part of my TestComplete script source and using HP Quality Center only as a results repository, I have one less complication and consequently better reliability.

Another issue that each of us must tackle is deciding when to run automated tests.  While HP Quality Center provides a scheduler, I find the continuous integration approach of running tests as part of a build to be much more effective.  Keeping the list of test cases as part of my TestComplete script source allows much more flexibility in triggering automated test runs.

Because of those two factors (and a few others specific to our organization), I use HP Quality Center only as a repository for our TestComplete results.  While my approach may be a bit different from the normal HP Quality Center workflow, it has proved to be an excellent way to use the superior flexibility of TestComplete and still put the automated test results into the same repository as the manual test results.

The Test Suite Approach

The HP Quality Center and TestComplete integration extension included as part of this article contains the basic elements for integrating TestComplete with HP Quality Center. It supports the following workflow:

  1. Connect to HP Quality Center.
  2. Create a test set.
  3. Add test cases to the test set.
  4. Post results for test cases.
  5. Zip TestComplete logs and add them as an attachment to the test set.
  6. Disconnect from HP Quality Center.

At the Schlumberger Houston Technology Center, we developed a test suite script extension that executes all tests, and also posts results to Microsoft TFS, HP Quality Center, and Hudson.  While most of the details of this test suite are specific to our organization, there is one design decision that everyone will face: how and when to post results. The following bits of pseudo-code show a subtle difference in how results get posted to HP Quality Center.

Post Results Continuously

    Connect to HP Quality Center
    for each test case
        Execute test case
        Post result
    Disconnect from HP Quality Center

Post Results as a Batch

    for each test case
        Execute test case
    Connect to HP Quality Center
    for each test case
        Post result
    Disconnect from HP Quality Center

Table 1 - Differences between continuous and batch processing of results.

I have used both approaches at various times, and the decision usually comes down to your HP Quality Center configuration.  If the server has a short inactivity timeout (the WAIT_BEFORE_DISCONNECT parameter) and you have long-running test cases, posting the results as a batch is more reliable.  If the timeout is not an issue for you, posting each result as soon as its test finishes lets you use HP Quality Center to monitor automation progress.
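In TestComplete script terms, the two strategies look roughly like this. This is a minimal sketch, assuming a testCases array whose items expose hypothetical Run () and PostResult () helpers; the QualityCenter.Connect and Disconnect calls are the extension methods described below.

function RunAndPostContinuously (testCases, host, domain, project, user, password)
{
  // One connection stays open for the whole run; each result is visible
  // in HP Quality Center as soon as its test finishes.
  QualityCenter.Connect (host, domain, project, user, password);
  for (var i = 0; i < testCases.length; i++)
  {
    testCases[i].Run ();
    testCases[i].PostResult ();
  }
  QualityCenter.Disconnect ();
}

function RunAndPostAsBatch (testCases, host, domain, project, user, password)
{
  // All tests run first; the connection is opened only briefly at the end,
  // which avoids tripping a short WAIT_BEFORE_DISCONNECT timeout.
  for (var i = 0; i < testCases.length; i++)
    testCases[i].Run ();

  QualityCenter.Connect (host, domain, project, user, password);
  for (var j = 0; j < testCases.length; j++)
    testCases[j].PostResult ();
  QualityCenter.Disconnect ();
}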

One final note about the HP Quality Center extension: it is meant to be a starting point.  It includes no validation and minimal error handling. To incorporate it in your organization, you will need to validate parameters, handle error conditions more gracefully, and follow secure coding practices.

Connecting and Disconnecting

The code for connecting to and disconnecting from HP Quality Center is rather simple.  It creates an OLE object for TDConnection, initializes the connection, and connects to a project.  The main dependency here is that the HP Quality Center Client must be installed.  Calling the following function accomplishes the same thing as using the HP Quality Center log-in page.

function Connect (host, domain, project, user, password)
{
  eval (Project.variables.functionEnter); // traceability hook (see my earlier post)
  returnCode_ = null;
  try
  {
    // Create the OTA COM object and log in, exactly as the QC login page does.
    tdc_ = Sys.OleObject ('TDApiOle80.TDConnection');
    tdc_.InitConnectionEx (host);
    tdc_.ConnectProjectEx (domain, project, user, password);
  }
  catch (exception)
  {
    returnCode_ = exception.description;
  }
}
Figure 2a - Connect () method source.
 
Figure 2b - HP Quality Center login dialog. Domain and Project are hidden for privacy reasons.
 

There is one other, not quite so obvious, issue.  This TDConnection object is rather tenacious.  Even though you think you have disconnected, there are still digital remnants that can cause problems.  So, if you have code that works properly but it suddenly starts throwing odd exceptions or returning incorrect results, restart TestComplete.  That often solves the problem.  This isn’t a TestComplete issue.  The API has the same issue when accessed via VBA in Microsoft Excel.

The above code works with HP Quality Center Version 10, and the extension includes code that works for Version 9. Both versions still use the same OLE object. The remaining code provides traceability (see my earlier blog post on this subject) and exception handling.
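The extension's matching Disconnect method follows the same pattern. Here is a minimal teardown sketch, assuming the same tdc_ and returnCode_ members used in Connect; DisconnectProject, Logout, and ReleaseConnection are standard OTA calls, but verify the exact names against your QC version's OTA reference.

function Disconnect ()
{
  eval (Project.variables.functionEnter); // same traceability hook as Connect
  returnCode_ = null;
  try
  {
    tdc_.DisconnectProject ();   // leave the project
    tdc_.Logout ();              // end the user session
    tdc_.ReleaseConnection ();   // release the server connection
  }
  catch (exception)
  {
    returnCode_ = exception.description;
  }
  tdc_ = null; // drop the COM reference; see the note above about remnants
}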

Creating a Test Set

Earlier versions of Quality Center (when the product was called TestDirector and was owned by Mercury) did not have folders for test results, and this legacy still causes issues when using the API.  Essentially, any failure to put a test set into a folder results in an unattached test set, potentially leaving it in a state where it can't be deleted immediately.

Figure 3 - The Test Lab tree, showing a Regression Tests test set in the Unattached folder.
 

As soon as a test set gets created with the AddItem () method, it exists, but it sits in the Unattached folder and is locked.  It will remain in this state until you set the TestSetFolder property and call the Post () method. For any failure that occurs before calling Post (), you will have to wait for the database lock timeout to expire or ask your HP Quality Center administrator to remove the test set from the database. In short, if you have a failure while creating a test set, look in the Unattached folder.

// Look up the destination folder, create the test set, then attach and post.
var treeManager = tdc_.TestSetTreeManager;
var testSetFactory = tdc_.TestSetFactory;

var testSetFolder = treeManager.NodeByPath (testSetFolderName);
var testSet = testSetFactory.AddItem (testSetName); // exists, but Unattached until posted
testSet.TestSetFolder = testSetFolder;              // attach to the folder first
testSet.Field ('CY_COMMENT') = testSetComment;
testSet.Post ();                                    // commit; the set leaves Unattached
Figure 4 - CreateTestSet () method source.
 

One of the most useful enhancements to the extension is adding the ability to create folders instead of just using existing ones.  This allows your test suite to create folders based on versions, iterations, builds, dates, etc. If you make such an enhancement using the API's AddNode () method, it is imperative that you create the folder before creating the test set; this gives you an opportunity to fail gracefully if there is an issue with the folder. Another good idea is to create test sets with unique names (the current date and time is a good choice) to ensure that even if you do end up with an unattached test set, you can still continue to work with HP Quality Center, as shown in the sketch below.
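A sketch of that enhancement: create a dated folder first, then a uniquely named test set inside it. NodeByPath and AddNode are the OTA calls named above; the folder path, name format, and aqConvert formatting are just placeholder choices for illustration.

// Create a dated run folder before touching any test set, so a folder
// failure happens while nothing is sitting in Unattached.
var treeManager = tdc_.TestSetTreeManager;
var parentFolder = treeManager.NodeByPath ('Root\\Automation'); // placeholder path
var today = aqConvert.DateTimeToFormatStr (aqDateTime.Now (), '%Y-%m-%d');
var runFolder = parentFolder.AddNode (today);

// Time-stamp the test set name so every run gets a unique set.
var testSetFactory = tdc_.TestSetFactory;
var timeStamp = aqConvert.DateTimeToFormatStr (aqDateTime.Now (), '%H%M%S');
var testSet = testSetFactory.AddItem ('TestComplete Tests ' + timeStamp);
testSet.TestSetFolder = runFolder;
testSet.Post ();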

Add Test Cases to the Test Set

Adding a test case to a test set is a two-step operation.  First, you add a test case to the test set, and then you add a run to that test case instance.  The run object holds the pass/fail status, execution time, and other information.  The biggest choice you must make here is how to handle duplicate test cases.

For example, you may run the same test case multiple times with different parameters.  This could be considered multiple runs of the same test case or multiple test cases each with one run.  The figures show how this would appear in HP Quality Center. 

Figure 5 - Adding the same test case to a test set multiple times.
 
Figure 6 - Adding multiple runs to the same test case instance.
 

Which method you choose is partly a philosophical debate and partly a matter of how you report on the information. The main difference is that a single test case has a single pass/fail result—even if it has multiple runs.  For that reason, I choose to add the test case to the test set multiple times.

A few more notes about building a test set: I prefer to build the test set completely before posting results.  The main reason is that something could go wrong between creating the test set and posting the results.  By building the test set completely before posting results, I have a record of what I intend to run rather than just a list of what actually ran.
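In OTA terms, the two-step operation looks roughly like this. This is a hedged sketch: TSTestFactory and RunFactory are the standard OTA factories for test instances and runs, but check the exact signatures against your QC version; testSet, myTestCase, and testRunName are assumed to come from the earlier steps.

// Step 1: add the Test Plan test case to the test set (creates a TSTest instance).
var tsTestFactory = testSet.TSTestFactory;
var testInstance = tsTestFactory.AddItem (myTestCase);

// Step 2: add a run to that instance; the run carries status and timing.
var runFactory = testInstance.RunFactory;
var run = runFactory.AddItem (testRunName);
run.Status = 'Not Completed'; // updated later, when the result is posted
run.Post ();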

Post Results

Posting results to a test run is a fairly simple process.  It is a matter of setting a couple of fields on the test run and posting it.  The interesting part is mapping a TestComplete state to an HP Quality Center status.  The example below shows a simple way of getting a pass/fail result using the Log.ErrCount property.  If the count after executing the test case is different from the count before, the test case has failed.

// Start of test: capture the time and error count before execution.
var startTime = aqDateTime.Now ();
var errorCountBefore = Log.ErrCount;

TestCase ();

// End of test: compute the elapsed time in seconds.
var endTime = aqDateTime.Now ();
var totalTime = aqDateTime.TimeInterval (endTime, startTime);
totalTime = aqDateTime.GetSeconds (totalTime)
          + aqDateTime.GetMinutes (totalTime) * 60
          + aqDateTime.GetHours (totalTime) * 60 * 60;

// Any new errors in the TestComplete log mean the test case failed.
var errorCountAfter = Log.ErrCount;
var result = (errorCountBefore == errorCountAfter) ? 'Passed' : 'Failed';

QualityCenter.PostTestRun (myTestRun, result, totalTime);

I exclude warnings from results, which allows me to see a set of warning messages (to do items, known failures, etc.) that I don’t necessarily want to post as failed test cases.  You will have to create your own mechanism for determining which test cases are blocked, skipped, not completed, etc.
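Inside PostTestRun, the extension presumably just sets the run's fields and posts. A minimal sketch of what such a method might do; RN_DURATION is the usual QC field for run duration in seconds, but verify the field name against your project's customization.

function PostTestRun (testRun, status, durationSeconds)
{
  returnCode_ = null;
  try
  {
    testRun.Status = status;                        // e.g. 'Passed' or 'Failed'
    testRun.Field ('RN_DURATION') = durationSeconds; // verify this field name
    testRun.Post ();
  }
  catch (exception)
  {
    returnCode_ = exception.description;
  }
}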

Nice Touches

After running the automated tests, you should have a TestComplete log with a complete record of everything done in every test case.  Storing this log as an attachment in HP Quality Center is an excellent complement to the pass/fail results, providing a detailed record of the test execution.  The extension contains a method to add an attachment to the test set and mark the test set as closed.  Marking the test set as closed is completely optional but is often part of an organization's internal test processes.

Log.SaveResultsAs (resultsDir, lsHTML);
var fileList = slPacker.GetFileListFromFolder (resultsDir);
if (slPacker.Pack (fileList, resultsDir, resultsZip))
{
  QualityCenter.CloseTestSet (myTestSet, resultsZip);
}
else
{
  Log.Error ('Error archiving files.');
}
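CloseTestSet itself presumably relies on the OTA attachment factory. A hedged sketch of what it might do; the Attachments collection, the TDATT_FILE type constant (1), and the TestSet Status field follow the usual OTA pattern, but verify them against your QC version's OTA reference.

function CloseTestSet (testSet, attachmentPath)
{
  returnCode_ = null;
  try
  {
    // Attach the zipped TestComplete log to the test set.
    var attachmentFactory = testSet.Attachments;
    var attachment = attachmentFactory.AddItem (null);
    attachment.FileName = attachmentPath;
    attachment.Type = 1;        // TDATT_FILE: attach a file by path
    attachment.Post ();

    // Optionally mark the set closed, per your test process.
    testSet.Status = 'Closed';
    testSet.Post ();
  }
  catch (exception)
  {
    returnCode_ = exception.description;
  }
}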

Putting It All Together

The following function shows how to use all the extension methods.  You can use this example to validate the integration of TestComplete and HP Quality Center, and then extend it to work for all your tests.

function QualityCenterSample ()
{
  var host = 'http://url/qcbin';
  var domain = 'myDomain';
  var project = 'myProject';
  var user = 'UserName';
  var password = 'ThisIsABadPassword';

  var testSetFolder = 'Root\\Test Lab Folder 1\\Folder2';
  var testSetName = 'TestComplete Tests';
  var testSetComment = 'Created by TestComplete';
  var testCase = 'Subject\\Test Plan Folder 1\\Folder 2\\Test Case';
  var testRunName = 'TestComplete automated test run';

  var resultsDir = 'C:\\temp\\results';
  var resultsZip = 'C:\\temp\\TestCompleteLog.zip';

  QualityCenter.Connect (host, domain, project, user, password);
  if (QualityCenter.returnCode != null)
  { /* Deal with failure to connect. */ }

  var myTestSet = QualityCenter.NewTestSet (testSetFolder, testSetName, testSetComment);
  if (QualityCenter.returnCode != null)
  { /* Deal with failure to create test set. */ }

  var myTestCase = QualityCenter.GetTestCase (testCase);
  if (QualityCenter.returnCode != null)
  { /* Deal with failure to get a test case. */ }

  var myTestRun = QualityCenter.AddRunToTestSet (myTestSet, myTestCase, testRunName);
  if (QualityCenter.returnCode != null)
  { /* Deal with failure to add test run. */ }

  // Start of test
  var startTime = aqDateTime.Now ();
  var errorCountBefore = Log.ErrCount;

  TestCaseSample ();

  // End of test
  var endTime = aqDateTime.Now ();
  var totalTime = aqDateTime.TimeInterval (endTime, startTime);
  totalTime = aqDateTime.GetSeconds (totalTime)
            + aqDateTime.GetMinutes (totalTime) * 60
            + aqDateTime.GetHours (totalTime) * 60 * 60;

  var errorCountAfter = Log.ErrCount;
  var result = (errorCountBefore == errorCountAfter) ? 'Passed' : 'Failed';

  QualityCenter.PostTestRun (myTestRun, result, totalTime);
  if (QualityCenter.returnCode != null)
  { /* Deal with failure to post result. */ }

  Log.SaveResultsAs (resultsDir, lsHTML);
  var fileList = slPacker.GetFileListFromFolder (resultsDir);
  if (slPacker.Pack (fileList, resultsDir, resultsZip))
  {
    QualityCenter.CloseTestSet (myTestSet, resultsZip);
  }
  else
  {
    Log.Error ('Error archiving files.');
  }

  QualityCenter.Disconnect ();
}


function TestCaseSample ()
{
  // I wish all test cases were this easy to write...
  Delay (1000);
}

Happy testing,

Glen

