"Jim, we need to get some automation in place for the Aquarius project." Jim looked up at Laurie in amazement. The Aquarius project was still being prototyped, and probably months away from being stable enough to create any worthwhile scripts.
"Laurie, the app’s just not ready for automation right now. Anything I write is just going to be throwaway code in a week or two. By the time I get a decent script written, it’ll be obsolete."
"Look Jim," Laurie said, her voice going stern, "our VP is questioning what his ROI on automation is right now. The last thing I need is him breathing down my neck wondering what my resources are doing. It doesn’t matter if it’s throwaway work, we just need to make sure it looks like we’re doing something."
Does this dialog sound familiar? At some point in virtually every automator’s career, a manager, peer, or VP goes on a kind of "automation crusade" where everything has to be automated immediately in order for automation to be considered worthwhile. Some managers even go so far as to completely abandon automation efforts because automation won’t give that sought-after bang-for-the-buck ten minutes after the first line of the AUT’s code is written. It’s a simple fact that a good automated script requires two things in order to function: first, the script must be well written, and second, the application it’s testing can’t change drastically from one build to the next. So what’s an automator to do? Create throwaway code? Give up on automation?
Of course not. The solution is not to abandon automation, but to abandon the typical perception of automation. We all know that TestComplete can run scripts X times faster than a human can. However, if your application looks dramatically different every time you launch it, TC won’t be able to help you. You’ll probably wind up spending more time writing and re-writing the script to make it work than it would take to run the test by hand. I’ve seen many people hit this particular wall, and they give up as a result. In situations like this, don’t focus on automating manual test cases. Instead, find other areas where automation can add value to your test efforts. I suggest two alternatives: the masher and the scanner.
Mashers are incredibly useful for discovering memory leaks and areas of poor performance within your application. They are simple, short scripts that focus on performing one particular action over and over again. For example, let’s say you have an application that can save records to a database. The save process consumes a certain amount of system resources each time the Save button is pressed. You could create a masher that did nothing but click the Save button over and over again, logging how much of the system’s resources are being used after each click.
By placing the masher inside an infinite loop, you can see how your application performs over time. You could have the masher start when you left in the evening and then stop the script when you came in the next morning. Or, you could just kick the masher off on Friday night and not stop it until Monday morning. This allows you to gather data for a testing scenario that would be virtually impossible to run by hand.
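Once a masher has logged a long column of memory readings, a quick way to tell a genuine leak from normal fluctuation is to fit a linear trend to the samples. Below is a plain-JavaScript sketch of that check (this is not TestComplete code, and the sample readings are made up for illustration):

```javascript
// Least-squares slope of memory usage per iteration.
// A persistently positive slope across a long overnight run suggests a leak.
function leakSlope(samples) {
  var n = samples.length, sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
  for (var i = 0; i < n; i++) {
    sumX += i;
    sumY += samples[i];
    sumXY += i * samples[i];
    sumXX += i * i;
  }
  return (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
}

// Hypothetical readings in KB, one per masher iteration.
var readings = [5000, 5012, 5019, 5031, 5040, 5052];
var slope = leakSlope(readings);
// A slope near zero means stable memory; here it is roughly 10 KB per iteration.
```

Feeding the masher’s csv output through a check like this turns an overnight run into a single number you can track from build to build.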
Typically, I’ll have mashers log CPU usage, along with virtual and physical memory. The masher then writes its findings to a csv file that can later be opened in Excel, letting you turn the masher’s output into a nice report that displays resource usage over time. The beauty of a masher is that because it’s so tightly focused, it takes only a minute to write. Thus, if that particular piece of functionality changes, it doesn’t take a great deal of effort to modify it. Below is some JScript to illustrate the concept of the masher.
//Masher runs a given function continuously, until manually stopped or the AUT no longer exists
//masherGuts - parameter referencing the function that will be run
//logFileName - string representing the name of the csv file used for the function's output
function Masher(masherGuts, logFileName)
{
  var fso = Sys.GetOleObject("Scripting.FileSystemObject");
  //Mode 8 appends; the final argument creates the file if it doesn't exist
  var objDataFile = fso.OpenTextFile("c:\\" + logFileName + ".csv", 8, true);
  var x = 0;
  while (Sys.WaitProcess("MyApp", 1000).Exists)
  {
    var p = Sys.Process("MyApp");
    masherGuts();
    x++;
    var strMessage = "VM Size: " + p.VMSize.toString() +
      " RAM: " + p.MemUsage.toString() +
      " CPU: " + p.CPUUsage.toString();
    Log.Message(strMessage, "Iteration: " + x.toString(), 1);
    objDataFile.WriteLine(p.VMSize.toString() + "," +
      p.MemUsage.toString() + "," +
      p.CPUUsage.toString());
  }
  objDataFile.Close();
  Log.Error("App is no longer running");
}
//SaveMasher calls the Masher routine, tells it to run the SaveMasherGuts function, and saves its results to a file called SaveMasher
function SaveMasher()
{
  Masher(SaveMasherGuts, "SaveMasher");
}

//SaveMasherGuts continuously saves a record by pressing Alt+F to open the File menu, and then pressing S to save
function SaveMasherGuts()
{
  Sys.Process("MyApp").Keys("~fs");
}
Data-driven testing (DDT) is a term that a lot of people throw around without ever having actually done it. In a nutshell, DDT separates the information you enter in an application from the script code. For example, let’s say you have an automated script that enters values into the Billing Information fields of an online store. It would probably look something like this pseudo-code:
var billingInfoForm = Sys.Process("IE").MainForm;
billingInfoForm.txtFirstName.Text = "Nicholas";
billingInfoForm.txtLastName.Text = "Olivo";
billingInfoForm.txtAddress.Text = "143 Angel St";
billingInfoForm.txtCity.Text = "Anytown";
And so on.
In the example above, all the values for the fields are hard coded. If you wanted to test the fields with different data, you’d have to edit the script. Worse, if you wanted to test the fields multiple times, you’d have to copy and paste the same code over and over again. DDT separates out the values entered for the fields, and stores them in an external data source, such as a csv file. Then you’d create a loop that reads values from the data source and enters them as appropriate. So building on our previous example:
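The loop itself might look like the sketch below. To keep it self-contained and runnable outside TestComplete, the billing form is represented here by a plain object and the csv rows by an array of strings; in a real script you would read the lines with the FileSystemObject and assign to the actual form controls. The column order in the csv is an assumption.

```javascript
// Hypothetical csv contents: FirstName,LastName,Address,City
var csvRows = [
  "Nicholas,Olivo,143 Angel St,Anytown",
  "Jane,Doe,10 Main St,Springfield"
];

// Stand-in for the billing form; in TestComplete this would be
// Sys.Process("IE").MainForm with txtFirstName, txtLastName, etc.
var billingInfoForm = {
  txtFirstName: {}, txtLastName: {}, txtAddress: {}, txtCity: {}
};

// Fill the form from one data row instead of hard-coded values.
function enterBillingInfo(row) {
  var fields = row.split(",");
  billingInfoForm.txtFirstName.Text = fields[0];
  billingInfoForm.txtLastName.Text = fields[1];
  billingInfoForm.txtAddress.Text = fields[2];
  billingInfoForm.txtCity.Text = fields[3];
  // ...submit the form and verify the result here...
}

// One pass through the form per data row, with no copied and pasted script code.
for (var i = 0; i < csvRows.length; i++) {
  enterBillingInfo(csvRows[i]);
}
```

Adding a new test case now means adding a row to the data file, not touching the script.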
Now, the problem becomes creating and maintaining the external data files. Someone still needs to figure out what fields are available on a given screen, create the data file, and enter values for them. This can be a tedious process, especially if your UI is changing. My solution is to have TC create the data files for you, via a "scanner" script. The example below examines all the controls on a form, and if a control is a text box, the code writes the text box’s name, maximum number of characters, and a randomly generated sample string to a csv file called c:\inputData.csv. (Note: the GetRandomString function originally appeared here, as written by Tips McGee, and modified by a poster named David: Create a random string)
var strCSVFileName = "c:\\inputData.csv";
var fs = new ActiveXObject("Scripting.FileSystemObject");
//Mode 2 opens for writing; the final argument creates the file if it doesn't exist
var csvFile = fs.OpenTextFile(strCSVFileName, 2, true);
csvFile.WriteLine("Object Name,Max Length,Sample Value");
var p = Sys.Process("WindowsApplication2");
var w = p.Form1;
for (var i = 0; i < w.ChildCount; i++)
{
  var c = w.Child(i);
  //WinForms text boxes expose their Name and MaxLength properties
  if (c.ClrFullClassName == "System.Windows.Forms.TextBox")
    csvFile.WriteLine(c.Name + "," + c.MaxLength + "," + GetRandomString(c.MaxLength));
}
csvFile.Close();
The above code creates the input data for you, which saves you from having to create it yourself. It also tells you how many characters a given field can accept, which is useful when you perform boundary testing. It also allows you to track changes in the UI. So if the txtFirstName field used to be limited to 55 characters, and after a new build it only accepts 15, you’ll be aware of this ahead of time, instead of learning it the hard way.
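Because the scanner’s output is just a csv of field names and limits, spotting that kind of change can itself be automated by diffing two scans. Here is a plain-JavaScript sketch of the idea (the field names and limits are hypothetical, and the rows stand in for the contents of two scanner files):

```javascript
// Parse "Name,MaxLength,Sample" rows into a map of object name -> max length.
function parseScan(rows) {
  var scan = {};
  for (var i = 0; i < rows.length; i++) {
    var parts = rows[i].split(",");
    scan[parts[0]] = parseInt(parts[1], 10);
  }
  return scan;
}

// Report fields whose maximum length changed between two builds.
function diffScans(oldScan, newScan) {
  var changes = [];
  for (var name in newScan) {
    if (name in oldScan && oldScan[name] !== newScan[name]) {
      changes.push(name + ": " + oldScan[name] + " -> " + newScan[name]);
    }
  }
  return changes;
}

var oldBuild = parseScan(["txtFirstName,55,abc", "txtCity,30,xyz"]);
var newBuild = parseScan(["txtFirstName,15,abc", "txtCity,30,xyz"]);
var changes = diffScans(oldBuild, newBuild);
// changes now lists the txtFirstName limit dropping from 55 to 15
```

Run the scanner after each build, diff against the previous scan, and the UI changes come to you instead of the other way around.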
In short, mashers can help you isolate memory leaks and client-side performance issues just by repeatedly performing a single action, and scanners automatically generate input files for data-driven testing. These are only two ways to use automation on an unstable UI. Other potential ideas include automating an API, running unit tests, or calling functions directly from DLLs. Remember that automation need not focus solely on GUI-based workflows. So the next time you’re put in a situation like Jim’s, take a step back and ask yourself, "What can I automate that will add value to my test efforts?"
Nick Olivo is an Automation Lead for a software company in Nashua, NH. When he’s not coding, he can usually be found playing with his son in the sandbox.