This is the third installment in our series of blogs on Agile Testing Challenges. You can view the prior blogs or download a more detailed white paper here.
Agile development is a faster, more efficient and cost-effective method of delivering high-quality software. However, agile presents testing challenges beyond those of waterfall development. That’s because agile requirements are more lightweight, and agile builds happen more frequently to sustain rapid sprints. Agile testing requires a flexible and streamlined approach that complements the speed of agile.
The Challenge - Broken Builds
Performing daily builds introduces the risk of breaking existing code. If you rely solely on manual test runs, it’s not practical to fully regress your existing code each day. A better approach is to use an automated testing tool that records and runs tests automatically. This is a great way to test more stable features to ensure that new code has not broken them.
Most agile teams perform continuous integration, which simply means that they check in source code frequently (typically several times a day). Upon code check-in, an automated process creates a software build. An automated testing tool can then perform regression testing whenever a new build is produced. There are many continuous integration tools on the market, including SmartBear’s Automated Build Studio, CruiseControl, and Hudson. It’s a best practice to have the build system automatically launch automated tests to verify the stability and integrity of the build.
How Automated Testing Helps with Broken Builds
The best way to get started is to proceed with baby steps. Don’t try to create automated tests for every feature. Focus on the tests that provide the biggest bang for your buck. Here are some proven methods:
- Assign a Dedicated Resource: Few manual testers can do double duty and create both manual and automated regression tests. Automated testing requires a specialist with both programming and analytical skills. Optimize your efforts by dedicating a person to work solely on automation.
- Start Small: Create positive automated tests that are simple. For example, imagine you are creating an automated test to ensure that the order processing software can add a new order. Start by creating the test so that it adds a new order with all valid data (positive test). You’ll drive yourself crazy if you try to create a set of automated tests to perform every negative scenario that you can imagine. Don’t sweat it. You can always add more tests later. Focus on proving that your customers can add a valid order and that new code doesn’t break that feature.
- Conduct High-Use Tests: Create tests that cover the most frequently used software features. For example, in an order processing system, users create, modify, and cancel orders every day; be sure you have tests for that. However, if orders are exported rarely, don’t waste time automating the export process until you complete all the high-use tests.
- Automate Time-Intensive Tests/Test Activities: Next, focus on tests that require a long setup time. For example, you may have tests that require you to set up the environment (e.g., create a virtual machine instance, install a database, enter data into the database, and run a test). Automating the setup process saves substantial time during a release cycle. You may also find that a single test takes four hours to run by hand. Imagine the amount of time you will recoup by automating that test so you can run it by clicking a button!
- Prioritize Complex Calculation Tests: Focus on tests that are hard to validate. For example, maybe your mortgage software has complex calculations that are very difficult to verify because the formulas for producing the calculation are error-prone if done manually. By automating this test, you eliminate the manual calculations. This speeds up testing, ensures the calculation is repeatable, reduces the chance of human error, and raises confidence in the test results.
- Use Source Control: Store the automated tests you create in a source control system. This safeguards against losing your work to a hard drive crash, and lets you check tests in and out and retain prior versions without fear of accidental overwriting.
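The "start small" advice above can be made concrete with a single positive test. The sketch below uses Python's `unittest` and a hypothetical `OrderSystem` class standing in for the order processing software; it proves only that a valid order can be added, leaving negative scenarios for later.

```python
import unittest

# Hypothetical stand-in for the order processing software under test
class OrderSystem:
    def __init__(self):
        self.orders = {}
        self._next_id = 1

    def add_order(self, customer, items):
        if not customer or not items:
            raise ValueError("customer and items are required")
        order_id = self._next_id
        self._next_id += 1
        self.orders[order_id] = {"customer": customer, "items": items}
        return order_id

class TestAddOrder(unittest.TestCase):
    """Start small: one positive test proving a valid order can be added."""
    def test_add_valid_order(self):
        system = OrderSystem()
        order_id = system.add_order("Acme Corp", ["widget"])
        self.assertIn(order_id, system.orders)
        self.assertEqual(system.orders[order_id]["customer"], "Acme Corp")
```

Once this passes on every build, you know new code hasn't broken the add-order path, and you can layer negative tests (missing customer, empty item list) on top as time allows.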
Once you create a base set of automated tests, schedule them to run on each build. Each day, identify the tests that failed, and confirm whether each failure flags a legitimate defect or is simply the result of an intentional change to the code that requires a test update. When a defect is identified, be pleased: your adoption of test automation is paying dividends. Remember, start small and build your automated test arsenal over time. You’ll be pleased by how much of your regression testing can be automated, which frees you and your team to perform deeper functional testing of new features. Reliable automated testing requires a proven tool. As you assess options, remember that SmartBear’s TestComplete is easy to learn and offers the added benefit of integrating with QAComplete, so you can schedule your automated tests to run unattended and view the results in a browser.
Broken Builds and Test-Driven Development
Agile practitioners sometimes use test-driven development (TDD) to improve unit testing. With this approach, the developer writes the automated test first and uses it to drive the code to completion. Imagine a developer designing an order entry screen. She might start by creating a prototype of the screen without connecting any logic, and then create an automated test for the steps of adding an order. The test would validate field values, ensure that constraints are enforced properly, and so on, and would be run before any logic is written into the order entry screen. The developer would then write code for the order entry screen and run the automated test to see if it passes, considering the screen “done” only when the test runs to completion without errors.
To illustrate further, let’s say you’re writing an object that, when called with a specific input, produces a specific output. With a TDD approach, you write the test first, then write code and run the test, repeating that cycle iteratively until the code produces the expected output for every input.
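Here is a sketch of that red/green cycle in Python. The `monthly_payment` function, its signature, and the loan figures are illustrative assumptions (borrowing the mortgage-calculation theme from earlier), not from the original post: the tests are written first and fail, then just enough code is written to make them pass.

```python
import unittest

# Step 1 (red): write the tests before the implementation exists.
class TestMonthlyPayment(unittest.TestCase):
    def test_zero_interest(self):
        # With 0% interest the payment is simply principal / months
        self.assertAlmostEqual(monthly_payment(12000, 0.0, 12), 1000.0)

    def test_typical_loan(self):
        # Standard amortization formula; expected value verified by hand
        self.assertAlmostEqual(monthly_payment(100000, 0.06, 360), 599.55, places=2)

# Step 2 (green): write just enough code to make the tests pass.
def monthly_payment(principal, annual_rate, months):
    if annual_rate == 0:
        return principal / months
    r = annual_rate / 12  # monthly interest rate
    return principal * r / (1 - (1 + r) ** -months)
```

Running the tests before `monthly_payment` exists fails (red); once the function is written, both tests pass (green), and they remain in place as regression protection for every future build.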
Important Metrics for Successfully Eliminating Broken Builds with Automated Testing
As you grapple with this challenge, focus on metrics that analyze automated test coverage, automated test run progress, defect discovery, and defect fix rate, including:
- Feature Coverage: Count the number of automated tests for each feature. You’ll know when you have enough tests to be confident that you are fully covered from a regression perspective.
- Requirement/Feature Blocked: Use this metric to identify which requirements are blocked from automation, and why. For example, third-party controls may require custom coding that current team members lack the expertise to write.
- Daily Test Run Trending: This shows you, day-by-day, the number of automated tests that are run, passed, and failed. Inspect each failed test and post defects for issues you find.
- Daily Test Runs by Host: When running automated tests on different host machines (e.g., machines with different operating system or browser combinations), analyzing your runs by host alerts you to specific OS or browser combinations that introduce new defects.
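The daily test run trending metric above is easy to compute from raw run results. The sketch below is a minimal Python illustration with made-up test names; a real team would pull these results from their test management or CI tool.

```python
from collections import Counter

def daily_run_summary(results):
    """Summarize one day's automated runs: counts plus the failures to triage."""
    counts = Counter(status for _, status in results)
    failed = [name for name, status in results if status == "failed"]
    return {
        "run": len(results),
        "passed": counts["passed"],
        "failed": counts["failed"],
        "to_triage": failed,
    }

# Hypothetical results from an overnight automated run
results = [
    ("add_order", "passed"),
    ("modify_order", "passed"),
    ("cancel_order", "failed"),
]
summary = daily_run_summary(results)
# summary["to_triage"] lists the tests to retest manually and log defects for
```

Tracking these counts day by day gives you the trend line, and grouping the same results by host machine yields the per-OS/browser view described above.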
Ensuring No More Broken Builds with Your Automated Test Team
We recommend that testing teams perform these tasks every day:
- Review Automated Run Metrics: When overnight automated test runs flag defects, do an immediate manual retest to rule out false positives. Then log real defects for resolution.
- Use Source Control: Review changes you’ve made to your automated tests and check them into your source control system for protection.
- Continue to build on your Automated Tests: Work on adding more automated tests to your arsenal following the guidelines described previously.
For tools to support these practices:
Download a free trial of QAComplete
Download a free trial of TestComplete
Download a free trial of Automated Build Studio
Be sure to check out my next post: Agile Testing Challenges - Finding Defects Early