Test automation makes software testing easier, faster and more reliable, and is essential in today's fast-moving software delivery environment. Usually seen as an alternative to time-consuming and labor-intensive manual testing, test automation uses software tools to run a large number of tests repeatedly to make sure an application doesn’t break whenever new changes are introduced. Implementing a test automation strategy within your organization is not for the faint of heart: you need to rely on well-chosen metrics that measure the past, present, and future performance of your automated testing process to determine whether your company is getting an acceptable return on its test automation investment.
Implementing automated testing is a process, and any metrics chosen to measure improvement (e.g. the number of manual versus automated tests) need to take into account the unique aspects of the organization, market, or environment they are being used in. There is no universal set of metrics that will work in every situation all of the time.
The Agile Test Automation Pyramid (image source: Cody Blog)
Choosing When to Automate Testing
The testing pyramid is a popular strategy guide that agile teams often use when planning their test automation strategy. As shown in the illustration above, the base, and largest, section of the pyramid is made up of Unit Tests--which will naturally be the case if developers in your organization are integrating code into a shared repository several times a day. That practice involves running unit tests, component tests (unit tests that touch the filesystem or database), and a variety of acceptance and integration tests on every check-in.
If your developers are practicing Test-Driven Development (TDD), they'll have already written unit tests to verify each unit of code does what it's supposed to do. Writing unit tests is important because it forces the developer to take into account all possible inputs, errors and outputs. TDD allows an agile team to make changes to a project codebase and then quickly and efficiently test the new changes by running the automated tests. The result of using TDD is that agile teams will accumulate a comprehensive suite of unit tests that can be run at any time to provide feedback that their software is still working. If the new code breaks something and causes a test to fail, TDD also makes it easier to pinpoint the problem and fix the bug.
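As a minimal sketch of what such a test looks like (the apply_discount function and its behavior are invented purely for illustration), a TDD-style unit test written with Python's built-in unittest module might be:

```python
import unittest

# Hypothetical unit under test: a small pricing helper.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("price must be non-negative and percent between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_invalid_percent_raises(self):
        # A good unit test covers error inputs, not just the happy path.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

In a TDD workflow the tests above would be written first, fail, and then drive the implementation of apply_discount; a suite of tests like this runs in milliseconds, which is what makes it practical to execute on every check-in.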
Unit and component tests are the least expensive to write and maintain, and arguably provide the most value because they allow agile teams to detect errors and conflicts as soon as possible.
At the top, or eye, of the pyramid are the last tests that should be considered for automation: manual exploratory tests, in which the tester actively designs the tests as they are performed and uses the information gained while testing to design new and better tests. Exploratory testing is done in a more freestyle fashion than scripted automated testing, where test cases are designed in advance. With modern test management software, however, it's possible to semi-automate these kinds of tests by recording and playing back the test path taken by an exploratory tester during a testing session. This helps other agile team members recreate a defect and fix the bug.
The middle of the pyramid consists of Acceptance and GUI integration tests, which should represent a smaller share of the total number of automated tests you create.
Following the Test Automation Pyramid too literally can create problems when implementing--and budgeting for--an enterprise-wide software test automation project. The order in which your company automates tests should follow standard business logic: you may need to spend more time and money automating GUI tests if your software users expect a fast, rich and easy user interface experience. If you're developing an app for an Internet of Things (IoT) device that primarily talks to other IoT devices, automating GUI testing is less of an issue.
Once you've chosen the appropriate tests to automate, you then need to choose the right key performance indicators (KPIs) to measure or validate that the software you're building meets your customers’ expectations. QA metrics provide insight into the quality of in-development products as well as the effectiveness of current test methods. Everyone in your organization--from developers to testers to executives--can view these metric reports and gain a greater understanding of how your test automation efforts are operating and to what extent they are actually succeeding.
Although agile teams have many QA metrics at their disposal, here are four of the most widely used project-level KPIs today:
1. Requirements Coverage
The quality of a piece of software is often defined by its ability to meet the detailed project requirements defined by business and project team members at the beginning of the development process. Metrics for requirements coverage measure your organization's testing effort and help answer the question, “How much of the application was tested?” Determining which requirements have test coverage is a straightforward calculation: divide the number of requirements covered by the total number of scoped requirements for a sprint, release or project. In addition to having a clear, prioritized set of requirements, your project should have a WIP (work in progress) limit, which is a strategy for preventing bottlenecks in software development. WIP limits are agreed upon by the development team before a project begins and are enforced by the team's facilitator.
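As a back-of-the-envelope illustration (the figures below are invented, not from a real project), the coverage calculation might look like this:

```python
def requirements_coverage(covered_requirements, scoped_requirements):
    """Percentage of scoped requirements that have at least one test covering them."""
    if scoped_requirements == 0:
        return 0.0
    return covered_requirements / scoped_requirements * 100

# Example: 42 of the 60 requirements scoped for this release have test coverage.
print(f"Requirements coverage: {requirements_coverage(42, 60):.1f}%")  # 70.0%
```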
2. Defect Distribution
The goal of QA test management on agile projects is to find and fix as many bugs as early in the process as possible. The defect distribution metric provides visibility into the areas where defects are being found. The number of identified defects should gradually decline as the project progresses; areas that don't follow this trend need greater attention. Defect distribution metrics can be used to identify hotspots, such as problematic requirements that are causing bottlenecks in the development process.
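One lightweight way to get that visibility (the component names and defect counts here are hypothetical) is simply to tally open defects by the area in which they were found:

```python
from collections import Counter

# Hypothetical defect records: (defect_id, area_where_found)
defects = [
    (101, "checkout"), (102, "checkout"), (103, "login"),
    (104, "checkout"), (105, "reporting"), (106, "login"),
]

distribution = Counter(area for _, area in defects)
for area, count in distribution.most_common():
    print(f"{area}: {count} defects")
# Areas at the top of this list are the hotspots that warrant extra attention.
```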
3. Defect Open and Close Rate
It's important that agile teams keep accurate records of the defects they spot to make sure none slip through the cracks and show up in the final release. The Defect Open and Close Rate metric is a ratio of the defects found after delivery divided by the defects found before delivery. In addition to helping ensure that projects run quickly and without major problems in the final version, this is a good metric for reporting how quickly developers and testers are collaborating to resolve each issue.
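Following the definition above (the counts are purely illustrative), the calculation is a one-liner:

```python
def post_vs_pre_delivery_defect_ratio(found_after_delivery, found_before_delivery):
    """Defects found after delivery divided by defects found before delivery."""
    if found_before_delivery == 0:
        raise ValueError("No pre-delivery defects recorded; the ratio is undefined")
    return found_after_delivery / found_before_delivery

# Example: 6 defects reported after release vs. 120 caught before release.
print(f"Ratio: {post_vs_pre_delivery_defect_ratio(6, 120):.3f}")  # 0.050 -- lower is better
```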
4. Execution Trends
These QA metrics identify which tests have been executed by a given member of the QA team as well as trends related to the status of defects. QA managers can use these metrics to quantify the effectiveness of individual team members and the project team as a whole. Trends across a single development cycle or multiple projects offer insight into the ongoing ability of a given team to deliver on its promises.
Creating a Culture of Collaboration
The end goal when you're implementing automated testing is to create a culture of collaboration among the various teams involved in software delivery (developers, operations, quality assurance, business analysts, management, etc.). Zephyr has several tools to help companies do this, including DevOps monitoring dashboards that allow agile teams to track and report different automation metrics in real-time. In addition to being able to create and reuse manual tests on agile projects, Zephyr's Vortex tool makes it easy to bring in and work with automation information from across your development stack, including from systems external to your organization. Vortex allows users--wherever they are in your organization--to integrate, execute, and report on test automation activities. By providing an intuitive screen that lets users access both manual and automated test cases at the same time, Vortex helps agile teams better monitor and evaluate the ROI of their overall automation effort (e.g. the number of manual vs. automated tests) from one release to another.
Capture for Jira helps testers on agile projects create and record exploratory and collaborative testing sessions, which are useful for planning, executing and tracking manual or exploratory testing. Session-based test management is a type of structured exploratory testing that requires testers to identify test objectives and focus their testing efforts on fulfilling them. This type of exploratory testing is an extremely powerful way of optimizing test coverage without incurring the costs associated with writing and maintaining test cases.
The Zephyr Platform and AI
The Zephyr platform integrates with Zephyr's predictive analytics product, which enables enterprises to identify problems with the release quality of products in the software delivery pipeline before they occur. This helps organizations implementing automated testing to quickly identify what QA tests can be automated versus what should stay manual.
Learn more about the benefits of automation testing and how to get started.