How to Approach Testing
Before we can get down to the nitty-gritty of designing a test plan, it’s useful to understand the four basic test types: White Box, Black Box, Manual, and Automated (that is, unsupervised) testing. Each test type comes with a distinct set of benefits and constraints. Knowing how and when to use each is essential to designing an effective test plan.
White Box Tests
White box tests run against the source files directly. During a white box test, all the lines of code being tested are available for inspection and measurement. Typically, white box testing allows the developer or test engineer to resolve failing tests by stepping through the code line by line and using debugging techniques to inspect the values of variables and the overall system state.
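To make the distinction concrete, here is a minimal sketch of a white box unit test in Python, assuming a hypothetical discount() function whose source is open for inspection; because the code is visible, each internal branch can be targeted directly.

```python
# A hypothetical function under test; its source is fully visible.
def discount(total: float) -> float:
    """Apply 10% off orders of 100 or more."""
    if total >= 100:
        return total * 0.9
    return total

# White box tests target each internal branch directly.
def test_discount_applied_at_threshold():
    assert discount(100) == 90.0  # exercises the >= 100 branch

def test_no_discount_below_threshold():
    assert discount(99.99) == 99.99  # exercises the fall-through branch
```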
Black Box Tests
Black box testing, also known as functional testing, works only through the public interfaces of an application or service. Black box tests have no access to the internals of the artifact under test; they verify only what an application, service, or system does, not how it does it. Thus, testers and test engineers conduct tests by entering input and analyzing the resulting output. In cases where an input has no direct output, a black box test will assess some observable aspect of the system overall.
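As an illustration, here is a minimal sketch of a black box test in Python. The endpoint URL and response shape are illustrative assumptions; the point is that the test exercises only the public interface, with no knowledge of the implementation behind it.

```python
import requests  # assumes the requests library is installed

def test_get_user_returns_expected_fields():
    # Only the public HTTP interface is exercised; the URL and
    # response shape here are hypothetical.
    response = requests.get("https://api.example.com/users/42")
    assert response.status_code == 200
    body = response.json()
    assert "id" in body and "name" in body  # verify what, not how
```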
Manual Tests
Manual tests are performed by humans who execute actions according to a predefined script. Usually test results are recorded automatically and stored in a log or database; in some cases they are documented manually. Manual testing is appropriate when the artifact under test has a variety of conditional test points that are hard to identify or predict, for example, verifying that a web page supports a complex business process in which any action in the UI can result in one of many execution paths.
Humans who understand the complexity of a business process are usually better at testing the process using manual techniques than a test engineer trying to create code to do the testing.
Automated Tests
Automated testing uses scripted intelligence that is run either by human invocation or as part of a Continuous Integration/Continuous Delivery (CI/CD) framework such as Jenkins or Travis CI.
Automated tests are created in two ways. The first way is to create scripts using recording technology that keeps track of a tester’s interaction with an application’s graphical user interface (GUI). The tester turns on the recorder and then navigates through the GUI, performing actions such as data entry as well as clicking links and buttons. The tester turns off the recorder when finished. The recorder produces a test script that can be used by an automated test runner.
The second way is to have a test engineer write the script code directly. The script is subsequently used by a human tester or by a testing framework that invokes it automatically as part of the CI/CD process.
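As a rough illustration of the second approach, here is a minimal hand-written GUI automation sketch using Selenium WebDriver; the login URL, field names, and success marker are hypothetical assumptions. A script like this can be run by a human tester or invoked automatically by a CI/CD job.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Hypothetical login page and form field names.
    driver.get("https://example.com/login")
    driver.find_element(By.NAME, "username").send_keys("testuser")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.NAME, "submit").click()
    # Assumed marker that the login succeeded.
    assert "Welcome" in driver.page_source
finally:
    driver.quit()
```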
Understanding Testing in Terms of Development Phase
Testing along each phase of the Software Development and Deployment Process (SDDP) is not a one-size-fits-all undertaking. Different phases have different testing requirements. Having a clear understanding of the various phases of the SDDP and the tests typically performed at each phase is important when it comes to test planning.
Table 1 below describes the details of testing within each phase of the SDDP.
| Development Environment | Testing Conducted | Artifacts Tested | Performed By |
|---|---|---|---|
| Development | Unit Tests, GUI Tests | Source Code | Developers |
| General QA | Functional Tests, API Tests, GUI Tests, Mobile Device Testing | Application Binaries, Test Websites, Test API Endpoints, Test Databases, Test Message Queues | QA Personnel, Automated Test Scripts |
| Staging | Integration Tests, User Acceptance Tests | Pre-Release Websites, Pre-Release APIs, Pre-Release Databases, Pre-Release Message Queues, Pre-Release Application Binaries | QA Personnel, Business Analysts, Compliance Personnel, Automated Scripts |
| Production | Performance Testing, Penetration Testing | Production Websites, Production APIs | DevOps / Systems Testers |
Table 1: Description of testing environments in the Software Development Deployment Process.
Applying the right test at the right time, in a sequential manner, saves time and money. For example, it makes little sense to do performance testing on an application before the code has gotten past the rigors of functional testing.
The code may be lightning fast, but if the functions don’t behave according to expectation, the code will need to be fixed and then sent back to the start of the testing process. Going back a step in the SDDP is an acceptable cost; starting over is considerably more expensive.
Designing the Test Plan
The purpose of creating a test plan is to have a documented way to execute consistent, measurable testing on an enterprise-wide basis. A well-designed test plan will address the following concerns:
- What is to be tested?
- When is testing to be conducted?
- Who or what will do the testing?
- How will test results be stored?
- How are test results to be evaluated as successful?
- How will test results be reported?
- How will tests be maintained and enhanced?
What is to be tested?
As described in Table 1 above, the items under test will vary with each phase of the Software Development Deployment Process. Each phase will have its own scope of testing. For example, in the Development phase of the SDDP, developers will create and execute unit tests against source code. In the General QA phase, testers and test engineers will test the UI of an application, as well as service and API endpoints.
In addition to describing the items to test at each phase in the SDDP, a good test plan will define the standard for adequate testing. For example, in terms of unit testing, a test plan might require that all public functions of a component be tested and that all tests pass.
Also, it’s typical for a test plan to define the code coverage requirement that unit tests must satisfy. Some shops demand 100% code coverage while others are less strict. Having a well-defined itemization of what is to be tested, along with a documented standard by which to determine adequate, successful testing, is a critical part of the test plan, particularly when it comes time to report on the results of testing activity.
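Coverage thresholds can be enforced mechanically rather than by convention. The following is a minimal sketch using the coverage.py API to fail a build that falls below a required percentage; the 100% figure mirrors the stricter shops described above and is an assumption, not a universal recommendation.

```python
import sys

import coverage  # assumes the coverage.py package is installed

cov = coverage.Coverage()
cov.load()  # reads the .coverage data file produced by a prior test run
percent = cov.report()  # prints the report and returns total coverage
if percent < 100.0:
    # A non-zero exit fails the build step that runs this script.
    sys.exit(f"Coverage {percent:.1f}% is below the required 100%")
```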
When is testing to be conducted?
Testing can become a burdensome cost if not managed well. Testing everything all the time doesn’t make sense, both in terms of reasonable testing practices and efficient utilization of resources. For example, running a complete test suite upon a code base makes sense when the code base or operational environment has changed. Running a test just because you can is a waste.
Thus, a good test plan will clearly describe when testing is to be conducted. The time of test execution will vary according to the needs of the SDDP phase. For example, it’s usual for unit tests to be run automatically by the CI/CD deployment tool whenever feature code is merged into a common branch. A failing unit test will stop the merge activity within the CI/CD pipeline from continuing.
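As a sketch of how such a gate might work, the script below (a hypothetical pre-merge step, assuming pytest) runs the unit suite and propagates a non-zero exit code, which is what stops a CI/CD stage from continuing.

```python
import subprocess
import sys

# Run the unit suite; a CI/CD tool such as Jenkins or Travis CI would
# invoke a script like this whenever feature code is merged.
result = subprocess.run(["pytest", "tests/unit", "--maxfail=1"])

# A non-zero exit code signals the pipeline to halt the merge.
sys.exit(result.returncode)
```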
Typically, the full suite of functional and integration tests is executed when the code base is escalated into the next deployment phase. The time and place of testing activity must be known to all involved in the software development process. Some shops will send emails to interested testing parties when tests are due to be executed. Others will keep the schedule of test events on the company wiki.
The important thing is that communicating testing times must be part of the test plan, and the schedule must be made known to all.
Who or what will do the testing?
Part of any test plan must be a clear declaration of who or what is to conduct a given test. In the case of automated testing, the test plan must describe the automation tools and agents that will do the testing. In terms of manual testing, the test plan will describe the group or individuals responsible for creating and executing tests.
Defining who or what will do testing might seem like obvious information. However, it’s not, particularly for shops that are trying to make the transition to supporting Test Driven Development. Who or what will do testing must be stated as a matter of policy in the test plan. For example, a policy declaration that all developers are responsible for testing the code they write leaves no ambiguity whatsoever about the relationship of developer to unit testing.
Declaring that all API testing in QA will be conducted using a tool such as LoadUI makes clear both the toolset that will do the testing and the mastery required of staff to perform it. Defining in the test plan who or what will do a particular aspect of testing creates uniformity in the testing process, which reduces costs and increases efficiency.
How will test results be stored?
How and where test results will be stored needs to be part of the test plan. Some test plans will rely upon the storage features of the test framework to make sure that test data is saved and retrievable. Other times, saving test results might need to take place via logs or in a database.
As such, the test plan needs to describe the logging or database technology used, where the data will reside and how the data will be accessed. Many testing frameworks will store only the results of the last test suite run. However, some companies need to have a history of test reports in order to identify operational trends.
Should the enterprise need to keep track of testing history, this requirement needs to be accounted for in the test plan. Clearly defining how and where test results will be stored is an important part of any test plan. Otherwise, the enterprise runs the risk of losing mission-critical data when the people who know where the information resides leave the company.
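As one concrete possibility, here is a minimal sketch (assuming pytest and SQLite, with a hypothetical database path) of a conftest.py hook that appends every test outcome to a table, preserving the history that many frameworks discard after the last run.

```python
# conftest.py
import datetime
import sqlite3

DB_PATH = "test_results.db"  # hypothetical storage location

def pytest_runtest_logreport(report):
    """Record the outcome of each test's call phase."""
    if report.when != "call":
        return
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS results "
        "(run_at TEXT, test_id TEXT, outcome TEXT, duration REAL)"
    )
    conn.execute(
        "INSERT INTO results VALUES (?, ?, ?, ?)",
        (
            datetime.datetime.now(datetime.timezone.utc).isoformat(),
            report.nodeid,
            report.outcome,
            report.duration,
        ),
    )
    conn.commit()
    conn.close()
```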
How are test results to be evaluated as successful?
A good test plan will articulate in a clear, quantitative manner how success is to be determined for any testing session in the software deployment process. For example, in terms of unit testing success, the test plan can define a pass/fail and code coverage standard as described earlier. In terms of performance testing, success can be measured by setting the maximum amount of time that can elapse when a given HTTP request executes.
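Such a performance criterion can be expressed directly in a test. Here is a minimal sketch, assuming the requests library, a hypothetical URL, and an illustrative 500 millisecond budget.

```python
import requests  # assumes the requests library is installed

MAX_SECONDS = 0.5  # hypothetical response-time budget

def test_homepage_latency():
    response = requests.get("https://example.com/")  # hypothetical URL
    assert response.status_code == 200
    # elapsed measures the time until the response headers arrived.
    assert response.elapsed.total_seconds() < MAX_SECONDS
```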
The important thing is that the test plan must describe how success will be determined for any and all tests. Enterprises need to determine success according to a quantitative standard. Otherwise, there is no objectivity in testing. Consistent quality depends on a measurable standard for success.
How will test results be reported?
Test results need to be reported in order to be useful. A test plan needs to describe the information that will be gathered and evaluated for reporting. Also, the plan needs to describe how reporting will be made available. In terms of test data, there’s operational test reporting and project level test reporting.
Operational reporting describes the result of a given test and provides developers, test engineers and testers with information that can be used to fix bugs immediately. Project level test reporting is intended for management and project sponsors. Project level test reports contain summaries of test results, historical information, and analysis of test data.
Management and project sponsors use project level test reports to make business decisions relevant to the project and personnel associated with the project. Test results can be reported as part of the build process in a project dashboard. Also, they can be a set of executive summaries delivered to key stakeholders via email. These are but a few of the delivery options.
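As a small illustration of a project level rollup, the sketch below reuses the hypothetical test_results.db table from the storage example and aggregates stored outcomes into the kind of summary an executive report might include.

```python
import sqlite3

conn = sqlite3.connect("test_results.db")  # hypothetical table from earlier
rows = conn.execute(
    "SELECT outcome, COUNT(*) FROM results GROUP BY outcome"
).fetchall()
conn.close()

total = sum(count for _, count in rows)
for outcome, count in rows:
    print(f"{outcome}: {count}/{total} ({100 * count / total:.1f}%)")
```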
A good test plan will leave nothing to chance. When designing a test plan, make sure the plan includes a detailed list of reports to be issued, the intended recipients and the means by which reports will be distributed.
How will manual and automated tests be maintained and enhanced?
Test plans, like software itself, evolve over time. Thus, tests will need to be updated to keep in step with changes made to the software and in the organization. The test plan needs to describe how tests will be maintained and enhanced. Some tests, such as unit tests, will be part of the code base and can be stored in the source code repositories along with the code base.
Functional test scripts might be stored in repositories too. Test reports intended for management and project sponsors contain sensitive information. Thus, these types of reports are best stored in the company's document management system. Storing sensitive information on a file server is risky unless the company has a well-defined security protocol for storage on a network drive.
Having test plans comply with the standard practices of version control and change management will ensure that the plan can evolve in a controlled manner. At the least, the test plan should be subject to semantic versioning. Also, at the operational level, storing a test plan in a GitHub repo makes it easy to provide the review, acceptance and audit trail necessary for structured change management.
Tools such as TestComplete provide revision management out of the box. No matter what path you follow, a reliable, well-known process for maintaining and updating tests must be defined in the test plan.
Putting It All Together
Testing according to a comprehensive plan saves time and money while increasing productivity throughout the enterprise. Effective testing requires a clear understanding of each environment in the Software Development Deployment Process.
Also, when creating a test plan, it’s important to define exactly what testing will take place at each phase of the SDDP, who or what will be doing the testing, and the reports that will be produced as a result of testing activity. In addition, a good test plan will describe how test data is to be stored and how test reports are to be distributed.
Mature organizations understand the benefits of performing testing activity according to a structured test plan. Ensuring quality is not a make-it-up-as-you-go-along undertaking. Having reliable, comprehensive test plans in place provides the guidelines necessary to allow applications to scale up to meet user needs in a safe, reliable manner.
A good test plan provides all the information required to make sure that all key test points are covered and reported at each phase of the Software Development Deployment Process. Implementing effective test planning takes time and commitment. But, once in place the benefits become apparent in terms of cost savings, software quality and user satisfaction.