Introduction
There’s no silver bullet in software testing. Every test project has a different set of goals and solutions. To be successful, teams not only need the right tools, but the right skill sets, team dynamic, and an overarching test strategy that has organizational buy-in.
Many software development and QA issues arise from the technical capabilities of tool sets – their features, functionality, and integrations – but people are just as important. (And can be equally detrimental.) A test project can fail at any step along the way, and it’s frustrating to watch a lack of communication waste everyone’s time. Poor communication destroys morale and a team’s trust in its own work.
There are seven common software testing issues that we see time and time again, each of which can be easily avoided. And it all starts with outlining what a successful test project is.
What defines a successful test project?
The first step teams should take is to decide on success measurements. Is it the number of defects you identify? Is it the coverage percentage of your tests? Is it an increase in customer satisfaction? Maybe you’re looking at bottom-line development costs. These are all valid, but every team will measure success differently, so it’s important to establish your measure up front.
For your testing project to be successful, you must:
- Start by outlining a set of clear, realistic goals that are agreed upon from the top down.
- Communicate project objectives to every stakeholder – decision makers, testers, business analysts, and developers.
- When the project is completed, every stakeholder must agree that the goals have been achieved. There should be no gray area as to whether or not you’re done testing.
You’re probably wondering, “Isn’t this the goal of any successful project?”
Ideally, yes. All projects should be treated this way – from development, to quality assurance, to post-deployment monitoring. But test projects have historically been treated with less respect. They might be seen simply as a task or a last-minute line item to cross off the release cycle checklist. This is at the root of our first two software testing project problems – how testing is defined and whether its importance is clearly communicated.
With that, here are 7 common software testing problems and the ways to avoid them.
Problem 1: Your organization doesn’t know why it’s testing
Are you testing to, say, increase quality? Define what software quality means to the business. If the decision makers don’t understand why you need to run automated UI or API functional tests, and why that helps them, you’ll have a lot of trouble getting the support you need for success.
Here are some steps to avoid problem 1:
- Define the goals in a one-page project proposal.
- Illustrate the expected benefits of the project to the organization using simple metrics.
- Get buy-in from decision makers.
- Track the metrics.
- Follow up and share results.
Using data-driven numbers to show the benefits is key. Here are a couple of examples:
Example benefit 1: Estimated savings in development and support costs.
Your cost to fix bugs found by QA before release is $320/bug. The cost to fix bugs found by customers is $1,084/bug. You’ll save $764 ($1,084 − $320) in development and support costs for each bug found and fixed by your testing team before release.
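The arithmetic behind that estimate can be sketched in a few lines (the per-bug figures are the ones quoted above; substitute your own):

```python
# Per-bug cost figures from the example above -- substitute your own.
COST_INTERNAL = 320    # fix cost for a bug QA finds before release ($)
COST_ESCAPED = 1084    # fix cost for a bug customers find ($)

def estimated_savings(bugs_caught_before_release: int) -> int:
    """Net savings: the avoided escaped-bug cost minus the internal fix cost."""
    return bugs_caught_before_release * (COST_ESCAPED - COST_INTERNAL)

print(estimated_savings(1))   # -> 764: saved per early-caught bug
print(estimated_savings(50))  # -> 38200: a release with 50 early catches
```

A simple table like this, scaled by the bug counts from your last few releases, is usually all a decision maker needs to see.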
Example benefit 2: Increased user satisfaction.
A satisfied user is going to complain less, buy more, and cost less to support. How do you measure this? First, you need a baseline. Start by polling your user base with a short 3- to 5-question survey. Use questions like, “How satisfied are you with the quality of the application?” alongside the standard net promoter score question, “How likely are you to recommend us?” Resend the survey 30 days after the new release to gauge the success of the product update.
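If you go the NPS route, the score itself is simple to compute – the standard formula is the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6). A minimal sketch with made-up responses:

```python
def net_promoter_score(responses: list[int]) -> float:
    """Standard NPS on 0-10 responses: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

# Baseline survey vs. the survey sent 30 days after release (made-up data)
baseline = net_promoter_score([10, 9, 8, 7, 6, 9, 3, 10])   # -> 25.0
followup = net_promoter_score([10, 9, 9, 8, 9, 9, 6, 10])   # -> 62.5
print(followup - baseline)  # a positive delta suggests the release helped
```

The delta between the baseline and the post-release survey is the number to report, not either score on its own.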
Problem 2: Your organization hasn’t agreed on what problems to look for
Not all tests are equal. People go into a testing project assuming they’re going to test everything. But do they have the time to test performance, usability, business logic, UI standards, all versions of Windows, and all the browsers? No, they don’t. And no one else does either. You always end up with a compromise. So if you set expectations up front, your team can be more efficient and succeed.
What steps should you take? Again, create a one-page summary that outlines what you can test and what you can’t. List the key categories your tests will fall into like business logic, installation / configuration, or performance, then share it with stakeholders and decision makers. You don’t need to outline specific test details here, but you should provide an executive summary that decision makers can agree upon before starting.
For example, a lot of organizations have service personnel who take care of installation and configuration, so that testing is lighter and the risk is lower. But performance might be critical to the business. If you’re focused on testing performance, you’re not going to test installation / configuration. So when you go into the meeting, be prepared to negotiate and list out the details. Agree upfront on what to test, and you’ll be better off because you’ve agreed on your goals.
- Don’t agree to test everything.
- Spell out upfront what will be tested.
- Document what will not be tested.
- Get sign-off from decision makers.
Problem 3: You’re building test tools instead of testing
Smart testers are constantly looking for better ways to create and distribute manual test steps. Automation testers try to improve things by building new frameworks or utilities. That’s a good practice when you have the time, but if it wasn’t part of your plan, then your resources are being diverted from completing the project. The problem isn’t that they’re working to improve your tools and processes; the problem is when that activity gets substituted for actual testing. If that’s happening, you’ve got to get the team back on track as quickly as possible.
- Put the tool work on hold or schedule time for tools in a separate project.
- Give the tool work its own summary, goals, and metrics. Estimate its resources separately.
- Require a minimum number of new issues reported, so testing continues alongside any tool work.
Allow tool work later. It often leads to good automation, yes, but if you didn’t plan on it, then you need to keep the team’s energy focused while you’re in the middle of a testing project.
Problem 4: Your team is testing the wrong things
There are an endless number of areas that can be tested. Like your organization, your test team (quality assurance team) needs to understand what’s important to test. A common issue is that priorities change based on what’s been uncovered in the recent builds, and on what changes development has made. Because the target changes, you have to continuously adjust what you’re testing.
Exploratory testing is great, but make sure you have direction, and budget time for it. Don’t rely on scripted testing alone. Look at the priorities you’ve set, communicate them daily, and ensure the reports you bring back focus on the areas you’ve agreed on.
Scripted testing and automated testing are evangelized everywhere, but they can become redundant if left unmonitored. Keep them lively: chart what you’re getting out of them, and assess that against your priorities. If you’re testing performance, make sure you actually have performance tests. It seems obvious, but it’s easy to write tests that generate a lot of activity with none of it aligned with your priorities.
Always evaluate the library of regression tests you’ve developed. A common problem is that the functionality a regression test was designed to validate is moot, deprecated, or no longer relevant. Weeding that garbage from your test plan reduces the effort needed during each regular cycle. Make sure you do that for both your manual and automated tests.
Maybe you have scenarios that still run and pass but are no longer useful. Keep these questions in mind: “Is this test useful?” “Is this a good test for our product at this point in time?”
- Set priorities.
- Track what’s been tested.
- Track what’s changed in the build.
- Review the issue reports you get back vs. your priorities.
- Triage daily or at least weekly and adjust.
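Pruning stale regression tests can be as mechanical as tagging each test with the feature it covers and filtering out anything deprecated. A minimal sketch, with hypothetical test and feature names:

```python
# Hypothetical feature tags -- in practice these come from your test
# management tool or a naming convention inside the suite itself.
deprecated_features = {"flash_uploader", "legacy_export"}

regression_suite = [
    {"name": "test_login", "feature": "auth"},
    {"name": "test_flash_upload", "feature": "flash_uploader"},
    {"name": "test_csv_export", "feature": "export_v2"},
]

# Keep only tests whose target feature still exists in the product.
active_suite = [t for t in regression_suite
                if t["feature"] not in deprecated_features]

print([t["name"] for t in active_suite])  # -> ['test_login', 'test_csv_export']
```

The hard part isn’t the filtering – it’s keeping the feature tags current, which is exactly what the daily or weekly triage is for.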
Problem 5: Your test team doesn’t know how to test
You should have a full set of trained testers who know how to use all the tools, understand the application they’re testing, and are ready to go. All too often, though, teams find themselves with testers who don’t know how to actually test. It can start with new hires, or with inexperienced testers thrown onto the team and expected to do professional-level testing.
At a minimum, your team should be trained on:
- How to use the platform.
- How to perform tasks.
- How to verify the business process and logic.
- How to use the testing tools.
Training can happen on site or online; online training is usually much less expensive.
Set aside time for the team to learn and for the more experienced testers to teach the less experienced ones. Using the platform could be as basic as installing or copying a file. Many old-school manual testers may not be familiar with the basic capabilities of newer software, so give them the orientation. When it comes to performing basic software tasks correctly, get them requirements or subject matter experts (SMEs) so that they can be successful.
But back to our initial “team doesn’t know how to test” problem, and how to avoid it:
- Testing training.
- Test tool training.
- Good requirement docs (though they may be hard to come by).
- Subject matter experts + testers.
- Developers + testers: get the developers involved with the testers.
Given the rarity of the third point, and the obviousness of the first two, the last two items here are the most important. Your SME should be embedded with your testers – ideally in the same room. Have the SME be on call for the testers, so they can get online and explain how a certain test should run and how the business logic should behave. Same thing for developers: the developer understands the software and should go over it with the testers.
In terms of what and how to test, always keep the user’s mindset in mind. If you can sit with an end user, all the better. Without user insights, testers may not understand how critical or trivial an issue is. It’s not always walking through menu items and button clicks so much as it’s understanding whether a given scenario breaks the application. The more in tune you are with your end user base, the better your tests will be.
Problem 6: Development doesn’t understand or can’t reproduce your problem reports
If the developers don’t understand your issue report, they can’t fix it. And if they can’t reproduce the bug, they can’t fix it.
How do we avoid this? There are a few ways, but the most important thing is this: Give them the details.
But here are some more broken-down steps to avoid it.
Report the problem. Not the solution.
First things first: define the problem. It’s easy to say, “Make this button larger,” or “Adjust the text.” But before you give direct instructions like these, make sure the problem itself is described – then include any suggested fixes.
Include system configuration – Which browser, etc. With a tool like TestComplete, you can include this information automatically in the testing log.
Include the scripted test – In TestComplete, you can send an issue with just a few clicks.
Show the developer the issue (screenshot, video) – Tools like TestComplete capture screenshots of errors automatically and can also record video; record a few minutes of the issue to illustrate what’s going on. Having the developer see the issue, especially with UI testing, can be vital to a quick turnaround.
Use VMs to reproduce the issue – Use virtual machines to recreate the configurations that reproduce the problem. Save a snapshot and share it with the developers.
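Even without a dedicated tool, a plain script can capture the environment details a developer needs to reproduce a problem. A minimal sketch using Python’s standard library (the field names are just illustrative):

```python
import platform
import sys

def system_config() -> dict:
    """Collect basic environment details to attach to a bug report."""
    return {
        "os": platform.system(),          # e.g. "Windows", "Linux", "Darwin"
        "os_version": platform.release(),
        "machine": platform.machine(),    # CPU architecture
        "python": sys.version.split()[0],
    }

# Paste this output into the issue so the developer can match your setup.
for key, value in system_config().items():
    print(f"{key}: {value}")
```

A report that opens with this block answers the “which machine, which OS?” question before the developer has to ask it.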
Embed testers in development
Consider enacting a program in which testers are immersed in the development phase for a certain amount of time each year – as in, they leave QA and live with development for a week. They get to see what a developer goes through when they receive a problem report they can’t reproduce. Giving them this perspective teaches them how to give developers the details they need to solve problems.
Pairs testing
Do pairs testing regularly, where a tester and developer sit side by side. Have the tester walk the developer through the various test activities they’re performing, while the developer offers insight into how things work and why. Testers might have the end user in mind, but developers think about the nuts and bolts of how things actually fit together. Sitting together and exchanging information helps developers head off defects before they happen, and helps the tester create more sophisticated tests going forward. It’s a great way to get testers and developers in sync.
Pro tip:
If your testers struggle with writing good bug reports, walk them through the “Car Start” exercise. Ask them, “My car didn’t start today. Why?” They’ll look at you funny. But encourage them to troubleshoot, and they’re forced to think of all the scenarios in which a car wouldn’t start. (“Do you have gas?” “Is it in Park?”) Use the exercise to teach the importance of detail in issue reports: a vague report, like a vague question, leaves the other person guessing at the scenarios – and at the impact to the customer.
Problem 7: Your tested system isn’t testable enough
At first this statement might not make sense – after all, anything can be tested. But if your developers work with you to make the system more testable, your testing can be more robust.
Some tips on how to avoid this:
Include QA early in the cycle. Get in line with development as soon as possible; it saves time in the long run. This goes double when test time comes around, because requirements that are unclear or haven’t been considered from the tester’s perspective may simply not be testable. For instance, a requirement that reads, “The application should be responsive” is not quantifiable. Does that mean the application loads in 10 seconds? That every click takes less than five seconds to reach the next screen? These specifics are essential to the tester’s job.
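Once a requirement is quantified, it can be checked mechanically. A sketch of turning “less than a five-second delay” into an assertion (the threshold and the operation are stand-ins for your agreed numbers and real UI actions):

```python
import time

MAX_RESPONSE_SECONDS = 5.0  # the agreed, quantified threshold

def assert_responsive(operation, max_seconds=MAX_RESPONSE_SECONDS):
    """Fail if the operation takes longer than the agreed threshold."""
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    assert elapsed < max_seconds, f"took {elapsed:.2f}s, limit is {max_seconds}s"
    return elapsed

# Stand-in for "click X, wait for the next screen"
elapsed = assert_responsive(lambda: sum(range(100_000)))
print(f"responded in {elapsed:.4f}s")
```

The point isn’t the timing code – it’s that a number in the requirement gives the tester something to assert against.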
Include QA when choosing new components. Have developers loop QA in to make sure new components are testable. It’s usually an afterthought; make the case for it, and again, it will save development time in the long run.
Test new software components with your automated testing tools. Some are more compatible than others, and figuring this out ahead of time saves a lot of effort – if your tool can’t drive the new component, you’re in trouble.
Have developers use the same testing tools as QA. Whatever automated testing tool you’re using, make it available to developers. That way they can even run their unit tests with it, which you can do within TestComplete. Adding a license isn’t expensive, especially if it saves time and frustration between teams.
Run automatic smoke tests. Along with using the same testing tool as the developers, communicate to development that you need a build server that always runs some type of simple automatic smoke test on the application they’re building. You’ll save countless hours, because this prevents development from firing off a new build, sending it to QA, and wasting a day on a build that wasn’t even truly testable. It keeps the build usable at the most basic level and is one of the most important pieces of success on the automation side.
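What a smoke test looks like depends on the application; for a web app it can be as small as one health-endpoint check the build server runs after every build. A minimal sketch (the `/health` URL is a hypothetical convention, not a standard):

```python
import urllib.request

def smoke_test(base_url: str) -> bool:
    """Return True if the freshly built app answers its health endpoint."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=10) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, DNS failure, or timeout: the build isn't testable.
        return False

# A build server would gate the handoff to QA on this result, e.g.:
# if not smoke_test("http://build-under-test:8080"):
#     raise SystemExit("Build failed smoke test -- do not hand to QA")
```

Even one check like this is enough to stop a dead-on-arrival build from eating a day of QA time.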
Use VMs for baseline configurations. Too many people still use their desktop computers for this, and it can make things messy. It’s cheap and easy to use a virtual machine for testing and QA.
Embed developers in QA. This is the mirror image of our earlier point about immersing QA in the developers’ world. Sitting developers in with QA lets them see how hard QA is working and what they’re up against, and it improves communication overall. Plus, it boosts morale.
Pairs testing. Same as before. Ostensibly pairing up developers and testers is more beneficial for the testers, but developers get a lot out of it.
Here’s a recap of the 7 basic problems you’re up against.
- Your organization doesn’t know why it’s testing.
Communicate with a one-page summary and get buy-off ahead of time from all parties involved.
- Your organization hasn’t agreed on what kind of problems it’s trying to find.
This is up to management. Make a summary and touch base with the decision makers to make sure everyone’s on the same page.
- You’re building test tools instead of testing.
Test tools are important. But not in the middle of a deadline. Make a note for later and you’ll get the best out of it.
- Your team is testing the wrong things.
Set the priorities, and use a tool to track what you’re testing.
- Your team doesn’t know how to test.
Read the right books. Use online resources. TestComplete has a free book you can download right now, or you can look up the information on websites like www.stickyminds.com.
- Development doesn’t understand your reports.
Include more information. Include the reasons, and the story behind them, and how it impacts the user.
- Your testing system isn’t testable enough.
Communicate with developers and have them sit side by side with testers so each knows what the other is up against, and how to solve for it.
You don’t want to find out after the release that you’ve been wasting your time.
https://support.smartbear.com/screencasts/testcomplete/avoid-7-software-testing-project-problems/
A tester’s guide to writing a good bug report.