First of all, before anyone’s head explodes, let me explain what I mean by “Best Practices.” I know that, particularly in the realm of context-driven testing, this term is looked upon as a major misnomer that is only spewed by the ignorant and uneducated. But I don’t think that has to be the case.
Not everything should be quantitative. Not everything has to be proven without a shadow of a doubt. Most of all, we shouldn't be afraid of being wrong. For years people have subscribed to this idea that "there are no best practices," and have genuinely feared the shunning that would come with stepping outside the lines of that very narrow, absolutist mindset. Particularly in an industry so tightly tied to quality and context, it seems bizarre that no one is willing to allow any gray area when it comes to a slightly subjective term like “best practices.”
For the sake of this blog post, let me briefly explain what I mean by “best practices.” I’ll do so by paraphrasing Cyrus Shepard, who I think has a good explanation for how we should look at this term, and what we should expect from the guidelines that fall under it:
- Best practices are a set of rules or guidelines that have consistently shown superior results for a practitioner. This doesn’t mean that best practices are the only way you could or should accomplish a task, just that they have generally shown consistently higher results than other techniques.
- Best practices can help to serve as benchmarks for the industry they’re applied to. They should be seen as goals for practitioners to strive for in order to raise the standard of work and enable an industry to mature over time.
- Best practices are temporary. Saying that something is a best practice today doesn't mean that I'm forever bound to that methodology. Rather, best practices should be expected to evolve over time. What is a best practice today likely won't be a best practice a decade from now. And that’s okay. That means practitioners have improved with time and have created a new standard for the expected quality of their work.
And finally, best practices can, and sometimes should, be ignored. In keeping with the fundamental tenet of context-driven testing, you have to decide whether or not they fit each project you work on. Sometimes you don't have the time or resources to do all of them. But just because they can't be blindly applied to every situation doesn't mean they don't exist. They do. And here is my take on five of the best practices of context-driven testing.
Ask Questions
This has to be the one thing that I hear context-driven testers emphasize more than anything else. Ask questions of stakeholders. Ask questions of the development team. Ask questions of your fellow testers. Without asking a slew of questions it's extremely difficult to understand the context of a project, which in turn makes it very difficult to obtain maximum test coverage.
Asking questions can also be a major driver for improving your career beyond a specific project. Constantly asking questions and questioning the status quo allows junior testers to learn from their mentors, and it allows mentors to learn from junior testers. As Keith Klain explained to me during an interview in March, the most successful testers are generally the ones who have an unquenchable curiosity and are able to spread the knowledge they've gained over the years to those around them:
The people I’ve seen successful in the [tester mentoring] role are folks who are conduits to knowledge. So it’s like, "Let’s learn from my experience, and ask a lot of questions." I think the Socratic approach to mentoring is really important to get people to learn things on their own and help them be very self-reflective and figure out what you are contributing to this problem and how you can help them tease out their own solutions. Because that’s ultimately what you’re trying to do anyway.
Plan Ahead
What good is all that knowledge if you don't have an efficient way to put it to use? Creating and sharing your test plan with the rest of your team and project stakeholders not only makes you more efficient, it builds rapport with the rest of the company and spurs more meaningful conversations.
No, you shouldn't be expected to take feedback from every junior developer or project manager in the organization, but sharing your initial test plans with the most prominent stakeholders gives them real insight into the type of return they should expect to see. It shows them that there are some guidelines to what your team is doing, and that any changes in their own plan will impact the testing strategy.
As JeanAnn Harrison explained it during her most recent visit to the SmartBear office, “If you plan out your testing strategy and figure out what kinds of tests you want to do then, really, you will go ahead and create a very efficient process."
Adjust Your Plan Accordingly
Just because you've made a plan does not mean it's set in stone. Rarely does a software project go entirely as expected, so you have to be ready to make adjustments as they come. Failing to be at least somewhat flexible with your test strategy will likely lead to all-around frustration and less thorough test coverage. Schedules change, features are added, new priorities arise, and your strategy should adapt accordingly.
That said, if you've laid out your testing plan and shared it with stakeholders, you shouldn't be expected to bend over backward to make up for mistakes in the earlier stages of the project. Remember that your ultimate goal is to achieve the maximum test coverage possible given the parameters at hand. If those parameters change over the course of the project, you will be doing yourself and your organization a disservice by staying true to a plan that no longer serves that ultimate goal.
Stakeholders Decide When a Project is Over
This is for everyone's sake. It gives the stakeholders the power, and thus the responsibility, for deciding what timeline everyone on the project is working toward. That's, in theory, part of their job. It's also good for the testers who, as Dawn Haynes explains in the clip above, shouldn't be making the decision to hit or miss a deadline.
Your job is to test the software as thoroughly as you can within the limitations handed down by the stakeholders, and then to provide as much information back to them as you can.
Don’t Blindly Apply Any Practice
This is probably the most prominent and obvious pillar on which the context-driven ideology is based. As I stated in the introduction, best practices (including the ones listed in this article) will not work for every situation. Sometimes you simply won't gain much information from asking questions of project managers. Sometimes the project is in such dire straits that there really is no time to create a detailed test plan ahead of time. And sometimes it's better to stick to your guns and push back against stakeholders who are making a horribly detrimental decision. In the end, it's all about doing as much as you can with the information you have at any given time.
And it's exactly that kind of flexibility that will continue to elevate context-driven testers to the forefront of their field for decades to come.