Regulated Industries and Software Testing
How often do testers truly consider the repercussions of missing an important defect or of failing to follow a strict code of professionalism? Of course, we want to find important defects and help developers solve them, but if something gets missed we can (almost) always just fix it in the next patch.
In a non-regulated industry it’s fair to assume that this kind of thing happens all the time. But regulated industries sit at the other end of the software testing spectrum.
As testers, we have the responsibility to find and report important bugs, but what if missing those bugs could mean the difference between life and death? If you’re a video game tester, missing a single bug isn’t the end of the world, but what if you were testing a medical device and that device malfunctioned?
Griffin Jones gave an informative presentation at AST’s CAST 2013 called “What is Good Evidence?” that detailed how testers protect themselves in regulated industries. His presentation was intricate and well thought out, so I cannot touch on every aspect of it in a single blog post, but a couple of points resonated with me:
- How proper evidence can help protect your (and your company’s) credibility.
- Traps that must be avoided as a tester to maintain that credibility.
I Give You Exhibit A
First off, I had never thought a tester could be sued for testing incorrectly, but apparently they can. Forget about company credibility: if regulatory standards are not met and federal guidelines are found to have been breached, a company (or even the employee who made the mistake) can be penalized or barred from doing business altogether. You must approach every scenario as if it could end up in a lawsuit.
When it comes to software testing in a regulated industry, everything must be meticulously documented as evidence. Even the person who created and executed a test must be logged somewhere. For instance, every person who works in a system needs their own login so that changes to that system’s data can be traced back to an individual. Keep in mind that any evidence, or the lack thereof, can be used against you and your company when an audit occurs. And even setting audits aside, if a lawsuit were to ensue over a software defect, that evidence would come in handy very quickly.
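To make the idea of a traceable test record a bit more concrete, here is a minimal sketch (not taken from Mr. Jones’ presentation; the function name, fields, and file path are purely illustrative assumptions) of logging who executed which test, when, and with what outcome to an append-only file:

```python
import getpass
import json
from datetime import datetime, timezone

def record_test_execution(test_id, result, log_path="test_audit_log.jsonl"):
    """Append one audit record: which test ran, its outcome, who ran it, and when."""
    entry = {
        "test_id": test_id,
        "result": result,
        "executed_by": getpass.getuser(),  # individual account, never a shared login
        "executed_at": datetime.now(timezone.utc).isoformat(),
    }
    # Append-only log: existing records are never rewritten, only added to.
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")

# Hypothetical usage: log a passing run of test case TC-0421.
record_test_execution("TC-0421", "pass")
```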
An important aspect of evidence, as you may have guessed, is that evidence and opinion are not the same thing. Say an auditor comes to you and asks how you tested a certain feature. You would not want to simply tell them how you tested it; you would want to produce the evidence that shows exactly how you tested that particular function. Opinion is not evidence, and it can be very easy to let an opinion slip in as if it were, but this could be disastrous for the company as a whole. It is important to always keep your composure and never let your guard down.
So how do you document everything? In his own work, Mr. Jones used video to document the testing process. Not only is this a quick and easy way to create evidence, but unlike written notes, screenshots, and reports, video evidence is less likely to be misinterpreted and takes up less desk space. The downside to this approach is that you could record something that works against you, such as an employee being unprofessional or making a critical mistake.
As for integrity, once records have been produced, the documentation needs to be shipped off to a vault. To go a bit further, it also wouldn’t hurt to digitize all documentation. Generally, the more backups you have, the safer you are, as long as those backups are kept in a secure, undisclosed location.
Refrain from Lullaby Language
Keeping a strict code of conduct, practices, and documentation is extremely important, but another part of testing in a regulated industry is communication, whether with fellow testers, developers, and managers within your organization or with auditors. Anything you say or do can be held against you in a court of law, so you must avoid certain traps whether you are speaking to a colleague or to an auditor. To uphold this mindset, Mr. Jones explained that he is always on guard: he thinks before he speaks and speaks in a way that cannot be misinterpreted.
An example of poor communication is what Mr. Jones called “Lullaby Language.” This way of speaking shuts down conversation, discourages feedback, and promotes shallow agreement, which in turn ruins the evidence. Using “lullaby language” does nothing more than “lull your mind into a false sense of security.” Rather than prompting a constructive conversation with your peers about, say, a test result, this type of language merely “provokes a behavior.”
My favorite part of Mr. Jones’ presentation was the mention of what he called the “Moon-Walking Bear problem,” which I see as the biggest trap a tester can fall into, regardless of industry. Our biases can cause us to miss important defects that we would otherwise have found had we been aware of them. The trouble is that, as humans, we are all biased in some fashion; how we recognize and cope with our biases determines whether or not we overcome them. Sometimes it takes someone else finding the defect with the very same test case you ran to bring about that realization.
Overall, I’d like to thank Mr. Jones for presenting, because I now have a better understanding of the scrutiny the whole development process is put under in a regulated industry. I certainly wouldn’t be comfortable testing a medical device that could potentially kill or save a fellow human being, much like the reason I never wanted to be a surgeon, but understanding that our actions, or lack thereof, have consequences is a fundamental part of being a software tester.
See also:
- CAST 2013 – Arguing, Learning and Software Testing
- When Will Testing Actually become Mobile?
- Cat Flap Quality