This weekend we installed a microchip controlled cat flap/door in our home. We were all very involved. Our children were bustling around us with tools, cat food (that is, bait) and a vacuum cleaner, and my wife and I were extolling the virtues of not being awakened at 4 AM by a cat biting our feet to get us to open the door. I even enjoyed drilling and sawing a large hole into one of our porch doors – and finally having a reason to own the corresponding tools. Everyone was excited and looking forward to the outcome. Everyone except one, that is.
The cat itself.
The manual for this product tried to warn me: “Most cats will learn to use the cat door almost at once, but there are a few who may be a little nervous at first.” No kidding.
When all was done and the cat flap was ready, we waited for His Highness to arrive and give it a try. Finally he approached, but he hardly looked at the opening in the closed door and instead went straight to another door a couple of meters away. Then, after having eaten, his curiosity was aroused – and we held our breath. Slowly he poked his head through the flap (which was open for learning purposes), but halfway through he backed out, looked at us contemptuously, and bolted for the other door. Cat vs. Cat Flap: 2–0.
This got me thinking. How would a tester approach a product whose target audience is like this:
- Not very technical
- Not very convinced that the product actually solves a problem for them
- Not very forgiving and easily scared away if the product doesn’t work in a way they expect or are comfortable with
You might think this is a contrived example – that pet products targeted at cats are a bad starting point for the discussion – but when you think about it, there are many websites and apps whose target users’ initial disposition is just like this. They’re not very technical (they just got a tablet for Christmas), they’re just trying out stuff on the web to see what is out there (they clicked on a link in a search result), and if they don’t get how something works or they hit some strange error, they simply move on to the next website instead of trying again. They’ll probably never come back – their initial “bad” experience was enough to put them off.
Another example actually involves extremely technical users: developers (or testers!) trying out new tools that someone else has asked them to evaluate. They will often behave in a similar fashion. If the toolbars don’t work as they expect, or the editors lack some functionality that they have in all of their other tools, or the workflow isn’t how they want to work – the product goes out the window and will be judged “worthless” for generations to come. They are all cats trying out a cat flap, judging the experience based on its cat flap quality.
So how do you approach this as a tester? What can you do to ensure the success of your product from a QA perspective? My thoughts are as follows:
- As is often the case in testing, I would argue that the most important thing to do is to really get under the skin of the target audience and understand their motives and dilemmas (including the three bullets above). Document this in a persona. It doesn’t matter if the product does all the things it advertises from a strictly functional point of view, or if performance is top-notch and scales to millions of users without a hitch – if the barrier to entry and the initial experience aren’t in line with the users’ prerequisites, chances are high they’ll leave and never give it another shot.
- Once the persona is mapped out and understood, the natural next step is to test your product with the disposition of that user. This is a perfect match for Exploratory Testing, the practice of simultaneously and creatively exploring and testing a product. Automated tests can never put on the hat of the skeptical user and notice the little things that might disturb the user experience: unexpected sights and sounds, a slightly changed workflow, and other deviations from how things are done in similar situations in other domains.
- Finally, make sure that those skilled testers are involved as early as possible in the product development process – preferably when gathering requirements and building backlogs. Having exploratory testers with a critical eye, sharpened by an understanding of the users and their domain, is a key competence in your team that can make a huge difference for your business. Those testers can correctly identify subtle traits and nuances in your application that your target audience might be sensitive to.
Back to our cat and the pet door. When we activate the electronics that detect him coming in (so the door opens for him only), the clicking noise made by the opening lock scares him away. Fortunately for us, we have a method for making him try again (food), but to be perfectly honest, I can’t blame him; would you put your body through a device that you don’t understand and that makes clicking noises when you stick your head into it? Probably not.
Are you testing software that has a suspicious target audience? What’s your take on “cat flap quality”? Feel free to share your experience with us.