Testing is Human Nature

  March 26, 2014

It’s no surprise (to me, anyway) that my 5-year-old twin nephews gave me pause once again this Christmas as I watched them play with one of their new toys. Each of them got a robot fish that turns on automatically when it senses it is in water (ah, the days of bathtub toys!). Of course, grandmas being what they are, my mother-in-law got them a bowl of water to try them out in, right there on the living room floor in the midst of the Christmas Eve bash. As the twins watched the fish swim in circles around the perimeter of the bowl, one of the adults casually asked, “Do they swim in a straight line too?”

Without looking up, one nephew snatched his fish out of the water and answered quietly and seriously, “I’m going to test that.” He then proceeded to start his fish’s path at different points in the bowl and in different directions, knowing intuitively that “testing that” didn’t mean trying just one thing.

So, how did this idea fall so naturally through the lips of a 5-year-old in the throes of Christmas excitement? Simple: It’s human nature to test.

Long ago, testing was a survival instinct (Is this food? Will this hurt?), so we’re wired for it. We not only understand the nature of testing with very little prompting, we are also innately geared to design test cases that give us a more thorough understanding both of the question we’re trying to answer and of the answers we’re gathering. My nephew knew instinctively that he had to try different methods of placing the fish in the water in order to answer that simple question. And in fact, it got more complicated from there…

Each robot resembles a different type of fish, so they have different shapes, weights, and tail sizes. Somehow, my nephew incorporated all of that data pretty much immediately and worked it into his testing. In order to test both fish, he had to enlist the help of his brother. Before you knew it, they had a paired testing process underway and were busily chattering about all the different things they had to try in order to find out if the fish could swim in a straight line.

They found that the fish could swim straight but would settle into circles as soon as they hit the wall of the circular bowl, which only makes sense: the fish were constrained by the size and shape of the bowl. The boys knew it within minutes and were soon demanding that we fill the bathtub so they could test in a better-suited environment.

(We didn’t, of course – as in most testing situations, there was a limit to what we could provide at that time and place.)

Watching this unfold reminded me of something I knew when my children were small – children are naturally great testers. As Brian Gerhardt states, we test to learn, and there is no question that the main focus of being a child is learning. So, what happens along the way that makes testing so complicated and hard to do? I’m sure there are more factors, but here are a few I thought of that can limit our testing abilities:

Process.

It’s not that process is a bad thing – it’s that it can get very rigid and bloated in importance. While my nephews were just trying to answer one question for one person and had no enforced deadline, most testers have a long list of questions that need to be answered within a certain time and budget. That means decisions need to be made up front about where to focus a tester’s energy, which questions to tackle first, and what kind of test environment will be needed.

Why didn’t we fill the bathtub with water so they could take their testing to the next level? Because we didn’t have the time or conditions to do that – their mother assured them they could try the fish in the bathtub the next night – or in software development speak, “we’ll test that in the next sprint.”

I’ve been a practitioner who moaned about process and I’ve been an executive who valued process so, like Joni Mitchell, I can see both sides now. In my experience, it’s not the enforcement of process that’s wrong – it’s the enforcement of process despite information that should push you outside the process.

For example, there’s nothing inherently wrong with writing test cases – they are a means of communication between team members that allows you to figure out whether you need a different environment, or additional diagnostics added to the code, or more time to test. But counting test cases as a performance metric or sticking to them beyond reason when issues come to light or project priorities change – those are restrictions that prevent you from following the natural path of your testing.

Philosophy.

One thing you can count on in the software industry is evolution (and occasional revolution). If you’ve been in this industry long enough, you’ve seen it morph many times and you’ve seen approaches come and go. As products and technology evolve over time, the methods we used yesterday to build and test may not work today. A trap many people fall into is becoming wedded to a single approach and closing their minds to other approaches that may work better – methodologies tend to become philosophies in software development, with people preaching the One True Way and discriminating against other ways.

If we stay on top of the latest trends but also keep our innate testing sensibilities at the forefront, we may find ourselves inventing new ways of doing things. With each new approach we learn and master, we add to a toolbox that helps us find the right tool and the right approach for each project.

What differentiates a religion from a cult is the ability to question. As testers, we should never stop questioning – it is the essence of who we are and what we do. Your circumstances, your project’s context, the stakeholder personalities, your development process, your customers’ risk tolerance… all of these are variables that have to factor into your approach to testing. You may even find yourself approaching each project with a unique methodology.

If my nephews had been testing their new fish at a water park, for example, the options for them would have been different and would have shaped their approach. Likewise, if their next gift is a set of robot beetles and they want to test if they can walk in a circle, the thought process will be similar but not the same.

Micro Focus.

There’s a natural tension in the testing world between drilling deep and staying broad. The reality is that there is no single right answer; the best way to figure out which is most appropriate is to understand the context of what you are doing in relation to the overall company or project goals. I’ve seen many a tester drill down into a tunnel that becomes dark and restrictive, keeping them from finding issues that lurk in the world above and to the side of them.

In addition to testing functionality, you need to also constantly test your assumptions about what you’re doing and why it matters. The older we get, the more we lose our tendency to ask why – but if we tap back into our natural instinct to test and question, we will find ourselves asking “why am I doing this?” more often. That’s a healthy place to be, because it forces you to constantly refresh the big picture and the priority of the work you’re focusing on.

There is no question that software testing is a complex activity that often has so many permutations it can be mind-boggling to define them all. If it weren’t so complex, it wouldn’t have spawned an entire industry of testing tools and education.

But there is also, in my mind, no question that we have over-complicated aspects of the industry itself, which often leads our focus away from the task at hand. In this new era where software is as much a part of the infrastructure as electricity and roadways, we need to renew our focus on the quality of our testing and free ourselves from the baggage surrounding it.
