Over the next few years, software testing techniques will need to evolve rapidly as testers try to keep up with the ever-changing demands coming from throughout the software industry. Matt Heusser got a first-hand glimpse at the direction this evolution is heading while attending STPCon 2013. Here's his recap of the biannual testing event.
Last time I went to California was to attend the Conference for the Association for Software Testing, hosted last year in San Jose. Two weeks ago I found myself heading back to the Golden State for the Software Test Professionals Conference in San Diego.
As tempting as the all-day performance testing boot camp and the session by Dr. Kaner on qualitative methods for test design were, I decided to start the conference by attending Robert Galen's talk on the mind of an Agile tester. Unlike most tutorials, Robert runs this presentation like an Agile project itself - with timeboxed, one-hour iterations in which he picks a theme to expand on. This technique makes the "slide deck" more of a guide, with a dozen different mini-lessons he might pull out at any time.
Bob Galen adapted his class incrementally to adjust to the needs and interests of the students.
As I walked into the class, Galen was already talking about "hardening sprints" and how some teams use the term as an excuse to do work that will be cleaned up later. He pointed out that this was not the original intention of "hardening sprints"; instead, those sprints were designed to be used on multi-team programs for integration, when the pieces to be integrated could not be assembled until the end. He referred to this process as the "release train," and pointed out that it is to the program what a sprint is to a team - a device to synchronize work and a timebox. Where a sprint might be two weeks, a release might be four sprints, plus a two-week "hardening sprint" where end-to-end regression occurs. This allows the program to focus on coordinating marketing, documentation and training.
Galen also described the traits that make up a "good" story from a tester's perspective:
- Independent of other stories
- Negotiable in scope
- Valuable to the customer
- Estimated (at least the ability to be estimated)
- Small enough to complete within a sprint
- Testable
These traits make up what he called the INVEST model. Stories that teams accept without these criteria lead to blown deadlines, arguments, wasted time and bugs. Testers who know the INVEST model can contribute by helping to define the story itself - rather than just testing it. Galen went on to suggest that testers do more than "just test" - they can drive improvement throughout the team. This is a theme that would come up time and time again throughout the conference.
The Future of Testing
Instead of the typical hour-long keynote, the conference did something different on Tuesday: Three speakers were each given 15 minutes to talk about how they expect testing will change in 2013 and beyond.
Rex Black started this session off by saying that he'd have questions about anyone who claims to know exactly what is coming next, but adding that it may be possible to cite trends. The trends Rex specifically mentioned were cloud computing, virtualization and the exploding number of configurations to support. In the end, Mr. Black did have one multi-million dollar tip: Figure out how to maintain relational integrity across database links when the data is private and needs to be anonymized. He cited this as a problem at several of his clients.
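One common way to attack the problem Black describes is consistent pseudonymization: replace each private identifier with a keyed hash, so the same real value always maps to the same pseudonym in every table, and foreign-key joins still line up after anonymization. Here is a minimal sketch of that idea; the table names, fields and secret key are all hypothetical, not from the talk:

```python
import hmac
import hashlib

# Hypothetical secret key; in practice this would be managed securely
# and rotated, never committed to source control.
SECRET_KEY = b"example-secret-key"

def pseudonymize(value: str) -> str:
    """Map a real identifier to a stable pseudonym.

    HMAC is deterministic for a given key, so the same customer_id
    produces the same pseudonym in every table, preserving
    referential integrity across database links.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

# Two "tables" (lists of dicts) linked by customer_id.
customers = [{"customer_id": "C1001", "name": "Alice"}]
orders = [{"order_id": "O1", "customer_id": "C1001", "total": 42.50}]

# Anonymize: redact direct identifiers, hash the join key consistently.
anon_customers = [
    {"customer_id": pseudonymize(r["customer_id"]), "name": "REDACTED"}
    for r in customers
]
anon_orders = [
    {**r, "customer_id": pseudonymize(r["customer_id"])}
    for r in orders
]

# The join key survives anonymization:
assert anon_orders[0]["customer_id"] == anon_customers[0]["customer_id"]
```

A deterministic keyed hash is only one option; it trades some privacy strength (the mapping is repeatable) for the join integrity Black was asking for, and truncating the digest is purely for readability here.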
Next on the stump was Cem Kaner, a professor of software engineering at Florida Tech. Dr. Kaner suggested that statistical modeling - that is, real statistics skills - is coming to testing, along with an increase in qualitative methods for test design and system assessment. He described qualitative methods as not just telling stories, insisting that, "You drown yourself in data, then pick representative anecdotes to tell the story."
After Dr. Kaner, Bob Galen returned to the stage, again suggesting that the test role will change from finding bugs to driving continuous improvement, improving throughput and improving the customer experience. He gave a wonderful example of an exploratory tester who knew the product and customer well enough to add great value, and went on to explain that the tester of the future might not code - but they will need to find new ways to add value.
On The Transformation of Testing Groups
I've written before on test transformation and test transformation initiatives, but that was mostly on the subject of what - as in, what the next big thing will be or how testing has to change.
Lynn McKee, a former director of the Association for Software Testing, gave a keynote on a slightly different topic - how to get a group of people to think differently about the way they work. Groups of people can't change overnight. Even if you could communicate what the change is, they likely wouldn't have the skills to make the change. Even if they had the skills, in just about any group of 10-or-more humans, some will resist change.
Drawing from the work of John Kotter, a professor at Harvard University, McKee explained that any successful human change requires a coalition. She went on to say that effective coalitions combine positional power, expertise, credibility and leadership. Now, this is more than just a random list. This tells me what I have to build in order to get a change to stick, and gives me the ability to figure out what a specific change is lacking. Once I know what is lacking, I can go recruit the right people, then get ready for the coffee breaks, the lunch meetings, and the seemingly endless rounds of re-explaining McKee's point about everything that has to happen to get a large group of people moving in the same direction.
Later on in the day, I ran into Dylan Lacey, and we caught up on the results of his session. Instead of taking notes, I decided to turn on the video recorder:
A Last Hurrah
After Lynn's keynote and my interview with Dylan, the big talks were all but over. Before running to catch my plane, I had just one more session to attend, "Where Do Bugs Come From? A conversation."
At least, the title was mine. The idea was slightly different: building a list of sources of defects with the audience. We could have talked about off-by-one errors, browser compatibility or vague requirements, but instead of giving the attendees a list, I wanted to build it with them. From there, we came up with a list of risk management techniques - like pair programming, GUI testing or API testing - that we can use to limit those problems. Thankfully, Andy Tinkham offered to create a mindmap of our results, which I have simplified below:
(Click the image above to see the entire mindmap)
That section, "where do bugs come from," has 33 elements, including things like "missing pockets of information," "information lost in translation," "conflicting expectations," and "unanticipated customer use." The list of ways to mitigate risk is almost as long, with examples like "sit closer to programmers," "get to know the customer," "reduce the consequences of risk," "reduce rollback time," and "get customers used to the idea of defects" - that is, if the bugs will be fixed extremely quickly.
What struck me about the risk management piece was how few of the sections are actually about testing. None, really. As moderator, I mentioned unit testing, which gives programmers confidence they built what was expected; system testing, which builds confidence in product conformance; and acceptance testing, which builds confidence in fitness for use.
The audience wasn't particularly interested in these as much as ways to work together to solve problems. This attitude lends itself to conversations, examples, demos and relationships more than a new style of testing.
The final piece of all this, the tough piece, is to actually use the information to change behaviors back at the home office. For that, I recommended the audience analyze the actual bugs that escaped - at least to system test - against the techniques that should have found those defects. This kind of analysis leads to four categories of problems:
- Techniques that aren't discovering any defects, which you might stop doing.
- Techniques that should be discovering defects but aren't; the problem is escaping. These need to be refined.
- "Holes" in coverage - root causes of bugs that are not addressed by any test approach. The team may need to find new approaches to catch these bugs.
- Redundant approaches that should catch the same bugs. These may provide opportunities to do less.
The method I suggest here is to actually analyze the work from your own team, using empirical data - real bugs and real techniques - to decide what to improve next. That's the kind of thing you can do from the home office over a few lunch hours.
I left the attendees with the same challenge I'll leave you: actually try this approach, and let me know how it goes.
The Big Takeaway
As I reflect on the sessions, I am struck by the way that testing is melding into the overall development experience. Looking at the business cards in my pocket, I notice an "engineering associate," a "manager of business systems," a "sales & support engineer," and a few "pure" testers, but not many. Over the next few years, there may be fewer "pure" testers, but the conference is, if anything, getting stronger, attracting people who contribute to test and quality in other ways.
The test community is not going away; it is getting wider. If you haven't yet been to a test conference, you might want to give it a try.