If you’ve ever taken a train across the northern route of the U.S., you probably spent your time soaking in the beauty of the countryside and wondering at the odd little towns, never once thinking about the possibility of colliding with another train or that a career in software testing might start out making sure such a thing never happens.
But that was the kick-off for Mark Tomlinson in 1992, when he was tasked with running a two-year test analyzing the performance, design and execution of a life-critical train dispatching system. If you think about it, it’s a perfectly suited metaphor for a long and distinguished career in performance testing, where you essentially make sure train wrecks don’t happen, so the masses—usually wholly ignorant of all that went into preventing disaster—can sit back and enjoy the ride.
Since then, he’s worked with the likes of Microsoft and HP. After decades of experience with real-world scenario testing of large and complex systems, he’s regarded as a leading expert in software test automation, with a particular emphasis on performance. Nowadays, he consults and runs the popular podcast PerfBytes, which will come as no surprise once you hear his radio-ready voice in the audio clip below.
Given Mark Tomlinson’s background, I thought he’d be a perfect person to ask about an ongoing inquiry I’ll be posting on for the next few months—how performance testing and the skills of a performance tester can go far beyond load testing. Here’s the first clip of a series with Mark Tomlinson on the subject.
Scroll down for the transcript.
MEG CATER: I'd love to hear-- I know that some of this question may be even beyond what you would typically consider testing-- but the concept of how performance is more than just load testing can get pretty broad. So I would love to hear your take on that. Why is performance testing more than just load testing? And what are the different elements that one would have to think of, even from a business strategy perspective, or getting down into the fundamental details of creating test plans?
MARK TOMLINSON: Right. So the easiest way, I think, to describe this, or the most common thing I observe when people first expand their view outside of the testing function in a traditional sense, is that they go one of two ways. As performance testers, they're either pulled toward production, or pulled upstream toward development and the business, or the business objectives.
So for this discussion, I would start by saying the most common way people are being pulled now is to shift upstream. Jim Duggan, from Gartner, actually described to me how performance testing, or load testing, is hypothesis-based research, which is kind of based in testing to a criterion: pass or fail.
And you have some idea of what your success criteria are. And the code has already been written at the time you're evaluating, doing what I would call, from years ago, performance validation. Performance validation would be the category.
And so the first step upstream would be performance engineering. And Jim's definition was, the one sure-fire way you can know that you're doing performance engineering, is if you're influencing the thinking of the design of the code before the code is written. And that could happen in an agile sprint while you're discussing with the business analyst, or you're there as a tester in a multi-party conversation about a particular story, and how fast that story should go. Again, that's time.
Or how many users it should support. That could be volume. So again, time and volume: these discussions that get after the business and also influence the thinking of the design of the code. If you're actually that early in performance, at that point you're not really doing a test; you're more influencing the conversation between an engineer and maybe the product owner, or technical product owner, as to these two other important dimensions of a user story.
That would be how you transition from being a load tester or a performance tester into what you would call a performance engineer. I think the next step you would take after that, going upstream even from there, is when you start analyzing what it means to deliver on increased volume, or faster response time.
So let's say you're in a business unit, and you have a competitor who can do some transaction faster than you, a financial transaction, or let's say you're taking submissions, selling tickets online. If you're delivering static information, whatever it is you're doing as an organization, if your competitors can do it faster than you, then, as a performance engineer, I would take it one step further and say, well, if we can figure out an architecture in our code, what would it mean to the business to go twice as fast? Or process twice as many users, or handle twice the load, or three times the load?
At that point, you're going from performance engineering to being almost a performance architect, as a role. Or you're actually kind of connecting the meaning, or the real return on investment, if the company were to turn around and say, wow! You know, if we could do three times the throughput, or handle three times the users on our current system, we might actually be in a position to do an acquisition of our nearest competitor. We can just absorb their business right on our platform, and compete in the marketplace.
So from a business perspective, or capacity for any kind of organization, there's usually value in doing more of what you do. Or doing the same thing for less cost. That's kind of the inverse of that equation. So those would be the first two steps: performance steps up out of the lab and starts joining a sprint and influencing the code, in performance engineering; and then the next step, which is sort of like performance architect, or even thinking about the real value of increased transaction volume or faster systems.
--Check in again in a few days to hear from Mark Tomlinson on how performance testers can “move downstream” toward production and monitoring.--