One of the things we've really improved in the new, soon-to-be-released loadUI 2.0 is the ability to create complex user simulations. This lets you generate realistic load on different parts of your server, based on real-world data. When doing these types of simulations, it is common to use something called a Markov model: in essence, a set of states and the probabilities of transitioning between them. This is best explained with an example, say a web site. Once a user arrives at the start page, he or she can do one of three things: go to the news page, go to the about page, or leave the site. By using data available from the web server (via Google Analytics or something similar), or simply by estimating, we can assign probabilities to these actions. For example:
From the index page, a user can:
- Click on News (50% chance)
- Click on About (20% chance)
- Leave the site (30% chance)
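To make this concrete, here is a minimal sketch of sampling one user's next action from those probabilities. This is plain Python, not loadUI itself; the table just encodes the 50-20-30 split from the list above:

```python
import random

# The index-page transition probabilities from the list above.
TRANSITIONS = [("news", 0.5), ("about", 0.2), ("leave", 0.3)]

def next_action():
    """Sample one user's next action from the index page."""
    r, cumulative = random.random(), 0.0
    for action, probability in TRANSITIONS:
        cumulative += probability
        if r < cumulative:
            return action
    return TRANSITIONS[-1][0]  # guard against float rounding

print(next_action())  # 'news', 'about', or 'leave'
```

Run this many times and roughly half the simulated users click News, a fifth click About, and the rest leave.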
So how does this relate to load testing? Well, imagine that we create a virtual user scenario where each user makes these choices at each point in the test. Instead of coming up with a single flow that every user follows to the letter (start here, click here, go there, do this, do that, and so on), we can create a model where each user behaves differently. Some will just hit the start page and then leave; some will browse the site for a while. The important thing we gain here, which traditional script-based scenarios don't give us, is realistic fluctuation in traffic. We'll get peaks and valleys that correspond to the real world and are based on mathematical models. It should go without saying that the more accurate our load test is, the more we can depend on its results when it comes to real-world users.
For the new, soon-to-be-released version of loadUI, we've added a number of new components and significantly improved some existing ones. Several of these vastly improve the ability to create these types of tests, so let's go over some of them!
The (improved) Splitter Component
The Splitter component has been a part of loadUI since the beginning, but for loadUI 2.0 it has been improved to allow weighted distribution. Each output of the splitter now has a knob for specifying its weight, letting us set up the aforementioned 50-20-30% split instead of giving each output the same probability.
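Under the hood, a weighted split like this boils down to a weighted random choice per arriving user. A rough sketch of the idea in Python (not loadUI's actual implementation), showing that over many users the observed split approaches the knob settings:

```python
import random
from collections import Counter

def weighted_split(outputs, weights):
    """Pick one output, with probability proportional to its weight."""
    return random.choices(outputs, weights=weights, k=1)[0]

# The 50-20-30 split from the example above:
outputs = ["News", "About", "Leave"]
weights = [50, 20, 30]

# Over many virtual users, the observed counts approach the weights.
counts = Counter(weighted_split(outputs, weights) for _ in range(10_000))
print(counts)
```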
The Condition Component
With the new release we've added a whole new way of creating assertions, eliminating the need for the old Assertion component. The old component, however, did have one feature which is missing from the new assertions: the ability to direct traffic based on some condition. Replacing it is the new Condition component, which does this in an even more powerful way. Besides letting you set the condition based on a numerical value and an accepted range for it, it also adds a new advanced mode which allows you to write a Groovy expression. This means that the condition can be much more advanced than before.
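Conceptually, the component routes each user down one of two outputs depending on whether the condition holds. A simplified sketch of the two modes in Python (loadUI's advanced mode actually uses Groovy, and the field names here are hypothetical):

```python
def range_condition(value, low, high):
    """Basic mode: true if the value falls inside the accepted range."""
    return low <= value <= high

def expression_condition(predicate, context):
    """Advanced mode: evaluate an arbitrary expression against the
    current context (in loadUI this would be a Groovy expression)."""
    return predicate(context)

# Route based on response time: under 200 ms counts as "fast".
print(range_condition(150, 0, 200))  # True

# A more involved, hypothetical condition combining two values:
ctx = {"responseTime": 150, "status": 200}
print(expression_condition(
    lambda c: c["status"] == 200 and c["responseTime"] < 200, ctx))  # True
```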
The Loop Component
We got some feedback from our users asking for the ability to set up their test so that they can simulate a user performing some initial action, then running another action a certain number of times, before finally performing some final action. For example, this might be logging in to a service, then performing 1000 searches, and then logging out. For this, we added the Loop component. It has two outputs, one which directs the virtual user into the loop (which should loop back into the loop component again, eventually), and one which exits it. It also has an iteration count setting which defines how many times the user should run through the loop. Oh, and we heard you like loops, so we support nested loops, so you can put loops inside your loops.
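The flow described above is plain control flow at heart: an initial action, a fixed number of repetitions, then a final action. A sketch of one virtual user's session in Python, where login/search/logout are just illustrative placeholders:

```python
def run_user_session(iteration_count=1000):
    """One virtual user: log in, search repeatedly, log out."""
    actions = []
    actions.append("login")           # initial action, before the loop
    for _ in range(iteration_count):  # the Loop component's iterations
        actions.append("search")
    actions.append("logout")          # final action, taken on loop exit
    return actions

session = run_user_session(iteration_count=3)
print(session)  # ['login', 'search', 'search', 'search', 'logout']
```

A nested loop is simply another `for` inside the body, mirroring a Loop component wired inside another loop's path.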
The (unchanged) Delay Component
The Delay component hasn't changed at all, actually, so I guess we got that one right from the start? It's still as useful as ever, though, so I might as well include it here for the sake of completeness. The Delay component has a single input and a single output. It routes incoming users to the output completely unchanged, though, as its name implies, it delays them for some time. There is support for an exact delay (specified in milliseconds), but also for distributing the delays randomly: uniformly, exponentially, or using a Gaussian distribution (bell curve). Since not all users are mechanical robots, I generally prefer to add some randomness into the mix.
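The random modes correspond to standard probability distributions, all available in Python's standard library. A sketch of sampling delays around a 1500 ms mean (the spread and standard deviation values are just illustrative):

```python
import random

MEAN_MS = 1500

def exact_delay():
    return MEAN_MS

def uniform_delay(spread=500):
    # Uniform in [mean - spread, mean + spread].
    return random.uniform(MEAN_MS - spread, MEAN_MS + spread)

def exponential_delay():
    # Exponential with the given mean; the rate is 1 / mean.
    return random.expovariate(1.0 / MEAN_MS)

def gaussian_delay(stddev=200):
    # Bell curve around the mean; clamp at 0 so delays stay non-negative.
    return max(0.0, random.gauss(MEAN_MS, stddev))
```

Whichever mode you pick, the average delay converges to 1500 ms over many users; what differs is the shape of the variation around it.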
Putting it all together
So how do we string these different components together to create the type of scenario described at the start? If you're familiar with loadUI, you probably have an idea already. If not, here's an example to get you started. The starting point of any test is the VU generator (previously called “load generator”). Since we're going for realism, the best one to use here is the Random generator, set to use an exponential distribution (the distribution which most accurately describes how real users arrive at a site). All VUs (Virtual Users) arrive at a single point, the Index Page. They stay for an average of 1500 ms, then 50% of them visit the News Page, 20% visit the About Page, and 30% of them leave the site. From each of the other two pages there are similar probabilities for moving between pages, but eventually, after clicking around a bit, each user will exit the site, which we show in the Table Log at the bottom.
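As a back-of-the-envelope version of that whole scenario in Python: exponential inter-arrival times, an exponentially distributed dwell time per page, and the 50-20-30 transition out of the Index Page. The arrival rate and the onward probabilities for the News and About pages are purely hypothetical:

```python
import random

ARRIVAL_RATE = 10 / 1000.0   # hypothetical: 10 users per second
DWELL_MS = 1500              # average time spent on each page

TRANSITIONS = {
    "index": [("news", 0.5), ("about", 0.2), ("leave", 0.3)],
    # Hypothetical onward probabilities for the other two pages:
    "news":  [("about", 0.3), ("index", 0.2), ("leave", 0.5)],
    "about": [("news", 0.2), ("index", 0.2), ("leave", 0.6)],
}

def walk(start="index"):
    """One virtual user's path through the site and the time it took."""
    page, path, elapsed = start, [start], 0.0
    while page != "leave":
        elapsed += random.expovariate(1.0 / DWELL_MS)  # dwell on the page
        r, cum = random.random(), 0.0
        for target, p in TRANSITIONS[page]:
            cum += p
            if r < cum:
                page = target
                break
        else:
            page = "leave"  # guard against float rounding
        path.append(page)
    return path, elapsed

def simulate(n_users=5):
    """Users arrive with exponential inter-arrival times, then walk."""
    t = 0.0
    for _ in range(n_users):
        t += random.expovariate(ARRIVAL_RATE)
        path, elapsed = walk()
        print(f"t={t:7.0f} ms  {' -> '.join(path)}  ({elapsed:.0f} ms on site)")

simulate()
```

Each printed line plays the role of a Table Log row: when the user arrived, which pages they visited, and how long they stayed.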
The bottom line
So, when should you model complex, realistic scenarios as described above? It really depends on what you are testing for. I'd say yes, definitely try to model the simulation as accurately as you possibly can if you're trying to predict how the system will behave under a realistic load. That's definitely something you should do before moving into production. However, it's also not the be-all and end-all of load testing. There are times when you just need to generate a massive amount of load on a single point of your application to see where it breaks, or to locate a bottleneck. The good thing is that loadUI allows you to do both of these things, and anything in between. We've also greatly improved the soapUI Runner component, making it much more useful in realistic load simulations. You can read more about that in another blog post, by my colleague Henrik, here.
The first beta of loadUI 2.0 is scheduled to be released on Feb 28, with the final version coming out roughly a month after that. Downloads will be available from loadui.org!