Sometimes, to illuminate the concept underlying something, you simply must get basic and simple. We can add layers to a concept in order to explain it more fully, but those additional levels can only be understood once the basics are grasped.
One of the relatively recent concepts going around in application performance circles is the split between the real and synthetic schools of monitoring. At its heart, it’s all rather simple: you can monitor what real users actually do in some situation, or you can monitor the results of efforts to synthesize what you think they will do in that situation. That’s it, in a nutshell.
Stripping away the abstraction this way can help anyone, including non-technical people, understand what these techniques are all about.
So, what’s RUM got to do with it?
As is true with almost everything these days, monitoring users in order to capture data has its own acronym - RUM (Real User Monitoring). Plain, yet very descriptive. But RUM has come to carry an implicit meaning beyond just the monitoring of users: it usually includes the analysis and processing of the monitoring data, the things that provide actionable insights into a situation. RUM encourages viewing the monitoring of users as a complete, closed-loop process.
RUM has some obvious functional limits if it is used alone as a method of taking a look at some situation. The most obvious: to monitor users, you need users, and they must be actively using the specific area you want to monitor. It is impossible to test out a situation with RUM before actual users (and their expectations) encounter it. That said, RUM provides a unique perspective on user behavior: data from real users can highlight things they do in actual situations that you may never have considered in the design phase.
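To make the idea concrete, here is a minimal sketch of what RUM data might look like once collected: one record per real page view, stitched together into a per-session path. The field names and values are hypothetical, not any particular product’s format.

```python
from dataclasses import dataclass

# Hypothetical shape of one RUM data point: a single page view
# captured from a real user's session. Field names are illustrative.
@dataclass
class PageView:
    session_id: str   # ties the views from one visit together
    page: str         # e.g. "/cart", "/checkout"
    timestamp: float  # seconds since epoch, as reported by the client

# A session is just the ordered list of pages a real user visited.
def session_path(views: list[PageView]) -> list[str]:
    return [v.page for v in sorted(views, key=lambda v: v.timestamp)]

views = [
    PageView("s1", "/search", 1.0),
    PageView("s1", "/product/42", 2.0),
    PageView("s1", "/cart", 3.0),
]
print(session_path(views))  # ['/search', '/product/42', '/cart']
```

Everything downstream of collection, including the abandonment analysis discussed below, works on paths like this.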
One of the first classes of situations where RUM was used as a tool to generate data for analysis was the “abandoned cart” problem that occurred in early e-commerce websites. A user would come to a site due to some marketing effort, look around, and then make selections. The user would then abandon those potential purchases before they got to the checkout/pay phase of the experience. They filled the shopping cart, but never went through the checkout line.
The trail of the disappearing user
The page views of each user were noted and analyzed, including any “backwards” navigation the user performed. What came out of the analysis of the paths users were actually following on these sites was that certain pages had the most user abandonment associated with them.
Now, as is so well known, correlation is not always causation. The problem that needed to be addressed was outcome-oriented: analysis had to figure out how (and why) the user got to the bailout point. The complete list of each user’s page views was the trail of bread crumbs showing the paths the users took.
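The bread-crumb analysis described above can be sketched as a simple aggregation: take each session’s ordered page views, keep the sessions that never reached checkout, and count where they ended. The page names and data here are made up for illustration.

```python
from collections import Counter

# Each session is the ordered list of pages one real user viewed.
# Sessions that never reach "/checkout" are abandoned carts; the last
# page seen is the candidate bailout point.
def abandonment_points(sessions: list[list[str]]) -> Counter:
    counts: Counter = Counter()
    for path in sessions:
        if path and "/checkout" not in path:
            counts[path[-1]] += 1
    return counts

sessions = [
    ["/search", "/product/1", "/cart"],              # abandoned at /cart
    ["/search", "/product/2", "/cart", "/checkout"], # completed purchase
    ["/product/3", "/cart"],                         # abandoned at /cart
    ["/search"],                                     # abandoned at /search
]
print(abandonment_points(sessions).most_common(1))  # [('/cart', 2)]
```

Ranking pages this way points at *where* users bail; figuring out *why* still takes the kind of interpretation the text describes.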
The page-view lists showed that users were muddling about the sites, with no clear, direct path evident for most of them. This behavior was at variance with what the website designers had expected. The designers believed that once on the site, a user would run some sort of search, land on a product view page, make a selection there, and go on to checkout.
But users seemed to select items and then leave them unbought, sometimes changing their minds after making additional selections. The inference was that users were envisioning a multi-item purchase before they bailed. Further, the data suggested that the sites confused users about navigation once more than one item had been selected.
It may be that users could find no way to review the potential purchases in different arrangements or to see them all together. Whatever the cause, the confusion stopped the purchasing. The major bank card players, who had high hopes for e-commerce and the financial payments it would generate, were worried by this and came up with best-practice ideas for these kinds of situations.
What were the actionable insights?
The upshot was to convince site designers to rethink the simple interface characteristic of early e-commerce efforts and to directly address what users wanted from the sites. That included making the navigation experience a much less daunting process.
Watching users’ actions served well here. But there is another way to do things: the “synthetic” method. No user action is involved in this kind of test; it is all generated by scripts that sysadmins write and run.
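A minimal sketch of such a synthetic test, assuming a scripted walk through the pages we expect users to visit. The `fetch` stub here is hypothetical; a real probe would use an HTTP client or a headless browser to load each page.

```python
import time

# A synthetic check walks the path we *expect* users to take and times
# each step. No real user is involved; a scheduler runs this script.
def run_synthetic_check(fetch, steps):
    timings = {}
    for page in steps:
        start = time.perf_counter()
        fetch(page)  # load the page (stubbed out below)
        timings[page] = time.perf_counter() - start
    return timings

# Stand-in for a page load; a real probe would issue an HTTP request.
def fetch(page: str) -> None:
    time.sleep(0.01)

timings = run_synthetic_check(
    fetch, ["/search", "/product/42", "/cart", "/checkout"]
)
for page, seconds in timings.items():
    print(f"{page}: {seconds:.3f}s")
```

Because the script, not a user, drives the transaction, it can run before launch, at 3 a.m., or against a page no one has visited yet.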
The relationship between these two complementary approaches is not quite as simple as defining them. Out of the universe of possible user-related events, each approach excels at extracting usable information in a particular situation: finding out what the user really does, or simulating and predicting what the user will encounter when they go and do something.
They can also be made to depend on each other. A certain sequence of synthetic events can be triggered by the occurrence of specific user-facing events, and user events can help select the synthetic checks best suited to understanding a problem. Hybridization is a very fertile area, with many ideas sprouting up.
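One hypothetical hybrid along these lines: let a RUM-derived signal, such as a high abandonment rate on a page, trigger a targeted synthetic check of just that page. The threshold, page names, and check functions are all illustrative, not a real product’s behavior.

```python
from typing import Callable

# Hybrid sketch: a RUM-derived signal selects which synthetic check to
# run. Both the threshold and the check functions are hypothetical.
ABANDONMENT_THRESHOLD = 0.25  # fraction of sessions bailing on a page

def pick_checks(abandonment_rates: dict[str, float],
                synthetic_checks: dict[str, Callable[[], None]]) -> list[str]:
    triggered = []
    for page, rate in abandonment_rates.items():
        if rate >= ABANDONMENT_THRESHOLD and page in synthetic_checks:
            synthetic_checks[page]()  # run the targeted probe now
            triggered.append(page)
    return triggered

ran = []
checks = {"/cart": lambda: ran.append("/cart"),
          "/checkout": lambda: ran.append("/checkout")}
rates = {"/cart": 0.40, "/search": 0.05, "/checkout": 0.10}
print(pick_checks(rates, checks))  # ['/cart']
```

The real-user data narrows the search; the synthetic probe then reproduces the suspect path on demand, with no waiting for the next user to stumble into it.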
If the network is a road, then the user is driving along it in a car. You can watch how fast the user goes by having sensors in the road. But traffic lights, like pre-programmed synthetic events, can affect how a car (or your data) will flow into your driveway.