There are different ways to measure the performance of an application while it runs. Depending on the method used, profiler results will vary; this can affect your ability to optimize your projects. Profiling methods fall into two broad categories: Instrumenting and Sampling. Let's take a look at each.
Instrumenting profilers insert special code at the beginning and end of each routine to record when the routine starts and when it exits. With this information, the profiler aims to measure the actual time taken by the routine on each call. This type of profiler may also record which other routines are called from a routine. It can then display the time for the entire routine and also break it down into time spent locally and time spent on each call to another routine.
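To make the idea concrete, here is a minimal Python sketch of the instrumenting approach. A real instrumenting profiler injects this machinery into compiled code; the decorator, and names like `timings` and `busy_work`, are purely illustrative.

```python
import time
from collections import defaultdict

# Accumulated wall-clock time and call counts per routine (illustrative storage).
timings = defaultdict(float)
call_counts = defaultdict(int)

def instrument(func):
    """Wrap a routine with entry/exit timestamps, as an instrumenting
    profiler would (here via a decorator rather than injected code)."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()          # routine entered
        try:
            return func(*args, **kwargs)
        finally:                             # routine exited
            timings[func.__name__] += time.perf_counter() - start
            call_counts[func.__name__] += 1
    return wrapper

@instrument
def busy_work(n):
    return sum(i * i for i in range(n))

busy_work(100_000)
busy_work(100_000)
print(call_counts["busy_work"])  # 2
```

The same per-call records can also be aggregated into the local-versus-called-routine breakdown described above.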
Currently, two types of instrumenting profilers are available on the market: source-code modifying and binary.
Source-code modifying profilers create several problems. They tend to conflict with source code control systems. They do not always reliably parse the source they are supposed to instrument. In fact, since the addition and removal of instrumentation can be a bit tricky, source-code modifying profilers often suggest that users work with a copy of the project source to avoid possible corruption.
Also, at best these profilers can only insert their instrumenting code at the start of a procedure in source. At that point, procedure setup (building the stack frame, allocating local variables, copying parameters) has already run. In small procedures, setup can be a significant portion of execution time, yet there is no way to time the setup itself with a source-code modifying profiler.
Binary profilers (or hierarchical profilers) work strictly at runtime. They insert their instrumentation directly into an application's executable code once it is loaded in memory. Source code is not required in any way, and thus there is no risk of corrupting it. Since a binary profiler works anew on each execution, it is also very easy to find slow code on one run, try an improvement in source, recompile, and test again: incremental optimization is supported in real time.
And of course, a binary profiler inserts its instrumentation at the very first assembly instruction of each routine. This ensures that routine setup is counted in the timing.
Pitfalls of Instrumentation
The timer calls which an instrumenting profiler inserts at the start and end of each profiled routine take some time themselves. To account for this, at the start of each run instrumenting profilers measure the overhead incurred by the instrumentation (they calibrate themselves) and later subtract this overhead from performance measurements. This usually works out very well.
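The calibration step can be sketched as follows. This is a simplified model, not any vendor's actual procedure; `calibrate` and `corrected_time` are hypothetical names.

```python
import time

def calibrate(samples=100_000):
    """Estimate the per-call overhead of the timing machinery itself
    by timing back-to-back reads of the high-resolution clock."""
    start = time.perf_counter()
    for _ in range(samples):
        time.perf_counter()  # the "instrumentation" being measured
    return (time.perf_counter() - start) / samples

def corrected_time(raw_seconds, calls, overhead):
    """Subtract the calibrated overhead from a raw measurement,
    clamping at zero so very short routines never go negative."""
    return max(0.0, raw_seconds - calls * overhead)

overhead = calibrate()
# Example: a routine measured at 1.5 ms total over 10,000 calls.
print(corrected_time(0.0015, 10_000, overhead))
```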
However, when a routine is very short, another effect of the instrumentation becomes important. Modern processors are quite dependent on order of execution for branch prediction and other CPU optimizations. Inevitably, inserting a timing operation at the start and end of a very small routine disturbs the way it would execute in the CPU absent the timing calls. If you have a small routine that is called millions of times, an instrumenting profiler will not yield an accurate time comparison between this routine and larger routines. If you ignore this, you may spend a great deal of effort optimizing routines that are not the real bottlenecks.
To help address the limitations of instrumenting profilers, sampling profilers let applications run without any runtime modifications. Nothing is inserted, order of execution is not affected, and all profiling work is done outside the application’s process.
The operating system interrupts the CPU at regular intervals (time slices) to execute process switches. At that point, a sampling profiler records the currently executing instruction (the execution point) of the application it is profiling. This is as short an operation as can possibly be implemented: the contents of one CPU register are copied to memory. Using debug information linked into the application's executable, the profiler later correlates the recorded execution points with the routine and source code line they belong to. What the profiler finally yields is how frequently a given routine or source line was executing, either during a given period of the application's run or over the entire run.
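The mechanism can be sketched in pure Python. Real sampling profilers are driven by OS interrupts; here a background thread stands in for the interrupt, using CPython's `sys._current_frames()` to read the target thread's current execution point. The `sampler` and `hot_loop` names are illustrative.

```python
import sys
import threading
import time
from collections import Counter

samples = Counter()
stop = threading.Event()

def sampler(target_ident, interval=0.001):
    """Periodically record which function the target thread is
    executing, the way an OS-driven sampling profiler would."""
    while not stop.is_set():
        frame = sys._current_frames().get(target_ident)
        if frame is not None:
            samples[frame.f_code.co_name] += 1  # one execution point
        time.sleep(interval)

def hot_loop():
    """A deliberately busy routine for the sampler to catch."""
    deadline = time.time() + 0.3
    while time.time() < deadline:
        sum(i * i for i in range(1000))

t = threading.Thread(target=sampler, args=(threading.get_ident(),))
t.start()
hot_loop()
stop.set()
t.join()
print(samples.most_common(3))
```

Tallying the samples per function name is exactly the frequency table described above, only without the debug-information lookup a real profiler performs.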
Because the sampling operation runs far less often than instrumented timer calls, and is so much simpler than a time measurement, the overhead is negligible and the application runs at practically its real speed.
A sampling profiler is the perfect tool for isolating small, often-called routines that cause bottlenecks in program execution. The downside is that its time estimates are approximations. A fairly fast routine may, by chance, happen to be executing at many of the sampling interrupts. To confirm that a given routine really is slow, run the application through the sampling profiler at least twice.
Another limitation of sampling is that it only tells what routine is executing currently, not where it was called from. A sampling profiler cannot give you a parent-child call trace of your application. Nor can it show you that a routine is actually running slow, when the time is not spent in its own code, but in routines it calls, either because it makes many calls, or because the called routines are slow. By contrast, the code which an instrumenting profiler inserts at the beginning of each routine traces out and records what other routine it is being called from.
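Caller recording is easy to sketch in Python, where the entry code can inspect the calling frame; this decorator-based version is illustrative, not how a binary profiler does it at the machine level.

```python
import sys
from collections import Counter

# (caller, callee) -> call count, the raw material of a call tree.
callers = Counter()

def traced(func):
    """Record who called each invocation, as an instrumenting
    profiler's routine-entry code can (and sampling cannot)."""
    def wrapper(*args, **kwargs):
        caller = sys._getframe(1).f_code.co_name  # the calling frame
        callers[(caller, func.__name__)] += 1
        return func(*args, **kwargs)
    return wrapper

@traced
def helper():
    return 42

def parent_a():
    return helper()

def parent_b():
    return helper() + helper()

parent_a()
parent_b()
print(callers[("parent_a", "helper")], callers[("parent_b", "helper")])  # 1 2
```

Aggregating these (caller, callee) pairs is what lets an instrumenting profiler present the parent-child call trace that sampling cannot.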
How To Ensure Proper Performance Analysis
There are many other differences between profiling methods, but our point is already made: not all profilers are alike. Each has strengths and weaknesses, and each is properly applied only to specific aspects of application testing during development. Reading up on how to select the best profiling tool will give you a better grasp of the different options. To accurately and successfully isolate bottlenecks in your code, you must use a combination of profilers.
To learn more about Performance Profiling, visit our Code Profiling Resource Center.
Why is API monitoring critical?
Think of development, testing, deployment, and monitoring as one continuous cycle. As engineering organizations embrace Agile practices and other modern techniques for software development, testing, and deployment, it is critical that monitoring becomes a required extension of that continuous integration cycle, and that monitoring scripts are kept up to date with the latest versions of the functional tests, to ensure uninterrupted availability for the consumers of the services.
When APIs fail, your applications fail: if the APIs your app relies on are unavailable, malfunctioning, or unresponsive, the performance of your app and the end-user experience suffer.
Once again, it is critical that you monitor not only the APIs you provide but also the ones you consume from third parties.
How do I monitor an API/Web service with Ready! API?
The latest version of Ready! API already has the AlertSite Connector plugin for Ready! API pre-installed and ready for use. This allows you to re-use your existing test cases and create monitors out of them that can be run from AlertSite’s global network of nodes or from inside your firewall.
You can monitor your APIs in three easy steps:
To get started, click the Monitor APIs icon on the SoapUI NG toolbar to start the AlertSite Connector.
Step 1: Connect to AlertSite with a new or existing account.
To do this, click on Getting Started in the upper right corner.
A. Connect to AlertSite using a new, free account. To do so, enter your email address and a password that is at least 8 characters long, then confirm the password. Click the Create Account button to connect to AlertSite.
B. Connect to AlertSite using an existing AlertSite account. To do so, first click on the ‘I already have an account’ link. Then enter your AlertSite email address and password and click on Connect.
C. If you have forgotten your password, click the forgot password link to reset it.
D. You can check the ‘Reconnect when Ready! API starts’ option to automatically connect to AlertSite and update the status of your running monitors.
Step 2. Add your existing API test cases to create monitors
All your test cases are available to you when you click Add Monitors. Select your project, test suite, and test case, and give it a monitor name (a monitor is what a monitoring test is called in AlertSite). You may create a new monitor or drop the list down and overwrite an existing one.
Name your API monitor and select a monitoring frequency from 1 to 120 minutes. Once this is done, click the Add button.
If you wish to create more than one monitor, check the Add Another check box.
Once you are done adding monitors, click the Done button to close the Add dialog box.
Step 3. Monitor your APIs for availability, correctness and performance
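These three checks are what any one monitoring pass boils down to. The sketch below is not AlertSite's implementation; it runs a throwaway local HTTP server as a stand-in for your API, and `check_api` and its thresholds are illustrative.

```python
import json
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A stand-in local API; a real monitor would target your service's URL.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"status": "ok"}).encode())

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/health"

def check_api(url, max_seconds=2.0):
    """One monitoring pass: availability (HTTP 200), correctness
    (expected payload), and performance (response-time budget)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=max_seconds) as resp:
        elapsed = time.perf_counter() - start
        payload = json.loads(resp.read())
    return {
        "available": resp.status == 200,
        "correct": payload.get("status") == "ok",
        "fast_enough": elapsed <= max_seconds,
    }

result = check_api(url)
server.shutdown()
print(result)
```

A monitor created in Step 2 runs essentially this kind of pass on the schedule you chose, from AlertSite's network of nodes or from behind your firewall.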
How do I install a newer version of the plugin?
To install a newer version of the plugin:
1. If an older version of the plugin exists, follow the instructions above to uninstall it.
2. Click on the Plugin Manager icon on the toolbar.
3. Browse to the location on disk where the plugin (.jar) file is located. You will see the AlertSite Connector plugin listed in the list box.
4. Close the plugin manager.
5. Restart Ready! API if needed.
6. At this point, the Monitor APIs icon will appear on the toolbar to the right of the “New LoadTest” toolbar button.
Additional API Monitoring Resources