Why the Cloud Can Mean Storms for Quality Control
While the cloud offers many benefits, deploying software there is more like working with a black box: you have far less visibility than you would in your own data center. That puts quality at risk!
So, how do you manage a black box? You manage the inputs and outputs. To do this you need an end-to-end testing and monitoring framework for your cloud-dependent application. This framework must measure functional accuracy as well as application response time, both in the lab during development and test, and at the end user after deployment. A complete software quality management approach must include activities both before and after deployment.
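The core idea of measuring both inputs and outputs can be sketched as a small synthetic check that scores a transaction on the two dimensions the framework cares about: functional accuracy and response time. This is a minimal illustration, not SmartBear's implementation; the function names and the latency budget are assumptions for the example.

```python
import time

def run_check(transaction, validate, latency_budget_ms=500):
    """Execute one synthetic transaction and score it on both quality axes.

    transaction       -- zero-argument callable that performs the request
                         and returns its result
    validate          -- predicate that decides functional accuracy
    latency_budget_ms -- assumed response-time threshold; in practice this
                         would be tuned from lab measurements
    """
    start = time.monotonic()
    result = transaction()
    latency_ms = (time.monotonic() - start) * 1000.0
    return {
        "accurate": bool(validate(result)),       # functional accuracy
        "latency_ms": latency_ms,                  # measured response time
        "within_budget": latency_ms <= latency_budget_ms,
    }
```

The same check can run in the lab against a test environment and, after deployment, against the production endpoint, which is what makes the pre- and post-deployment measurements comparable.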
Many of the quality-related activities during pre-deployment are well known. Code quality, unit and API testing, application testing, performance testing, and test management are all critical elements of building good quality software. The higher the confidence in the delivered software, the lower the risk of surprises after deployment.
Post Deployment Quality
However, once software is deployed, surprises sometimes occur, particularly in cloud deployments where the exact environment may not be precisely known. You need to know what your end users are experiencing, and automated monitoring of your application is the key to understanding the accuracy and performance they actually see. A holistic view of quality must include both, because when all is said and done, that is how the quality of the application will be judged. This is exactly why we brought End User Monitoring into the SmartBear family.
Tying the Two Together
When we can tie pre-deployment and post-deployment activities together, we truly have a test and monitor framework. One step to accomplish this would be to use some of the same recorded transactions that we used for testing in the lab, as monitors after deployment. In the lab we can understand how the product performs under load and where problems occur, and we can set monitors to alert us as we approach those thresholds. Likewise, if a problem is detected by the monitor, we can bring the same transaction back in the lab and recreate the problem. This loop also helps “calibrate” pre-deployment activities, enabling us to better connect them to the actual performance in the field. The quality feedback loop needs to extend from pre-deployment all the way to the end user.
Test management and support ticket management, integrated across the entire lifecycle, help communicate status and quality at every stage, so that Development, Test, and IT/Operations all have a view of the state of quality at each step.
Testing and Monitoring Framework for Cloud Dependent Applications
Here at SmartBear, part of our vision is to provide an integrated, pre- and post-deployment quality framework that enables Development, IT/Ops, and e-Commerce teams to work together. Our recent acquisitions complete our approach to code quality and profiling, API testing and performance, application testing and performance, end-user monitoring, and quality management. Upcoming releases will begin to integrate the pre-deployment activities with the post-deployment activities, helping you keep the unknowns of the cloud under control.
SmartBear Webinar: Featuring Guest Speaker Tom Murphy, Gartner Inc.
Join us on Wednesday, May 16 at 1:00 p.m. EDT for an informative Webinar with Ole Lensmar, Chief Architect, SmartBear Sweden, featuring guest speaker Tom Murphy, Research Director, Gartner Inc., to learn how you can Ensure Quality APIs from Development through Deployment.