DevOps has become synonymous with empowering developers to move faster and deliver more software, but it has unintentionally pushed software quality and testing into a corner. The pendulum that swings between slow, heavyweight testing and avoiding technical depth altogether has swung hard with DevOps: teams ignore the value that testers can bring, force customers to test new code, and build processes that rely on monitoring systems to find problems that could have been caught before customers noticed.
DevOps has moved from speculation about what development could be to a real part of the developer role. Let's see how we can swing this trend back to a more responsible place: still using DevOps to release software faster, but at a level of quality that customers will be happy with.
Continuous Integration And Delivery
Continuous Integration (CI) and Continuous Delivery (CD) are two of the most fundamental concepts built into DevOps. They gave us tooling that builds the product every time a new line of code lands in the repository and then, as soon as that build is done, deploys it to production. Some of the more technology-focused companies, like GitHub, have pushed this idea as far as it can go and deploy new software to production many times every day.
The side effect of this fast-paced style of development and release is that any time for a real live person to try the software before delivery gets squeezed out. As soon as there is software that can run, it goes into production. Paying customers are the new testers, and monitoring and reporting systems are the new bug reports.
If we dial this back a little bit, we get a strategy where developers do CI on their own environment, getting a new build every time they commit to their local source code repository. And after checking in to the main repo, code is continuously deployed to a staging environment.
I have had a lot of success with continuous deployment to a headless API server on smaller teams. One team I worked on had two people building the API platform that the rest of our product was built on. Each time they committed code, a new build would go to that server. Before that happened, we would normally sit together and talk about the changes and their concerns. I might start stubbing out a few automated checks and writing down some test ideas.
We were still using DevOps concepts to build an API quickly, but we were doing it in a way that didn't force customers to deal with our problems.
Monitoring Is Only Part of the Solution
A few large companies run complex API and web monitoring systems in the background, sucking up large amounts of log data from the API and looking for a few important keywords like "Exception" and "Error". Every time one of these words is found, emails and text messages go out to let the development staff know that something has gone wrong.
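A minimal version of that keyword-scanning approach can be sketched in a few lines of Python. Everything here is hypothetical — the keywords, the alert hook, and the sample log lines are assumptions, and a real system would tail live log files and send email or SMS instead of printing:

```python
import re

# Keywords the monitor treats as a failure signal (assumed plain-text logs).
ALERT_KEYWORDS = re.compile(r"\b(exception|error)\b", re.IGNORECASE)

def scan_log_lines(lines):
    """Return only the log lines that contain an alert keyword."""
    return [line for line in lines if ALERT_KEYWORDS.search(line)]

def notify(matches, send=print):
    """Stand-in for the email/SMS hook a real monitoring system would call."""
    for line in matches:
        send(f"ALERT: {line.strip()}")

if __name__ == "__main__":
    sample = [
        "INFO  request served in 42ms",
        "ERROR Exception raised in /v1/stores: city lookup failed",
    ]
    notify(scan_log_lines(sample))
```

The word boundaries in the pattern keep application identifiers like `WidgetException` from triggering false alarms while still catching the bare keywords in any case.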
Maybe you have a fully componentized product where you can flip bits to turn features on and off quickly and reduce exposure to faults. Some companies have bits and pieces of the product built as components; a handful have a large amount; most have little or none. For smaller companies, getting there would mean spending as much time on architecture as on building product, and that isn't a ratio most founders and investors want to see.
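The "flip bits" idea is just a feature flag: a runtime switch that gates a new code path so it can be turned off without a redeploy. A minimal in-process sketch, with hypothetical names throughout — real products back this with a config service or database so the flag can be flipped while the app is running:

```python
# Minimal feature-flag registry. In production this would be backed by a
# config service or database rather than an in-memory dict.
class FeatureFlags:
    def __init__(self, defaults=None):
        self._flags = dict(defaults or {})

    def is_enabled(self, name):
        # Unknown flags default to off, so new code ships dark by default.
        return self._flags.get(name, False)

    def set(self, name, enabled):
        self._flags[name] = enabled

flags = FeatureFlags({"geo_store_search": True})

def find_stores(city, flags=flags):
    """Hypothetical endpoint logic gated behind a flag."""
    if not flags.is_enabled("geo_store_search"):
        return []  # feature is dark: fall back to the old behaviour
    return [f"stores near {city}"]  # placeholder for the real lookup
```

When the new geolocation search starts throwing errors, `flags.set("geo_store_search", False)` shuts it off in seconds instead of waiting on a rollback build.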
Monitoring and rollback systems are great for API products, but are usually not feasible for young companies that urgently need product to sell.
Landmines for Consumers and Holes in Your Revenue
Using monitoring without pre-production testing is dangerous for you and your consumers. Imagine you are releasing a new version of your API that helps customers use geolocation information on their phones to find nearby stores. The code under the hood of the API has some automated checks that use faked data to verify stores are returned correctly, but there are only a few and they are pretty simple.
One of your first customers to use this feature lives in the island paradise of Hawaii, and it just so happens that plenty of cities there have special characters in their names. A few hours after the deploy, the server log files are blowing up with errors from people in Kāne‘ohe and ‘Ewa Gentry trying to find the closest Dunkin' Donuts.
Emails start flying through the development office and the feature gets shut off minutes later, but the damage is done. By the time your support person gets in touch with a developer, and that developer investigates and takes action, your customers have given up. About half of those users uninstalled the app and are searching with Google instead.
“Testing new API changes can expose your product to black swan problems in a way that DevOps never will.”
There are real consequences for using your customers to test new code. Placing a skilled tester in front of that API probably would have brought up questions like "What happens for cities with really long names, or special characters, or very small cities that might not be in all mapping systems?" Testing new API changes can expose your product to black swan problems in a way that DevOps never will.
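Those tester questions translate directly into automated checks. A short sketch of the idea, assuming a hypothetical lookup keyed on city names — the bug class in the Hawaii story is code that quietly assumes ASCII:

```python
import unicodedata

def normalize_city(name):
    """Build a lookup key from a city name without destroying non-ASCII letters.

    A naive name.encode('ascii') raises UnicodeEncodeError on names like
    'Kāne‘ohe'; Unicode NFC normalization plus casefolding keeps them intact.
    """
    return unicodedata.normalize("NFC", name).casefold()

# The kinds of inputs a tester would ask about before release:
# special characters, very short names, and very long names.
TRICKY_CITIES = ["Kāne‘ohe", "‘Ewa Gentry", "Y", "Llanfairpwllgwyngyll"]

for city in TRICKY_CITIES:
    assert normalize_city(city)  # every name should yield a usable lookup key
```

A handful of checks like these, run in staging before every deploy, cost minutes to write and would have caught the Kāne‘ohe errors before a single customer saw them.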
Cases like this really happen, and they are good examples of where using DevOps concepts to deliver internally, rather than straight to production, would have saved a few customers and saved the company money.
DevOps presents a powerful set of ideas that help us deliver code to customers faster than we used to. It can also be a dangerously fast way to deliver bad code and buggy product. If we slow down a little by pairing DevOps themes with skilled testers, these ideas and tools can help us deliver software and API updates faster than would otherwise be possible, without exposing our customers to new kinds of risk.
Do you have experience integrating DevOps with testing? We would love to hear your story.