Quality Issues in a Plug-In World: What’s a Poor Browser To Do?
If you're a member of the open source community - as a contributor, a consumer or both - you know the constant challenges presented by the communal approach to coding. Nobody takes the rap more than Mozilla’s Firefox browser, which has been criticized often for instability and security holes.
Just this past week, Mozilla had to pull Firefox 16 so they could fix a security vulnerability (they re-released it a day later with a fix). But before we point fingers, we should remember that this version also includes a number of security fixes, including an important one that safeguards the user against wayward plug-ins.
The reality of today’s software market is that using and producing APIs is the best way to build inexpensive, full-featured software products that resonate with consumers. Most developers incorporate defensive coding measures as a best practice, especially when they are integrating with third-party code that could destabilize their own applications. But the current environment has introduced a proliferation of apps for multiple devices, plug-ins for everything from debuggers to stock tickers, and a rich selection of public APIs available to coders… all of which make defensive coding difficult at best.
One very real issue Mozilla faces is the multitude of plug-ins that integrate with the browser, which is true for all browser providers, really. Plug-ins can be unpredictable in their code quality, as anyone with 80-year-old parents who click every “Install the Plug-In!” link they see can tell you. So, what is a poor browser to do? Well, certainly the big players are doing everything they can to plug the holes as they find them and write as defensively as possible, but perhaps more of the onus should be pushed to the plug-in developers who are compromising the very platform they depend on for their business.
Profiling is under-utilized
When do you run a performance profiler? When things "seem" slow? And once you’ve run it and fixed a memory leak, do you keep running it defensively? No, don’t answer. Most people download and run profiling tools only when there’s already a problem that can be detected in manual testing. Once the issue has been rectified, they run the profiler again to validate that the bug is fixed. Typically, that’s the extent of it. That leaves a lot of application bottlenecks undetected unless your QA cycle is deep enough and long enough to catch even minor optimization issues. Now imagine multiple plug-ins that haven’t been optimized properly all running in the same browser instance. All of a sudden, those "minor" optimization issues have combined to form one big optimization problem, despite the current trend to separate plug-in processes from browser processes.
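Making profiling routine can be as simple as wiring a profiler into your automated test run instead of reaching for it only after a bug report. Here is a minimal sketch in Python using the standard-library `cProfile` and `pstats` modules; `build_ticker_feed` is a hypothetical stand-in for real plug-in work, not anything from an actual plug-in.

```python
import cProfile
import io
import pstats

def build_ticker_feed(n=1000):
    # Hypothetical plug-in work: naive string accumulation,
    # the kind of "minor" inefficiency routine profiling surfaces.
    out = ""
    for i in range(n):
        out += f"tick-{i};"
    return out

def profile_top_calls(func, *args, limit=5):
    """Profile one call to func and return the top cumulative-time
    entries as a text report suitable for logging in CI."""
    profiler = cProfile.Profile()
    profiler.enable()
    func(*args)
    profiler.disable()
    buf = io.StringIO()
    stats = pstats.Stats(profiler, stream=buf).sort_stats("cumulative")
    stats.print_stats(limit)
    return buf.getvalue()

report = profile_top_calls(build_ticker_feed, 5000)
print(report)
```

Run on every build and diffed against the previous report, this kind of output catches creeping hotspots long before users feel them.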
A little API monitoring goes a long way
It’s happened to the best of us. We code away and test away – QA blesses the feature you built using third-party APIs and you’re ready to release to production. And then suddenly that feature doesn’t work. Hold the presses! It looks like the API changed from underneath you, but because you were heads-down coding and didn’t keep up with the API news, and because QA had already signed off on the feature and wasn’t retesting it... well, nobody saw that the feature was broken until the pre-deployment regression testing.
If you're writing a plug-in, an even more critical danger is the performance of the APIs you rely on, especially if you support multiple browsers with different APIs. What happens when a plethora of plug-ins are all hitting the same API? Can it handle the load? Are you just crossing your fingers that the API provider tested under load? Your plug-in depends on that functionality – start by getting some benchmarks with and without load. Then put some API monitoring in place for your test and production environments so you know if things change before your users tell you (or worse, forum posts start popping up telling people to uninstall your plug-in).
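The benchmark-then-monitor idea above doesn’t require heavyweight tooling. Here is a minimal sketch: time repeated calls, take the median, and flag when latency drifts past a tolerance over your recorded baseline. The names (`check_latency`, `fake_api_call`) and the baseline numbers are illustrative assumptions; in practice the callable would wrap a real API request.

```python
import statistics
import time

def benchmark(call, runs=20):
    """Time repeated invocations of call; return median latency in ms.
    Median is used so one slow outlier doesn't skew the result."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        call()
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.median(samples)

def check_latency(call, baseline_ms, tolerance=2.0, runs=20):
    """Compare current median latency to a recorded baseline and
    report whether it stays within the allowed tolerance factor."""
    current = benchmark(call, runs)
    return {"median_ms": current, "ok": current <= baseline_ms * tolerance}

def fake_api_call():
    # Stand-in for a real third-party API request.
    time.sleep(0.001)

result = check_latency(fake_api_call, baseline_ms=50.0)
```

Scheduled against both test and production endpoints, a check like this tells you the API slowed down before your users do.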
Share your tests
Got QA? Then drink it. You’ve spent the time, energy and money on automated tests. If you haven’t hooked those into your build scripts and shared some of the basic ones with your Operations team to run against your production environment, shame on you. If you’re building plug-ins, you’ve most likely already spent too much time building for the various browser flavors you need to support. Maximize your productivity by leveraging each other’s work – grab those test scripts from QA or hand them the ones you’ve run against your own code, then run over to the Operations team and figure out which ones make sense to run on a regular schedule in production as a safeguard.
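What Ops actually needs from QA’s suite is usually just a thin harness: a handful of named checks it can run on a schedule and turn into a nonzero exit code for monitoring to catch. A minimal sketch, where the check names and the lambda bodies are hypothetical placeholders for real HTTP or validation checks shared from QA:

```python
def run_smoke_tests(checks):
    """Run each (name, callable) pair; a check passes by returning True.
    Returns a list of (name, reason) failures instead of raising,
    so one broken check doesn't hide the rest."""
    failures = []
    for name, check in checks:
        try:
            if not check():
                failures.append((name, "check returned False"))
        except Exception as exc:
            failures.append((name, repr(exc)))
    return failures

# Hypothetical checks borrowed from the QA suite.
checks = [
    ("homepage-reachable", lambda: True),    # stand-in for an HTTP 200 check
    ("plugin-manifest-valid", lambda: True), # stand-in for a schema validation
]

failures = run_smoke_tests(checks)
exit_code = 1 if failures else 0  # cron/monitoring alerts on nonzero exit
```

The same harness runs in the build pipeline and on a production cron job, so QA and Operations are literally sharing the tests rather than duplicating them.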
And ultimately, share responsibility
It’s easy to let browsers like Firefox and Chrome take the rap for things like memory leaks and security holes. And yes, they do have a responsibility to code defensively. But plug-in providers also have a responsibility to ensure high quality: to optimize their code, and then to monitor not only their own code but also the APIs they rely on from other providers. Plug-ins don’t live in their own ecosystem; the quality of one can impact the performance of another. So be careful out there, folks, and test like you mean it.