We spend most of our time testing software in three or four browsers -- Chrome, Firefox, Internet Explorer, and maybe Safari. There are a few outliers, like Avant Browser, mixed in there, but for the most part those are what we use to surf the web, do our taxes, and get through our normal work every day.
When I'm working, or booking tickets to a concert, I just want things to work regardless of what computer I'm on and what browser I'm using. Preferably the first time. Sometimes I'm not that lucky, though, and I have to open different browsers until I can get done what needs to be done. That is exactly the kind of failure cross-browser testing should have caught.
There are a few things we can do to make developing and testing software across browsers a little faster, and a little more effective.
Why Browsers Matter
Monopolies are bad in almost every way I can think of. They stifle competition, stop new product development, take away buyers' ability to make decisions for themselves, and lock in pricing. Microsoft had to uncouple the Internet Explorer web browser from its Windows operating system for this very reason.
Right now, there is no real risk of a browser monopoly. There is plenty of variety in web browsers and, more importantly, in the technologies that make browsers work -- JavaScript engines, and to a lesser extent Flash and Silverlight. Each browser has its own special way of interpreting JavaScript: Google Chrome uses the V8 engine, newer versions of Internet Explorer use Chakra, and Firefox relies on SpiderMonkey.
The general idea is that by building their own JavaScript engines, browser makers can optimize for performance and offer a more fully featured display system. That makes things nice and clean in the rare cases when we develop websites that support only one browser. And, since each engine interprets JavaScript in slightly different ways, it occasionally makes getting features to work across different browsers very complicated.
Sometimes when I am trying to book a hotel room for a conference, a date picker will open and work perfectly in Chrome, but have a bug in the month navigator in Firefox so that clicking the left and right arrows doesn't move you forward and backward in time, and maybe the date picker throws an error in the JavaScript console and doesn't open at all in Internet Explorer. The worst case here is that I get frustrated and book with a different hotel.
We have to be careful about how our products work in different browsers.
Combinations
The browser problem seems really simple at a superficial level. There are three browsers that matter, maybe four if we are being generous and include Safari. Every time we develop a new feature, we will probably test it deeply in one browser to get an idea of what the important moving parts are, and after that do more cursory testing in the other two.
That strategy doesn't always work out.
In addition to each browser, there are also versions. Chrome and Firefox update every few weeks, so there are gradual changes that most developers can cope with; the new features in Chrome 45 are just the beginning. But the world runs on legacy software, especially in industries like banking and healthcare. The latest version of Internet Explorer today is 11, yet even now there are companies running software on IE6, a product released in 2001. Most people don't own cars that old.
This leaves us with a lot of potential combinations, and a lot of repetitive work trying to discover problems caused by changing JavaScript engines and new rendering technology every few versions.
One solution to this problem, and one used by a few companies whose customers are mainly other tech companies, is to support only the latest modern browsers. Usually that means relatively recent versions of Chrome, probably Firefox, and the latest two versions of Internet Explorer. If you can get away with this, and not many can, you are back to just a couple of versions of a few browsers. This moves us from 20 or more browser targets to somewhere around 8.
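To make that arithmetic concrete, here is a minimal sketch of a support matrix before and after the "modern browsers only" decision. The browser and version lists are hypothetical; substitute whatever your team actually supports.

```python
# Hypothetical support matrices -- substitute what your team actually supports.
all_versions = {
    "Chrome": ["41", "42", "43", "44", "45"],
    "Firefox": ["36", "37", "38", "39", "40"],
    "Internet Explorer": ["6", "7", "8", "9", "10", "11"],
    "Safari": ["6", "7", "8"],
}

modern_only = {
    "Chrome": ["44", "45"],
    "Firefox": ["39", "40"],
    "Internet Explorer": ["10", "11"],
    "Safari": ["7", "8"],
}

def targets(matrix):
    """Flatten a browser -> versions map into individual test targets."""
    return [f"{browser} {version}"
            for browser, versions in matrix.items()
            for version in versions]

print(len(targets(all_versions)))  # 19 targets to cover
print(len(targets(modern_only)))   # 8 targets to cover
```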
The other way I simplify the combinations of browsers and versions to test is through sampling. After your product has been in production for some time, you can use tools like Google Analytics or monitoring tools like Splunk to learn what your customers are really using -- specific browser brand and version, which pages get hits and how frequently, and the operating system they are working from. With this information, you can make educated guesses about what is important and what is at risk, and build a test strategy around that. Over time, your team can move from testing the variations you think will be used to a risk-based strategy built on what is used the most.
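As a sketch of what that sampling might look like in practice, the following assumes a hypothetical CSV export of browser sessions, the kind you could pull from an analytics tool; the file name and column names are made up for illustration.

```python
import csv
from collections import Counter

# Rank browser/version pairs by real traffic, assuming a hypothetical
# export with "browser", "version", and "sessions" columns.
usage = Counter()
with open("browser_sessions.csv", newline="") as f:
    for row in csv.DictReader(f):
        usage[(row["browser"], row["version"])] += int(row["sessions"])

# Walk the list from most used to least used and show how much of your
# real traffic each additional test environment buys you.
total = sum(usage.values())
covered = 0
for (browser, version), sessions in usage.most_common():
    covered += sessions
    print(f"{browser} {version}: {covered / total:.1%} of sessions covered")
```

A list like this makes the risk conversation easy: test the environments at the top, and make an explicit decision about where in the tail to stop.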
Emulators And Virtual Environments
Let's say that you need coverage across a few different browsers, multiplied by a few different browser versions. At some point you will have a cross-platform problem. Single versions of Internet Explorer, Firefox, and the like can all live happily together on one Windows machine. Only single versions, though, and if you want to do anything more than simple short-term projects, you will probably need more than that. If you're on a Mac, IE is out of the question without a little extra effort.
On long-term projects, chances are a new version of at least one browser you depend on has come out. And you've probably discovered at least one problem that is only reproducible on a specific platform.
If you have a great IT department with a little money to spend, maybe you can get a few virtual machines provisioned, each with its own version of Windows and a specific set of browsers. For me, this has been a time-consuming process where I submit tickets and then hope I get a response quickly. Going to happy hour after work with the right people can help a little. More often than not, a healthy Mac with hard disk space to spare and a lot of RAM, in combination with the free and legal VMs from http://dev.modern.ie/, will do the trick.
You can download these pre-configured VMs to your personal computer and be up and running in a little over an hour.
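Once a VM is running, you can point automated checks at it instead of clicking through by hand. This is only a sketch using the Selenium 3-era Python bindings; the hub address is hypothetical and it assumes a Selenium standalone server is already running inside the Windows VM.

```python
from selenium import webdriver

# Hypothetical address of a Selenium server running inside the VM.
VM_HUB = "http://192.168.56.101:4444/wd/hub"

driver = webdriver.Remote(
    command_executor=VM_HUB,
    desired_capabilities={"browserName": "internet explorer"},
)
try:
    driver.get("http://example.com/")
    print(driver.title)  # a quick sanity check that the session works
finally:
    driver.quit()
```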
Emulators and virtual environments can partially answer the question of "How am I going to get access to all of these browser and system combinations?"
Mobile
It seems like every new software company is claiming to be mobile-first, and companies that have been around for a while are reinventing themselves and adopting that slogan as fast as they can. Mobile concerns like cross-platform support and responsive design are the new cross-browser problem.
The brute-force way to approach this, and one I have tried many times, is to buy a sample of devices for the test group. This isn't a big deal for iOS, as long as you don't mind buying a new test device every time Apple releases a new product, which can be pretty often.
Collecting devices for Android testing is more difficult, since Google doesn't control the device market. With the Android operating system, we have to think about which screen sizes are important (there are many to choose from), which hardware manufacturers are representative of what customers might use, and which operating system versions are relevant enough to care about. Apple gets high saturation on upgrades when a new OS version is released; Android, on the other hand, could have 10 different versions of the OS in use at any point in time.
If you have hundreds of options, starting by selecting a few environments at the extreme ends of the spectrum -- very new, very old, big screens, and small screens -- and then adding a few right in the middle might be a decent place to start, as sketched below. That will get us some decent information about how the product works on a range of environments.
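Here is a rough sketch of that "extremes plus middle" selection. The device inventory is hypothetical; the point is only to pick from the ends and the middle of the sorted lists rather than eyeball hundreds of options.

```python
# Hypothetical device inventory; replace with your real candidate list.
devices = [
    {"name": "Moto E",    "screen_in": 4.3, "android": "4.4"},
    {"name": "Galaxy S3", "screen_in": 4.8, "android": "4.1"},
    {"name": "Nexus 4",   "screen_in": 4.7, "android": "4.4"},
    {"name": "Galaxy S6", "screen_in": 5.1, "android": "5.0"},
    {"name": "Nexus 6",   "screen_in": 6.0, "android": "5.1"},
]

def os_key(device):
    # Parse "4.4" into (4, 4) so OS versions sort numerically.
    return tuple(int(part) for part in device["android"].split("."))

by_screen = sorted(devices, key=lambda d: d["screen_in"])
by_os = sorted(devices, key=os_key)

picks = {
    by_screen[0]["name"],                    # smallest screen
    by_screen[-1]["name"],                   # biggest screen
    by_os[0]["name"],                        # oldest OS
    by_os[-1]["name"],                       # newest OS
    by_screen[len(by_screen) // 2]["name"],  # one from the middle
}
print(picks)
```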
There is a decent set of tools available for responsive testing. If you open the developer tools in Google Chrome, there is a set of buttons that will approximate what your software will look like on a phone or a tablet. Alternately, you can grab the corner of the browser and resize it to whatever makes sense for your test.
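Both of those manual moves can be scripted. Here is a minimal sketch with Selenium's Python bindings, assuming a hypothetical page at example.com; the device name has to match one of Chrome's built-in emulated devices, and "iPhone 6" is an assumption.

```python
from selenium import webdriver

# Option one: Chrome's built-in device emulation, the same feature
# behind the device buttons in developer tools.
options = webdriver.ChromeOptions()
options.add_experimental_option("mobileEmulation", {"deviceName": "iPhone 6"})
emulated = webdriver.Chrome(options=options)
emulated.get("http://example.com/")
emulated.save_screenshot("phone.png")
emulated.quit()

# Option two: plain window resizing, the scripted version of dragging
# the corner of the browser to each breakpoint you care about.
driver = webdriver.Chrome()
driver.get("http://example.com/")
for name, (width, height) in {"phone": (375, 667),
                              "tablet": (768, 1024),
                              "desktop": (1366, 768)}.items():
    driver.set_window_size(width, height)
    driver.save_screenshot(f"{name}.png")
driver.quit()
```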
Automating What Makes Sense
After selecting the tests you want to run by finding out what is important to your users, collecting some test devices, and getting the rest of the coverage through emulators and virtual environments, there are still tests that have to be performed.
At some point, you might perform the same test in one environment or browser and then repeat it on a few other environments to look for platform-related issues. If you have to do this more than once, as we often do right before a release, the repetition can be a time sink. Depending on how quickly you can write a script or record a scenario, it might make sense to create an automated version of your test to check for the problems you think might occur in different browsers or on different devices.
Some problems around basic functionality -- buttons being visible on a page, drop-down lists expanding correctly, and tabs being selectable -- are fairly easy to check for. Others, like layout differences, can be difficult. Writing a set of checks in WebDriver and creating a loop to run them on each important browser or mobile platform can give fast information after the initial development. Spending time observing the running script with a notepad and pen in hand, taking notes on strange things to investigate later, is a powerful technique I have used to find problems that the automated checks didn't.
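As a minimal sketch of that loop, again with Selenium's Python bindings: the page URL and element IDs here are hypothetical stand-ins for whatever basic functionality matters in your product.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# One launcher per browser we care about; each needs its driver installed.
BROWSERS = {
    "chrome": webdriver.Chrome,
    "firefox": webdriver.Firefox,
    "ie": webdriver.Ie,
}

for name, launch in BROWSERS.items():
    driver = launch()
    try:
        driver.get("http://example.com/booking")
        # Cheap-to-automate basic functionality checks.
        assert driver.find_element(By.ID, "submit-button").is_displayed(), \
            "submit button hidden"
        driver.find_element(By.ID, "date-field").click()
        assert driver.find_element(By.ID, "datepicker").is_displayed(), \
            "datepicker failed to open"
        print(f"{name}: basic checks passed")
    except Exception as exc:
        print(f"{name}: FAILED - {exc}")
    finally:
        driver.quit()
```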
Cross-browser and cross-platform testing can feel scary at first, when all we can see is hundreds of combinations and only a little bit of time. Usually the testing we actually need to perform is smaller, and with careful selection of environments, tools, and devices, we end up with a manageable problem to work with.
Do you have any tips that aren't mentioned here? We would love to hear about them.