Why Containers Instead of Hypervisors?

Steven J Vaughan-Nichols
  April 08, 2014

Our cloud-based IT world is founded on hypervisors. It doesn’t have to be that way – and, some say, it shouldn’t be. Containers can deliver more services using the same hardware you’re now using for virtual machines, said one speaker at the Linux Collaboration Summit, and that spells more profits for both data centers and cloud services.

I confess that I’ve long been a little confused about the differences between virtual machine (VM) hypervisors and containers. But at the Linux Collaboration Summit in March 2014, James Bottomley, Parallels’ CTO of server virtualization and a leading Linux kernel developer, finally set me straight.

Before I go further I should dispel a misconception you might have. Yes, Parallels is best known for Parallels Desktop for Mac, which lets you run Windows VMs on Macs, and yes, that is a hypervisor-based system. But where Parallels makes its real money is in its Linux server-oriented container business. Windows on Macs is sexier, so it gets the headlines.

So why should you care about hypervisors vs. containers? Bottomley explains that hypervisors, such as Hyper-V, KVM, and Xen, all have one thing in common: “They’re based on emulating virtual hardware.” That means they’re fat in terms of system requirements.

Bottomley also sees hypervisors as ungainly and not terribly efficient. He compares them to a Dalek from Doctor Who. Yes, they’re good at “EXTERMINATE,” but earlier models could be flummoxed by a simple set of stairs and carried way too much extra gear.

Containers, on the other hand, are based on shared operating systems. They are much skinnier and more efficient than hypervisors. Instead of virtualizing hardware, containers rest on top of a single Linux instance. This means you can “leave behind the useless 99.9% VM junk, leaving you with a small, neat capsule containing your application,” says Bottomley.

That has implications for application density. According to Bottomley, with a fully tuned container system you should expect to run four to six times as many server application instances on a given machine as you can with Xen or KVM VMs. Even without any extra tuning, he asserts, you can run approximately twice as many instances on the same hardware. Impressive!

Lest you think this sounds like science fiction compared to the hypervisors you’ve been using for years, Bottomley reminds us that “Google invested in containers early on. Anything you do on Google today is done in a container—whether it’s Search, Gmail, Google Docs—you get a container of your own for each service.”

To use containers on Linux, you use the LXC userspace tools. With these, each application can run in its own container. As far as the program is concerned, it has its own file system, storage, CPU, RAM, and so on.
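As a concrete sketch, here is roughly what the LXC 1.x command-line workflow looks like. The container name and distribution template are illustrative, and the commands need root privileges:

```
# Create a container from a distribution template
# (the name "web1" and the Ubuntu template are illustrative).
sudo lxc-create -t download -n web1 -- -d ubuntu -r trusty -a amd64

# Start it in the background, then open a shell inside it.
sudo lxc-start -n web1 -d
sudo lxc-attach -n web1

# Inside, the container sees its own file system, process tree,
# and network interfaces -- all served by the shared host kernel.
```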

So far that sounds remarkably like how a VM appears to an application. The key difference is that while a hypervisor abstracts an entire machine, containers abstract only the operating system kernel.

LXC’s entire point is to “create an environment as close as possible to a standard Linux installation but without the need for a separate kernel,” says Bottomley. To do this it uses these Linux kernel features:

  • Kernel namespaces (ipc, uts, mount, pid, network, and user)
  • AppArmor and SELinux profiles
  • Seccomp policies
  • Chroots (using pivot_root)
  • Kernel capabilities
  • Control groups (cgroups)
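Several of these building blocks are easy to observe from userspace. On a modern Linux kernel, every process’s namespace memberships appear as symlinks under /proc/<pid>/ns, and a process inside a container shows different namespace IDs than one on the host, even though both run the same kernel. A minimal, Linux-only Python sketch:

```python
import os

def namespace_ids():
    """Return this process's namespace IDs, read from /proc/self/ns.

    Each entry is a symlink target such as 'pid:[4026531836]'; two
    processes share a namespace exactly when these IDs match.
    """
    ns_dir = "/proc/self/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

if __name__ == "__main__":
    for name, ident in namespace_ids().items():
        print(name, ident)
```

Running this on the host and then inside a container makes the isolation visible: the names (pid, net, ipc, uts, mnt, …) are the same, but the bracketed IDs differ.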

The one thing that hypervisors can do that containers can’t, according to Bottomley, is to use different operating systems or kernels. For example, you can use VMware vSphere to run instances of Linux and Windows at the same time. With LXC, all containers must use the same operating system and kernel. In short, you can’t mix and match containers the way you can VMs.

That said, except for testing purposes, how often in a production environment do you really need to run VMs with several different operating systems on one server? I’d say “Not very damn often.”

You might think that this all sounds nice, but some developers and devops staffers believe that there are way too many different kinds of containers to mess with. Bottomley insists that this is not the case. “All containers have the same code at bottom. It only looks like there are lots of containers.” He adds that Google (which used cgroups for its containers) and Parallels (which uses “bean-counters” in OpenVZ) have merged their codebases, so there are no practical differences between them.

Programs such as Docker are built on top of LXC. In Docker’s case, its advantage is that its open-source engine can be used to pack, ship, and run any application as a lightweight, portable, self-sufficient LXC container that runs virtually anywhere. It’s a packaging system for applications.
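To make that concrete, a Dockerfile is the recipe Docker uses to package an application into an image. A hypothetical example for a small Python app (the base image, file name, and command are all illustrative):

```dockerfile
# Hypothetical Dockerfile for a small Python application.
FROM ubuntu:14.04

# Install the runtime the app needs.
RUN apt-get update && apt-get install -y python

# Copy the application into the image.
COPY app.py /opt/app/app.py

# The command the container runs when it starts.
CMD ["python", "/opt/app/app.py"]
```

Building the image with `docker build -t myapp .` and starting it with `docker run myapp` produces the same container on a laptop, a server, or a cloud instance.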

The big win here for application developers, Bottomley notes, is that programs such as Docker enable you to create a containerized app on your laptop and deploy it to the cloud. “Containers give you instant application portability,” he says. “In theory, you can do this with hypervisors, but in reality there’s a lot of time spent getting VMs right. If you’re an application developer and use containers you can leave worrying about all the crap to others.”

Bottomley thinks “We’re only beginning to touch what this new virtualization and packaging paradigm can mean to us. Eventually, it will make it easier to create true cloud-only applications and server programs that can fit on almost any device.” Indeed, he believes containers will let us move our programs from any platform to any other platform in time and space… sort of like Doctor Who’s TARDIS.
