A Taste of Salt: Like Puppet, But Less Frustrating

Corey Quinn
April 25, 2013

Have you gotten frustrated with Puppet? Corey Quinn offers an impassioned argument for his favorite open source alternative.

If you’re responsible for the care and feeding of multiple servers and you haven’t heard about configuration management yet, you have not been paying attention. CFEngine was one of the first configuration management systems to see anything approaching widespread deployment; it was followed later by Puppet and Chef. A bit over two years ago, SaltStack’s “Salt” entered the market and took a radically different approach to the problem of “configure all of my servers to do X.”

Salt started life as a remote execution system: a class of software applications written to address concerns of the form, “I have this command I want to run across 1,000 servers. I want the command to run on all of those systems within a five second window. It failed on three of them, and I need to know which three.”

Other systems were designed to do this, of course, but they failed in several ways. MCollective (which Puppet Labs acquired several years ago) was (and remains!) fiendishly complex to set up. Chef works atop ssh, which – while the gold standard for cryptographically secure systems management – is computationally expensive to the point where most master servers fall over under the weight of 700-1500 clients. Salt’s approach was far simpler.

Salt leverages the ZeroMQ message bus, a lightweight library that serves as a concurrency framework. It establishes persistent TCP connections between the Salt master and the various clients, over which all communication takes place. Messages are serialized using msgpack (a more lightweight serialization protocol than JSON or Protocol Buffers), yielding significant speed and bandwidth gains over traditional transport layers and allowing far more data to fit quickly through a given pipe. In non-technical terms: “Salt establishes a persistent data pipe between servers in your environment that’s extremely fast and uses very little bandwidth.”

Getting up and running is almost embarrassingly simple compared to other configuration management systems. Setup consists of three steps:

  • Install the package on the master server (clients are referred to as “minions”), and start the salt-master service. (apt-get install salt-master on Debian-based systems, though there are packages for virtually every operating system in widespread use, including an agent for Windows servers.)
  • Make sure that the host “salt” resolves (via DNS hacks or editing the hosts file) on the minion to the IP of the master server; then install the package and start the salt-minion service (apt-get install salt-minion).
  • Accept the minion’s key on the master. (salt-key -a minion_hostname)

At this point, you’re done; you’ve achieved remote execution.

From the master you can run commands across any or all of your minions in a massively parallel manner:

[root@salt]# salt '*' cmd.run "date"
{'log': 'Sat Mar 9 23:18:27 PST 2013'}
{'irc': 'Sat Mar 9 23:18:27 PST 2013'}
{'git': 'Sat Mar 9 23:18:27 PST 2013'}
{'code': 'Sat Mar 9 23:18:27 PST 2013'}
{'mail': 'Sat Mar 9 23:18:27 PST 2013'}
{'www': 'Sat Mar 9 23:18:27 PST 2013'}
[root@salt]#

Once this system worked, it didn’t take much for Salt’s creators to make the logical leap from “running arbitrary commands across an entire server fleet” to realizing they had the makings of a superior configuration management system on their hands. Configuration management follows the same model as above, but instead of “run this command,” Salt extends it to declarations: make sure that Apache is installed, that /etc/sudoers has these contents, or that postfix is running.
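As a rough sketch of what those three declarations might look like as Salt states (package, service, and file names here are illustrative assumptions using Debian-style naming, written in the era’s YAML state syntax):

```yaml
apache2:
  pkg:
    - installed
  service:
    - running
    - require:
      - pkg: apache2

/etc/sudoers:
  file:
    - managed
    - source: salt://base/files/sudoers   # hypothetical path on the master
    - mode: 440

postfix:
  service:
    - running
```

Each top-level ID names a resource, and the state functions beneath it declare what must be true of that resource; Salt figures out how to get there.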

These states are written in simple YAML, an intuitively obvious “list” format.

For instance, to ensure that your ssh server’s configuration is uniform across all of your systems, you would drop a copy of what that file should look like onto your salt master, then call it from the state tree:

/etc/ssh/sshd_config:
  file:
    - managed
    - source: salt://base/files/sshd_config

The next time the state is applied, that file is pushed out to all of the listening minions.
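For context, the state tree is rooted in a top file that maps states to minions. Assuming the sshd_config state above were saved as ssh.sls under the state root, a minimal (hypothetical) top file might look like this:

```yaml
# /srv/salt/top.sls -- the file path and state name are illustrative
base:
  '*':
    - ssh
```

Running salt '*' state.highstate from the master then applies every state assigned to each minion.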

What made Salt compelling for my use was that, following the steps in the tutorial, all of what I just described took less than 20 minutes to get working. Someone new to configuration management can be productive with Salt before lunchtime, and someone who’s familiar with configuration managers can perform complex tasks relatively easily. Simple is good.

The folks at SaltStack seem unwilling to settle for merely “doing what other configuration management solutions can do.” Part of Salt’s value lies in the growing ecosystem of other offerings tied into it.

For instance, Salt-cloud serves as an orchestration layer that can spin up instances in EC2, OpenStack, Linode, or other cloud computing providers. The idea is that Salt lets you not only define configuration within running instances, but also handle the initial provisioning, ongoing maintenance, and deprovisioning of the entire instance swarm.

Salt-virt is a newly released module that serves as a well-architected wrapper around libvirt, a popular virtualization library for KVM, Xen, OpenVZ, and others. It is similar to Salt-cloud, but rather than making API calls to a third-party provider, it makes libvirt calls to hypervisors on your behalf to instantiate new virtualization guests, and it provides monitoring data about running instances.

Salt-vagrant is a salt extension that lets you use Salt to manage Vagrant instances. Vagrant is an established project that wraps around a number of existing virtualization providers to allow for extremely quick provisioning of disposable, consistent environments. This is targeted specifically at developers, to solve the fairly common “Develop on a Mac to deploy into a Linux environment” library problems that sometimes crop up.

Salt-monitor is also in the works as one of the next major milestones. This is intended to serve as a soup-to-nuts monitoring solution that scales well, by replacing serial checks iterating through an environment with broadcast conversations, such as, “All servers: Tell me if you don’t have enough free disk space.”

Lastly, salt-ui is in its early alpha stages. It’s designed to be a Web interface to Salt, ideally making Salt administration easier for users who have a bit of trepidation about the command line. While not yet ready for prime time, it’s worth keeping an eye on as development progresses.

Perhaps the most compelling aspect of SaltStack is its vibrant community. Despite Puppet’s nearly six-year head start, Salt boasts more contributors to its code base (per Ohloh.net), a superior comment-to-code ratio, an increase in year-over-year commits, and a lower barrier to entry for new contributors. Community support is provided both via an active IRC channel (#salt on irc.freenode.net) and on the salt-users mailing list.

Having stood up a number of different configuration management systems across a wide variety of environments, I’ve yet to find a solution that’s as rapid to deploy, as simple to scale, or as well architected as Salt.

Hopefully I’ve given you a bit of a taste of Salt, and I hope it whets your appetite.

UPDATE: Since this post was published, I’ve been in contact with a number of Chef aficionados who have informed me that my understanding of the Chef client <-> server communications model was severely flawed. Chef uses https to communicate, not ssh, and its bottlenecks have previously been cited as being based in implementation details of Ruby / CouchDB.  To my understanding, this has largely been mitigated in recent versions; my apologies for the error.

About the author:

Corey Quinn is a contributor to Salt. When not slagging Puppet, he’s a Puppet Labs trained senior systems engineer and sometimes-consultant based out of Los Angeles.  He volunteers for the freenode IRC network, where he believes that no project is so well run that it can’t be complained about endlessly. Get off his lawn.
