A little while ago we ran into an odd salt bug which was inconvenient to try debugging in production. What I really wanted, since I don’t have a private cloud or free Rackspace instances, was a way to build and run a cluster of 100 or so salt minions on my desktop for testing. One of the developers suggested using docker containers for this, and so my odyssey began.
Docker has been generating a lot of buzz, and a lot of questions, starting with “Why is this any better than running a virtual machine?” Docker, in a nutshell, runs processes in a chroot using your host’s kernel. This makes it lighter weight than a VM, and if you want multiple containers running the same process with minor configuration changes, they can all share the same base image, saving disk space. LXC (Linux containers) and devicemapper are used under the hood; docker itself provides a build system that lets you write a config file for generating a linux container with specific contents and running specified processes. It also implements a REST-ish API that provides information about images (the base chroot) and containers (the thin copies of the image), as well as allowing for their creation, configuration and deletion.
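To get a feel for that REST-ish API, here is a minimal sketch of talking to it from Python over the docker daemon’s unix socket. The socket path and the `/images/json` and `/containers/json` endpoints are the daemon’s defaults; everything else (class and function names) is just illustration.

```python
# Minimal sketch: query docker's REST-ish API over its unix socket.
# Assumes the docker daemon is listening on /var/run/docker.sock
# (the default). /images/json and /containers/json are the remote
# API's listing endpoints for images and containers.
import http.client
import json
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """An HTTPConnection that speaks HTTP over a unix domain socket."""

    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)


def docker_get(path, socket_path="/var/run/docker.sock"):
    """GET a docker API endpoint and return the decoded JSON."""
    conn = UnixHTTPConnection(socket_path)
    try:
        conn.request("GET", path)
        return json.loads(conn.getresponse().read())
    finally:
        conn.close()


# Usage (with a running daemon):
#   for image in docker_get("/images/json"):
#       print(image.get("Id"), image.get("RepoTags"))
#   for container in docker_get("/containers/json"):
#       print(container.get("Id"), container.get("Status"))
```

The same two endpoints are what docker’s own CLI consults when you run `docker images` or `docker ps`.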
Docker is very much in development so anything that follows may be superseded by the time you try to use it yourself.
Docker drops most capabilities for processes running inside the container, though there is still more work to be done on this front. SELinux was the first problem I encountered; running Ubuntu precise images with sshd under F20 fails to do anything useful because sshd thinks that SELinux is enabled after checking /proc (mounted from the host running the containers).
After putting together a hackish workaround involving a local build of libselinux, I needed the salt master to start up first, collect its key fingerprint, and then get that information onto all the minions before they started up. I also needed to get hostname and ip information into /etc/hosts everywhere, which can’t be done from within the container: in-container processes do not have mount capability, and /etc/hosts is a read-only file mounted from the host for security reasons. Thus was born another hackish workaround, which relies on puppet apply and a tiny python web server with a REST-ish API to add, apply and remove puppet manifests.
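The shape of that tiny web server can be sketched roughly as follows. This is not the actual docker-saltcluster code: the endpoint layout, the `MANIFEST_DIR` path and the handler names are assumptions for illustration. The idea is just that a small service on the host accepts a manifest, runs `puppet apply` on it (which can touch host-mounted files like /etc/hosts), and removes it when asked.

```python
# Rough sketch of a REST-ish manifest service running on the container
# host. Endpoint layout and MANIFEST_DIR are illustrative assumptions.
#   PUT    /manifests/<name>        -- store the manifest in the body
#   POST   /manifests/<name>/apply  -- run `puppet apply` on it
#   DELETE /manifests/<name>        -- remove it
import os
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

MANIFEST_DIR = "/tmp/manifests"  # assumption: where manifests live


class ManifestHandler(BaseHTTPRequestHandler):
    def _path_for(self, name):
        return os.path.join(MANIFEST_DIR, name + ".pp")

    def do_PUT(self):
        # Store the request body as a manifest file.
        name = self.path.rsplit("/", 1)[-1]
        length = int(self.headers.get("Content-Length", 0))
        os.makedirs(MANIFEST_DIR, exist_ok=True)
        with open(self._path_for(name), "wb") as f:
            f.write(self.rfile.read(length))
        self.send_response(201)
        self.end_headers()

    def do_POST(self):
        # Apply a previously stored manifest on the host.
        name = self.path.rsplit("/", 2)[-2]
        result = subprocess.run(["puppet", "apply", self._path_for(name)])
        self.send_response(200 if result.returncode == 0 else 500)
        self.end_headers()

    def do_DELETE(self):
        name = self.path.rsplit("/", 1)[-1]
        os.remove(self._path_for(name))
        self.send_response(204)
        self.end_headers()

    def log_message(self, *args):  # keep the sketch quiet
        pass


def serve(port=8001):
    HTTPServer(("127.0.0.1", port), ManifestHandler).serve_forever()
```

A container-orchestration script can then PUT a manifest with the cluster’s host entries and POST to apply it, without any container ever needing mount capability.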
There were and are a few other fun issues, but to keep a long hackish story short, I can now spin up a cluster of up to a few hundred minions running whichever flavor of salt in just a few minutes, do my testing, and either save the cluster for later or toss it if I’m done. [1]
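In outline, the spin-up loop amounts to launching one master container and then N minion containers pointed at it. The sketch below is illustrative only: the image name, the environment variable, and the flags are assumptions, and the real logic lives in the docker-saltcluster repo linked below.

```python
# Illustrative spin-up loop: start N minion containers pointed at a
# salt master. Image name and SALT_MASTER env var are assumptions;
# -d/--name/-e are standard `docker run` flags.
import subprocess


def minion_cmd(index, master_ip, image="salt-minion-precise"):
    """Build the `docker run` argv for one minion container."""
    return [
        "docker", "run", "-d",
        "--name", "minion%03d" % index,
        "-e", "SALT_MASTER=%s" % master_ip,
        image,
    ]


def spin_up(count, master_ip):
    """Start `count` minion containers; return their container ids."""
    ids = []
    for i in range(1, count + 1):
        out = subprocess.run(minion_cmd(i, master_ip),
                             capture_output=True, text=True, check=True)
        ids.append(out.stdout.strip())
    return ids
```

Because every minion is a thin copy of the same base image, a few hundred of these cost surprisingly little disk.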
Besides generating test environments, another use for docker, and perhaps the one most people are talking about, is PaaS services/cloud hosting. [2] Before you decide to replace all your VMs with Docker, however, you should know that it’s really intended as a way to package up one or two processes, not as a way to run a full LAMP stack, although that’s been done too, and more, [3] using supervisord to do the work of process management.
[1] https://github.com/apergos/docker-saltcluster
[2] http://www.rackspace.com/blog/how-mailgun-uses-docker-and-contributes-back/
[3] https://github.com/ricardoamaro/docker-drupal and https://github.com/ricardoamaro/drupal-lxc-vagrant-docker
Image derived from https://commons.wikimedia.org/wiki/File:Baby.tux-alpha-800×800.png and https://commons.wikimedia.org/wiki/File:Different_colored_containers_pic1.JPG