Docker Swarm is a great tool for getting clustered Docker environments up and running quickly and efficiently. In this post we'd like to show how, with Docker Swarm and volume plugins such as REX-Ray, you can run not only stateless containers but also build a real persistence layer for your stateful containers.

First off, how do you create a Docker Swarm cluster? There is great documentation on that over at Docker's docs, but we'll use AWS as an example since it offers a supported persistence backend (EBS volumes) for the volume plugin we're going to use.

To create a Docker Swarm cluster, we first need a Swarm discovery token. There are two ways to get one: using curl or using a Docker container:

curl -X POST https://discovery.hub.docker.com/v1/clusters

or

docker run swarm create

Either way, the end result will be the same: a token that looks like this:

93fff8be39de520e03b62301c4f2577e

Make a note of the token as we'll use it to create your Docker Swarm environment. First we need to create a Docker Swarm master, which we'll use as our endpoint for all future docker commands. This means that every time we run a new container, the Swarm master will look at the cluster's collected resources, carve out what that container needs on one host, and then start the container on that host. Pretty cool!

As we'll be using AWS, we need some more information as well. You can use the following as a template for your Swarm deployment; it makes all the steps below simple copy & paste:

export AWS_ACCESS_KEY_ID="YOURKEY"
export AWS_INSTANCE_TYPE="t2.micro"
export AWS_DEFAULT_REGION="us-east-1"
export AWS_ROOT_SIZE="16"
export AWS_SECRET_ACCESS_KEY="YOURSECRETKEY"
export AWS_SECURITY_GROUP="default"
export AWS_SUBNET_ID="YOURSUBNET"
export AWS_VPC_ID="YOURVPC"
export AWS_ZONE="a"
export SWARM_TOKEN="YOURSWARMTOKEN"

Now, let’s create our Swarm master:

docker-machine create --driver amazonec2 \
  --amazonec2-access-key $AWS_ACCESS_KEY_ID \
  --amazonec2-instance-type $AWS_INSTANCE_TYPE \
  --amazonec2-region $AWS_DEFAULT_REGION \
  --amazonec2-root-size $AWS_ROOT_SIZE \
  --amazonec2-secret-key $AWS_SECRET_ACCESS_KEY \
  --amazonec2-security-group $AWS_SECURITY_GROUP \
  --amazonec2-subnet-id $AWS_SUBNET_ID \
  --amazonec2-vpc-id $AWS_VPC_ID \
  --amazonec2-zone $AWS_ZONE \
  --engine-install-url "https://get.docker.com" \
  --swarm --swarm-master \
  --swarm-discovery token://$SWARM_TOKEN \
  swarm-master

You should now map your terminal session to the Swarm master like so:

eval $(docker-machine env --swarm swarm-master)
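If you're curious what that eval actually pulls in, the command prints a handful of export statements. The values below are only a rough sketch (your IP address and cert path will differ), but the shape should be similar:

```shell
# Roughly what `docker-machine env --swarm swarm-master` emits (illustrative
# values). Note that DOCKER_HOST points at the Swarm port 3376 rather than
# the single-engine port 2376.
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://54.152.94.118:3376"
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/swarm-master"
export DOCKER_MACHINE_NAME="swarm-master"
```

With these variables set, the docker CLI talks to the Swarm master instead of your local daemon.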

Now you can run commands like docker info and docker version to see that you’re connected to a Swarm environment and not just a single Docker host.

Alright, let’s create two more nodes to make it a real cluster!

docker-machine create --driver amazonec2 \
  --amazonec2-access-key $AWS_ACCESS_KEY_ID \
  --amazonec2-instance-type $AWS_INSTANCE_TYPE \
  --amazonec2-region $AWS_DEFAULT_REGION \
  --amazonec2-root-size $AWS_ROOT_SIZE \
  --amazonec2-secret-key $AWS_SECRET_ACCESS_KEY \
  --amazonec2-security-group $AWS_SECURITY_GROUP \
  --amazonec2-subnet-id $AWS_SUBNET_ID \
  --amazonec2-vpc-id $AWS_VPC_ID \
  --amazonec2-zone $AWS_ZONE \
  --engine-install-url "https://get.docker.com" \
  --swarm \
  --swarm-discovery token://$SWARM_TOKEN \
  swarm-node1

docker-machine create --driver amazonec2 \
  --amazonec2-access-key $AWS_ACCESS_KEY_ID \
  --amazonec2-instance-type $AWS_INSTANCE_TYPE \
  --amazonec2-region $AWS_DEFAULT_REGION \
  --amazonec2-root-size $AWS_ROOT_SIZE \
  --amazonec2-secret-key $AWS_SECRET_ACCESS_KEY \
  --amazonec2-security-group $AWS_SECURITY_GROUP \
  --amazonec2-subnet-id $AWS_SUBNET_ID \
  --amazonec2-vpc-id $AWS_VPC_ID \
  --amazonec2-zone $AWS_ZONE \
  --engine-install-url "https://get.docker.com" \
  --swarm \
  --swarm-discovery token://$SWARM_TOKEN \
  swarm-node2

Now if you run docker info you should see 3 nodes with a total of 3 CPUs and 3GB RAM as the collected resources for your containers:

$ docker info
Containers: 5
Images: 4
Role: primary
Strategy: spread
Filters: affinity, health, constraint, port, dependency
Nodes: 3
 swarm-master: 54.152.94.118:2376
  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.018 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.13.0-53-generic, operatingsystem=Ubuntu 14.04.2 LTS, provider=amazonec2, storagedriver=aufs
 swarm-node1: 52.91.253.227:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.018 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.13.0-53-generic, operatingsystem=Ubuntu 14.04.2 LTS, provider=amazonec2, storagedriver=aufs
 swarm-node2: 54.209.132.18:2376
  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.018 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.13.0-53-generic, operatingsystem=Ubuntu 14.04.2 LTS, provider=amazonec2, storagedriver=aufs
CPUs: 3
Total Memory: 3.053 GiB
Name: 4d2600b3fa97

Yes! You now have a 3-node Docker Swarm cluster at your disposal! You can now start running a bunch of stateless containers and have fun, but you'll probably want to try out a few stateful containers as well. REX-Ray to the rescue!
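Before we get to the stateful part, a quick stateless smoke test never hurts. The sketch below uses nginx purely as an arbitrary example image; the "spread" strategy from the docker info output above should distribute the containers across your nodes:

```shell
# Start three throwaway containers; the Swarm "spread" strategy should place
# them on different nodes. (nginx is just an arbitrary example image.)
for i in 1 2 3; do
  docker run -d --name "web-$i" -p 80 nginx
done

# The classic Swarm master shows container names prefixed with the node they
# run on (e.g. swarm-node1/web-2), so this reveals the placement:
docker ps
```

Clean up the test containers with `docker rm -f web-1 web-2 web-3` when you're done.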

Let’s install REX-Ray on the 3 Docker Swarm nodes you have:

for each in $(docker-machine ls -q); do docker-machine ssh $each "curl -sSL https://dl.bintray.com/emccode/rexray/install | sh -"; done

Now we'll add the AWS credentials to the Docker Swarm hosts so they can take care of storage provisioning themselves. No more pesky clicking around in the AWS console!

for each in $(docker-machine ls -q); do docker-machine ssh $each "echo REXRAY_STORAGEDRIVERS=ec2 | sudo tee --append /etc/environment"; done

for each in $(docker-machine ls -q); do docker-machine ssh $each "echo AWS_ACCESS_KEY=$AWS_ACCESS_KEY_ID | sudo tee --append /etc/environment"; done

for each in $(docker-machine ls -q); do docker-machine ssh $each "echo AWS_SECRET_KEY=$AWS_SECRET_ACCESS_KEY | sudo tee --append /etc/environment"; done

for each in $(docker-machine ls -q); do docker-machine ssh $each "sudo service rexray start"; done

Now to verify that REX-Ray is running on all the Docker Swarm nodes:

for each in $(docker-machine ls -q); do docker-machine ssh $each "sudo service rexray status"; done
REX-Ray is running at pid 30648
REX-Ray is running at pid 30528
REX-Ray is running at pid 30528

Awesome! Now let’s see how we can run a stateful container in this environment:

docker run -d --volume-driver=rexray -p 6379:6379 -v redis-data:/data redis redis-server --appendonly yes

This command will automatically talk to AWS through the REX-Ray volume plugin, create a standard-sized volume (16 GB), mount that volume at /data inside the container, and start Redis in persistent (append-only) mode.
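To convince yourself the data really lives on the EBS volume rather than inside the container, you can throw the container away and start a fresh one against the same named volume. A sketch (it assumes, like the commands below, that the Redis container is the only one running):

```shell
# Remove the Redis container; the rexray-backed named volume survives it.
docker rm -f $(docker ps -q)

# Start a fresh Redis container against the same named volume. REX-Ray
# reattaches the existing EBS volume, so previously written keys are intact.
docker run -d --volume-driver=rexray -p 6379:6379 -v redis-data:/data \
  redis redis-server --appendonly yes
```

Anything you wrote to Redis before the restart should still be readable afterwards.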

We can verify that the volume has been created:

docker-machine ssh $(docker inspect -f '{{ .Node.Name }}' $(docker ps -q)) "rexray volume"
- name: redis-data
  volumeid: vol-cb5a6f26
  availabilityzone: us-east-1a
  status: in-use
  volumetype: standard
  iops: 0
  size: "16"
  networkname: ""
  attachments:
  - volumeid: vol-cb5a6f26
    instanceid: i-5a803fe5
    devicename: /dev/xvdb
    status: attached

You can now connect to your Redis container and start filling it with data 🙂

redis-cli -h $(docker inspect -f '{{ .Node.IP }}' $(docker ps -q))
YOURIPHERE:6379> ping
PONG
YOURIPHERE:6379> set hello world
OK
YOURIPHERE:6379> get hello
"world"

There you go! You now have a fully formed Docker Swarm cluster that can run both stateless and stateful applications, all managed by you. I think that’s pretty cool 🙂
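One last housekeeping note: the EC2 instances keep costing money while they run, so when you're done experimenting you may want to tear the cluster down. A sketch:

```shell
# Remove all docker-machine-managed hosts; this terminates the EC2 instances
# after a confirmation-free (-y) removal. Any EBS volume created by REX-Ray
# may be left behind; check the AWS console if you no longer need the data.
for each in $(docker-machine ls -q); do
  docker-machine rm -y $each
done
```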

Happy containerizing!