How To Build a Custom Docker Swarm Node Using VirtualBox with a Working Bridged Network.

Written by: Michael H Fahey
March 24, 2019

This article will show How To:

  • Create a custom virtual machine for use as a Docker Swarm node
  • Configure networking so that the Swarm node can be accessed across the LAN
  • Configure and Provision the new machine into your Swarm with docker-machine create
  • Appendix A: Configure the correct Docker Storage Driver
  • Appendix B: Setting a Swarm advertise address

If you’re like me, one of the first things you noticed while learning Docker Swarm is that the default swarm node hosts (virtual machines) generated by docker-machine create can only be accessed from their host workstation. They don’t expose a network interface to your LAN, so you can’t reach these swarm nodes unless you’re at the workstation where they are hosted.

About The Docker Node Hosts created by docker-machine

These default Docker Swarm nodes are Tiny Core Linux-based, and use a hybrid ISO image / virtual disk setup.

Take a look at the networking setup of a standard docker-machine Tiny Core node under VirtualBox: log in to a command prompt, and running “ifconfig” reveals several network interfaces:

eth0		10.0.2.15/24		VirtualBox Network Adapter 1
					Attached to NAT

eth1		192.168.99.101/24	VirtualBox Network Adapter 2
					Attached to Host Only network vboxnet0
											
docker0		172.17.0.1/16		Network created by docker daemon, 
					not an external interface

One thing I did try on the Tiny Core node was to set VirtualBox Network Adapter 1 to Bridged and put a static IP on eth0. The bridged interface did come up on the LAN and I was able to ssh to it, but docker commands on that machine could not download Docker images; they just stalled indefinitely. Also, Tiny Core howtos on making configuration changes persistent did not work with the docker-machine created nodes. This was about the time I decided to try to create a custom swarm node.

I decided to create this Swarm node prototype using Arch Linux for a couple of reasons:

One reason is that there is no real “installer” for Arch, no vast estates of extraneous software installed en masse. Each new Arch system prototype is built by hand, which allows the instance to be kept as small and simple as possible.

Also, I just plain like Arch Linux.

If you don’t like Arch, I will still assume you are the sort of person who builds custom Docker Swarm nodes and can probably apply these instructions to whatever distro you prefer.

Creating the new Swarm node

I’m not going to bore you with the details of the Arch install, but the resulting Swarm machine prototype contains little more than the Linux kernel, the bare essentials of networking and GNU userland, ssh, and the “docker” package with the Docker engine. I recommend keeping your node as minimal as possible, whatever distro it’s built from.
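For reference, a minimal package set along these lines can be laid down from the Arch install media with pacstrap. The exact package list here is my assumption based on the description above, not a definitive recipe; adjust for your own needs:

```shell
# Run from the Arch install media after partitioning and mounting at /mnt.
# base + linux: kernel and core userland; netctl/dhcpcd: networking;
# openssh: remote access; docker: the engine; sudo: needed later for the
# docker user's passwordless sudo.
pacstrap /mnt base linux netctl dhcpcd openssh docker sudo

# Enable sshd and the docker daemon inside the new system so they start at boot.
arch-chroot /mnt systemctl enable sshd docker
```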

While setting up VirtualBox for this node instance, I reproduced the network interfaces that the Tiny Core machine had used:

  • Network Adapter 1 attached to NAT
  • Network Adapter 2 attached to Host Only network vboxnet0 (which already existed, having been set up earlier by the Tiny Core instances)

Launch the Swarm machine instance and log into the new swarm node at the console; the network interfaces appear as follows (you may have to enable DHCP on these interfaces in your distro to obtain IP addresses):

enp0s3		10.0.2.15/24		VirtualBox Network Adapter 1
					Attached to NAT

enp0s8		192.168.99.101/24	VirtualBox Network Adapter 2
					Attached to Host Only network vboxnet0
											
*docker0	172.17.0.1/16		Network created by docker daemon, 
					not an external interface

(*Note that there are two other network interfaces on the Arch Swarm node, “docker_gwbridge” and “br-b8510986b6a3”, but these, like the “docker0” interface, are not VirtualBox adapters, and this writeup doesn’t cover them.)


Comparing the two side by side, you can see how the network interfaces on the custom node host prototype can be made to mirror those of the Tiny Core node.

Setting VirtualBox Network Adapter 1 to Bridged still results in docker commands that cannot download or pull Docker images; commands that try simply hang forever. Switching Adapter 1 back to NAT mode solves this problem. For reasons unknown, docker seems to require a NATted outbound interface to be able to pull Docker images.

The Solution

What worked for me was adding a third VirtualBox Network Adapter 3, and setting it to Bridged mode.

To get the network addressing set up right, use DHCP on the NAT and Host Only adapters, and give the Bridged adapter a static address.

Also, in order to force Docker to send outbound traffic through the NAT interface and not the bridged interface, I removed the bridged interface’s default gateway (outbound from the Swarm node).
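Once the interfaces are configured as described below, you can confirm the routing from the node itself. The interface names here are the ones from my VM (enp0s3 NAT, enp0s9 bridged) and are assumptions for your setup:

```shell
# The only default route should point out the NAT interface (enp0s3).
ip route show

# If a default route snuck in via the bridged interface (enp0s9),
# remove it so Docker's outbound traffic goes through NAT instead:
sudo ip route del default dev enp0s9
```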

Here is what the resulting configuration files look like in my case:

/etc/netctl/ethernet-static is for the VirtualBox Adapter 3 bridged interface; the IP address is static, and note that the gateway is commented out!

/etc/netctl/ethernet-static   
	
	Interface=enp0s9
	Connection=ethernet
	IP=static
	Address=('192.168.1.210/24')
	#Routes=('192.168.0.0/24 via 192.168.1.2')
	#Gateway='192.168.1.1'
	DNS=('192.168.1.1')
    
/etc/netctl/ethernet-dhcp is for the VirtualBox Adapter 1 NAT interface; the IP address comes from DHCP.

/etc/netctl/ethernet-dhcp

	Interface=enp0s3
	Connection=ethernet
	IP=dhcp
	#DHCPClient=dhcpcd
	#DHCPReleaseOnStop=no

    
/etc/netctl/ethernet-vboxnet is for the VirtualBox Adapter 2 Host-Only interface; the IP address also comes from DHCP.

	
/etc/netctl/ethernet-vboxnet

	Interface=enp0s8
	Connection=ethernet
	IP=dhcp
	#DHCPClient=dhcpcd
	#DHCPReleaseOnStop=no
    
Arch Linux uses netctl to start these network interfaces and to enable them at boot with the following commands; the syntax is just like systemctl. Or do the equivalent on your distro, whatever it takes to get your interfaces configured and enabled as in the graphic above.

sudo netctl start ethernet-static
sudo netctl start ethernet-dhcp
sudo netctl start ethernet-vboxnet
sudo netctl enable ethernet-static
sudo netctl enable ethernet-dhcp
sudo netctl enable ethernet-vboxnet
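With the profiles started, a quick sanity check confirms that each adapter got the address from the layout above (interface names may differ on your VM):

```shell
# Profiles marked with '*' are currently active.
netctl list

# Brief one-line-per-interface address summary.
ip -br addr show
```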
    

Configuring and Provisioning the Swarm node with docker-machine

Working on the new swarm node host system (which you should now be able to reach via ssh at the Bridged adapter address), create a “docker” group. (Note that anyone who is a member of this group has a backdoor to root, but don’t worry, things are going to get even wonkier security-wise.)

sudo groupadd docker

Now add a user for the Swarm to connect with; in this case I just used “docker”.

sudo useradd -d /home/docker -m -G docker docker
sudo passwd docker

Grant the new docker user the ability to run all sudo commands without a password (I know this seems like a huge security hole).

/etc/sudoers

	# (add this line; edit with visudo to avoid syntax errors)
	docker ALL=(ALL) NOPASSWD: ALL	

Working from your development station (not Swarm node), do some setup for provisioning the new Swarm node into your docker-machine nodes.

In your home directory, create a new ssh key with no passphrase (I called this one swarmkey):

ssh-keygen -N '' -f .ssh/swarmkey

Copy swarmkey to the new Swarm node’s docker user (this swarm node is called swarmhost1; remember the password set earlier for the docker user):

ssh-copy-id -i .ssh/swarmkey docker@swarmhost1

Check that you can now ssh to swarmhost1 as user docker with no password, and that once you are on the swarm node host you can execute sudo without a password:

ssh -i .ssh/swarmkey docker@swarmhost1
		
sudo cat /etc/sudoers

Exit back to your workstation, and execute the following command (in your home directory) to add the new swarm node to your docker-machine nodes (use the IP address of the Bridged adapter):

docker-machine create -d generic --generic-ssh-user docker --generic-ssh-key .ssh/swarmkey --generic-ip-address 192.168.1.210 swarmhost1

Execute the command docker-machine ls to make sure that the new machine was successfully added.

From here you can execute the following to control the new swarm host:

eval $(docker-machine env swarmhost1)

Or start a swarm:

docker-machine ssh swarmhost1 "docker swarm init"

Appendix A: Configure the correct Docker Storage Driver

To find out which storage driver Docker Engine is using, execute the following command on your new Swarm node machine:

docker info | grep -i storage

The Storage Driver that you want is “overlay2”.
There are some system requirements, but as far as I can tell most new distros will satisfy them easily.

Normally you can set Docker engine to use “overlay2” by creating/editing the following file:

/etc/docker/daemon.json
	{
		"storage-driver": "overlay2"
	}

But this does not work on an Arch node provisioned this way; the daemon won’t start, likely because docker-machine’s generated systemd drop-in already passes options on the dockerd command line. In Arch, set overlay2 in that drop-in instead:

/etc/systemd/system/docker.service.d/10-machine.conf
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver overlay2 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=generic 
	Environment=
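After creating or editing the drop-in file, reload systemd, restart the daemon, and verify that the driver took effect:

```shell
# Pick up the changed unit file, then restart the Docker daemon.
sudo systemctl daemon-reload
sudo systemctl restart docker

# Should now report: Storage Driver: overlay2
docker info | grep -i storage
```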

I am running two twin Arch nodes, one with /var/lib/docker on XFS and the other with /var/lib/docker on ext4; I will load/performance test both and see if there is any difference.

Appendix B: Setting a Swarm advertise address

When you run docker swarm init you will be asked which IP address to choose. I have had success using the Bridged address, and not the vboxnet0 Host Only address, so that I can access the swarm from the LAN.

I have a feeling that it’s possible to have the swarm advertise address on the host-only network and still have the swarm work on the bridged interface.
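If swarm init asks you to choose, or picks the wrong interface, you can pass the bridged address explicitly with --advertise-addr. The address 192.168.1.210 is the static bridged address from my example config above:

```shell
# Initialize the swarm, advertising on the bridged (LAN-reachable) address.
docker-machine ssh swarmhost1 "docker swarm init --advertise-addr 192.168.1.210"
```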

These Swarm Nodes Are NOT SECURE

Please note that these Swarm node machines have NO security hardening, and should ONLY be used on a safe, trusted network behind a firewall.

I hope this writeup was helpful!
Michael
