Generally speaking, Docker containers should have everything they need baked into the image. There are times, however, when it may be necessary to provide additional files or directories to the container to persist information. These can include, but are not limited to:
Data persistence (usually only for local databases for development)
Application package hotswap during development
Saving artifacts generated by the application
Docker has two ways to provide such storage: bind mounts and volumes.
Bind mounts are used to provide access to a directory on the host machine. On a Linux host, Docker allows you to bind a user-defined directory into the root filesystem of the container, effectively doing the equivalent of mount --bind for your directory to link it directly into the container’s filesystem. This is ideal for providing custom configuration files or saving off build artifacts to your host directory. To mount a directory into a container, execute the following on an example container:
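The command listing was lost here; a sketch of what it likely looked like, with the directory name and page content as assumptions, is:

```shell
# Create a simple dummy site on the host
mkdir -p site
echo 'Hello, world!' > site/index.html

# Run nginx with the site directory bind-mounted into the container
docker run --detach --rm --name test \
  --publish 80:80 \
  --mount type=bind,source="$(pwd)"/site,destination=/usr/share/nginx/html,readonly \
  nginx
```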
This creates a simple dummy site and then pulls down the nginx image, running it with the content of our simple site. If you open your web browser to http://localhost, you will see the “Hello, world!” message that we left in our sample directory. Alternatively, instead of the --mount option, you can use the older-style -v syntax:
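The equivalent -v invocation for the example above would be something like:

```shell
# Same bind mount, expressed with the legacy -v flag (ro = read-only)
docker run --detach --rm --name test \
  --publish 80:80 \
  -v "$(pwd)"/site:/usr/share/nginx/html:ro \
  nginx
```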
It is recommended that you use the --mount option, as it is more precise in its definition. The -v option remains available only for legacy purposes.
We can inspect the container and see that the mount is defined for it by using docker inspect test:
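The inspect output did not survive formatting; you can narrow the output down to just the mount definition with a --format template:

```shell
# Print only the Mounts section of the container metadata as JSON
docker inspect --format '{{json .Mounts}}' test
```

The entry should show "Type": "bind" along with the Source and Destination paths of the mount.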
We can specify bind mounts in a compose file using the long syntax (type: bind):
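The snippet itself was flattened in extraction; a reconstruction using the long mount syntax, with the service name and paths as assumptions, looks like:

```yaml
version: "3.2"
services:
  web:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - type: bind
        source: ./site
        target: /usr/share/nginx/html
```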
You can also map an individual file onto a container, but it is rare to do so. If you use the -v syntax and the file is missing on the host, a directory will be created with the name of the file that you specify. This can be confounding if you use this in a Compose file. More can be found on the Docker website:
Bind mounts on Docker for Mac do not use native bind mounting, but instead use osxfs to attempt to provide a near-native experience. It is still slower than a native bind mount running on Linux, but should still work seamlessly with local HFS+ filesystems. By default, it only has access to the /Users, /Volumes, /private, and /tmp directories. See the details on Docker’s official website:
Docker volumes are filesystem mounts that are managed completely by the Docker engine. Historically, these have been called “named volumes,” in case you see references to that term in literature, command-line help, or error messages. When a Docker volume is created, the directory is stored under /var/lib/docker/volumes/. The typical use case for a named volume would be something like data persistence or sharing data between containers. Let’s dig out the Pastr app from the first tutorial. We’ll add the mount in the compose file:
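The compose snippet was lost in formatting; a reconstruction, with the service layout and mount target as assumptions (the top-level volumes section sits at the bottom, as described in the next paragraph):

```yaml
version: "3.2"
services:
  pastr:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - database
  database:
    image: redis:latest
    volumes:
      - type: volume
        source: pastrdatastore
        target: /data

volumes:
  pastrdatastore:
```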
The top-level volumes directive (at the bottom of the snippet) denotes that a datastore shall be created via this compose file. After starting the container with docker-compose up -d, the Docker engine will create the pastrdatastore volume.
Note that on a Linux machine, this volume exists on the native filesystem. However, on a Windows or Mac system, this volume exists within the virtual machine; you can’t access it directly, nor should you try to, even on a Linux machine. If you need to mount the data store to inspect its contents, you can run it with docker run -it --rm --mount source=pastr_pastrdatastore,destination=/mnt ubuntu /bin/bash.
For more details, please see the official Docker documentation:
Before I start delving further into Docker tutorials, I feel that I should go over the differences between Docker running natively on Linux versus running Docker on virtual machines on Mac and Windows.
Docker for Linux (Native)
Natively, Docker runs on Linux, taking advantage of direct access to the host Linux kernel. You can prove this by running the following:
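The commands were lost here; the two outputs below were presumably produced by running uname -a on the host and then inside a container (the busybox image is an assumption):

```shell
# Kernel as reported by the host
uname -a

# Kernel as reported from inside a container
docker run --rm busybox uname -a
```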
Linux myhost 4.4.0-127-generic #153-Ubuntu SMP Sat May 19 10:58:46 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Linux 1a00a2571242 4.4.0-127-generic #153-Ubuntu SMP Sat May 19 10:58:46 UTC 2018 x86_64 Linux
The second line is just a weird quirk of the entrypoint option (read more here). But it still shows us that the kernel that your application thinks it’s running on is actually the kernel of the host machine.
I stripped out a lot of extraneous information but kept the important bits. When you look at your network interfaces, you’ll see your normal loopback and Ethernet devices, but you’ll also notice a veth device that wasn’t there before. This device has the same MAC address as the one assigned to the Docker container, as well as the same IP address. You can actually ping this IP address or reach the open port (172.17.0.2:80) in your web browser without having to do any port forwarding.
Docker for Mac
When running Docker on macOS, if we try to look up the kernel, we get the following:
Darwin myhost 17.6.0 Darwin Kernel Version 17.6.0: Tue May 8 15:22:16 PDT 2018; root:xnu-4570.61.1~1/RELEASE_X86_64 x86_64
Linux 30c60b56067d 4.9.87-linuxkit-aufs #1 SMP Wed Mar 14 15:12:16 UTC 2018 x86_64 Linux
Well, that’s not what we were looking for. You can clearly see that the kernel isn’t the same. What’s actually happening is that Docker for Mac is spinning up a virtual machine. It uses the built-in macOS Hypervisor framework, which allows an application to run virtualized processes with rather lightweight overhead. The hypervisor runs, as you can see, LinuxKit, which was created by the folks at Docker to build lightweight Linux distributions to run the Docker engine. As such, you can adjust the VM settings via the notification indicator menu preferences, allocating the appropriate number of cores and amount of memory.
What this means is that with Docker for Mac, you do not have direct access to the container network stack, nor do you have native file mounts. If you mount a local host directory into your container, you can expect your application to run about four times slower than if you baked the contents of that directory into the image or used a named volume.
The advantage that Docker for Mac has over the older Docker Toolbox method is that instead of having to pass commands via a TCP connection to a port on the VirtualBox instance, information is passed along a much speedier and more reliable Unix socket. See Docker’s official documentation on Docker for Mac for more details: https://docs.docker.com/docker-for-mac/docker-toolbox/
Docker for Windows
Docker for Windows operates much the same way as Docker for Mac. It utilizes Hyper-V to spin up a hardware virtualization layer and run LinuxKit. It has similar limitations to the Docker for Mac installation. Additionally, you will have to enable file sharing in the Docker for Windows settings for the drives you want. You will also need to make sure your firewall allows connections from the Docker virtual machine to the host Windows system. See the following links for more detail:
Alright, I wish I could take back my previous Docker entry. It was pretty useless, so I’m going to take another shot at this and do it right. I’ve given this Docker talk about a dozen times in person and done a recorded (sadly, proprietary) teaching session on it, but I still find myself giving it over and over again, so I thought it might be best to just start writing it down. The target audience for this is people who have only really heard of Docker without knowing what it is. By the end of this guide, you should be able to write your own Dockerfile for your project and deploy it locally for testing purposes.
What is Docker?
You can think of Docker as yet another layer of virtualization, one that’s not as heavyweight as full hardware virtualization or paravirtualization. It’s a level known as “operating-system-level virtualization,” where the guest machine shares the same kernel as the host but gets its own filesystem and network stack. This allows you to run your application as a process on the host operating system while fooling the guest application into thinking that it has all of its own resources to use.
What should I use it for?
Docker makes it easy to spin up multiple stateless application services onto a cluster. If anything requires storage, e.g. a database, it is much better to use a standard virtual machine with dedicated mounted storage. Docker is not designed to manipulate stored data very efficiently.
Installation and Example
The first step, obviously, is to install Docker. Follow the directions here and find your platform.
After you have it installed, we’ll get a quick “Hello, World!” going. We’ll execute two lines, docker pull hello-world and docker run hello-world.
The first line pulls down an image from hub.docker.com, and the second instantiates a container from that image and runs it. Now, this could all be done with just the run command, but I broke it out to show the two separate steps. The first is to obtain the image, while the second is to create a container from that image.
We’ll take a look at the two separately with docker images and docker container.
REPOSITORY    TAG     IMAGE ID      CREATED        SIZE
hello-world   latest  e38bc07ac18e  2 months ago   1.85kB
$ docker container ls -a
CONTAINER ID  IMAGE        COMMAND   CREATED         STATUS                     PORTS  NAMES
79530d8a293c  hello-world  "/hello"  38 minutes ago  Exited (0) 38 minutes ago         nervous_joliot
We see that we have an image that we downloaded from the hub. We also have a container created using said image. It’s assigned a randomly generated name, nervous_joliot, because we didn’t bother naming it. You can name your containers when you run them with the --name directive, e.g. docker run --name my_hello_world hello-world.
Images vs. Containers
Let’s go into more detail on what images and containers are as they pertain to Docker.
Images are immutable data layers that contain portions of the filesystem needed to run your application. Typically, you would start with the base operating system image, add language/library support, then top it off with your application and ancillary files.
Each image starts by declaring a base image to inherit from. Notice that earlier, when you were pulling the hello-world image, it was downloading not only the hello-world image layer, but also all the layers that it depends on. We’ll cover this more in depth later on.
Containers are instantiated instances of images that can support execution of the installed application. You can think of images as class definitions in Object-Oriented Programming, and containers are analogous to objects. You can create multiple containers from the same image, allowing you to spin up a cluster of processes with a few simple commands.
When a container is created, a new read/write layer is introduced on top of the existing image layers. If a change is made to a file existing in an image layer, that file is copied into the container’s read/write layer while the image is left untouched.
A Dockerfile is a build descriptor for a Docker image, much like a Makefile is used to build your application (if you still write C code). You would typically include your Dockerfile inside your project, run your regular project artifact build, and then run it, either manually or via a build target (make docker or mvn -Pdocker, etc.), to produce your Docker image.
For this example, we’ll take a look at Pastr, a quick and dirty PasteBin clone I wrote with Python and a Redis storage backend. You can clone the project from here: https://gitlab.com/ed11/pastr.
The project uses Flask and Flask-Restful to serve up data stored from a connected Redis database presented with a VueJS UI front-end. (At the time of this writing, it’s still… very much lacking in quality; this was just the quickest thing I could slap together for a demo). The application just spins up a Flask WSGI development server for simplicity’s sake.
Let’s take a look at the Dockerfile to see what we’re building:
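The Dockerfile listing was lost in formatting; based on the line-by-line walkthrough below, a reconstruction looks roughly like this (the application entry point name is an assumption):

```dockerfile
# Start from the official Python 3.6 base image
FROM python:3.6

# Copy the application and its dependency list into the image
ADD pastr /opt/
COPY requirements.txt /opt/

# Install third-party dependencies
RUN pip install -r /opt/requirements.txt

# Start the app, passing the database server via environment variable
# (app.py is a hypothetical entry point name)
CMD DB_SERVER=$DB_SERVER python /opt/app.py
```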
We’ll break this down line by line, remembering that each line creates its own image layer, as visualized earlier in the Images section.
This line tells the Docker engine to start our image off by pulling the python base image from the official repository (hub.docker.com). The 3.6 after the colon tells it that we specifically want version 3.6 of Python. This is a tag for the image. You can specify tags as a point release for your application or combine them with other text to denote variants (e.g. myapp:1.0-debug to indicate that the image runs your application in debug mode).
This command copies the contents of the pastr directory (in the current project working directory) into the image at /opt/. Note that there are special rules about what ADD does. I recommend reading the documentation on the official Docker website:
This command copies a single file (the requirements.txt file) into the /opt directory. If you’re still in doubt on what to use, USE COPY instead of ADD.
This command starts up a temporary container from the previous image layers, pops open a shell inside the virtual file system, and then begins executing commands. In this case, it simply runs the pip install command, which, in a Python project, downloads all the required libraries needed to execute the application. You would normally use this to download third party dependencies, extract tarballs, or change permissions of files to grant execute privileges. After the command is done, it takes the mutable file system layer created by the container and saves it off as an immutable image layer.
Be very mindful of the layer saving when using the RUN command when dealing with large files. For example, if you use this to download a large executable from a third party resource and then change the permissions, you will end up with two layers of the same size. Example:
Dockerfile RUN example (BAD)
RUN wget http://my-file-server/large-binary-executable
RUN chmod +x large-binary-executable
Say our large-binary-executable is 500MB. The first command will save off an image where the file is not executable, taking up 500MB. The second command will take the 500MB file, change the permissions, and save another image where the 500MB file is executable, essentially taking up 1GB of disk space for anyone who has to download it. Instead, you should run them in one command, like so:
Dockerfile RUN example (GOOD)
RUN wget http://my-file-server/large-binary-executable && chmod +x large-binary-executable
The CMD directive specifies the command that is to be executed when the container starts up. In our example, we run the python command and point it to our application. The DB_SERVER=$DB_SERVER is an environment variable that we pass to our application as a rudimentary form of configuration management.
There are actually two ways to specify the container startup command: the CMD and the ENTRYPOINT directives. In most cases, these might be interchangeable, but there are nuanced differences on which to use, which are more suitable for a more advanced topic. For now, I will say that semantically, ENTRYPOINT is generally used to specify the executable and CMD is used to pass in parameters. The latter can be overridden on the command line prior to starting up.
Building the Image
Using the Dockerfile, we can build the image manually with the following command:
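The command itself was lost here; based on the description that follows, it would have been:

```shell
# Build the image from the current directory, tagging it "pastr"
docker build -t pastr .
```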
What this command does is build the image using the current working directory (specified by the trailing dot) and name it pastr, as indicated by the -t directive. We can validate that the image was created by checking the image list.
REPOSITORY  TAG     IMAGE ID      CREATED        SIZE
pastr       latest  6635be8bc083  4 seconds ago  941MB
Typically, this would be handled by your build script using a build target plugin, as mentioned earlier.
Running the Container
We run the container much like we did with our hello-world example above.
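The command was lost here; reconstructed from the flag descriptions below and the container name that appears later, it would be something like:

```shell
docker run --detach --rm --name pastr1 --publish 5000:5000 pastr
```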
--detach Run the application in the background and return the console back to the user.
--rm When the container exits, remove it so it does not linger.
--name The name to assign it. If omitted, a random one is generated and assigned.
--publish Expose the port on the container, binding it to localhost. In this case, localhost:5000 on your computer will forward to port 5000 of the container.
pastr The name of the image to base the container off of.
From here, we can open a browser up to localhost:5000 to view the application.
Of course, if you type anything into the text area and submit it, you’ll get an error indicating that it can’t connect to the database. So we’ll have to run a separate Redis database. Let’s kill off our existing container.
CONTAINER ID  IMAGE  COMMAND                 CREATED            STATUS            PORTS                   NAMES
adf918f846ef  pastr  "/bin/sh -c 'DB_SERV…"  About an hour ago  Up About an hour  0.0.0.0:5000->5000/tcp  pastr1
$ docker stop pastr1
Now, we’ll start our Redis database back end server, using the official redis image.
docker run --detach --rm --name pastedb redis:latest
With the Redis instance running, we can create our Pastr application and point it to the database.
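The command was lost here; reconstructed from the flag descriptions below, it would look like:

```shell
docker run --detach --rm --name pastr1 --publish 5000:5000 \
  --link pastedb --env DB_SERVER=pastedb pastr
```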
You’ll note that we added a few things to the argument list.
--link directs the Docker engine to allow communication between this container and the pastedb container, which is the Redis instance we started earlier.
--env sets the environment variable the application uses to specify the database server. This is what we specified in the CMD line in the Dockerfile.
From here, we can try again, this time actually pushing the save button.
It works, end to end, now! Refresh the page and click on the drop down again to see your stored text (bugfix forthcoming).
The problem is, how do we keep track of all the flags that we had to use to get it running?
Docker Compose is an orchestration tool that allows you to create and run Docker containers using a pre-configured YAML file. Let’s look at our compose file.
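The compose file listing was lost; a reconstruction based on the fields discussed below (the version number and exact service names are assumptions) might look like:

```yaml
version: "3"
services:
  pastr:
    build: .
    ports:
      - "5000:5000"
    environment:
      - DB_SERVER=database
    links:
      - database
    depends_on:
      - database
  database:
    image: redis:latest
```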
The version field is just so that the docker-compose command knows what API set to use. Our application can be found under services. You’ll notice that we have two, the pastr app and the backend database. You also may recognize the fields underneath as things we put in the command line to run our containers.
We are already familiar with image, ports (which we called publish), environment, and links. We’ll focus on some of the newer things.
build the directory to use to build the image if the image does not exist. The build will name it the same as the service, which in this case is pastr.
depends_on this directive instructs the Docker engine to launch database before it starts up pastr. Note that it only affects the order in which containers start; it does not wait until the other container’s application has fully started.
If you haven’t already, now would be a good time to bring down the other containers, as they will conflict with what we are about to do.
docker stop pastr1 pastedb
We’ll start by building the pastr image using the docker-compose command.
docker-compose build pastr
From here, we can start up the entire application, including the database.
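The command was lost here; given the -d flag mentioned below, it would have been:

```shell
docker-compose up -d
```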
Creating network "pastr_default" with the default driver
Again, we use the -d flag to detach and run all of our containers in the background. If you ever wish to see the log output of a container, simply run docker-compose logs <container-name>.
pastr_1  | WARNING: Do not use the development server in a production environment.
pastr_1  | Use a production WSGI server instead.
pastr_1  |  * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
pastr_1  |  * Restarting with stat
To shut it all down, issue the down command.
Removing network pastr_default
You can also stop and remove individual containers as well as restart containers with the stop, remove, and restart commands. Give them a try!
We have seen what Docker virtualization is and how to run containers manually and through orchestration. In the future, we will learn other things we can do to make local development easier, such as using network bridges and proxies to access multiple containers via the same port.
I remember when I started my first job, I had to go through the new-guy day-one setup fishbowl. You know, where the other developers say, “whoa-ho-ho, you’re going to have a fun first week,” and then proceed to take bets on how long it will take you to set up your own local test environment. First step: install your database system, create your database, and provision users. Then, it’s installing all the prerequisite libraries: Apache, PHP, Python, etc. Then, clone your repository and edit all your configuration files to point to your local code. Edit all your configuration files to use the right database, and pray that it works. Of course, it doesn’t ever work on your first try, and you have to ask the guy next to you to stop what he’s doing and come over to help you out. This could take anywhere from three days to two weeks.
Then, we had virtual machines. Someone builds a virtual image with all the libraries you need and the database pre-installed, and you can just get it up and running. It’s a nicer way to get started, but it eats up all your system resources. It’s also a huge thing to download on day one, so you usually get it via sneaker-net, which the IT department frowns upon. Also, syncing your code for testing quick fixes is non-trivial. You have to either copy it onto the system or work out some sort of hypervisor-specific shared mount.
A co-worker of mine suggested that we run the development environment in its own chroot jail. It allows you to run the application server in a production-like userland environment (as opposed to your native laptop environment) like you would on a virtual machine, but without the overhead of running a whole different operating system. It works well once you get it up and running, but getting it up and running is the difficult part. The base filesystem can be distributed as a tarball, but the bootstrap script needs to be tailored to your specific Linux distribution for bind mounts. Oh, and your host operating system MUST be Linux.
Docker containers operate much like a chroot jail, where your images are your userland filesystems, and you don’t need special scripts to bootstrap the container; it’s all built in to the engine. On top of that, it’s even paired up with its own virtual network stack, so it behaves like its own virtual machine. To test quick fixes, your code is mounted on a volume handled by the Docker engine. Also, to test multiple services interacting with each other, the containers can be put onto the same virtual network bridge.
The best part? You can then take these Docker containers and ship them off to your production deployment.