Efficient Way to Use Docker in Development


Mateusz Kluge

Jan 17, 2018 • 14 min read

Have you ever spent a lot of time setting up your project after a system update?

Or maybe your operating system is so old that the libraries it ships with prevent you from using the newest programming libraries? Does finding exact versions, manually resolving dependencies, and compiling from source sound familiar? Maybe your projects require different versions of a database, or you need different database settings for different projects.

In these cases, switching between projects can be cumbersome and time-consuming. Why doesn't it just work? Do I really need to fight with configuration all the time? I'd like to configure everything once and for all.

In fact, there are tools that can help us with these issues, and Docker is one of them.

What is docker?

Docker is software that allows us to run applications in separate environments called containers. Each container has its own process tree, network resources, and disk storage. Storage can be shared across multiple containers (more on that later). A Docker container is not the same as a virtual machine; there are no virtual devices here. All applications run natively on the host system (at least on Linux), which means there is no visible overhead in using this technology. Docker is used on a large scale for running server applications, but nothing stops us from using it in the development process. Additionally, if we first test our code on a local machine using the same technologies, there's a high probability that it will work the same way in the server environment.
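To get a feel for this isolation, assuming Docker is already installed on your machine, you can run a one-off command in a throwaway container without installing Ruby on the host (the --rm flag removes the container as soon as the command exits):

```shell
# Downloads the ruby:2.3.3 image on first use, then runs a single
# command inside a fresh container; nothing is installed on the host.
docker run --rm ruby:2.3.3 ruby -v
```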

Application image

The first step is to create an application image.

The image is a fundamental concept in Docker; it's nothing more than a frozen state of files (to put it as simply as possible). Most of the time an image contains a set of installed libraries that our application needs to run properly.

An image can be used to create any number of containers. As you can probably guess, it would be inefficient to copy data from the image to the container every time we create a new one. With a huge number of containers, we would run out of free disk space pretty quickly. Docker solves this problem with the concept of layers. Instead of copying, Docker creates a read-only layer out of the image data and then another, writable layer on top of it. So basically nothing we do in a container will affect the image data.

In order to create an image, we need to write a special file called a Dockerfile. This file is a recipe for creating an image. For a simple Rails application the file may look like this:


FROM ruby:2.3.3
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /app
WORKDIR /app 

Starting from the top, we choose the image we want as our base. In our case, it's ruby:2.3.3. There are public images for virtually any technology you need, in many versions. You can find more at https://hub.docker.com/.

The RUN directive lets us run any Linux command, and the result will be stored in the image. In our example, we install additional packages with the apt-get command. Next, we create a new directory for our application and set it as the working directory. This means that when we start the container, the starting point will be the /app directory.

If you have seen other Dockerfiles before, you'll notice that we skipped adding the source code and installing gems. This is on purpose: we'd like the Dockerfile to contain only things that don't change too often. Rebuilding an image every time we change a line in the Gemfile can take a lot of time; it's better to spend that time coding. Please save the file as Dockerfile in the root directory of your project.
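Building the image itself is a one-liner; the tag name below is just an example (docker-compose, introduced in the next section, will also build it for us automatically on first run):

```shell
# Build an image from the Dockerfile in the current directory
# and tag it (the tag "first_docker_app" is an arbitrary example).
docker build -t first_docker_app .
```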

Docker compose

The next step is to create a configuration for docker-compose. It's a tool that lets you define relationships between Docker containers.

It's possible to use Docker without it, but it simplifies and automates a lot of tasks for us. Besides our main container, for which we've just created a Dockerfile, we'll also need an additional container for the database (MySQL in our case). Thanks to docker-compose we can add additional services pretty easily. Just create a file called docker-compose.yml in your project's directory and paste the code below. I'll explain it line by line.


version: '3'
services:
  web:
    build:
      context: ./
      dockerfile: Dockerfile
    command: bash -c "bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - ./:/app
      - bundler_data:/usr/local/bundle
    ports:
      - "3000:3000"
    links:
      - mysql
    stdin_open: true
    tty: true
  mysql:
    image: mysql:5.6
    volumes:
      - mysql_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
volumes:
  mysql_data:
  bundler_data:

In the first line, we define the version of the file format.

Next, under the services key we define our containers. The first one - web - is our main container, used to run the Rails application.

Under the build section, we define the context key with the value ./. Basically, we tell docker-compose where it can find the Dockerfile - the file we want to use to build the image (the same file we created earlier).

Next, we define the startup command for the container, so that by default it starts the Rails application on port 3000.

command: bash -c "bundle exec rails s -p 3000 -b '0.0.0.0'"

With the volumes key, we can share directories between containers and also between a container and the local machine. The notation ./:/app means that we want to map files in the current directory to the /app directory inside the Docker container.

The second entry under volumes defines a mapping between a volume named bundler_data and the container directory /usr/local/bundle. Basically, if you want some data to be persistent without specifying paths yourself, you can ask Docker to create that storage for you. That's why we used a named volume. The formula is some_volume_name:/path/in/container. You also need to remember to declare your volume names at the end of the file, like this:


volumes:
  mysql_data:
  bundler_data:

We'd like to access our site on the host machine; to achieve that, we need to map ports.


ports:
  - "3000:3000"

The application should be accessible from your host machine at address http://0.0.0.0:3000.

Next, we specify that we want to link to the mysql container. Without that, our main container couldn't reach the database over the network.


links:
  - mysql

Remember that you also need to set the database host in your database.yml to mysql.

The next two options allow us to run interactive programs and see the results just like in a normal shell.


stdin_open: true
tty: true
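Since the web container reaches the database through the service name, the relevant part of config/database.yml might look like this (a sketch; the adapter and database name are assumptions based on this article's setup):

```yaml
# config/database.yml - host must match the service name from docker-compose.yml
default: &default
  adapter: mysql2
  encoding: utf8
  host: mysql
  username: root
  password: root

development:
  <<: *default
  database: first_docker_app_development
```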

Ok, so that's the end of the definition for the web service. We also need to define the mysql service. It's different because instead of writing a Dockerfile, we ask Docker to find an appropriate image and use it directly as the base for the container.

Under the image key, we specify the name and version of the image we want to use.


image: mysql:5.6

Next, we make sure that the database data is persistent. The easiest way is to create a named volume and map it to the directory where MySQL stores its data.


volumes:
  - mysql_data:/var/lib/mysql

The last thing is to set the password for the database's root user. We can achieve that by setting a special environment variable. You can find more about the available options for the mysql image at https://hub.docker.com/_/mysql/.


environment:
  MYSQL_ROOT_PASSWORD: root
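On first startup, the image can also create a database and an extra user from environment variables. These variables are documented on the image's Docker Hub page; the values below are example assumptions:

```yaml
environment:
  MYSQL_ROOT_PASSWORD: root
  MYSQL_DATABASE: first_docker_app_development  # created on first run
  MYSQL_USER: app                               # extra user with access to the database above
  MYSQL_PASSWORD: secret
```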

Running containers with compose

We now have all the pieces required to run the application. Before that, however, we need to install all the dependencies manually. To run an interactive bash session within the Docker container, execute this command:


docker-compose -p first_docker_app run --rm web bash

It's important to always add the --rm argument, because it removes the container when we exit. After you issue the above command, you'll see a normal bash prompt. Basically, you can run everything you normally would: rails console, bundler, migrations, etc.

Let's install all the dependencies. Type bundle install and hit enter. If you see the normal bundler output, that means everything is set up correctly.

The next thing is the database schema. Issue the command below (I'm sure you know it well).


rake db:setup

When it's done, you can exit the shell by running the exit command or pressing CTRL+D.

It's time to start the Rails application and test whether everything works as expected. Just type:


docker-compose -p first_docker_app run --rm --service-ports web 

If we don't specify any command at the end, docker-compose picks the command defined under the command key in the docker-compose.yml file. In our case, that command starts the Rails server. Note that the --service-ports flag is required here: unlike docker-compose up, docker-compose run doesn't map the service's ports by default.

Now, you should be able to access http://localhost:3000 from your browser.
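If you prefer to verify from the terminal, a quick check (assuming curl is installed on the host) looks like this:

```shell
# Request only the response headers; any HTTP status line means the server is up.
curl -I http://localhost:3000
```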

Workflow

Basically, that's the minimum you need to know to use Docker for your development, so we could end here. But I'd like to share some ideas on how you could improve the workflow. Writing those long commands can be painful, so I advise you to create bash aliases or simple functions instead.

Take a look at this bash script:


function rails() {
  docker-compose -p first_docker_app run --rm web rails "$@"
}

function rake() {
  docker-compose -p first_docker_app run --rm web rake "$@"
}

function bundle() {
  docker-compose -p first_docker_app run --rm web bundle "$@"
}

As you can see, we defined three handy functions that run the most-used commands inside the Docker container, without us even noticing that Docker is involved. (Note the quoted "$@": it forwards all arguments exactly as typed, even ones containing spaces.)

Just save this script in your project's directory, e.g. as set-env.sh, and load it with source set-env.sh.

Now, anytime you run rake, rails, or bundle, it'll pass all arguments to docker-compose.
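If you want to check that a wrapper forwards its arguments correctly without actually starting a container, you can temporarily shadow docker-compose with a stub (a quick bash trick; the stub only echoes the command it would have run):

```shell
# Shadow docker-compose with a stub that echoes its arguments (bash-specific).
docker-compose() { echo "docker-compose $@"; }

# Same wrapper as in set-env.sh.
rails() { docker-compose -p first_docker_app run --rm web rails "$@"; }

rails db:migrate
# prints: docker-compose -p first_docker_app run --rm web rails db:migrate
```

Running unset -f docker-compose afterwards restores the real binary.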

One more tip, if you use a terminal multiplexer like tmux.

There is a beautiful piece of software called tmuxinator that helps with complex projects where you have multiple services and want to run each of them in a separate tmux pane. With the proper configuration, you can start working on your project after just one command similar to this:


tmuxinator start your_project_name

Check the project's GitHub page at https://github.com/tmuxinator/tmuxinator to learn how to configure it. I'm sure you'll love it.
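As an illustration only (the pane layout, paths, and project name below are assumptions for this article's setup), a tmuxinator config could look like this:

```yaml
# ~/.config/tmuxinator/your_project_name.yml
name: your_project_name
root: ~/projects/first_docker_app

windows:
  - server: docker-compose -p first_docker_app run --rm --service-ports web
  - shell: bash
  - logs: docker-compose -p first_docker_app logs -f mysql
```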

Summary

This tutorial shows only the basic setup that I use in my daily work. There may be other ways to achieve the same results; different people use different workflows. Learning Docker takes time, so please don't rush and start with something small. For example, you can migrate your supporting services first: move your Redis server to Docker, then MySQL, and so on.

Good luck!


Photo by Avi Richards on Unsplash
