Does using Docker to reduce disk clutter and the time spent setting up a new machine sound good? It definitely did to me. In this article, I’ll show you how to get started with a dockerized development environment, so you can get a taste of what it’s like. Near the end, I’ll also suggest ways to learn what you need to know about this tool.
After a few years of development, I’ve accumulated a number of projects to maintain. At the beginning, each of them had a slightly different set of dependencies than the previous one. Now they not only use different database versions — they use different kinds of databases. They even span multiple technology stacks: there’s some Python with Django, multiple Rails applications at different life stages and even an old WordPress site. I prefer not to keep all that on my machine for two main reasons:
I do not need most of these tools 80% of the time;
I use different machines depending on where I am.
Docker solved the dependency problem very well in my case. However, as a user of multiple computers and operating systems, I was looking for a way to store my entire development environment. Keeping dotfiles in a repository was helpful until I needed to set up a new machine. A question came to mind — what if all I ever needed to do was install Docker, pull my development image and be ready to work? I’d had that experiment in the back of my head for quite some time.
This setup won’t be as simple as fetching a container, but it gets pretty close. The main reason is that I don’t want to store my SSH keys inside the container or the repository, and I want to distinguish between my machines using different keys. This also helps when access has to be revoked for a compromised key or computer. I’m going to assume that you have some very basic knowledge of Docker, have it installed, and have configured SSH keys to work with GitHub.
Getting The Basic Tools
Let’s create a directory called `development` and start a simple Dockerfile containing just git and a text editor of choice. I based my image on the latest version of Ubuntu, simply because it’s the distribution I’m most familiar with.
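A minimal Dockerfile along these lines will do — `vim` here is just an example, so swap in your editor of choice:

```dockerfile
# Base the image on the latest Ubuntu release
FROM ubuntu:latest

# Install git and a text editor; clean the apt cache to keep the image small
RUN apt-get update && apt-get install -y \
    git \
    vim \
    && rm -rf /var/lib/apt/lists/*

# Drop into a shell by default
CMD ["/bin/bash"]
```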
To build the image, run `docker build . -t development`, which will tag our image as `development` to help us find it among other images. To confirm that everything went fine, run `docker images`:
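The output should list the freshly built image; the ID, timestamp and size below are purely illustrative:

```shell
$ docker images
REPOSITORY    TAG       IMAGE ID       CREATED          SIZE
development   latest    0a1b2c3d4e5f   10 seconds ago   180MB
```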
And poke around inside it by `docker run -it development`.
Commit and Push
This was easy, but now we have to add some configuration, and to do that we need access to the host filesystem. Let’s run our container again, but this time we’ll mount the project’s directory as `/development`. The command should look like:
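Assuming you’re running it from inside the `development` directory, a bind mount does the trick:

```shell
# Mount the current directory into the container at /development
docker run -it -v "$(pwd)":/development development
```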
Git still doesn’t know anything about us, so let’s configure it:
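The usual two settings are enough — substitute your own name and email. Inside the container the file lands in `/root/.gitconfig`, so copying it into the mounted directory keeps it outside the container:

```shell
git config --global user.name "Your Name"
git config --global user.email "you@example.com"

# Persist the config in the mounted project directory
cp /root/.gitconfig /development/.gitconfig
```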
And initialise a new repository:
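From inside the container:

```shell
cd /development
git init
```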
Now all the changes persist outside the container, but if we run it again, the file will be in the wrong place. Let’s copy it by adding the following line to the Dockerfile:
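Assuming the file was saved as `.gitconfig` next to the Dockerfile:

```dockerfile
# Put the git configuration back where git expects it
COPY .gitconfig /root/.gitconfig
```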
and commit the changes:
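Still inside `/development`:

```shell
git add Dockerfile .gitconfig
git commit -m "Add Dockerfile and git configuration"
```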
CAUTION! The next step is very questionable and should be performed at your own risk. Keep in mind that Docker images include other people’s code, and not all of it was created to make the world a better place.
Now that you’ve been warned, we can set up a new repository on GitHub and push the code there. To bring our SSH setup into the container, we need to quit it, then rebuild with the new config:
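Rebuilding uses the same command as before:

```shell
docker build . -t development
```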
And run with SSH directory mounted in the correct place:
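Mounting the host’s `~/.ssh` into root’s home makes the keys visible to git inside the container; mounting it read-only is a small extra safeguard:

```shell
docker run -it \
  -v "$(pwd)":/development \
  -v ~/.ssh:/root/.ssh:ro \
  development
```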
With all that we’re ready to push the entire setup on GitHub:
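The remote URL below is a placeholder — use the one GitHub shows for your new repository:

```shell
cd /development
git remote add origin git@github.com:<your-username>/development.git
git push -u origin master
```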
Configure the Editor
Editor setup is very similar, but there’s one more step to perform: installing the plugins during the container build. Otherwise we’d have to install them every time we run the container. Let’s configure Vundle to manage our plugins. The first step is to add the `.vimrc` file:
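A minimal `.vimrc` for Vundle looks roughly like this — vim-fugitive is listed because we’ll use it later; add your own plugins below it:

```vim
set nocompatible
filetype off

" Let Vundle manage the runtime path and itself
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#begin()

Plugin 'VundleVim/Vundle.vim'
Plugin 'tpope/vim-fugitive'

call vundle#end()
filetype plugin indent on
```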
To make it work, we need to copy the `.vimrc` file into the correct place, install Vundle, and install the plugins. To do this, let’s add the following to the end of our Dockerfile:
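A sketch of the required steps — `+PluginInstall +qall` runs the install and quits, so it works during a non-interactive build:

```dockerfile
# Editor configuration and plugin installation
COPY .vimrc /root/.vimrc
RUN git clone https://github.com/VundleVim/Vundle.vim.git \
      /root/.vim/bundle/Vundle.vim \
    && vim +PluginInstall +qall
```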
After rebuilding the container we can commit our changes as usual or use the vim-fugitive plugin.
Run an App
Now that we’re ready to start coding, we need a project to work on. But how are we going to achieve that? Run a container inside a container? No, we should run them side by side. To keep things simple, we’ll run the whalesay image from the Docker Tutorial, but for any dockerized project the idea stays the same.
First, let’s install Docker inside our container by adding it to the installed packages:
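On Ubuntu the package is called `docker.io`, so the install line in our Dockerfile grows by one entry:

```dockerfile
RUN apt-get update && apt-get install -y \
    git \
    vim \
    docker.io \
    && rm -rf /var/lib/apt/lists/*
```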
Rebuild the image:
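Same command as before:

```shell
docker build . -t development
```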
And run it connecting the Docker socket:
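Mounting the host’s Docker socket lets the Docker client inside the container talk to the host’s Docker daemon:

```shell
docker run -it \
  -v "$(pwd)":/development \
  -v ~/.ssh:/root/.ssh:ro \
  -v /var/run/docker.sock:/var/run/docker.sock \
  development
```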
Now every container run inside our development box will also be accessible to the host operating system. Let’s verify that:
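Inside the development container, run the tutorial image:

```shell
docker run docker/whalesay cowsay boo
```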
You should see the greeting in your container, but the image should also be accessible outside of it, so if you run:
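That is, the same command, this time on the host:

```shell
docker run docker/whalesay cowsay boo
```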
in your host machine you should get the same output.
I hope I got you interested in this topic. There are many ways you can go from here. As an exercise you should configure the container not to use the `root` account, try to preserve shell history or run a GUI application of your choice. The current setup has project repositories inside the development repository, so try to separate them. Tmux users can automatically attach to a session on container start. Experiments like these are my favourite way to learn new tools.
Is this solution perfect? Definitely not. Although the resource penalty is lower than with a virtual machine, it’s still there, and there are other drawbacks. The obvious one is the lack of integration with the host operating system, which you may need. The shell history problem mentioned earlier is easily solvable, but none of the solutions I found satisfied me. If you switch projects frequently, you may get tired of it and decide to go back to “the old way”. Will I use this setup for my day-to-day work? No, but I’ll definitely revisit it with new ideas, learn more about Docker while at it, and maybe some day I’ll switch. That being said, there are people using this concept who are perfectly happy with it, so I encourage you to try it, form your own opinions and share them with the world.