A quick introduction to Docker containers for Django web developers
Docker is a platform for running applications in isolation. Using Linux containers, it isolates the software layer from the base system while avoiding the overhead of the hardware virtualization used by, for example, VirtualBox. Docker can help in the development and deployment of web-based applications and services - let us see how and when.
Why do I need Docker?
Docker can be used in various ways. It may provide a service working in the background, like a PostgreSQL server in a given version. It can also contain our application, and we can use images of such a container to deploy the application to production.
For Python developers, Docker can be described as virtualenv for any application - whether it is a Python interpreter or some server in a given version. It's handy when you want to test new versions or run a legacy application on a much more recent host system.
Docker should be available in the repositories of various Linux distributions. OS X and MS Windows users will have to use a VirtualBox-aided solution to provide a Linux system for Docker to run on.
It's also a good idea to get the latest version available - I used a PPA for Ubuntu to get a newer release. Once you have it installed, you can add yourself to the docker group so that using it won't require sudo.
When we have Docker ready, we can run containers of many images from registry.hub.docker.com - Python, for example. To run a Python 3.4 container just execute:
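Based on the image, tag, and flags described just below, the command looks like this:

```shell
# start an interactive Python 3.4 container, removed again on exit
docker run -it --rm python:3.4
```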
On the first run Docker will have to download some files. When that is done we will get, by default, a Python interpreter console. The -it flags run the container in interactive mode and bind it to the console so that we can interact with it. --rm will remove the container instance when we quit it. python is the image name, and after the colon we have the tag, which is usually used to indicate a version - in this case the Python version available in the container.
We can also change the default behavior of the container, for example to run our own script. To override the default command, just add the command to execute at the end of the line, like so:
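For example, appending ls to the same run command from before:

```shell
# override the default command (the Python console) with ls
docker run -it --rm python:3.4 ls
```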
This will run ls, listing all files and folders in the root directory of the container.
A Dockerfile is a recipe for building our own container image. If we have a script that we want to be able to execute within a Python-enabled container, here is what we could use:
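A minimal sketch of such a Dockerfile - the script name and the installed package are my own placeholders, not taken from the original:

```dockerfile
FROM python:3.4
# unbuffered output, so prints show up immediately in docker logs
ENV PYTHONUNBUFFERED 1
# copy the script into the image (name assumed)
ADD my_script.py /code/my_script.py
# RUN steps execute at build time and are cached
RUN pip install requests
# CMD runs when a container instance starts
CMD ["python", "/code/my_script.py"]
```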
FROM indicates the base image, in this case the Python 3.4 image. Next we can use ADD, RUN, ENV and similar commands to configure the container. The RUN commands are executed when the image is built, while the CMD command runs at the start of a container instance. Build operations are cached, so if the beginning of the instruction set didn't change, those steps will be reused and skipped on subsequent builds (so the RUN command from the example above will execute on the first build but not on the second).
Using a terminal we can now build and run the image in Docker:
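Assuming we tag the image my-script (an arbitrary name), that would be:

```shell
# build the image from the Dockerfile in the current directory
docker build -t my-script .
# run a disposable container from it
docker run -it --rm my-script
```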
Dockerfile with Django
django-ckeditor has a basic demo application that can be run with manage.py runserver. Let us try to make a Dockerfile that will create an image running the demo application, starting with something simple but not quite perfect:
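A sketch of such a Dockerfile - the settings module and requirements file names are my assumptions about the repository layout, not verified paths:

```dockerfile
FROM python:3.4
# put all the repository code into the container
ADD . /ckeditor
WORKDIR /ckeditor
ENV DJANGO_SETTINGS_MODULE ckeditor_demo.settings
# install the demo dependencies and the editor itself
RUN pip install -r requirements.txt
RUN pip install -e .
RUN python manage.py validate
RUN python manage.py collectstatic --noinput
# bind to 0.0.0.0 so the server is reachable from outside the container
CMD python manage.py runserver 0.0.0.0:8000
```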
So I use Python 3.4 here, and all the code from the repository is added to a ckeditor folder within the container. I also set the DJANGO_SETTINGS_MODULE env variable, install the dependencies and the editor itself. Next comes some validation, collectstatic, and at the end a CMD to run the development server. It's also handy to make that server available outside of the container, which is why it's set to use the 0.0.0.0 IP.
Build and run:
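With an assumed image tag of ckeditor-demo (the host IP 192.168.0.110 is explained below):

```shell
docker build -t ckeditor-demo .
# map port 8080 on the host to port 8000 inside the container
docker run -it --rm --publish=192.168.0.110:8080:8000 ckeditor-demo
```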
--publish maps an IP/port address of the host machine to the IP/port of a running container. In this example 192.168.0.110 is the IP of the host machine, so publishing the mapping allows me to open the application on port 8080 of my host. Without the publish option the server would only be reachable at the running container's own IP address.
The Dockerfile configuration I showed above isn't perfect, as it only works because an SQLite database is shipped alongside the code. The Docker rule is to run separate services in separate containers. So, for example, let us use a PostgreSQL database running in a second container.
So let us launch a PostgreSQL instance:
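Using the official postgres image from the Docker registry:

```shell
# -d detaches the container, so it keeps running in the background
docker run -d postgres
```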
This will run in the background, and we can check its status or name with docker ps. The names are quite random by default - clever_ptolemy in my case. Now we have to create a database on that server, but first we need its IP address. We can get it from docker inspect INSTANCE_NAME, which will list various information about the container, including the IP address. Then we can create the database:
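One way to do this, assuming the container IP turned out to be 172.17.0.2 and picking django_ckeditor as a database name (both are placeholders; this also assumes the postgres client tools are installed on the host):

```shell
docker inspect clever_ptolemy | grep IPAddress
# create a database on the containerized server
createdb -h 172.17.0.2 -U postgres django_ckeditor
```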
We have the database, and now we have to pass its location to the application container. The Docker way of doing this is by using env variables. For Django it's handy to use dj_database_url to get the whole database configuration from a single string:
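In the Django settings this could look as follows, reading the connection string from the DATABASE variable mentioned below:

```python
import dj_database_url

# parse a postgres://user:password@host:port/name URL
# taken from the DATABASE environment variable
DATABASES = {'default': dj_database_url.config(env='DATABASE')}
```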
So now we have to pass an env variable called DATABASE to the application container to make it work. It can be done like this:
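The general form, with all-caps placeholders to be filled in:

```shell
# link the database container under an alias and pass the connection URL
docker run -it --rm --link INSTANCE_NAME:OUR_NAME \
    -e DATABASE=postgres://postgres@OUR_NAME/DB_NAME \
    IMAGE_NAME
```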
The name of the database server instance can be taken from docker ps, while OUR_NAME is just an alias we can use later on in the -e value. In my case it looked like so:
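A reconstruction of those two commands - the alias, database name, and image tag are my assumptions, only clever_ptolemy comes from above:

```shell
# create the tables
docker run -it --rm --link clever_ptolemy:database \
    -e DATABASE=postgres://postgres@database/django_ckeditor \
    ckeditor-demo python manage.py syncdb
# start the development server
docker run -it --rm --link clever_ptolemy:database \
    -e DATABASE=postgres://postgres@database/django_ckeditor \
    --publish=192.168.0.110:8080:8000 ckeditor-demo
```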
The first line executes syncdb, which creates the tables in the database. The next one starts the development server.
Even a simple example like the one above already requires a lot of naming and linking. To make it easier there are tools like fig to help us out. In a YAML file (fig.yml) we can specify all the containers and the links between them:
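A fig.yml for this setup could look roughly like this - the service names and the DATABASE value are my assumptions:

```yaml
db:
  image: postgres
web:
  build: .
  links:
    - db
  environment:
    - DATABASE=postgres://postgres@db/postgres
  ports:
    - "8080:8000"
```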
Next we build it with fig build and run it with fig up, which should result in a working application. We will need some tables, so we can run syncdb with the help of fig run NAME COMMAND, where NAME is the name of the given container from the fig.yml file. When fig launches all the containers, you can check them with docker ps.
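The whole workflow then shrinks to a few commands, assuming the application service is named web in fig.yml:

```shell
fig build
fig up
# in another terminal, create the tables once:
fig run web python manage.py syncdb
```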
You can find more about the fig.yml syntax on the project page. It allows you, for example, to mount a folder from the host machine. There is also a Django tutorial in which another way of configuring Postgres is presented.