
A quick introduction to Docker containers for Django web developers

Running database servers or applications in software isolated containers for development or production with Docker.

Docker is a platform for running applications in isolation. It uses Linux containers to isolate the software layer from the base system, removing the overhead of the hardware virtualization used by, for example, VirtualBox. Docker can help in the development and deployment of web-based applications and services - let us see how and when.

Why do I need Docker?

Docker can be used in various ways. It may provide a service working in the background, like a PostgreSQL server in a given version. It can also contain our application, and we can use images of such a container to deploy the application to production.

For Python developers Docker can be described as virtualenv for any application - whether it is a Python interpreter or some server at a given version. It's handy when you want to test new versions or to run a legacy application on a much more recent host system.

Installing docker

Docker should be available in the repositories of various Linux distributions. OS X and MS Windows users will have to use a VirtualBox-aided solution to provide a Linux system for Docker to run on.

It's also a good idea to get the latest version available. I used a PPA for Ubuntu to get a newer version. When you have it installed you can add yourself to the docker group so that using it won't require sudo.

First containers

When we have Docker ready we can run many containers from registry.hub.docker.com, for example Python. To run a Python 3.4 container just execute:

docker run -it --rm python:3.4

At the first run Docker will have to download some files. When that is done we will get, by default, a Python interpreter console. The -it flags run the container in interactive mode and bind it to the console so that we can interact with it. --rm will remove the instance when we quit it. python is the image name, and after the : we have the tag, which is usually used to indicate a version - in this case the Python version available in the container.

We can also change the default behavior of the container, for example to run our own script. To override the command from the command line just add the command to execute at the end, like so:

docker run -it --rm python:3.4 ls

Which will run ls, listing all files and folders in the root directory.


A Dockerfile is used as a recipe for our container. If we have a script that we want to be able to execute within a Python-enabled container, here is what we have to use:

FROM python:3.4
ADD ./test.py /
RUN ls -al
CMD python test.py

FROM indicates the base image, in this case the Python 3.4 image. Next we can use ADD, RUN and ENV commands to configure the container. The RUN command will execute tasks when the image is built. The CMD command will be run at the start of a container instance. Operations of building the image are cached, so if the beginning of the instruction set didn't change, those steps will be skipped on subsequent builds (so the RUN command from the example above will execute at the first build but not on the second).
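This caching behavior is why Dockerfiles often add the dependency list before the application code. A sketch of that pattern (requirements.txt and app.py are hypothetical file names, not from the demo above):

FROM python:3.4
# Adding only the requirements file first means this layer - and the
# pip install below - stay cached as long as requirements.txt is unchanged.
ADD ./requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# The application code changes often; adding it last means edits to it
# only invalidate the layers from this point onward.
ADD ./app.py /app/app.py
CMD python /app/app.py

With this ordering a code-only change rebuilds in seconds, because the pip install layer is reused from the cache.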

Using a terminal we can now build and run the image in Docker:

docker build --tag=foo .
docker run  -it --rm foo

Dockerfile with Django

django-ckeditor has a basic demo application that can be run with manage.py runserver. Let us try to make a Dockerfile that will create an image running the demo application. Let us start with something simple but not quite perfect:

FROM python:3.4
MAINTAINER Piotr Maliński <riklaunim@gmail.com>
ADD . /ckeditor
ENV DJANGO_SETTINGS_MODULE ckeditor_demo.settings
RUN pip install -r /ckeditor/ckeditor_demo_requirements.txt
RUN pip install /ckeditor
RUN python /ckeditor/manage.py validate
RUN python /ckeditor/manage.py collectstatic --noinput
CMD python /ckeditor/manage.py runserver 0.0.0.0:8080

So I use Python 3.4 here, and all the code from the repository is added to the ckeditor folder within the container. I also set the DJANGO_SETTINGS_MODULE env variable, install the dependencies and the editor itself. Next come some validation, collectstatic, and at the end the CMD to run the dev server. It's also handy to make that server available outside the container, and that's why it's set to listen on the 0.0.0.0 IP.

Build and run:

docker build --tag=django-ckeditor .
docker run -it --rm --publish=8080:8080 django-ckeditor

--publish maps a port of the running container to an IP/port on the host machine. In this example 8080:8080 maps port 8080 of the container to port 8080 of the host. Publishing the port mapping allows me to open the application on port 8080 of my localhost. Without the publish option the server would only be available on the running container's IP address.

The Dockerfile configuration I showed above isn't perfect, as it will only work if an SQLite database is shipped with the code. The Docker rule is to run separate services within separate containers. So, for example, let us use a PostgreSQL database running in a second container.

So let us launch a PostgreSQL instance:

docker run -d postgres:9.4

This will run it in the background, and we can check its status or name with docker ps. The names by default are quite random, like clever_ptolemy in my case. Now we have to create a database on that server, but first we need its IP address. We can get it from docker inspect INSTANCE_NAME, which will list various information about the container, including the IP address. Then we can create the database:

createdb -h IP_ADDRESS DATABASE_NAME -U postgres
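docker inspect prints a JSON document, so the IP address can also be extracted programmatically. A minimal sketch with Python's json module - the sample output below is abbreviated and hypothetical, but NetworkSettings.IPAddress is the field where Docker reports the address:

```python
import json

# Abbreviated, hypothetical sample of what `docker inspect INSTANCE_NAME`
# prints: a JSON list with one object per inspected container.
inspect_output = """
[
    {
        "Name": "/clever_ptolemy",
        "NetworkSettings": {
            "IPAddress": "172.17.0.2"
        }
    }
]
"""

containers = json.loads(inspect_output)
ip_address = containers[0]["NetworkSettings"]["IPAddress"]
print(ip_address)  # 172.17.0.2
```

In a script you would pipe the real `docker inspect` output in instead of hardcoding it.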

We have the database and now we have to pass it to the application container. The Docker way of doing this is by using env variables. For Django it's handy to use dj_database_url to have database configuration as a string:

from os import environ

import dj_database_url

DATABASES = {'default': dj_database_url.parse(environ.get('DATABASE', 'postgres:///'))}
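dj_database_url does the URL-to-settings translation for us. To illustrate what such a URL encodes, here is a stdlib-only sketch (a simplified stand-in, not the dj_database_url API) that splits a postgres:// URL into the pieces Django's DATABASES dict needs:

```python
from urllib.parse import urlparse

def parse_db_url(url):
    """Split a database URL into Django-style connection settings.

    A simplified, illustrative stand-in for dj_database_url.parse().
    """
    parts = urlparse(url)
    return {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': parts.path.lstrip('/'),
        'USER': parts.username or '',
        'PASSWORD': parts.password or '',
        'HOST': parts.hostname or '',
        'PORT': str(parts.port) if parts.port else '',
    }

settings = parse_db_url('postgres://postgres@db/ckeditor')
# settings['USER'] is 'postgres', settings['HOST'] is 'db',
# settings['NAME'] is 'ckeditor'
```

One string thus carries the user, host and database name, which is exactly why a single env variable is enough to configure the container.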

So now we have to pass an env variable called DATABASE to the container to make it work. It can be done like this:

docker run -it --rm --link=POSTGRES_INSTANCE_NAME:OUR_NAME -e DATABASE=postgres://postgres@OUR_NAME/DATABASE_NAME --publish=8080:8080 django-ckeditor

We can get the name of the database server instance from docker ps, and OUR_NAME is just an alias we can use later on in the -e value. In my case it looked like so:

docker run -it --rm --link=clever_ptolemy:db -e DATABASE=postgres://postgres@db/ckeditor --publish=8080:8080 django-ckeditor python /ckeditor/manage.py syncdb
docker run -it --rm --link=clever_ptolemy:db -e DATABASE=postgres://postgres@db/ckeditor --publish=8080:8080 django-ckeditor

The first line executes syncdb, which creates the tables in the database. The next one starts the development server.


Such a simple example as the one above already requires a lot of naming and linking. To make it easier there are tools like fig to help us out. In a YAML file (fig.yml) we can specify all the containers and linking needed:

web:
  build: .
  command: python /ckeditor/manage.py runserver 0.0.0.0:8080
  links:
   - db
  ports:
   - "8080:8080"
db:
  image: postgres:9.4

Next we build it with fig build and run it with fig up, which should result in a working application. We will need some tables, so we can run syncdb with the help of fig run NAME COMMAND, where NAME is the name of the given service from the fig.yml file. When fig launches all the containers you can check them with docker ps.

You can find more about the fig.yml syntax on the project page. It allows you, for example, to mount a folder from the host machine. There is also a Django tutorial in which another way of configuring Postgres is presented.

About Docker on the web:

Continuous Integration using Docker & Jenkins from B1 Systems GmbH

Docker-hanoi meetup #1: introduction about Docker from Nguyen Anh Tu

Docker Tips And Tricks at the Docker Beijing Meetup from Jérôme Petazzoni

21 December 2014;
