2013-08-06

One-liner Instant Postgres for your development environment

When working on web applications it's very important to have a setup as similar to the production environment as possible.

The major component here is using the same database locally, for development, as you use in production. I myself work mostly with Django and, as most of you Django devs know, it's very easy to get started using a SQLite database.

Now, I'm not bashing on SQLite, it definitely has its purpose, but its place is not your main dev environment. It's nice to be able to set up the designer's development environment without too much of a hassle, but you should be developing against the database you use in production.

I use Postgres on all of my production servers. I come from a strong sysadmin background and I've set most of this up on my laptop as well. But it's one thing to set everything up on my own laptop and a whole different story to set up Postgres on every colleague's workstation. It's exhausting.

Of course you can have a staging server and all that, but even then I really want everyone to have a local Postgres instance (you just catch way too many bugs this way, before the code even hits the staging server).

Enter Docker. Docker is an open-source engine which automates the deployment of applications as highly portable, self-sufficient containers which are independent of hardware, language, framework, packaging system and hosting provider. [3]

Under the hood Docker uses LXC for running isolated containers. Now, I've used LXC by itself, and have some production environments that use it, but Docker is a big step up. Not only is it a higher-level wrapper on top of LXC, but it takes so much of the hassle out of the picture and simply lets you run isolated containers for your app or for other processes. Docker is going to be a huge deal in a lot of production environments, and in the way we do deployments, in the coming months, but for now, in this post at least, we're just going to use it to set up our development environment quickly and easily.

Docker requires a 64-bit Linux distro (kernel 3.8 or higher). Ubuntu has a PPA (link) and it's fairly easy to install. For Debian there is a package in the works, but you can just download a pre-compiled binary and put it somewhere on your PATH; the same goes for other distros. For Mac and Windows users, you can run Docker inside a Vagrant box (link).
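
For reference, on Ubuntu the whole thing boils down to checking your kernel version and installing the package from the PPA. The package name below is the one the PPA ships at the time of writing, so double-check the install docs if it has changed since:

uname -r                         # make sure you're on kernel 3.8 or higher
sudo apt-get install lxc-docker  # after adding the Docker PPA to your sources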

Linux containers deployed with Docker have some advantages over full virtual machines like the ones you get from VirtualBox. A Linux container looks just like a real virtual machine from the inside: it has its own filesystem, its own network interface and so on. In reality it's just a group of processes, totally isolated from the host operating system but running on the same kernel as the host. This in turn means there is no hardware or I/O emulation going on, so it has far less CPU overhead and memory consumption while still keeping the portability of a full virtual machine. Last but not least, the most important feature is that it's lightning fast to boot up. Containers sometimes take only milliseconds to bring up.

Now, before we continue any further we need to explain the concept of Docker images. Docker images are basically snapshots of a system that are used for firing up new containers. So, say there's a 'postgres' image; you could just do:

docker run -i -t postgres /bin/bash

Which would bring up a new container and attach you to its bash prompt, from which you could run your Postgres process.

Docker images are hosted on a public Docker Index, where you can browse and download all kinds of prepared images, ranging from base images from the Docker team (ubuntu, base) to specialized user images like denibertovic/postgres. There's also an option to run your own local index, but that's another blog post :).
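
You can also poke at the index straight from the command line, for example to search for images or to pull one down ahead of time:

docker search postgres
docker pull denibertovic/postgres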

I've prepared Postgres images, denibertovic/postgres (versions 9.1 and 9.2), which are uploaded to the Docker Index and can be used freely by anyone.

Now, I've promised you a one-liner solution and I wish to deliver on that. Go ahead and download the Makefile I've prepared here. Put it into your project folder or integrate it into your existing Makefile if you already have one.

All it takes now is to just type:

make postgres

This will bring up a new container and run a Postgres 9.2 instance in it. For 9.1, just change the POSTGRES_VERSION variable in the Makefile. The first time the above command is run, it will download the "denibertovic/postgres" image from the Docker Index and then run the container. All subsequent runs will use the already downloaded image.
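
Since POSTGRES_VERSION is an ordinary make variable, you should also be able to override it right on the command line instead of editing the file (assuming the Makefile doesn't force the value with an override directive):

make postgres POSTGRES_VERSION=9.1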

You can list the images you have locally using the command below:

docker images

Once you have your Postgres instance up and running you can connect to it using:

psql -Upostgres -h localhost

Use this to create your database and user:

postgres=# CREATE DATABASE my_database;
postgres=# CREATE USER my_user WITH PASSWORD 'myuserpassword';
postgres=# GRANT ALL ON DATABASE my_database TO my_user;

You have now successfully created your database and the user for that database.

Now you can set up your project's settings file to point to the given database. For Django projects I like to use dj_database_url, and then you end up with something looking like this:

import dj_database_url

DATABASES = {
    'default': dj_database_url.config(
        default='postgres://my_user:myuserpassword@localhost:5432/my_database')
}
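
dj_database_url lives in a separate package, so if it's not in your environment yet, install it first:

pip install dj-database-url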

Docker containers are ephemeral, which means all of the changes made to the container are gone once you stop the container. With our Postgres container this means that once we stop the container, our changes to the database, and even the database itself, would disappear. For some cases this might be fine, but most of the time we want our development database to have persistent data.

To accomplish this, we tell the containerized Postgres instance to write all of its data to a folder called "__data", located in the same folder as the Makefile on the host system. So if at some point you wish to start from scratch and delete all of the database data, you can simply delete the whole "__data" folder, with everything in it, and repeat the procedure above.
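
If you're curious what the Makefile does for you, it essentially wraps a docker run that bind-mounts the "__data" folder into the container and publishes the Postgres port. The line below is only my approximation; the exact flags and the data path inside the image may differ, so check the Makefile for the real invocation:

docker run -d -p 5432:5432 -v `pwd`/__data:/data denibertovic/postgres:9.2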

Another thing to note is that the containerized Postgres instance is set up to run on the default port 5432, which is in turn NAT-ed to localhost on the same port. This means that if you're running a Postgres instance for one project, you first need to stop it before you can run another instance for another project (don't worry, the "make postgres" command will warn you about this).

So first grab the container ID of the running container with:

docker ps

ID                  IMAGE                       COMMAND                PORTS
000f0a07e49a        denibertovic/postgres:9.2   /usr/local/bin/start   5432->5432

And then use the ID to stop the container:

docker stop 000f0a07e49a
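
If the Postgres container is the only one you're running, you can also collapse the two steps into one command (careful: this stops every running container, not just Postgres):

docker stop $(docker ps -q)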

And that's it, folks. I hope you find this useful, and feel free to leave comments/tips/improvements.

Next up will be a post about app deployments.

docker lxc postgres virtualization

