Getting Docker containers talking to Postgresql on the host

January 22, 2016

I’ve been running Django projects in Docker containers over the last few months, as it gives a much clearer separation of app and host system than using virtualenvs alone. Whilst this improves deployment, it has caused me some issues with connections to Postgresql databases.

I am currently running Postgresql on the host machine, as performing WAL backups etc. of containerised Postgresql adds a layer of complexity not necessary for a side project. In order to facilitate containers connecting to the host Postgresql instance, I needed to make Postgresql listen on the docker0 interface as well as localhost. docker0 is firewalled to prevent external access.

I set Postgresql to listen on the docker0 IP address (172.17.0.1 on a default Docker install; substitute whatever ip addr show docker0 reports) in postgresql.conf:

listen_addresses = 'localhost, 172.17.0.1'

I then allowed authenticated connections to be made from IP addresses in the docker0 subnet (172.17.0.0/16 by default) in pg_hba.conf:

host    all     all     172.17.0.0/16     md5
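With those two changes in place (and Postgresql reloaded), a container on the default bridge should be able to reach the host database via the docker0 gateway address. As a quick sanity check, something along these lines can be run from the host; the image, database and user names here are just placeholders:

# Connect from a throwaway container to the host's Postgresql via the
# docker0 gateway address; adjust 172.17.0.1, myuser and mydb to your setup.
docker run --rm -it postgres psql -h 172.17.0.1 -U myuser -d mydb

The same gateway address is what goes in the container's own database settings (for example the HOST entry in Django's DATABASES).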

Postgresql will fail to start if the Docker daemon has not started first, because the docker0 interface will not yet exist. To get the ordering right, the systemd configuration needs some overrides.

Systemd unit files can be overridden by one of two methods: either copying and modifying the entire .service file into /etc/systemd/system/, or placing drop-in files in /etc/systemd/system/service-name.service.d/. The former has the disadvantage that you completely step away from the vendor-supplied unit file and any updates it receives; the latter, that a future vendor update may be incompatible with your changes. The choice is up to you. I went for the latter.

In the case of Postgresql, there are two existing unit files on Debian:

postgresql.service
postgresql@.service

The latter is a template file which, when instantiated, becomes postgresql@version-cluster.service: on a default install this will be something like postgresql@9.4-main.service.

Overriding the postgresql.service file alone doesn’t actually make Postgresql obey the new Requires= and After= declarations we are keen to make. For this to work, both units need to be overridden.

This is facilitated by adding:

[Unit]
Requires=docker.service
After=docker.service

to both:

/etc/systemd/system/postgresql.service.d/override.conf
/etc/systemd/system/postgresql@.service.d/override.conf

The reason for both Requires= and After= is made clear by the Systemd Unit docs:

Requires= If this unit gets activated, the units listed here will be activated as well. If one of the other units gets deactivated or its activation fails, this unit will be deactivated. … If a unit foo.service requires a unit bar.service as configured with Requires= and no ordering is configured with After= or Before=, then both units will be started simultaneously and without any delay between them if foo.service is activated.

Reloading the config with sudo systemctl daemon-reload should set everything up, and when the system restarts, Postgresql will not start unless Docker has. This ensures docker0 will always be available for Postgresql to listen on.
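One way to confirm the overrides have been picked up after the reload is to inspect the merged unit; docker.service should now appear in the Requires= and After= lists (the same check applies to the template unit):

# Display the unit together with its drop-ins, then the effective
# dependency properties.
systemctl cat postgresql.service
systemctl show postgresql.service -p Requires -p After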

Disabling neocomplete for Markdown

March 03, 2015

I have been using neocomplete a lot recently for vim autocompletion in a variety of different languages, and it works really well. One issue I’d run into was with Markdown files, where editing led to really painful autocompletion attempts on paragraphs of text. Disabling neocomplete for Markdown files appeared to be the best solution, and can be achieved by:

autocmd FileType markdown nested NeoComplCacheLock

The docs for NeoComplete clarify the role of NeoComplCacheLock.

DigitalOcean and kernel inconsistency

August 28, 2013

When I logged into my DigitalOcean VM today I noticed that the advertised kernel in the motd was GNU/Linux 3.5.0-17-generic x86_64, something which came as a bit of a surprise given that I’d updated to 3.5.0-39 only a matter of days before. The other slightly concerning fact was that the autoremove function had definitely uninstalled that 3.5.0-17 kernel from my machine at the same time as I’d updated to 3.5.0-39 …

The newer kernels were indeed installed as expected, and as Ubuntu usually does, the autoremove cleanup had removed the older kernels, including the ‘current’ one, 3.5.0-17. Cleanup had also removed /lib/modules for 3.5.0-17, and consequently there were no modules on the server for the loaded kernel - why DigitalOcean does not check for this on boot I have no idea.

Removal of the /lib/modules directory prevents things such as iptables from working (as you’d expect), and consequently other services which depend on iptables (fail2ban, among others) fail too.

To test the theory, I started a new DigitalOcean droplet, updated to the latest packages, and removed the 3.5.0-17 kernel via the autoremove function as before. Unsurprisingly, iptables is gone:

$ sudo iptables -L
FATAL: Could not load /lib/modules/3.5.0-17-generic/modules.dep: No such file or directory
iptables v1.4.12: can't initialize iptables table `filter': Table does not exist (do you need to insmod?)

So we’ve lost iptables, and the only modules directory on the VM is for the new kernel (/lib/modules/3.5.0-39-generic), yet we’re still running the old kernel without its modules … what has gone wrong here?
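The mismatch is easy to confirm on the affected droplet by comparing the running kernel with the module trees left on disk:

# Compare the kernel the droplet actually booted with the modules on disk.
uname -r            # reports 3.5.0-17-generic, the hypervisor-supplied kernel
ls /lib/modules     # only 3.5.0-39-generic remains after the autoremove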

Having taken a look online, it appears that DigitalOcean provide their kernels via the hypervisor rather than from your VM. Consequently, even if you’ve updated the machine and removed the old kernel, it will still boot on the kernel they select, even when its modules have been removed from the machine.

I haven’t come across this behaviour before with VMs I’ve used in the past (Hetzner, OVH, Retrosnub), though I am aware that some providers load kernels from the hypervisor - clearly those that do normally ensure an appropriate /lib/modules is imported on boot.

As for DigitalOcean, to update the kernel on your machine you have to select the new kernel on the settings page of their control panel and then reboot from the console. This gets you a newer kernel, but still not the most recent one: the latest for Ubuntu 12.10 I’ve been able to find on their website is 3.5.0-32, rather than the -39 release which appears to have gone into general release.

Overall, it is quite concerning that the usual update process can remove kernel modules while the boot process does nothing to make sure a consistent kernel environment is loaded. This affects the security of the system by preventing iptables from loading, and potentially other services which rely on kernel modules. In addition, current kernels do not appear to be made available in line with the Ubuntu update cycle, with only -32 on offer rather than the latest -39. I think I’ll have to consider a switch away from DigitalOcean at this time to an alternative provider.

Ensuring broken builds fail

February 23, 2013

As a correction to Jenkins configuration for Django projects, it should be noted that the ‘one-build-step’ method previously described there actually fails to detect failed builds correctly, because the final step in the script exits 0.

Edit: A neat addition from @richardwhiuk via github is to use the -xe flags to bash, which ensure that broken builds exit with a non-zero status, as well as giving more verbose output for debugging.

Consequently, a better solution is to use the following multi-step approach:

#!/bin/bash -xe
virtualenv /tmp/$BUILD_TAG
source /tmp/$BUILD_TAG/bin/activate
pip install --use-mirrors -q -r requirements.txt

And then:

#!/bin/bash -xe
source /tmp/$BUILD_TAG/bin/activate
python manage.py jenkins

This will fail correctly when asked. You can then use the Post Build Script plugin to handle cleanup, which runs regardless of whether the build passes or fails, and use it to remove the virtualenv for that build:

#!/bin/bash -xe
rm -rf /tmp/$BUILD_TAG

This correction has also been applied to the prior post.

Clean Django and Jenkins integration

February 01, 2013

When using Django and Jenkins together, something I’ve mentioned in a couple of previous posts, I’ve been bugged by the untidy extra stanzas which get imposed on your code to link the two together.

Why would my production deployment require django-jenkins in INSTALLED_APPS?

Essentially, it shouldn’t. Consequently, there are a few tricks which can reduce the extra code included in production, whilst keeping it present for testing and continuous integration. There are two main areas of code which differ between the two environments.

The INSTALLED_APPS tuple
Adding django-jenkins to the INSTALLED_APPS tuple is required to enable the Jenkins integration. However, like test-related requirements (covered later), this isn’t needed in production. You can isolate the django-jenkins app by wrapping it in an if statement driven by an environment variable:

import os

# Only pull in the CI-specific settings and apps when the JENKINS
# environment variable is set.
JENKINS = bool(os.environ.get('JENKINS', False))
if JENKINS:
    from jenkins_settings import *
    INSTALLED_APPS = INSTALLED_APPS + ('django_jenkins',)

This is in keeping with 12 Factor App principles, and also helps to keep your code cleaner. An alternative to setting a custom JENKINS environment variable would be to consider tying this into the DEBUG environment variable, which one would hope isn’t activated in production!
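For completeness, here’s a sketch of how the JENKINS variable might be set in the Jenkins build step itself, reusing the virtualenv layout from the earlier post; the exact contents of your build step will differ:

#!/bin/bash -xe
# Export the flag so settings.py pulls in the CI-only apps, then run the
# django-jenkins tasks inside the build's virtualenv.
export JENKINS=true
source /tmp/$BUILD_TAG/bin/activate
python manage.py jenkins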

The requirements.txt file

Storing all your requirements in requirements.txt keeps things simple; however, your production deployment is unlikely to require the presence of WebTest, PEP8, etc. This can be solved by splitting the test-only dependencies into a separate requirements file, installed alongside the main one only where it is needed (see the sketch below).
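As a rough illustration, the CI build can then install both files in order; the test_requirements.txt name and its contents are assumptions, and the pip flags are borrowed from the earlier build step:

# Install the main requirements first, then the test-only extras
# (django-jenkins, WebTest, PEP8 and friends) from a separate file.
pip install --use-mirrors -q -r requirements.txt
pip install --use-mirrors -q -r test_requirements.txt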


There you have your test-related dependencies clearly isolated from the main build code, and with relatively little extra you can include them in your Jenkins/CI build by installing them after the main requirements. A great feature for requirements.txt would be the ability to read environment variables and install requirements based on those - I’ve raised Issue #785 on the Pip project in the hope of garnering some opinion/support for this feature.