Aeropress coffee maker

I recently met up with a friend for a beer and we got talking about the Aeropress coffee maker, which he raved about (and has featured on his blog). My old De'Longhi drip filter machine has served me well for the last few years, but I tend to waste a lot of coffee by using it to brew for one. I've tried Moka pots in the past, but often end up with quite a burnt-tasting brew.

So, I bought the Aeropress and have put it into action. I must admit I've been really impressed so far: smooth, great-tasting coffee without the mess of a filter machine, less wastage, and it's much quicker! Money well spent.

DigitalOcean and kernel inconsistency

When I logged into my DigitalOcean VM today, I noticed the kernel advertised in the MOTD was GNU/Linux 3.5.0-17-generic x86_64, which came as a bit of a surprise given I'd updated to 3.5.0-39 only a matter of days before. The other slightly concerning fact was that the autoremove function had definitely uninstalled that kernel from the machine at the same time as I'd updated to 3.5.0-39...

The newer kernels were indeed installed as expected, and, as Ubuntu usually does, the autoremove cleanup had removed the older kernels, including the 'current' 3.5.0-17. The cleanup had also removed the /lib/modules directory for 3.5.0-17, so there were no modules on the server for the loaded kernel - why DigitalOcean does not check for this on boot I have no idea.
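To see the inconsistency for yourself, compare the kernel the VM is running with what is actually installed on disk. A quick sketch (the dpkg pattern assumes Ubuntu's usual linux-image-* package naming):

# Kernel the VM is actually running
uname -r
# Module trees present on the filesystem
ls /lib/modules
# Kernel image packages still installed
dpkg -l 'linux-image-*' | grep '^ii'

In my case, uname -r reported 3.5.0-17-generic while /lib/modules only contained directories for the newer kernels.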

Removal of the /lib/modules directory prevents things such as iptables from working (as you'd expect), and consequently other services which depend on iptables (fail2ban among others) fail too.

To test the theory, I started a new DigitalOcean droplet, updated to the latest packages, and let the autoremove function remove the 3.5.0-17 kernel as before. Unsurprisingly, iptables no longer works:

$ sudo iptables -L
FATAL: Could not load /lib/modules/3.5.0-17-generic/modules.dep: No such file or directory
iptables v1.4.12: can't initialize iptables table `filter': Table does not exist (do you need to insmod?)

So we've lost iptables, yet the modules directory for the new kernel exists (i.e. /lib/modules/3.5.0-39-generic), and we're running an old kernel without its modules on the VM... what has gone wrong here?

Having taken a look online, it appears DigitalOcean provide the kernel via the hypervisor rather than from your VM, so even if you've updated the machine and removed the old kernel, it will still boot on the one they select - even when the corresponding modules have been removed from the filesystem.

I haven't come across this behaviour before with VMs I've used in the past (Hetzner, OVH, Retrosnub), though I am aware that some providers load kernels from the hypervisor - clearly those that do normally ensure an appropriate /lib/modules is imported on boot.

As for DO, you can update the kernel on your machine via their control panel: select the new kernel on the settings page, then reboot from the console. This gets you a newer kernel, but still not the most recent one - the latest for Ubuntu 12.10 I've been able to find on their site is 3.5.0-32, rather than -39, which appears to have gone into general release.

Overall, it is quite concerning that the usual update process can remove kernel modules, and the boot process does nothing to ensure a consistent kernel environment is loaded. This affects the security of the system by preventing iptables from loading, and potentially other services which rely on kernel modules. In addition, current kernels do not always appear to be available in line with the Ubuntu update cycle - only -32 being on offer, and not the latest -39. I think I'll have to consider a switch away from DigitalOcean to an alternative provider at this time.
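In the meantime, one possible stop-gap (my own workaround, not anything DigitalOcean recommend) is to stop apt from removing the package that owns the running kernel's modules, so autoremove can't strip /lib/modules out from under you:

# Hold the package providing the currently running kernel's modules,
# e.g. linux-image-3.5.0-17-generic, so autoremove won't touch it
sudo apt-mark hold linux-image-$(uname -r)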

Ensuring broken builds fail

As a correction to Jenkins configuration for Django projects, it should be noted that the 'one-build-step' method previously described on that page fails to correctly detect broken builds, because the final step in the script exits 0.

Edit: A neat addition from @richardwhiuk via GitHub is to use the -xe flags to bash, which ensures that broken builds exit with a non-zero status (-e), as well as giving more verbose output for debugging (-x).
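To illustrate why this matters: without -e, a script's exit status is simply that of its last command, so a failing test run followed by a successful final step still reports success to Jenkins. A contrived sketch (the commands are stand-ins, not the real build steps):

#!/bin/bash
false   # stand-in for a failing 'python manage.py jenkins'
true    # stand-in for a final cleanup step, which exits 0
# Without -e the script exits 0 and the build is marked as passed;
# with -e the script aborts at 'false' and the build fails as it should.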

Consequently, a better solution is to use the following multi-step approach:

#!/bin/bash -xe
virtualenv /tmp/$BUILD_TAG
source /tmp/$BUILD_TAG/bin/activate
pip install --use-mirrors -q -r requirements.txt
deactivate

And then:

#!/bin/bash -xe
source /tmp/$BUILD_TAG/bin/activate
python manage.py jenkins

This will fail correctly when a build breaks. You can then use the Post Build Script plugin to handle cleanup; it runs regardless of whether the build passes or fails, and can be used to remove the virtualenv for that build:

#!/bin/bash -xe
rm -rf /tmp/$BUILD_TAG

This correction has also been applied to the prior post.

Clean Django and Jenkins integration

When using Django and Jenkins together, something I've mentioned in the past here and here, I've been bugged by the untidy extra stanzas which get imposed on your code to link the two together.

Why would my production deployment require django-jenkins in INSTALLED_APPS?

Essentially, it shouldn't. There are a few tricks which can reduce the extra code included in production, whilst keeping it present for testing and continuous integration. There are two main areas where what's needed differs between the two environments.

The INSTALLED_APPS tuple

Adding django-jenkins to the INSTALLED_APPS tuple is required to enable the manage.py commands for the Jenkins integration. However, like the test-related requirements (covered later), it isn't needed in production. You can isolate the django-jenkins app by wrapping its inclusion in a check on an environment variable:

import os

JENKINS = bool(os.environ.get('JENKINS', False))
if JENKINS:
    from jenkins_settings import *
    INSTALLED_APPS = INSTALLED_APPS + ('django_jenkins',)

This is in keeping with 12 Factor App principles, and also helps to keep your code cleaner. An alternative to setting a custom JENKINS environment variable would be to consider tying this into the DEBUG environment variable, which one would hope isn't activated in production!
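For this to take effect in CI, the JENKINS variable needs to be set in the environment of the Jenkins build step itself. A minimal sketch, assuming the test build step from the previous section, might be:

#!/bin/bash -xe
source /tmp/$BUILD_TAG/bin/activate
# Tell settings.py to pull in the django-jenkins configuration
export JENKINS=1
python manage.py jenkins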

The requirements.txt file

Storing all your requirements in requirements.txt keeps things simple; however, your production deployment is unlikely to require the presence of WebTest, pep8, etc. This can be solved with a simple test-requirements.txt:

WebTest==1.4.3
pep8==1.4.1

There you have your test-related dependencies clearly isolated from the main build code, and with relatively little extra effort you can include them in your Jenkins/CI build by installing them after the main requirements. A great feature for requirements.txt would be the ability to read environment variables and install requirements based on those - I've raised Issue #785 on the Pip project in the hope of garnering some opinion/support for this feature.
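As a rough illustration, installing the test requirements can simply be appended to the first build step shown earlier:

#!/bin/bash -xe
virtualenv /tmp/$BUILD_TAG
source /tmp/$BUILD_TAG/bin/activate
pip install --use-mirrors -q -r requirements.txt
# Test-only dependencies, kept out of the production requirements file
pip install --use-mirrors -q -r test-requirements.txt
deactivate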

Testing Jenkins SSH login

I use Jenkins to handle continuous integration for my own projects, coupled with Bitbucket for private repos (most recently mentioned here). I found an issue with Bitbucket occasionally failing on SSH key logins, and wanted to check that Jenkins was able to successfully authenticate.

This could be done with shell access on the Jenkins server, running commands as the Jenkins user. However, it can also be achieved through the Jenkins script console, meaning you can quickly run the check from within your Jenkins browser tab. Essentially, navigate to the console and execute:

println new ProcessBuilder('sh','-c','ssh -T hg@bitbucket.org').redirectErrorStream(true).start().text

This line of Groovy starts a new process, using 'sh' as the shell, and executes the 'ssh -T user@server' command. It then collects the output (stderr included, thanks to redirectErrorStream) as text, which is printed by println. This gives you the output of ssh -T, which, if the login was successful, looks something like this:

conq: logged in as username.

This confirms the login was successful.
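If you'd rather use the shell-access route mentioned above, an equivalent check from the server itself (assuming the Jenkins service runs as a user named 'jenkins') would be something like:

# Run the test as the jenkins user so its own ~/.ssh keys are used
sudo -u jenkins -H ssh -T hg@bitbucket.org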