Changing a website using the developer console

If you need to change a website quickly, you can combine XPath (or CSS) selectors with a small function to hide or remove DOM nodes right in the browser’s developer console. I recently had to work through a long list of similar items that was really hard to scan by eye, and this trick saved me.

For example, you can delete all links you’re not interested in with a single selector-plus-function combination:

$x('//li/a[contains(., "not-interesting")]').map(function(n) { n.parentNode.removeChild(n) })

If you’ve made a mistake, reload the website.

(Locally) Testing ansible deployments

I’ve always felt my playbooks were undertested. I knew of one possible solution, spinning up fresh OpenStack instances with the Ansible nova module, but that seemed too complex to be worth implementing. Now I’ve found a quicker way to test playbooks by using Docker.

In principle, all my test does is:

  1. create a Docker container
  2. create a copy of the current Ansible playbook in a temporary directory and mount it as a volume
  3. run the playbook inside the Docker container
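
The steps above can be sketched as a few shell commands; the image name (my-ansible-image) and the playbook path are placeholders for your own setup, not part of the original test code:

```shell
# copy the current playbook into a temporary directory
TMP=$(mktemp -d)
cp -r ./playbook/. "$TMP" 2>/dev/null || true  # ./playbook is a placeholder path

if command -v docker >/dev/null 2>&1; then
    # "my-ansible-image" is a hypothetical image with ansible installed;
    # the playbook runs locally inside the container, not over ssh
    docker run --rm -v "$TMP":/playbook my-ansible-image \
        ansible-playbook -i localhost, -c local /playbook/site.yml
fi
```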

This is obviously not perfect, since:

  • running a playbook locally vs. connecting via SSH can behave differently, so it is not quite the same thing you are testing
  • it can become resource intensive if you want to test different scenarios, each represented by its own Docker image.

There are probably more caveats, but for my small-scale needs it has been a workable solution so far.

Find the code on GitHub if you’d like to have a look. Improvements welcome!


(lxml) XPath matching against nodes with unprintable characters

Sometimes you want to clean up HTML by removing tags which contain only unprintable characters (whitespace, non-breaking spaces, etc.). Encoding such markup back and forth can also produce weird characters when the HTML is rendered. Anyway, here is a snippet you might find useful:

def clean_empty_tags(node):
    """Finds all tags with only whitespace in them. They come out
    broken and we won't need them anyway."""
    for empty in node.xpath("//p[.='\xa0']"):
        # detach the empty paragraph from the tree
        empty.getparent().remove(empty)

Patching tzdata and What I Learned From it

The problem

I never paid much thought to dates, times and timezones until I moved to Australia. I should have.

As it happens, if you’re running a server on the east coast of Australia, you face three facts:

  • Queensland does not have daylight saving (yay)
  • Tasmania, New South Wales and Victoria have daylight saving
  • but, all of them are in the same timezone.

That means, if you run date on your Linux box, you usually get something like this:

 Tue Jun 5 14:04:27 EST 2012

Looks okay, doesn’t it? Not once you start a Zope server:

>>> import DateTime
>>> DateTime.DateTime()
DateTime('2012/06/05 14:06:49.964253 US/Eastern')

Wait, what? I’m in Australia, not the US. This is not New York; this is sunny Brisbane!

The solution…

Apparently Australians also use AEST instead of EST as an abbreviated form of the timezone. Ted Percival, facing the same problem, created a patch which introduces these abbreviations into the tzdata package. For whatever reason, simply pushing this patch upstream doesn’t seem to happen.
With this patch, I also had to compile a new tzdata RPM for CentOS for Mooball’s servers. This was a very manual and error-prone process, and because I personally use Ubuntu, I had to catch up with the documentation each time I had to compile and patch a new source RPM.

This led me to create a package with a Makefile which:

  1. pulls down the source RPM,
  2. applies the supplied patches and
  3. builds a new RPM.
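
A minimal sketch of what such a Makefile could look like; the tool invocations assume a CentOS build host with yum-utils and rpm-build installed, and the actual package may differ:

```makefile
all: build

fetch:
	yumdownloader --source tzdata

patch: fetch
	rpm -ivh tzdata-*.src.rpm
	# apply the AEST/AEDT patch to the unpacked spec/sources here

build: patch
	rpmbuild -ba ~/rpmbuild/SPECS/tzdata.spec
```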

With this package registered in Jenkins, you always have an up-to-date RPM for each CentOS release.
Copying the zone file (e.g. Australia/Brisbane) to /etc/localtime resulted in this:

 Tue Jun 5 14:04:27 AEST 2012

and Zope reports:

>>> import DateTime
>>> DateTime.DateTime()
DateTime('2012/06/05 14:06:49.964253 GMT+10')

… is still only a workaround

The problem seems to be application specific. The man page for tzname suggests that applications need to check the timezone and the tzname tuple in order to correctly determine the time during daylight saving.

Zope seems to have workarounds added to the DateTime package. Setting a TZ environment variable like:

TZ="AEST" bin/instance fg

will set the correct timezone. Unfortunately this means you need to adjust the timezone environment variable when daylight saving starts or ends (in our case for Sydney or Melbourne) and restart your application. Using a patched tzdata helps to avoid restarting applications.
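
You can see the effect of the TZ variable without restarting anything by setting it for a single command. AEST-10 is a plain POSIX timezone specification (a zone called AEST, 10 hours ahead of UTC, no daylight saving rule), so it works even with an unpatched tzdata:

```shell
# zone abbreviation AEST, UTC+10, no daylight saving rule
TZ="AEST-10" date +%Z   # prints AEST
```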

What I learned

So here is what I got out of this:

  • Be more date/time/timezone aware when building applications
  • How to patch source rpms
  • A nice package which I can pop into Jenkins to compile a custom patched RPM.
  • You can adjust the time/date with a simple environment variable, e.g. TZ="AEST" bin/instance fg
  • Some problems appear as an easy fix, but reveal a more complex situation underneath


Talking to Dylan Jay from Pretaweb, I learned there is more to the environment variable (I should have read the man page of tzname more carefully). You can set the TZ variable to more than just AEST: you can also specify the beginning and end of daylight saving, e.g.:

TZ="AEST-10AEDT-11,M10.4.2,M4.4.3" bin/instance fg

The format is extensively documented in the man page of tzname.
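
With GNU date you can check that such a rule behaves as expected by formatting fixed timestamps; the two epochs below fall in January and June 2012:

```shell
# the daylight-saving rule from above
DST_TZ="AEST-10AEDT-11,M10.4.2,M4.4.3"
TZ="$DST_TZ" date -d @1326000000 +%Z   # January 2012: prints AEDT
TZ="$DST_TZ" date -d @1338876267 +%Z   # June 2012: prints AEST
```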

If acquisition comes in the way

I’ve witnessed a very strange error today.

The problem

I had a custom content-type which was partly indexed. When viewing the content type, parts of the edit form showed raw HTML code instead of widgets:

What was going on?

The solution

I hunted around for a while and eventually checked the contents of the index. Strangely, it had a whole page indexed. After investigating further: it was another object in the portal with the same id as the attribute on the content-type. The error only happens if the attribute is missing on the content-type and an object in the hierarchy above has the same id.

So, if you encounter a problem like this, check whether portal_catalog might grab a different object for indexing that has the same name as your attribute.

If cookies come in the way…

I recently hit a really strange error when browsing to my web site: the nginx web server returned a 400, which only happened in my main browser, Mozilla Firefox.

400 bad request

That was the error message I saw, although I knew my site was fine.

After investigating, I figured out that it was a cookie which was sent by my browser and apparently set by Kupu during editing. If you see this weird behaviour on your site as well, check the cookies. The cookie which led to my problem was named initial_state. Delete it and you should be able to browse to wherever you ran into this error first.

How to setup a browseable source code repository in minutes.

Browsing source code repositories (such as git, Subversion, etc.) just by using the command line client is very cumbersome. Fortunately, there are web-based repository browsers available which, once set up, allow users to browse the source code in their web browser. The company didn’t have one and I needed one badly, so I went looking for solutions.

Bella and Thomas

What I needed was actually very simple: read-only checkouts of a selected set of repositories, constantly updated and therefore browsable.

I first tried tailor and … failed. The documentation is not very extensive, and if you try to solve an authentication issue with three components involved (SSH, tailor and git), you’re burning time.

I thought a better way might be to use git-svn. I wrote a little bash script which clones a list of source code repositories and keeps them updated. Because the whole thing took me about an hour to set up, I thought it would be prudent to share this little piece of code with the outside world. Maybe someone needs something similar:

#!/bin/bash
# NOTE: the values below are placeholders -- adjust them to your environment
SCRIPTROOT=$(cd "$(dirname "$0")" && pwd)
# the directory in which all repositories are mirrored
ROOT=/var/lib/mirror
# repository urls
SVNROOT=https://svn.example.com/svn
GITROOT=git://git.example.com
AUTHORSFILE=$SCRIPTROOT/authors.txt
# list separated by newline with the names of all repositories
REPOSITORIES=`cat $SCRIPTROOT/svnrepositories`
GITREPOSITORIES=`cat $SCRIPTROOT/gitrepositories`

rebase_only() {
    if test -d $1; then
        cd $1
        echo "Rebase in " `pwd`
        case $2 in
            svn) git svn rebase;;
            git) git pull --rebase;;
        esac
        cd $ROOT
    fi
}

cd $ROOT

for repo in $REPOSITORIES; do
    rebase_only $repo "svn"
    if ! test -d $repo; then
        echo "creating new repo" $repo
        git svn clone -A$AUTHORSFILE $SVNROOT/$repo
    fi
done

for repo in $GITREPOSITORIES; do
    rebase_only $repo "git"
    if ! test -d $repo; then
        echo "creating new git repo" $repo
        git clone $GITROOT/$repo
    fi
done
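
To keep the mirrors constantly updated, the script can be run from cron; a hypothetical crontab entry (the path is a placeholder):

```crontab
# update all mirrored repositories every 10 minutes
*/10 * * * * /opt/mirror/update-repos.sh
```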