Changing a website using the developer console

If you need to quickly change a website, you can combine CSS/XPath selectors with a function that hides or removes DOM nodes. I had to work my way through a long list of similar items, which was really hard to do by just looking at the page.

For example, you can delete all links you’re not interested in with a single combination of selector and function:

$x('//li/a[contains(., "not-interesting")]').map(function(n) { n.parentNode.removeChild(n) })

If you’ve made a mistake, reload the website.

(Locally) Testing ansible deployments

I’ve always felt my playbooks were undertested. I knew about one possible solution, spinning up fresh OpenStack instances with the Ansible nova module, but that felt too complex to be worth implementing. Now I’ve found a quicker way to test playbooks by using Docker.

In principle, all my test does is the following (a rough sketch in shell follows the list):

  1. create a docker container
  2. create a copy of the current ansible playbook in a temporary directory and mount it as a volume
  3. inside the docker container, run the playbook
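Something along these lines captures the idea. This is a minimal sketch rather than the actual repository code: the image name and the playbook filename are placeholders, and it assumes an image with Ansible preinstalled.

#!/bin/bash
set -e

# step 2: copy the current playbook into a temporary directory
TMPDIR=$(mktemp -d)
cp -r . "$TMPDIR/playbook"

# steps 1 and 3: start a throwaway container with the copy mounted as a volume
# and run the playbook against the container itself via the local connection
docker run --rm -v "$TMPDIR/playbook:/playbook" some/ansible-base-image \
    ansible-playbook -i "localhost," -c local /playbook/site.yml

rm -rf "$TMPDIR"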

This is obviously not perfect, since:

  • running a playbook locally vs. connecting via SSH can behave differently and is not quite the same thing to test
  • it can become resource intensive if you want to test different scenarios, each represented by its own Docker image.

There are probably more caveats, but for my small-scale needs it has been a workable solution so far.

Find the code on GitHub if you’d like to have a look. Improvements welcome!


(lxml) XPath matching against nodes with unprintable characters

Sometimes you want to clean up HTML by removing tags that contain only unprintable characters (whitespace, non-breaking spaces, etc.). Encoding these back and forth can result in weird characters when the HTML is rendered. Anyway, here is a snippet you might find useful:

def clean_empty_tags(node):
    """
    Find all p tags whose only content is a non-breaking space.
    They come out broken when re-encoded and we don't need them anyway.
    """
    for empty in node.xpath("//p[.='\xa0']"):
        empty.getparent().remove(empty)

Patching tzdata and What I Learned From it

The problem

I never paid much thought to dates, times and timezones until I moved to Australia. I should have.

As it happens, if you’re running a server on the east coast of Australia you’re facing three issues:

  • Queensland does not have daylight saving (yay)
  • Tasmania, New South Wales and Victoria have daylight saving
  • but all of them are in the same timezone.

That means, if you run date on your Linux box you usually get something like this:

 Tue Jun 5 14:04:27 EST 2012

Looks okay, doesn’t it? Not if you start a Zope server:

>>> import DateTime
>>> DateTime.DateTime()
DateTime('2012/06/05 14:06:49.964253 US/Eastern')

Wait, what? I’m in Australia, not in the US. This is not New York; it’s sunny Brisbane!

The solution…

Apparently Australians also use AEST instead of EST as the abbreviated form of the timezone. Ted Percival, facing the same problem, created a patch which introduces these abbreviations into the tzdata package. Getting this patch accepted upstream just doesn’t seem to happen, for whatever reason.
With this patch I had to compile a new tzdata RPM for CentOS for the Mooball servers. This was a very manual and error-prone process, and because I personally use Ubuntu I had to catch up with the documentation each time I had to patch and compile a new source RPM.

This led me to create a package with a Makefile which does three things (a rough shell equivalent is sketched below the list):

  1. pulls down the source RPM,
  2. applies the supplied patches and
  3. builds a new RPM.
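For readers who just want the gist without the Makefile, the steps boil down to something like the following. This is a hedged sketch: the patch file name and the ~/rpmbuild layout are assumptions, and the spec file still has to be edited by hand to reference the patch.

# 1. pull down the source rpm (yumdownloader comes from yum-utils)
yumdownloader --source tzdata
# unpack spec and sources into ~/rpmbuild (older CentOS releases use /usr/src/redhat)
rpm -ivh tzdata-*.src.rpm
# 2. add the patch and reference it in the spec file (Patch:/%patch lines)
cp australian-abbreviations.patch ~/rpmbuild/SOURCES/
# 3. build the new rpm
rpmbuild -ba ~/rpmbuild/SPECS/tzdata.spec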

With this package registered in Jenkins, you always have an updated RPM for each CentOS release.
Copying the zone file, e.g. Australia/Brisbane, to /etc/localtime resulted in this:

 Tue Jun 5 14:04:27 AEST 2012

and Zope reports:

>>> import DateTime
>>> DateTime.DateTime()
DateTime('2012/06/05 14:06:49.964253 GMT+10')

… is still only a workaround

The problem seems to be application specific. The man page for tzname suggests that applications need to check the timezone and the tzname tuple in order to correctly determine the time with daylight saving.

Zope seems to have workarounds added to the DateTime package. Setting a TZ environment variable like:

TZ="AEST" bin/instance fg

will set the correct timezone. Unfortunately it means you need to adjust the environment variable whenever daylight saving starts or ends (in our case for Sydney or Melbourne) and restart your application. Using a patched tzdata helps to avoid restarting applications.

What I learned

So what I got out of this is:

  • Be more date/time/timezone aware when building applications
  • How to patch source RPMs
  • A nice package which I can pop into Jenkins to compile a custom patched RPM
  • You can adjust the time/date with a simple environment variable, e.g. TZ="AEST" bin/instance fg
  • Some problems look like an easy fix, but reveal a more complex situation underneath

Update

Talking to Dylan Jay from Pretaweb, it turns out there is more to the environment variable (I should have read the man page of tzname more carefully). You can set the TZ variable to more than just AEST: you can also specify the beginning and end of daylight saving, e.g.:

TZ="AEST-10AEDT-11,M10.4.2,M4.4.3" bin/instance fg

The format is extensively documented in the man page of tzname.
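As a quick sanity check (my own sketch, not part of the original setup) you can feed the same value straight to date; outside the daylight-saving window it reports AEST, inside it AEDT:

TZ="AEST-10AEDT-11,M10.4.2,M4.4.3" date
# in early June (no daylight saving) this prints something like:
# Tue Jun  5 14:04:27 AEST 2012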

If acquisition gets in the way

I witnessed a very strange error today.

The problem

I had a custom content-type which was partly indexed. When viewing the content type, parts of the edit form showed raw HTML code instead of widgets.

What was going on?

The solution

I hunted around for a while and eventually checked the contents of the index. Strangely, it had a whole page indexed. Investigating further, it turned out to be another object in the portal with the same id as the attribute on the content-type. The error only happens if the attribute is missing on the content-type and an object higher up in the hierarchy has the same id.

So if you encounter a problem like this, check whether portal_catalog might be grabbing a different object for indexing that has the same name as your attribute.

If cookies get in the way…

I recently hit a strange error while browsing to my own website. The nginx web server returned a 400, which only happened with my main browser: Mozilla Firefox.

The error message I saw was "400 Bad Request", although I knew that my site was fine.

After investigating, I figured out that it was a cookie sent by my browser, apparently set by Kupu during editing. If you see this weird behaviour on your site as well, check the cookies. The cookie that caused my problem was named initial_state. Delete it and you should be able to browse to wherever you first ran into this error.
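A quick way to confirm that a cookie is to blame (my suggestion, not something from the original post) is to compare a clean request with one that replays the browser’s cookie header; the cookie value below is a placeholder you would copy from the browser:

# a clean request without cookies should succeed ...
curl -I http://www.example.org/
# ... while replaying the suspicious cookie should reproduce the 400
curl -I -H "Cookie: initial_state=VALUE_FROM_BROWSER" http://www.example.org/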

How to set up a browsable source code repository in minutes

Browsing source code repositories (git, Subversion, etc.) with just the command line client is very cumbersome. Fortunately there are web-based repository browsers available which, once set up, let you browse the source code in a web browser. The company didn’t have one, I needed one badly, so I went looking for solutions.


What I needed was actually very simple: a read-only checkout of a selected set of repositories, constantly updated and therefore browsable.

I first tried tailor and … failed. The documentation is not very extensive and if you try to solve an authentication issue with three components involved (ssh, tailor and git) you’re burning time.

I thought a better way might be to use git-svn. I wrote a little bash script which clones a list of source code repositories and keeps them updated. Because the whole thing took me about an hour to set up, I thought it would be prudent to share the little piece of code with the outside world. Maybe someone needs something similar:

#!/bin/bash
# the directory in which all repositories are mirrored
ROOT=$HOME/repos
SCRIPTROOT=$HOME/bin
# repository urls
SVNROOT=svn://svn.urltoyourrepository.net/repo
GITROOT=ssh://git.urltoyourrepository.net/repo
# newline-separated lists with the names of all repositories
GITREPOSITORIES=$(cat "$SCRIPTROOT/gitrepositories")
REPOSITORIES=$(cat "$SCRIPTROOT/svnrepositories")
AUTHORSFILE=$SCRIPTROOT/svnauthors

rebase_only() {
    if test -d "$1"; then
        cd "$1"
        echo "Rebase in" "$(pwd)"
        case $2 in
            svn)
                git svn rebase
                ;;
            *)
                git pull --rebase
                ;;
        esac
        cd "$ROOT"
    fi
}

cd "$ROOT"

for repo in $REPOSITORIES; do
    rebase_only "$repo" svn
    if ! test -d "$repo"; then
        echo "creating new repo $repo"
        git svn clone -A "$AUTHORSFILE" "$SVNROOT/$repo"
    fi
done

for repo in $GITREPOSITORIES; do
    # strip the path and the trailing .git to get the local directory name
    tmp=${repo##*/}
    reponame=${tmp%.git}
    rebase_only "$reponame" git
    if ! test -d "$reponame"; then
        echo "creating new git repo $repo"
        git clone "$GITROOT/$repo"
    fi
done
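The script only updates the mirrors when it runs, so it has to be triggered periodically. The post doesn’t show how; a cron entry along these lines would do (the script path is an assumption):

# hypothetical crontab entry: refresh all mirrors every 15 minutes
*/15 * * * * $HOME/bin/update-repos.sh >/dev/null 2>&1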

Vimperator – a Firefox add-on


For my daily text editing and programming work I use vim: a great editor, suitable for almost every purpose.

There is now a Firefox add-on which changes the browser interface to behave like vim. It advertises that you can even throw away your mouse. At first I was curious whether that would even work, but it does and I’m very happy with it, although I still use my mouse for browsing. It does save time not having to touch the mouse in quite a few cases. A few key features I use every day:

Following links

You press the ‘f‘ key and Vimperator hints all links on the current webpage. Now you press either the first letters of the link label or the number associated with the link.

Navigating on a website

You can just use the normal arrow keys for browsing, but there is more. As usual you can use ‘j‘ and ‘k‘ for scrolling down and up, the space bar for jumping a page down, ‘gg‘ for jumping to the top, and ‘G‘ for jumping to the bottom.

Text editing

Remember editing text areas without your favourite editor? Those times are over: press CTRL+i in insert mode (you’re automatically in insert mode when typing into an input field or text area) and Vimperator fires up vim. Very handy for editing large amounts of text in textareas.

Opening URLs

Just press ‘o‘ to open a new URL, or ‘O‘ to edit the current URL. You can open the URL in a new tab by using ‘t‘ or ‘T‘ instead. You can use yank and paste as well: pressing ‘y‘ on an open website yanks its URL, and if you already have a URL in the buffer, pressing ‘p‘ opens it (like a middle mouse click). Very handy.

You can also use Tab to complete commands or URLs. Say you want to open the website you visited yesterday but only remember a few letters: you type ‘o‘, enter ‘foo‘ and press Tab. Vimperator shows a list of URLs matching your string. You can now Tab to the match and press Enter to open the URL.

Navigating between tabs … err… buffers 😉

Jumping between tabs is like jumping between buffers in vim. Use CTRL+n and CTRL+p to jump to the next and previous tab. It’s similar to jumping through the history of visited pages: use CTRL+o to go back and CTRL+i to go forward in the history.

Those are the commands I use almost every day for browsing. There is support for more features like macros, quickmarks and so forth. So if you use vim every day, give it a go. IMHO the browsing speed you gain is worth it.

Before I forget: in case you need help with any of the features, use ‘:help‘ as usual to browse the online help.

The skipper and the fish

I watched Trawlermen on SBS yesterday. The documentary is about the work of a number of trawler crews based in Peterhead, Scotland. I took away two quotes from the documentary which I found really interesting. They can probably be applied to any other business, including writing software.

The skipper said: “Without my crew I’m nothing, because I can’t catch the fish all by myself.” The crew in turn said: “We have to trust the decisions made by the skipper. He leads us to the best fishing grounds, which in turn bring in the most money for the fish.”

So, to all the people out there who think they can always do it better on their own: think about this documentary.

Bad News Is Good News


Playing on hubdub.com is fun.

I play on hubdub.com from time to time. It’s a fun online prediction-market game. People create questions about upcoming events and you can bet on how each event will actually turn out.

For example: who will be the new American president, Obama or McCain? All the news floating around about who will make it pushes the odds between Obama and McCain to various percentages. Say you buy in at a 12% probability that Obama makes it while everyone else bets on McCain. The more likely an Obama win becomes, the more money you make if you sell your shares, and if you hold until the question is settled you earn the most.

But don’t forget: It’s all play money 😉