“Start request repeated too quickly”

If one of your units is not running any more and you find this in your journal: 


● getmail.service - getmail
Loaded: loaded (/home/rjoost/.config/systemd/user/getmail.service; enabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Thu 2018-11-29 18:42:17 AEST; 3s ago
Process: 20142 ExecStart=/usr/bin/getmail --idle=INBOX (code=exited, status=0/SUCCESS)
Main PID: 20142 (code=exited, status=0/SUCCESS)

Nov 29 18:42:17 bali systemd[3109]: getmail.service: Service hold-off time over, scheduling restart.
Nov 29 18:42:17 bali systemd[3109]: getmail.service: Scheduled restart job, restart counter is at 5.
Nov 29 18:42:17 bali systemd[3109]: Stopped getmail.
Nov 29 18:42:17 bali systemd[3109]: getmail.service: Start request repeated too quickly.
Nov 29 18:42:17 bali systemd[3109]: getmail.service: Failed with result 'start-limit-hit'.
Nov 29 18:42:17 bali systemd[3109]: Failed to start getmail.

it might be because your command really does exit immediately; you may want to run the command manually to verify whether that's the case. Also check that you indeed have the unit configured with

Restart=always

in its [Service] section.
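For reference, a minimal user unit with restarting enabled might look like this. This is only a sketch based on the getmail example above; the RestartSec and rate-limit values are assumptions you would tune for your own service:

```ini
[Unit]
Description=getmail
# allow at most 5 starts within 10 minutes before giving up
StartLimitIntervalSec=600
StartLimitBurst=5

[Service]
ExecStart=/usr/bin/getmail --idle=INBOX
Restart=always
# pause between restarts so a quickly-exiting command
# does not hit the start rate limit immediately
RestartSec=30

[Install]
WantedBy=default.target
```

After editing the unit file, run systemctl --user daemon-reload so systemd picks up the change.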

If you're sure it really does not restart too quickly, you can reset the failure counter with:

$ systemctl --user reset-failed getmail.service

(drop --user for system units).

Further information can be found in the man pages systemd.unit(5) and systemd.service(5).


Best practices for diffing two online MySQL databases

We had to move our internal Red Hat Beaker instance to a new MySQL database version. We made the jump with a five-minute downtime of Beaker. One thing we wanted to make sure of was not to lose any data.

Setup and Motivation

A database dump is about 135 GB compressed with gzip. The main database was being served by a MySQL 5.1 master/slave setup.

We discussed two possible strategies for switching to MariaDB: either a dump and load, which meant a downtime of 16 hours, or adding a MariaDB slave that would be promoted to the new master. We chose the latter: a new MariaDB 10.2 slave promoted to be the new master.

We wanted to make sure that both slaves, the MySQL 5.1 and the new MariaDB 10.2, were in sync, and that promoting the MariaDB 10.2 slave to master would not lose any data. To verify data consistency across the slaves, we diffed both databases.

Diffing

I went through a few iterations of dumping and diffing. Here is what worked best.

Ignore mysql-utils if you only have read access

MySQL comes with a bunch of utilities, among them two tools to compare databases: mysqldbcompare and mysqldiff. I tried mysqldiff first but, after studying the source code, decided against using it. The reason is that you have to grant it additional write privileges on the databases, which are arguably small, but still more than I was comfortable with.

Use the “at” utility to schedule mysqldump

The best way I found to kick off the database dumps at the same time is to use at. Scheduling a mysqldump manually on the two databases introduces far too many noisy differences. It goes without saying that the database hosts' clocks need to be synchronized (e.g. by chronyd).
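As a sketch of the scheduling, you could put the dump command into a small script and hand the same script to at on both hosts. The /tmp paths and the 02:00 schedule here are made up; the mysqldump invocation matches the one used later in this post:

```shell
# write the dump command into a script (paths and database name are
# assumptions taken from the examples in this post)
cat > /tmp/dump-beaker.sh <<'EOF'
#!/bin/sh
mysqldump --single-transaction --order-by-primary --skip-extended-insert beaker \
  | gzip > /tmp/mysql.sql.gz
EOF
chmod +x /tmp/dump-beaker.sh

# schedule it for the same minute on both hosts; at(1) may not be
# installed everywhere, hence the guard
echo /tmp/dump-beaker.sh | at 02:00 2>/dev/null || echo "at not available"
```

Running the same script via at on both hosts, rather than typing the commands by hand, keeps the snapshots as close together in time as the clocks allow.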

Dump the entire database at once

The mysqldump tool can dump each table separately, but that is not what you want. The default options, which are geared towards a dump and load, are not what you want either.

Instead I dumped MySQL with:

mysqldump --single-transaction --order-by-primary --skip-extended-insert beaker | gzip > mysql.sql.gz;

while for MariaDB I used:

mysqldump --order-by-primary --skip-extended-insert beaker | gzip > mariadb.sql.gz;

The options used aid the later diff:

  • --order-by-primary orders every dumped table row consistently by its primary key
  • --single-transaction keeps a transaction open until the dump has finished, so you get a comparable database snapshot across the two databases from the same starting point
  • --skip-extended-insert emits an INSERT statement for each row; otherwise rows are collapsed into multi-row insert statements, which are harder to compare

Compression (GZip) and shell pipes are your friend

With big databases, like the Beaker production database, you want to avoid writing anything to disk uncompressed. Linux ships gzip-aware wrappers for cat (zcat), less (zless) and so on, which help with building shell pipes to process the data.

Cut up the dump

Once you have both database dumps, cut them up into their separate tables. The purpose of this is not to sift through the dumps by eye, but to cater for diff: the diff tool loads the entire file into memory, and with large database dumps it quickly runs out of memory:

diff mysql-beaker.sql.gz mariadb-replica-beaker.sql.gz
diff: memory exhausted

While I did find a tool that can diff two such large files, a unified diff output is easier to compare data with.
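The cutting itself can be sketched with awk, splitting on the "-- Table structure for table" header lines that mysqldump writes before each table. The toy two-table dump below stands in for the real 135 GB one, and the table names are made up:

```shell
# toy input standing in for the real dump
printf '%s\n' \
  '-- Table structure for table `activity`' \
  'CREATE TABLE activity (id INT);' \
  '-- Table structure for table `job`' \
  'CREATE TABLE job (id INT);' | gzip > mysql.sql.gz

# split the dump into one file per table, switching output files
# whenever a table header line appears
mkdir -p mysql
zcat mysql.sql.gz | awk '
  /^-- Table structure for table `/ {
    match($0, /`[^`]+`/)
    file = "mysql/" substr($0, RSTART + 1, RLENGTH - 2) ".sql"
  }
  file { print > file }
'
gzip -f mysql/*.sql
```

Repeat the same split for the MariaDB dump into its own directory, so the per-table files line up by name.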

Example: Using gzip and a pipe from my point above:

diff -u <(zcat mysql/table1.sql.gz) <(zcat mariadb/table1.sql.gz) > diffed/table1.diff

Now you can use your shell foo to loop over all the cut-up tables and write each diff into a separate folder, which then lets you compare them easily.
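A minimal sketch of such a loop, assuming bash for the process substitution; the toy per-table files only stand in for the real cut-up dumps, and the directory names follow the example above:

```shell
# toy per-table files standing in for the real cut-up dumps
mkdir -p mysql mariadb diffed
echo 'INSERT INTO job VALUES (1);' | gzip > mysql/job.sql.gz
echo 'INSERT INTO job VALUES (2);' | gzip > mariadb/job.sql.gz

# diff every table on the MySQL side against its MariaDB
# counterpart, writing one .diff file per table
for f in mysql/*.sql.gz; do
  t=$(basename "$f" .sql.gz)
  diff -u <(zcat "mysql/$t.sql.gz") <(zcat "mariadb/$t.sql.gz") > "diffed/$t.diff"
done
```

Empty .diff files then mean the table is identical on both sides; anything non-empty is worth a look.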

GIMP for Absolute Beginners

“Write a book” was always on my list to do. Well, here it is:

The book focuses primarily on people who have no prior experience with GIMP and digital image manipulation. That said, a few chapters also help you get your graphics tablet set up and start painting with GIMP. Most of the examples were written on a MS Windows installation of GIMP. Don't worry though, I tested most of the examples on my GNU/Linux (Ubuntu) installation as well.

Jan Smith is the main author of the book. I'd like to thank Jan again for her tireless efforts to get this book published.
You can order the book from Apress or your favorite book dealer.

How not to criticise

Today I got an e-mail about the GIMP user manual. Insults like these don't arrive very often, but they probably happen to every software project:

Dear Sir,
Open Source software at no financial cost to the consumer is an
incredibly beautiful thing, which I honestly appreciate very much,
however, I have serious concerns regarding the quality of the GIMP user
manual.  Rather than point out the numerous grammatical errors and
flippant writing style I would like to report that the entire manual is
absolutely appalling, possibly the worst computer manual I have ever
encountered, in fact, so bad I report it as being almost evil.

The manual was clearly not written by native English speakers.  The
contents page is too long, sorry, just thinking about that manual is
making me angry and upset so i will leave it at that for now.

If only there is a photo/image editing program that is as well presented
as that on a MacBook. This is the first serious weakness I have found on
OS software, everything else is fantastic!

wishing you a lovely day and I hope that someone is kind enough to
re-write the instructions properly!

by matthileo on flickr

So what's the problem with this so-called criticism?

  1. The first widespread and common misconception: “Open Source software at no financial cost to the consumer […]”. Free Software is not free as in free beer, it's free as in free speech. And if I want to be picky: GIMP isn't Open Source, it's Free Software; there is a difference.
  2. The mail – apart from the rant – doesn't point out any specific problem. If the author referred to a concrete part of the manual that is faulty (descriptions, paragraphs, the URL of an article), the authors could look for the mistakes.
  3. It offers no pointers to better ways of doing it.
  4. If you know how to make it better, provide patches. We know that the user manual can be improved in lots of areas. Just ranting at the authors won't change the manual.

The documentation team knows about the flaws of the manual, and everyone is trying hard every day to make it better. Helpful manuals don't fall from trees – producing them is hard work. Respect everyone who writes and contributes to free software!

Bad News are Good News

hubdub

Playing on hubdub.com is fun.

I play on hubdub.com from time to time. It's a fun online prediction-market game. People create questions about upcoming events and you can bet on how each event will actually turn out.

For example: who will be the new American president, Obama or McCain? All the news floating around about who will make it pushes the odds between Obama and McCain to various percentages. Say you buy in at a 12% probability that Obama makes it, while everyone else bets on McCain. The more likely an Obama win becomes, the more money you make if you sell your stocks. If you wait until the question is settled, you earn the most.

But don’t forget: It’s all play money 😉

One stupid S.T.A.L.K.E.R. ending …

Wish granter in S.T.A.L.K.E.R. - Shadow of Chernobyl

I spent a bit of my free time playing S.T.A.L.K.E.R. I fought my way through hordes of bandits, mercenaries, soldiers and monsters until I reached the sarcophagus totally spent (low on ammunition and armor). The ending I picked was to sneak into the sarcophagus of the collapsed Chernobyl power plant and find the wish granter: a mysterious monolith that grants you one free wish.

I found it. The ending is a video, and the character you're playing wishes… “I want to be rich!”. The ending video shows him showered with screws falling from the sarcophagus hull for a couple of seconds until he dies.

What a moron! Such a stupid wish after all that work fighting through the masses of evil… BRRRRRRR…

Update: I actually found a S.T.A.L.K.E.R. guide which lists all the possible endings of the game. So the wish granter is not the only ending; there are more, determined by how you play.