Planet Downtime

I got Planet Afterlife working again, woot! Perhaps I need to simplify the updating process somewhat from its current form:

  1. The Ubuntu installation running under VMware on my work computer runs Planet's planet.py script on a cron job
  2. The script pulls in all the necessary RSS feeds and generates the HTML and RSS files for Planet Afterlife
  3. Another cron job running on mimosa (one of our multi-user UNIX servers) uses wget to grab the HTML and RSS files from the work computer and deposit them in my public_html directory
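The two cron jobs above might look something like this sketch (the paths, hostname and filenames are illustrative guesses, not the actual setup):

```shell
# On the Ubuntu VM: crontab entry to regenerate Planet Afterlife every minute
* * * * * python /home/will/planet/planet.py /home/will/planet/config.ini

# On mimosa: crontab entry to mirror the generated files into public_html,
# also every minute (-q: quiet, -N: only fetch if newer, -P: target directory)
* * * * * wget -q -N -P $HOME/public_html/planet/ \
    http://ubuntu-vm.example/planet/index.html \
    http://ubuntu-vm.example/planet/rss20.xml
```

The weak link is plain to see: the second job addresses the VM by IP or hostname, so anything that changes the VM's address silently breaks the mirror.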

All this happens once a minute, every day. Except when it doesn't, either because of a problem with mimosa, my work computer, or the VMware installation running on it, or because of something getting turned off or unplugged, or (in this case) the IP address of the Ubuntu VM changing after I rebooted it.

I freely admit that this method sucks and I’m actually quite surprised (though glad) that it doesn’t break more often.

Perhaps one day I'll find a reliable*, UK-based** web host that has Python 2.2 or greater installed on its servers and will let me run a cron job to update Planet Afterlife every minute or so. But that day hasn't come yet, and until it does I have to make do with the current system.

Which sucks. A lot.

* If you remember the frequent downtime I was getting on wabson.org until I changed back to Easily last summer, then you'll understand why reliability is my top requirement.

** As I said to Laurie yesterday, I want someone in the same country as me whom I can phone up when problems crop up. Admittedly the need to do this is much reduced if you have a reliable hosting company, but call me paranoid.