DRBD and crappy RAIDs

Since my last blog post I had quite some fun with crappy DRBD performance. Long story short, it looks as if I can't get decent random write speed out of DRBD without disabling disk flushes and barriers. Oddly enough the latency tests on the backend device were great, so either something is entirely wrong with my DRBD config or the 3ware RAID cards are as broken as many people say. Usually one shouldn't risk running DRBD with drain write ordering without BBUs, but since my machines are on a UPS I don't expect too many issues (at least once I figure out a way to shut them down properly -- currently this requires an rmmod of the Intel network driver).
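For reference, disabling flushes and barriers lives in the disk section of the DRBD resource config; with both disabled, DRBD falls back to drain for write ordering. This is a sketch in DRBD 8.3-style syntax (resource and device names are made up, and the option names differ in later versions):

```
resource r0 {
  disk {
    # Disable write barriers and disk flushes on the backing device.
    # DRBD then falls back to "drain" for write ordering -- only
    # acceptable with a BBU, or a UPS you actually trust, since data
    # sitting in the controller cache is lost on power failure.
    no-disk-barrier;
    no-disk-flushes;
    # Same for the DRBD metadata writes:
    no-md-flushes;
  }
}
```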

Either way, aside from those issues the current setup works nicely. I am doing backups on both servers and use Icinga to monitor the instances via remote ssh checks.
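The remote ssh checks boil down to check_by_ssh command definitions like the one below -- this is a generic example, not my actual config; host name, plugin path and thresholds are placeholders:

```
# Run check_disk on the remote host over ssh (requires key-based
# login for the icinga user on the target machine).
define command {
    command_name  check_ssh_disk
    command_line  $USER1$/check_by_ssh -H $HOSTADDRESS$ \
        -C "/usr/lib/nagios/plugins/check_disk -w 10% -c 5% /"
}

define service {
    use                  generic-service
    host_name            xen1
    service_description  Disk /
    check_command        check_ssh_disk
}
```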

The last thing to do is to convert the current machines to XenServer. Currently they use logical volumes as partitions, which are then mounted directly -- so there is usually no /dev/sdX in the machines, just /dev/sdX1. XenServer uses logical volumes too, but each logical volume is a VDI (virtual disk image) which is exposed as a VBD (virtual block device) to the virtual machines. Those VDIs are complete disks and contain partition tables etc., just like physical drives. To get our data onto one, we somehow need to attach this “physical” drive to our Dom0 and copy the data from the logical volume over using dd. It looks as if this should be easily possible, since Dom0 is just another virtual machine -- I just have to figure out the correct commands in XenServer.
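From reading the xe CLI docs, the attach-and-copy dance should look roughly like this. Untested sketch: all uuids, sizes, device names and LV paths below are placeholders, and step 4's device name depends on what Dom0 actually assigns:

```shell
# 1. Find the Dom0 uuid -- to the pool, Dom0 is just another VM:
xe vm-list is-control-domain=true params=uuid

# 2. Create a VDI in the target SR, sized to hold the old LV plus
#    room for a partition table:
xe vdi-create sr-uuid=<sr_uuid> name-label=oldvm-disk \
    virtual-size=20GiB type=user

# 3. Attach it to Dom0 as a VBD and plug it in:
xe vbd-create vm-uuid=<dom0_uuid> vdi-uuid=<vdi_uuid> \
    device=1 mode=RW type=Disk
xe vbd-plug uuid=<vbd_uuid>

# 4. The disk now shows up in Dom0 (e.g. /dev/xvdb); partition it,
#    then copy the old LV into the first partition:
fdisk /dev/xvdb
dd if=/dev/vg0/oldvm-root of=/dev/xvdb1 bs=1M

# 5. Detach cleanly when done:
xe vbd-unplug uuid=<vbd_uuid>
xe vbd-destroy uuid=<vbd_uuid>
```

The vbd-destroy at the end only removes the attachment to Dom0, not the VDI itself, so the copied disk can afterwards be attached to the real VM.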

Oh and while playing around with my pool I apparently killed the default storage repo, so xe vm-install refused to work; it complained about a possibly destroyed OpaqueRef for class SR. The fix is as easy as:

xe pool-param-set default-SR=<uuid_of_the_new_default> uuid=<tab_since_we_only_have_one_pool>
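To dig out the two uuids for that command (and to verify the fix took), something like this should do -- `params=` filtering is standard xe behaviour:

```shell
# List all storage repositories with their uuids:
xe sr-list params=uuid,name-label

# After the fix, the pool's default-SR field should show the new uuid:
xe pool-list params=default-SR
```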

More soon!