
Using LiveCD to chroot and repair software RAID and LVM

After the udev/lvm2 upgrade, my VM box was completely hosed. This was due to the fact that I had an older kernel and udevd was having problems with signalfd. If you found this post because of that, simply upgrade to a newer kernel and you should be good again.

I’m writing this post mostly as a reference for myself. My VM box uses software RAID (RAID1) with LVM on top of it. When my box failed to boot (it couldn’t find the root filesystem on md), it was dead in the water. I rebooted with the LiveCD, but I couldn’t remember how to a) get the RAID going again and b) get LVM up. After some lengthy forum searches and a few calls to friends, I was finally able to chroot into the server and update the kernel. Here is the procedure I used.

After booting into LiveCD
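
If, like me, you can’t remember the exact array layout, mdadm can usually recover it from the superblocks on the partitions before you assemble anything by hand (just a sanity check; the arrays it reports will obviously differ per box):

mdadm --examine --scan    # prints one ARRAY line per array it finds on the disks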

# load the RAID personalities
modprobe raid0
modprobe raid1

# assemble the degraded arrays (--run starts them even with a missing member)
mdadm /dev/md2 -A /dev/sda3 --run
mount /dev/md2 /mnt/gentoo
mdadm /dev/md0 -A /dev/sda1 --run
mount /dev/md0 /mnt/gentoo/boot
mdadm /dev/md3 -A /dev/sda4 --run

# bring up LVM on top of the RAID and mount the logical volumes
vgscan
vgchange -a y
mount /dev/vg/usr /mnt/gentoo/usr
mount /dev/vg/home /mnt/gentoo/home
mount /dev/vg/tmp /mnt/gentoo/tmp
mount /dev/vg/var /mnt/gentoo/var

# swap lives on md1
mdadm /dev/md1 -A /dev/sda2 --run
swapon /dev/md1

# chroot into the installed system
mount -t proc proc /mnt/gentoo/proc
mount -o bind /dev /mnt/gentoo/dev
chroot /mnt/gentoo /bin/bash
env-update && source /etc/profile
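
The kernel update itself then happens from inside the chroot. On a Gentoo box it looks roughly like this (the package, image path and bootloader file are examples from a typical setup, not necessarily exactly what I ran):

emerge -av sys-kernel/gentoo-sources
cd /usr/src/linux
make oldconfig && make && make modules_install
cp arch/x86/boot/bzImage /boot/kernel-new    # image path/name depend on your arch and naming scheme
nano /boot/grub/grub.conf                    # point a boot entry at the new kernel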

After booting back into the system, I started to see these emails from mdadm:


This is an automatically generated mail message from mdadm running on comp

A DegradedArray event had been detected on md device /dev/md0.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid1]
md1 : active raid1 sdb2[1] sda2[0]
979840 blocks [2/2] [UU]

md2 : active raid1 sda3[0]
1951808 blocks [2/1] [U_]

md3 : active raid1 sda4[0]
75095744 blocks [2/1] [U_]

md0 : active raid1 sda1[0]
96256 blocks [2/1] [U_]

unused devices: <none>
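
These mails come from mdadm’s monitor mode; where they get delivered is controlled by the MAILADDR line in mdadm.conf (the path and address below are just examples):

# /etc/mdadm.conf (on some distros /etc/mdadm/mdadm.conf)
MAILADDR admin@example.com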

This is because the sdb partitions were no longer part of the arrays, so I had to add them back:


mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md2 --add /dev/sdb3
mdadm /dev/md3 --add /dev/sdb4
cat /proc/mdstat

Cat’ing /proc/mdstat again, you should see the arrays rebuilding themselves onto sdb.
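
To watch the rebuild without cat’ing the file over and over, something like this does the trick (the interval and array name are arbitrary examples):

watch -n 5 cat /proc/mdstat    # refresh the sync progress every few seconds
mdadm --detail /dev/md3        # per-array view showing rebuild state and percentage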