
Lenny upgrade: Can't mount mdadm RAID drive under kernel 2.6.26


S D

Feb 16, 2009, 6:30:11 AM

Hi,

Just finished a "dist-upgrade" from etch to lenny. It appears that after the upgrade the system can no longer assemble and mount its mdadm RAID arrays under the lenny kernel 2.6.26. I can still assemble and mount the arrays under the kernel 2.6.18 left over from etch.

Here's some relevant info:

Before the upgrade I had 3 mdadm RAID arrays as follows, and the system was working fine:
/dev/md1 was mounted as /boot (RAID1)
/dev/md2 was mounted as / (RAID5)
/dev/md3 was used as swap (RAID5)

# cat /etc/mdadm/mdadm.conf
ARRAY /dev/md1 level=raid1 num-devices=3 UUID=bdf8a211:fcf6d2af:d7771298:fac716d3
devices=/dev/sda1,/dev/sdb1,/dev/sdc1
ARRAY /dev/md2 level=raid5 num-devices=3 UUID=2ed75bfb:aed804cd:7cc7e04b:e56f1452
devices=/dev/sda2,/dev/sdb2,/dev/sdc2
ARRAY /dev/md3 level=raid5 num-devices=3 UUID=3232181c:698319a1:67aa3321:ae8fa7a3
devices=/dev/sda3,/dev/sdb3,/dev/sdc3
MAILADDR root

The following is what I see on the screen while booting the system under kernel 2.6.26, before it drops into BusyBox. Note that the system appears to assemble some mysterious /dev/md0 out of nowhere, when it should only have /dev/md1, /dev/md2 and /dev/md3:

....
Begin: Mounting root file system...
Begin: Running /scripts/local-top...
....
Success: loaded module raid1
Success: loaded module raid456
....
Begin: Assembling all MD arrays...
md: md0 stopped
md: bind<sdb>
md: bind<sdc>
md: bind<sda>
raid5: device sda operational as raid disk 0
raid5: device sdc operational as raid disk 2
raid5: device sdb operational as raid disk 1
raid5: allocated 3170 kB for md0
raid5: raid level 5 set md0 active with 3 out of 3 devices
raid5 conf printout:
rd:3 we:3
disk 0, 0:1, dev:sda
disk 1, 0:1, dev:sdb
disk 2, 0:1, dev:sdc
mdadm: /dev/md0 has been started with 3 drives.
....
md: md1 stopped
mdadm: no devices found for /dev/md1
mdadm: no devices found for /dev/md2
mdadm: no devices found for /dev/md3
Success: assembled all arrays
Begin: running /scripts/local-premount...
kinit: name_to_dev_t(/dev/md3)=md3(9,3)
....
Read error on swap_device (9:3:0)
mount: mounting /dev/md2 on /root failed: No such device
Begin: Running /scripts/init-bottom...
mount: mounting /dev on /root/dev failed: No such file or directory
....
BusyBox v1.10.2
(initramfs)
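
If it helps, I can also run mdadm --examine against the whole disks and against the partitions to see which superblocks that md0 is being assembled from; something along the lines of (just a sketch):

  mdadm --examine /dev/sda
  mdadm --examine /dev/sda1

and the same for sdb/sdc should show whether the raw disks carry md superblocks of their own.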

Any ideas? Thanks



martin f krafft

Feb 16, 2009, 6:30:12 AM

also sprach S D <sun...@yahoo.com> [2009.02.16.1221 +0100]:

> # cat /etc/mdadm/mdadm.conf
> ARRAY /dev/md1 level=raid1 num-devices=3 UUID=bdf8a211:fcf6d2af:d7771298:fac716d3
> devices=/dev/sda1,/dev/sdb1,/dev/sdc1
> ARRAY /dev/md2 level=raid5 num-devices=3 UUID=2ed75bfb:aed804cd:7cc7e04b:e56f1452
> devices=/dev/sda2,/dev/sdb2,/dev/sdc2
> ARRAY /dev/md3 level=raid5 num-devices=3 UUID=3232181c:698319a1:67aa3321:ae8fa7a3
> devices=/dev/sda3,/dev/sdb3,/dev/sdc3
> MAILADDR root

Remove those lines with "devices=" and run update-initramfs -u as
root, then try again.
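
In other words, /etc/mdadm/mdadm.conf would end up looking roughly like this (same UUIDs, just without the devices= continuation lines; a sketch, adjust as needed):

  ARRAY /dev/md1 level=raid1 num-devices=3 UUID=bdf8a211:fcf6d2af:d7771298:fac716d3
  ARRAY /dev/md2 level=raid5 num-devices=3 UUID=2ed75bfb:aed804cd:7cc7e04b:e56f1452
  ARRAY /dev/md3 level=raid5 num-devices=3 UUID=3232181c:698319a1:67aa3321:ae8fa7a3
  MAILADDR root

and then, as root:

  update-initramfs -u

so the new initramfs picks up the changed file.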

--
.''`. martin f. krafft <madduck@d.o> Related projects:
: :' : proud Debian developer http://debiansystem.info
`. `'` http://people.debian.org/~madduck http://vcs-pkg.org
`- Debian - when you have better things to do than fixing systems

standardising unix is like pasteurising camembert.


S D

Feb 16, 2009, 7:20:05 AM

--- On Mon, 2/16/09, martin f krafft <mad...@debian.org> wrote:

> Remove those lines with "devices=" and run
> update-initramfs -u as
> root, then try again.

I removed the lines with "devices=" and ran "update-initramfs -u -t". The "-t" option was added because, for some reason, update-initramfs complained that the original image had been altered and refused to continue.

The "-t" option allowed a new image to be created (pretty much the same size, the difference is about 20 bytes), but the problem with mounting the mdadm arrays still remains.

When booting into the 2.6.18 kernel the system is still able to assemble and mount the arrays, even with the "devices=" lines removed from the config file.
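
In case it matters, I assume the new image could be checked for the embedded mdadm.conf by listing its contents, roughly like this (guessing at the exact image name for the lenny kernel here):

  zcat /boot/initrd.img-2.6.26-1-686 | cpio -it | grep mdadm

but I have not gone through it in any detail.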

martin f krafft

Feb 16, 2009, 7:50:08 AM

Please file a bug against mdadm and make sure to include the output
of

/usr/share/bug/mdadm/script

as root.
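
If reportbug is installed, something like

  reportbug mdadm

run as root should execute that script automatically and include its output in the report; otherwise run it by hand and paste what it prints.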

--
.''`. martin f. krafft <madduck@d.o> Related projects:
: :' : proud Debian developer http://debiansystem.info
`. `'` http://people.debian.org/~madduck http://vcs-pkg.org
`- Debian - when you have better things to do than fixing systems

"da haben wir es also: eine kirchliche ordnung mit priesterschaft,
theologie, kultus, sakrament;
kurz, alles das, was jesus von nazareth bekämpft hatte..."
- friedrich nietzsche


S D

Feb 16, 2009, 4:30:14 PM

--- On Mon, 2/16/09, martin f krafft <mad...@debian.org> wrote:

> Please file a bug against mdadm and make sure to include the
> output
> of
>
> /usr/share/bug/mdadm/script
>
> as root.
>

A bug report was sent to sub...@bugs.debian.org. There is no confirmation yet that it was received, so here's a copy, just in case.

Package: mdadm
Version: 2.6.7.2-1

After "dist-upgrade" from etch to lenny the system can no longer assemble and mount mdadm RAID drives under the etch kernel 2.6.26. I still can assemble and mount mdadm RAID drives under the kernel 2.6.18 that I have left over from etch.

When running the upgraded system under kernel 2.6.18, the output of the "/usr/share/bug/mdadm/script" is as follows:

--- mount output
/dev/md2 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/md1 on /boot type ext3 (rw,errors=remount-ro)

--- mdadm.conf
ARRAY /dev/md1 level=raid1 num-devices=3 UUID=bdf8a211:fcf6d2af:d7771298:fac716d3
ARRAY /dev/md2 level=raid5 num-devices=3 UUID=2ed75bfb:aed804cd:7cc7e04b:e56f1452
ARRAY /dev/md3 level=raid5 num-devices=3 UUID=3232181c:698319a1:67aa3321:ae8fa7a3
MAILADDR root

--- /proc/mdstat:
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md1 : active raid1 sda1[0] sdc1[2] sdb1[1]
40064 blocks [3/3] [UUU]

md2 : active raid5 sda2[0] sdc2[2] sdb2[1]
486351616 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md3 : active raid5 sda3[0] sdc3[2] sdb3[1]
1959680 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

--- /proc/partitions:
major minor #blocks name

8 0 244197527 sda
8 1 40131 sda1
8 2 243175905 sda2
8 3 979965 sda3
8 16 244198584 sdb
8 17 40131 sdb1
8 18 243175905 sdb2
8 19 979965 sdb3
8 32 244198584 sdc
8 33 40131 sdc1
8 34 243175905 sdc2
8 35 979965 sdc3
9 3 1959680 md3
9 2 486351616 md2
9 1 40064 md1

--- initrd.img-2.6.18-6-686:

gzip: /boot/initrd.img-2.6.18-6-686: not in gzip format
cpio: premature end of archive
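
(I am not sure why gzip objects to that 2.6.18 image; I assume something like

  file /boot/initrd.img-2.6.18-6-686

would show what format it actually is, but that initrd is left over from etch and is not the one giving me trouble.)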
