
Cannot Force Assemble Degraded Dirty Raid 5 Array


RS4qu...@gmail.com

Mar 25, 2006, 12:58:01 AM
Kernel: 2.6.14-1.1653_FC4smp on FC4

I removed 1 faulty drive from a 4-drive RAID 5 array. I've been
running the array in degraded mode until a replacement drive arrives.
The system crashed, and now it cannot start the degraded array
because:

kernel: raid5: cannot start dirty degraded array for md0
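
(For what it's worth, this message is md's safety check refusing to
auto-start a RAID 5 that is both degraded and dirty, since parity may
be inconsistent after a crash. On kernels that support it, the check
can supposedly be relaxed at boot with the md module parameter,
something like:

md-mod.start_dirty_degraded=1

on the kernel command line. I have not verified that this particular
2.6.14 kernel honours it, so treat it as a last resort after the mdadm
routes below.)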

Attempting to reassemble produces the following error:

# mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/hdg1
mdadm: failed to RUN_ARRAY /dev/md0: Input/output error

However, the array appears to be partially assembled:

# cat /proc/mdstat
Personalities : [raid1] [raid5]
md0 : inactive sdb1[0] hdg1[2] sda1[1]
733503424 blocks
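
(A note on this state: once md0 is sitting there inactive, it usually
has to be stopped before another assemble attempt can claim the same
devices. Something along these lines, untested on this exact setup:

# mdadm --stop /dev/md0
# mdadm --assemble --force --run /dev/md0 /dev/sda1 /dev/sdb1 /dev/hdg1

--run asks mdadm to start the array even though a member is missing;
whether it also gets past the dirty-degraded check above I can't say
for certain.)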

In this state md0 cannot be mounted. There should be a 4th device
listed in md0 as missing, but it's not there the way it was before the
crash, after the single disk failed.

Is there another way to force assembly of the array? How about
recreating the array with mdadm -C? What would be the correct format
to recreate the degraded array? e.g. (mdadm -C /dev/md0 -n4 -l5
/dev/sda1 /dev/sdb1 missing /dev/hdg1)?


# mdadm --examine /dev/sda1 /dev/sdb1 /dev/hdg1
/dev/sda1:
Magic : a92b4efc
Version : 00.90.01
UUID : cae8912d:2fa8d0bd:55981e92:e3bfe42e
Creation Time : Tue Feb 15 18:02:37 2005
Raid Level : raid5
Device Size : 244195904 (232.88 GiB 250.06 GB)
Array Size : 732587712 (698.65 GiB 750.17 GB)
Raid Devices : 4
Total Devices : 3
Preferred Minor : 0

Update Time : Fri Mar 24 22:33:52 2006
State : active
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0
Checksum : 72d8c60d - correct
Events : 0.16627113

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 1 8 1 1 active sync /dev/sda1

0 0 8 17 0 active sync /dev/sdb1
1 1 8 1 1 active sync /dev/sda1
2 2 3 1 2 active sync
3 3 0 0 3 faulty removed
/dev/sdb1:
Magic : a92b4efc
Version : 00.90.01
UUID : cae8912d:2fa8d0bd:55981e92:e3bfe42e
Creation Time : Tue Feb 15 18:02:37 2005
Raid Level : raid5
Device Size : 244195904 (232.88 GiB 250.06 GB)
Array Size : 732587712 (698.65 GiB 750.17 GB)
Raid Devices : 4
Total Devices : 3
Preferred Minor : 0

Update Time : Fri Mar 24 22:33:52 2006
State : active
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0
Checksum : 72d8c61b - correct
Events : 0.16627113

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 0 8 17 0 active sync /dev/sdb1

0 0 8 17 0 active sync /dev/sdb1
1 1 8 1 1 active sync /dev/sda1
2 2 3 1 2 active sync
3 3 0 0 3 faulty removed
/dev/hdg1:
Magic : a92b4efc
Version : 00.90.01
UUID : cae8912d:2fa8d0bd:55981e92:e3bfe42e
Creation Time : Tue Feb 15 18:02:37 2005
Raid Level : raid5
Device Size : 244195904 (232.88 GiB 250.06 GB)
Array Size : 732587712 (698.65 GiB 750.17 GB)
Raid Devices : 4
Total Devices : 3
Preferred Minor : 0

Update Time : Fri Mar 24 22:33:52 2006
State : active
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0
Checksum : 72d8c60a - correct
Events : 0.16627113

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 2 3 1 2 active sync

0 0 8 17 0 active sync /dev/sdb1
1 1 8 1 1 active sync /dev/sda1
2 2 3 1 2 active sync
3 3 0 0 3 faulty removed

# mdadm -D /dev/md0
/dev/md0:
Version : 00.90.02
Creation Time : Tue Feb 15 18:02:37 2005
Raid Level : raid5
Device Size : 244195904 (232.88 GiB 250.06 GB)
Raid Devices : 4
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Fri Mar 24 22:33:52 2006
State : active, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

UUID : cae8912d:2fa8d0bd:55981e92:e3bfe42e
Events : 0.16627113

Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 1 1 active sync /dev/sda1
2 34 1 2 active sync /dev/hdg1
1711931416 0 0 0 removed

RS4qu...@gmail.com

Mar 25, 2006, 1:25:26 AM
I solved it:

mdadm -C -l5 -n4 /dev/md0 /dev/sdb1 /dev/sda1 /dev/hdg1 missing

How? By examining the -E output of each of the block devices in the
array (sda1, sdb1, hdg1), I was able to reconstruct the order the
disks were in when the array was originally created (including the
correct placement of the failed disk). In this case the magical
"missing" device just happened to land at the end of the create
statement above.
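
(For anyone repeating this: each member records its own slot on the
"this" line of its --examine output, e.g. "this 0 8 17 0 active sync
/dev/sdb1" means sdb1 was RaidDevice 0. A quick way to pull just those
lines, assuming a GNU grep:

# mdadm -E /dev/sdb1 /dev/sda1 /dev/hdg1 | grep -E '^/dev|this'

And a caution I'll add without having tested the failure mode:
re-creating over a live array only preserves the data if the device
order, RAID level, chunk size and layout all match the old superblocks
exactly.)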

I'm back in business. Now, no more crashing until the new drive
arrives :)
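
(When the replacement disk shows up, the usual step is to partition it
the same way as the others and add it so the rebuild kicks off,
roughly:

# mdadm --add /dev/md0 /dev/sdX1

where /dev/sdX1 is just a placeholder for whatever name the new drive
ends up with, not a device from this box.)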
