I am using kernel 2.6.26.2.

What I did is as follows:

1. Create a RAID5:
   mdadm -C /dev/md5 -l 5 -n 4 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
       --metadata=1.0 --assume-clean

2. Write data to the array:
   dd if=/dev/zero of=/dev/md5 bs=1M &

3. Fail the first disk:
   mdadm --manage /dev/md5 -f /dev/sda

4. Fail a second disk:
   mdadm --manage /dev/md5 -f /dev/sdb

After I fail the second disk, the kernel prints an oops and goes down.

Does anybody know why? Is this an MD/RAID5 bug?
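Collected into one script, the reproduction looks roughly like this (a sketch only: it is destructive, needs root, and the four /dev/sd[a-d] names are the example devices from the report, not a recommendation):

```shell
#!/bin/sh
# Reproduction sketch of the steps above.
# WARNING: destructive -- wipes the listed devices. Substitute spare test disks.
set -e
mdadm -C /dev/md5 -l 5 -n 4 --metadata=1.0 --assume-clean \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd
dd if=/dev/zero of=/dev/md5 bs=1M &     # keep the array under write load
mdadm --manage /dev/md5 -f /dev/sda     # first failure: array runs degraded
mdadm --manage /dev/md5 -f /dev/sdb     # second failure: array should fail cleanly, not oops
```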
--
RAID5 can only tolerate ONE failed drive among ALL of its members. If you
want to be able to fail two drives you will have to use RAID6, or RAID5
with one hot-spare (and give it time to rebuild before failing the second
drive). PLEASE read the documentation on RAID levels, e.g. on Wikipedia.
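For completeness, the two alternatives mentioned above might look like this (a sketch; the device names are the same example disks from the report, and both commands need root and real scratch disks):

```shell
# RAID6: tolerates two simultaneous drive failures.
mdadm -C /dev/md6 -l 6 -n 4 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# RAID5 with one hot-spare (-x 1): the spare is rebuilt onto after the
# first failure; a second drive may only be failed once that rebuild ends.
mdadm -C /dev/md5 -l 5 -n 3 -x 1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
cat /proc/mdstat   # watch recovery progress before failing anything else
```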
Joachim Otahal
That is true,
but should we get a kernel oops and crash if two RAID5 drives are
failed? (THAT part looks like a bug!)
Jin, can you try a newer kernel, and a newer mdadm?
-- Kristleifur
Jin:
Did you really use whole drives for testing, or loopback files or
partitions on the drives? I never did my hot-plug testing with whole
drives in an array, only with partitions.
I would usually say that any kernel OOPS is a bug, but in this case,
what are you running your Linux on, given that you just trashed the
first four drives? While it's possible to run off of other drives, you
have to make an effort to configure Linux to do so.
--
Bill Davidsen <davi...@tmr.com>
"We can't solve today's problems by using the same thinking we
used in creating them." - Einstein
Thanks for your test on "Debian 2.6.26-21lenny4".
If you want to see the oops, you should be writing to the RAID5
continuously and then pull 2 disks out; then you may see the error.
I think that no matter what I do, even if I pull out all the disks, the
kernel should not oops.
You don't have swap (file or partition, doesn't matter) on that RAID, do
you?
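One quick way to check (reading /proc/swaps needs no privileges; /dev/md5 is the array name from the report):

```shell
# List active swap areas; if /dev/md5 (or anything on it) shows up here,
# losing the array takes the kernel's swap down with it.
cat /proc/swaps
```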
--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh
> hi;
>
> I am using kernel 2.6.26.2.
>
> What I did is as follows:
>
> 1. Create a RAID5:
>    mdadm -C /dev/md5 -l 5 -n 4 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
>        --metadata=1.0 --assume-clean
>
> 2. Write data to the array:
>    dd if=/dev/zero of=/dev/md5 bs=1M &
>
> 3. Fail the first disk:
>    mdadm --manage /dev/md5 -f /dev/sda
>
> 4. Fail a second disk:
>    mdadm --manage /dev/md5 -f /dev/sdb
>
> After I fail the second disk, the kernel prints an oops and goes down.
>
> Does anybody know why? Is this an MD/RAID5 bug?
Certainly this is a bug.
2.6.26 is quite old now - it is possible that the bug has already been fixed.
If you are able to post the oops message - possibly use a digital camera to
get a photograph - then I can probably explain what is happening and whether
it has been fixed.
Thanks,
NeilBrown
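If photographing the screen is awkward, one alternative for capturing the oops over the network is netconsole (a sketch only: the IP addresses, interface, and MAC below are placeholders for your own setup, and the receiving machine runs something like `nc -u -l 6666` to log the messages):

```shell
# Stream kernel messages from local port 6665 on eth0 (192.168.0.1)
# to a listener at 192.168.0.2:6666. All addresses are example values.
modprobe netconsole \
    netconsole=6665@192.168.0.1/eth0,6666@192.168.0.2/00:11:22:33:44:55
```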