
cannot add device to partitioned raid6 array


Florian D.

Mar 31, 2007, 8:03:13 PM
to linux-...@vger.kernel.org
hi list!

in short:
I created a partitioned raid6 array with 2 missing drives. Now, I want to add a device. It fails with:
flockmock ~ # mdadm -a /dev/md_d4 /dev/sdb2
mdadm: add new device failed for /dev/sdb2 as 4: Invalid argument

I think it is the same problem as in:
http://marc.info/?l=linux-raid&m=115316147716600&w=2

details:
kernel 2.6.20.4
mdadm-2.6.1
the raid6 array:

flockmock ~ # mdadm --detail /dev/md_d4
/dev/md_d4:
Version : 00.90.03
Creation Time : Sat Mar 31 19:48:58 2007
Raid Level : raid6
Array Size : 490030464 (467.33 GiB 501.79 GB)
Used Dev Size : 245015232 (233.66 GiB 250.90 GB)
Raid Devices : 4
Total Devices : 2
Preferred Minor : 4
Persistence : Superblock is persistent

Update Time : Sat Mar 31 23:28:37 2007
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Chunk Size : 32K

UUID : dfb5e536:2447b984:b7699fd8:8ba37cbf
Events : 0.2972

Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 34 1 active sync /dev/sdc2
2 0 0 2 removed
3 0 0 3 removed

the device I want to add:
flockmock ~ # mdadm -E /dev/sdb2
/dev/sdb2:
Magic : a92b4efc
Version : 00.90.00
UUID : dfb5e536:2447b984:b7699fd8:8ba37cbf
Creation Time : Sat Mar 31 19:48:58 2007
Raid Level : raid6
Used Dev Size : 245015232 (233.66 GiB 250.90 GB)
Array Size : 490030464 (467.33 GiB 501.79 GB)
Raid Devices : 4
Total Devices : 2
Preferred Minor : 4

Update Time : Sat Mar 31 23:29:23 2007
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 2
Spare Devices : 0
Checksum : 8aeeef56 - correct
Events : 0.2974

Chunk Size : 32K

Number Major Minor RaidDevice State
this 4 8 18 -1 spare /dev/sdb2

0 0 8 2 0 active sync /dev/sda2
1 1 8 34 1 active sync /dev/sdc2
2 2 0 0 2 faulty removed
3 3 0 0 3 faulty removed

dmesg shows these error messages:
[ 220.681135] md: sdb2 has invalid sb, not importing!
[ 220.681186] md: md_import_device returned -22
[ 220.681435] md: sdb2 has invalid sb, not importing!
[ 220.681463] md: md_import_device returned -22

any idea how to resolve this? thanks,
florian

PS: please cc me in replies, thanks.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majo...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/

Neil Brown

Mar 31, 2007, 10:58:15 PM
to Florian D.
On Saturday March 31, floc...@gmx.at wrote:
> hi list!
>
> in short:
> I created a partitioned raid6 array with 2 missing drives. Now, I want to add a device. It fails with:
> flockmock ~ # mdadm -a /dev/md_d4 /dev/sdb2
> mdadm: add new device failed for /dev/sdb2 as 4: Invalid argument

Thanks for the detailed problem report.

I think the cause of the error is that /dev/sdb2 is too small.
It needs to be at least 490030594 sectors. How big is it?

Yes, both the kernel and mdadm should handle this case better.
I have made a note to look into this.

NeilBrown
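
As a quick sanity check (an editorial sketch, not part of the original mails), the candidate partition can be compared against the per-device minimum Neil quotes; the numbers below are the 1K-block sizes that fdisk reports later in this thread:

```shell
# Sizes in 1K blocks; required_kb is Neil's 490030594 sectors / 2.
required_kb=245015297
sdb2_kb=244099642   # /dev/sdb2 as reported by fdisk further down
if [ "$sdb2_kb" -lt "$required_kb" ]; then
  echo "/dev/sdb2 is $((required_kb - sdb2_kb)) KB too small"
fi
```

This prints the shortfall (roughly 900 MB), which matches the "Invalid argument" refusal: the kernel rejects a component device smaller than the array's recorded per-device size.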

Florian D.

Apr 1, 2007, 5:26:03 AM
to Neil Brown
Neil Brown wrote:
> On Saturday March 31, floc...@gmx.at wrote:
>> hi list!
>>
>> in short:
>> I created a partitioned raid6 array with 2 missing drives. Now, I want to add a device. It fails with:
>> flockmock ~ # mdadm -a /dev/md_d4 /dev/sdb2
>> mdadm: add new device failed for /dev/sdb2 as 4: Invalid argument
>
> Thanks for the detailed problem report.
>
> I think the cause of the error is that /dev/sdb2 is too small.
> It needs to be at least 490030594 sectors. How big is it?

but the *device* size should be only ~250GB, so the array size is ~500GB, no?

flockmock ~ # mdadm --detail /dev/md_d4
/dev/md_d4:
Version : 00.90.03
Creation Time : Sat Mar 31 19:48:58 2007
Raid Level : raid6
Array Size : 490030464 (467.33 GiB 501.79 GB)
Used Dev Size : 245015232 (233.66 GiB 250.90 GB)

^^^^^^^^^^

these disks are all 250 GB hard disks, formatted in the same way -- but they are from different
vendors! The partition sizes (in blocks, taken from /sbin/fdisk) vary a little:

/dev/sda2: 245015347
/dev/sdc2: 245015347
/dev/sdb2: 244099642 -> different vendor

do you think that may be the cause?

thanks a lot,
florian

Neil Brown

Apr 1, 2007, 6:08:16 AM
to Florian D.
On Sunday April 1, floc...@gmx.at wrote:
> Neil Brown wrote:
> > On Saturday March 31, floc...@gmx.at wrote:
> >> hi list!
> >>
> >> in short:
> >> I created a partitioned raid6 array with 2 missing drives. Now, I want to add a device. It fails with:
> >> flockmock ~ # mdadm -a /dev/md_d4 /dev/sdb2
> >> mdadm: add new device failed for /dev/sdb2 as 4: Invalid argument
> >
> > Thanks for the detailed problem report.
> >
> > I think the cause of the error is that /dev/sdb2 is too small.
> > It needs to be at least 490030594 sectors. How big is it?
^^^^^^^
I should have said 490030594 sectors or 245015297 KB, which would have
made it clearer, sorry.

>
> but the *device* size should be only ~250GB, so the array size is ~500GB, no?
>
> flockmock ~ # mdadm --detail /dev/md_d4
> /dev/md_d4:
> Version : 00.90.03
> Creation Time : Sat Mar 31 19:48:58 2007
> Raid Level : raid6
> Array Size : 490030464 (467.33 GiB 501.79 GB)
> Used Dev Size : 245015232 (233.66 GiB 250.90 GB)
> ^^^^^^^^^^
>
> these disks are all 250 GB harddisks, formatted in the same way -- but they are from different
> vendors! The block sizes from the partitions vary a little bit (taken from /sbin/fdisk):
>
> /dev/sda2: 245015347
> /dev/sdc2: 245015347
> /dev/sdb2: 244099642 -> different vendor
>
> do you think that may be the cause?

Definitely the cause. If you really need to add this device to the array, you may
be able to reduce the usage of the array, then reduce the size of the
array, then add the drive.
Depending on how you have partitioned the array, and how you are using
the partitions, you may just need to reduce the filesystem in the last
partition, then use *fdisk to resize the partition. Then something
like:

mdadm --grow --size=243000000 /dev/md_d4

It is generally safer to reduce the filesystem by too much, resize the
device, then grow the filesystem up to the size of the device. That
way avoids fiddly arithmetic and so reduces the chance of failure.

NeilBrown
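
The sequence Neil describes, sketched as commands (an editorial sketch: the partition device name /dev/md_d4p2 and the ext3 filesystem are assumptions, and 243000000 KB is simply a value safely below sdb2's 244099642 1K blocks):

```shell
# 1. shrink the filesystem in the last partition of the array
#    (device name and filesystem type are assumptions):
# resize2fs /dev/md_d4p2 200G
# 2. shrink that partition with fdisk/parted on /dev/md_d4
# 3. shrink the per-device size of the md array (--size is in 1K blocks):
# mdadm --grow --size=243000000 /dev/md_d4
# 4. grow the filesystem back up to fill the resized device:
# resize2fs /dev/md_d4p2
echo "target per-device size: 243000000 KB (< 244099642 KB on sdb2)"
```

Shrinking the filesystem first, by more than strictly necessary, is what makes step 3 safe: the array can never be reduced below data the filesystem still uses.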

Florian D.

Apr 1, 2007, 7:24:00 AM
to Neil Brown
Neil Brown wrote:
> Definitely the cause. If you really need to add this device to the array, you may
> be able to reduce the usage of the array, then reduce the size of the
> array, then add the drive.
> Depending on how you have partitioned the array, and how you are using
> the partitions, you may just need to reduce the filesystem in the last
> partition, then use *fdisk to resize the partition. Then something
> like:
>
> mdadm --grow --size=243000000 /dev/md_d4
>
> It is generally safer to reduce the filesystem by too much, resize the
> device, then grow the filesystem up to the size of the device. That
> way avoids fiddly arithmetic and so reduces the chance of failure.
>
> NeilBrown
>

thanks, but I decided to start from scratch (a backup is available ;).
now all partitions have the same size, and creating a raid6 array from 2 drives and then hot-adding
another one works. so that part can be regarded as solved.

But when I try to create the array with 3 drives at once, the following strange error appears:

flockmock ~ # mdadm --create /dev/md_d4 --level=6 -a mdp --chunk=32 -n 4 /dev/sda2 /dev/sdb2
/dev/sdc2 missing
mdadm: RUN_ARRAY failed: Input/output error
mdadm: stopped /dev/md_d4

dmesg shows:
[ 484.362525] md: bind<sda2>
[ 484.363429] md: bind<sdb2>
[ 484.364337] md: bind<sdc2>
[ 484.364397] md: md_d4: raid array is not clean -- starting background reconstruction
[ 484.365876] raid5: device sdc2 operational as raid disk 2
[ 484.365879] raid5: device sdb2 operational as raid disk 1
[ 484.365881] raid5: device sda2 operational as raid disk 0
[ 484.365884] raid5: cannot start dirty degraded array for md_d4
[ 484.365886] RAID5 conf printout:
[ 484.365887] --- rd:4 wd:3
[ 484.365889] disk 0, o:1, dev:sda2
[ 484.365891] disk 1, o:1, dev:sdb2
[ 484.365893] disk 2, o:1, dev:sdc2
[ 484.365895] raid5: failed to run raid set md_d4
[ 484.365897] md: pers->run() failed ...
[ 484.366271] md: md_d4 stopped.
[ 484.366303] md: unbind<sdc2>
[ 484.366309] md: export_rdev(sdc2)
[ 484.366314] md: unbind<sdb2>
[ 484.366318] md: export_rdev(sdb2)
[ 484.366321] md: unbind<sda2>
[ 484.366325] md: export_rdev(sda2)


I just wanted to report that, FYI. I will take the first route and wait a little...
cheers, florian
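
For readers who hit the same "cannot start dirty degraded array" refusal, a commonly cited escape hatch (an editorial sketch, not from this thread; paths and device names are assumptions) is the md driver's start_dirty_degraded knob. md refuses by default because an unclean degraded RAID6 may reconstruct bad data from stale parity:

```shell
# allow md to start arrays that are both dirty and degraded
# (risky: stale parity can yield corrupt reconstructed data)
# echo 1 > /sys/module/md_mod/parameters/start_dirty_degraded
# or pass it on the kernel command line: md-mod.start_dirty_degraded=1
# then retry assembly of the stopped array, e.g.:
# mdadm --assemble --force /dev/md_d4 /dev/sda2 /dev/sdb2 /dev/sdc2
echo "workaround: start_dirty_degraded=1"
```

Creating the array from 2 drives and hot-adding the third, as Florian did, avoids the dirty-degraded state entirely and is the safer route.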
