I just got a new amd64 based system that has an Adaptec 1430sa
controller and Etch D-I does not recognize it. I have configured the
controller to be in RAID 10 mode with one logical volume, D-I does not
see any disks.
How do I install on this system?
Help would be greatly appreciated.
-Steve
--
To UNSUBSCRIBE, email to debian-amd...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listm...@lists.debian.org
Well it is a fake raid card, so you are better off using it with
software raid in linux (performance and support will both be much
better).
There is a driver for it, although by the looks of it, it is binary-only
and, as usual, only available for suse and redhat.
sata_mv might run it if you disable the raid setup on the card and make
it just a bunch of separate disks.
In general you are better off buying hardware with linux support. Any
company that lists redhat and suse as supported and nothing else usually
means the hardware is not supported by linux in general, but rather that
they choose to support only redhat and suse.
If you want a hardware raid card look at areca or 3ware, not adaptec.
--
Len Sorensen
I was going to get the 3ware card, but with only 4 drives it seemed like
overkill, and from what I read, running RAID 10 is faster than RAID5 in a
4 drive config. So I opted for the Adaptec controller. <sigh>
I did find

http://kmuto.jp/debian/d-i/

and it looks like the amd64 etch-custom-1013.iso (kernel version 2.6.23 +
experimental e1000e driver) D-I iso recognizes the raw drives
regardless of how the raid controller is configured. I guess I will
reconfigure it as JBOD so the controller doesn't try to manage them.
So with 4 x 750GB drives, how can I get Debian to build a RAID10 and
make it bootable? This probably isn't the list to ask a general question
like this, but ... here it is :)
Thanks,
-Steve
It is never overkill to get supported hardware. raid10 is certainly
less processor intensive since it only mirrors data rather than
computing XOR parity. It also gives you less disk space (raid10 gets you
50% of your total disk capacity, while raid5 gets you all but one disk's
worth, so 75% in your case). As for performance, well, the raid0
striping in raid10 may give you better performance than raid5 in some
cases, but possibly worse in others. After all, raid5 has 3 out of 4
disks available for reading, while raid10 has 2 or 4 depending on
whether the raid1 layer reads from alternate disks or not (I think linux
raid1 does the alternate reads, but I am not sure). raid10 is certainly
simpler and safer, I would say.
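The space arithmetic above works out as follows for this 4 x 750GB
setup (a quick sketch):

```shell
# Usable capacity with 4 x 750 GB drives.
drives=4
size=750                              # GB per drive
raid10=$(( drives * size / 2 ))       # mirroring halves total capacity
raid5=$(( (drives - 1) * size ))      # one drive's worth goes to parity
echo "raid10: ${raid10} GB usable, raid5: ${raid5} GB usable"
```

So raid10 leaves 1500 GB usable versus 2250 GB for raid5, the 50% vs
75% mentioned above.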
> I did find
>
> http://kmuto.jp/debian/d-i/
>
> and it looks like amd64 etch-custom-1013.iso - kernel version 2.6.23 +
> experimental e1000e driver D-I iso recognizes the raw drives
> regardless of how the raid controller is configured. I guess I will
> reconfigure it as JBOD so the controller doesn't try to manage them.
>
> So with 4 x 750GB drives, how can I get debian to build a RAID10 and
> make it bootable. This probably isn't the list to ask a general question
> like this, but ... here it is :)
Well the boot loader can boot from raid1, but not raid0 or raid10 or
raid5 (unless you have hardware raid in which case it becomes
transparent). So you would probably have to make a small (100MB or so I
would suggest) raid1 partition on a pair of disks to boot from, and then
you can do whatever you want with the rest.
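A sketch of that layout with mdadm (the device and partition names are
assumptions; adjust for your disks). This assumes each disk has a small
first partition for the raid1 /boot and a large second partition for
the raid10:

```shell
# Small raid1 for /boot across two disks (the boot loader can read this).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# raid10 across the large second partitions of all four disks.
mdadm --create /dev/md1 --level=10 --raid-devices=4 \
    /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

# Then make filesystems and install as usual.
mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md1
```

The d-i partitioner can build the same layout interactively under
"Configure software RAID" if the installer kernel sees the raw disks.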
It is also possible to do raid1 volumes and then use LVM on top (LVM can
do striping like raid0, but you would have to tell it to do so at
creation time as far as I know).
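A sketch of that alternative (again, device names and the volume group
and logical volume names are just placeholders):

```shell
# Two raid1 pairs instead of one raid10.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

# Put LVM on top of both mirrors.
pvcreate /dev/md0 /dev/md1
vgcreate vg0 /dev/md0 /dev/md1

# -i 2 stripes the LV across both PVs (must be chosen at creation time,
# as noted above); -I 64 sets a 64 KB stripe size.
lvcreate -i 2 -I 64 -L 500G -n data vg0
```

The upside over plain raid10 is that you can carve out both striped and
unstriped LVs from the same pair of mirrors and resize them later.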
--
Len Sorensen