I just received a new Dell Precision T5400 and a new Adaptec 2230SLP
SCSI Controller card. This system has been freshly installed with Open
SuSE v10.2. The system is fully patched.
After I install the Adaptec Storage Manager from Adaptec's website,
asm-linux_v2.12_922.rpm, and launch the GUI, the ASM reports "No
controllers were found in this system". I can see the card using the
following:
#lspci
07:05.0 RAID bus controller: Adaptec AAC-RAID (Rocket) (rev 03)
#lspci -v
07:05.0 RAID bus controller: Adaptec AAC-RAID (Rocket) (rev 03)
Subsystem: Adaptec ASR-2230S + ASR-2230SLP PCI-X (Lancer)
Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 74
Memory at f7a00000 (64-bit, non-prefetchable) [size=2M]
Memory at f79ff000 (64-bit, non-prefetchable) [size=4K]
Expansion ROM at f7c00000 [disabled] [size=32K]
Capabilities: [40] Power Management version 2
Capabilities: [48] Message Signalled Interrupts: Mask- 64bit+ Queue=0/1 Enable-
Capabilities: [58] PCI-X non-bridge device
Capabilities: [60] Vital Product Data
#hwinfo --storage-ctrl
45: PCI 705.0: 0104 RAID bus controller
[Created at pci.286]
UDI: /org/freedesktop/Hal/devices/pci_9005_286
Unique ID: ztKE.Cs92cf6rBgD
Parent ID: Sulp.yyIDGfAwA_7
SysFS ID: /devices/pci0000:00/0000:00:09.0/0000:03:00.3/0000:07:05.0
SysFS BusID: 0000:07:05.0
Hardware Class: storage
Model: "Adaptec ASR-2230S + ASR-2230SLP PCI-X (Lancer)"
Vendor: pci 0x9005 "Adaptec"
Device: pci 0x0286 "AAC-RAID (Rocket)"
SubVendor: pci 0x9005 "Adaptec"
SubDevice: pci 0x028c "ASR-2230S + ASR-2230SLP PCI-X (Lancer)"
Revision: 0x03
Driver: "aacraid"
Driver Modules: "aacraid"
Memory Range: 0xf7a00000-0xf7bfffff (rw,non-prefetchable)
Memory Range: 0xf79ff000-0xf79fffff (rw,non-prefetchable)
Memory Range: 0xf7c00000-0xf7c07fff (ro,prefetchable,disabled)
IRQ: 74 (70 events)
Module Alias: "pci:v00009005d00000286sv00009005sd0000028Cbc01sc04i00"
Driver Info #0:
Driver Status: aacraid is active
Driver Activation Cmd: "modprobe aacraid"
Config Status: cfg=no, avail=yes, need=no, active=unknown
Attached to: #41 (PCI bridge)
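For good measure, here are two more ways to confirm from the kernel's
side that the aacraid driver actually bound to the card (standard
procfs/sysfs locations on a 2.6 kernel, nothing Adaptec-specific):

#dmesg | grep -i aacraid
#ls /sys/class/scsi_host/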
I can also access the Adaptec card during system boot. I don't have
any SCSI storage plugged into it just now but I'm going to attach a
Dell PowerVault 221S and hopefully configure it for RAID50.
Another question: the 2230SLP card is advertised to run at up to 133 MHz
on the PCI-X bus. My Dell Precision is advertised to run at up to 100 MHz
in its PCI-X slots. Why is my card showing up as running at 66 MHz?
There are no other cards in any of the other PCI-X slots.
Thanks in advance,
Rob Ramsey
Colorado Springs
> Hello,
>
> I just received a new Dell Precision T5400 and a new Adaptec 2230SLP
> SCSI Controller card. This system has been freshly installed with Open
> SuSE v10.2. The system is fully patched.
>
> After I install the Adaptec Storage Manager from Adaptec's website,
> asm-linux_v2.12_922.rpm, and launch the GUI, the ASM reports "No
> controllers were found in this system". I can see the card using the
> following:
>
> #lspci
> 07:05.0 RAID bus controller: Adaptec AAC-RAID (Rocket) (rev 03)
>
> [...]
> I can also access the Adaptec card during system boot. I don't have
> any SCSI storage plugged into it just now but I'm going to attach a
> Dell PowerVault 221S and hopefully configure it for RAID50.
Perhaps it is a bug in the software, making it fail because there are no
disks attached to the controller at present.
> Another question: the 2230SLP card is advertised to run at up to 133 MHz
> on the PCI-X bus. My Dell Precision is advertised to run at up to 100 MHz
> in its PCI-X slots. Why is my card showing up as running at 66 MHz?
> There are no other cards in any of the other PCI-X slots.
As I (mis)understand it, the Adaptec 2130 and 2230 cards are PCI-X cards
rated for normal operation at 133 MHz when plugged into a 133 MHz PCI-X
slot.
If the PCI-X slot on the motherboard only runs at 100 MHz, then the
particular needs of this adapter card to run in PCI-X mode are not met and
then it defaults to legacy PCI mode, which is 66 MHz for 64-bit devices and
33 MHz for 32-bit devices. The 2230 and 2130 cards are 64-bit devices, so
they'll run at 66 MHz in PCI mode.
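To put rough numbers on what that means for raw bus bandwidth (simple
width-times-clock arithmetic, ignoring protocol overhead):

  64 bits x  66 MHz =  528 MB/sec  (legacy PCI mode)
  64 bits x 100 MHz =  800 MB/sec  (PCI-X)
  64 bits x 133 MHz = 1064 MB/sec  (PCI-X)

So even at 66 MHz the bus can in theory still carry more than a single
saturated Ultra 320 channel.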
--
*Aragorn*
(registered GNU/Linux user #223157)
I downloaded a newer ASM package, asm_linux_x64_v5_20_17414.rpm, off
the Adaptec site and got the Adaptec Storage Manager GUI to work
properly.
In my original post I had installed a much older version of the ASM
package asm-linux_v2.12_922.rpm. This package was "supposed" to work
with the 2230SLP card and the SuSE OS though. Apparently not : )
I'm still curious about the fact that the controller card is running
at 66 MHz though. I used to work for Silicon Graphics as a System
Support Engineer and I installed a lot of SCSI and Fibre Channel
HBAs in their IX-Bricks. Depending on the model of card and whether
or not the neighboring slot was populated, the PCI-X card could run at
66 MHz, 100 MHz, or 133 MHz.
A good example would be putting two U320 cards in adjoining slots.
These cards would normally clock at 133 MHz each, but since they're paired
up on the same SHUB (bus) chip they downclock to 100 MHz. The only
time I've seen a card clock at 66 MHz is because it was a legacy card
that didn't support a faster bus speed, or because the bus, in the case
of an older I-Brick, only supported 66 MHz.
Thanks,
Rob Ramsey
Colorado Springs
> Hello again,
>
> I downloaded a newer ASM package, asm_linux_x64_v5_20_17414.rpm, off
> the Adaptec site and got the Adaptec Storage Manager GUI to work
> properly.
>
> In my original post I had installed a much older version of the ASM
> package asm-linux_v2.12_922.rpm. This package was "supposed" to work
> with the 2230SLP card and the SuSE OS though. Apparently not : )
Glad you figured that one out by yourself then. ;-)
> I'm still curious about the fact that the controller card is running
> at 66 MHz though. I used to work for Silicon Graphics as a System
> Support Engineer and I installed a lot of SCSI and Fibre Channel
> HBAs in their IX-Bricks. Depending on the model of card and whether
> or not the neighboring slot was populated, the PCI-X card could run at
> 66 MHz, 100 MHz, or 133 MHz.
Yes, but 66 MHz is not PCI-X mode. That's the legacy PCI mode, which most
(but not all) PCI-X adapter cards support.
> A good example would be putting two U320 cards in adjoining slots.
> These cards would normally clock at 133 MHz each, but since they're paired
> up on the same SHUB (bus) chip they downclock to 100 MHz.
This is true only if the adapter card actually supports working at 100 MHz.
You have to keep in mind that if the card is forced to work at 100 MHz when
it is listed as normally working at 133 MHz, it will not function well
unless the manufacturer has taken the proper measures to ensure that
it does.
If the card were rated suitable for operation at 100 MHz, then it would have
been listed on Adaptec's website as PCI-X 133/100 MHz-capable. Yet it
isn't - I have a 2130-SLP myself, which is the single channel variant of
the card you have. It's listed as PCI-X 133 MHz or PCI 66 MHz.
Similarly, there are PCI-X cards that are geared for operation at 100 MHz,
so it would be a bad idea to plug them into a 133 MHz slot. You can
compare it to 3.3 Volt and 5 Volt PCI cards. Some cards are rated to work
at both voltages, while others are rated to work only at 3.3 Volt, and
others only at 5 Volt.
In addition, if you were to have a motherboard with two PCI-X slots on the
same bus, then putting a single PCI-X card in one of the slots while
leaving the other open would indeed run it at 133 MHz - provided that the
adapter card is rated for that speed - while plugging in a PCI-X card in
both slots would reduce their bus speed to 100 MHz, again, provided that
the adapter card is rated for that speed.
However, if you were to plug one PCI-X card into one of the slots and a
regular PCI card into the other slot, then *both* slots will operate at 66
MHz and at PCI specification, not PCI-X specification.
> The only time I've seen a card clock at 66 MHz is because it was a legacy
> card that didn't support a faster bus speed, or because the bus, in the
> case of an older I-Brick, only supported 66 MHz.
PCI-X slots are backwards compatible with PCI - although not all PCI-X
*adapter* *cards* are backwards compatible! - so if you plug a PCI device
into a PCI-X slot, the slot will operate at PCI specification. PCI-X is
more than just PCI running at a higher bus speed. ;-)
There is ample documentation on the web about the latest PCI, PCI-X and PCIe
standards. Google is your friend and all that. ;-)
You know your storage pretty well! I have also had the same experience
where a customer would pair a quad 4 Gb FC (133 MHz) card with a crap
U160 card and drop the bus down to 66 MHz for both.
I thought I had done enough research when I purchased my Dell Precision
and this Adaptec card. I had assumed that the card would run at
100 MHz (it should, considering it cost almost $700). Anyway, the only
systems Dell sells with 133 MHz PCI-X slots are their rack mount
systems. I don't have a rack and putting a 1U box on a desktop looks
retarded so I opted for the Dell Precision tower instead. Looks like
I made a bad call.
The motivation for this purchase was to improve my current NFS
performance. I've currently got a Dell Precision 490 (2P Xeon @ 3 GHz,
1 GB RAM, Open SuSE 10.2) with a Dell PERC320 SCSI Controller (64 MB, 2
Channel, PCI-X) connected to a PowerVault 221S.
In this configuration, the PERC320 is also running at 66 MHz. The
storage is currently configured for RAID50 utilizing 10 drives (10k
RPM), 5 drives in each bay (with the storage tray set up in a 2x7
configuration, instead of a 1x14). I can only hope that the system is
concatenating two 4+1 stripe sets, with each set contained in its own bay.
In this configuration I'm getting 56 MB/s. I'm getting this number
using:
time dd if=/dev/zero of=/root/file.out bs=1MB count=1000
By comparison, my internal 7,200 RPM SATA drives (software mirrored
with LVM) are getting 80 MB/s. Over the Gig-E LAN, using NFS v3, I'm
getting about 10 MB/s on the NFS-exported PowerVault.
My hope with this new Dell Precision T5400 and Adaptec 2230SLP
controller card was that I'd get more performance by having a faster
bus speed, 100 MHz, and better write caching with its 128 MB. Reading
through your post, I'm probably not going to get much, if any, performance
improvement out of this new box. Does that sound about right?
Thanks!
Rob
You are writing 1GB. Possibly all of that could be cached and you might
not be seeing the performance of the controller and disk system. In my
experience, RAID systems work well writing a single large file but suck
badly when reading/writing a lot of small files.
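If your dd is recent enough, you can take the cache out of the equation
by having dd flush to disk before it reports its timing; for example
(conv=fdatasync needs a reasonably recent GNU coreutils, and oflag=direct
assumes the filesystem supports O_DIRECT):

time dd if=/dev/zero of=/root/file.out bs=1MB count=1000 conv=fdatasync
time dd if=/dev/zero of=/root/file.out bs=1MB count=1000 oflag=direct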
You might want to install bonnie or bonnie++ to test performance. Another
test to perform might be to create a RAM disk on the server, export it via
NFS and then test the performance of the ramdisk over NFS.
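A minimal sketch of that ramdisk test - the mount points, export options
and the hostname "server" are only placeholders:

On the server:
#mount -t tmpfs -o size=1500m tmpfs /mnt/ramdisk
#echo '/mnt/ramdisk *(rw,sync,no_root_squash)' >> /etc/exports
#exportfs -ra

On the client:
#mount -t nfs server:/mnt/ramdisk /mnt/nfstest
#time dd if=/dev/zero of=/mnt/nfstest/file.out bs=1MB count=1000

If the ramdisk is about as slow over NFS as the PowerVault is, the
bottleneck is the network/NFS layer rather than the RAID.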
> On Fri, 16 May 2008, Rob wrote:
>
>> [...]
>> In this configuration, the PERC320 is also running at 66 MHz. The
>> storage is currently configured for RAID50 utilizing 10 drives (10k
>> RPM), 5 drives in each bay (with the storage tray set up in a 2x7
>> configuration, instead of a 1x14). I can only hope that the system is
>> concatenating two 4+1 stripe sets, with each set contained in its own bay.
>> In this configuration I'm getting 56 MB/s. I'm getting this number
>> using:
>>
>> time dd if=/dev/zero of=/root/file.out bs=1MB count=1000
>>
>> By comparison, my internal 7,200 RPM SATA drives (software mirrored
>> with LVM) are getting 80 MB/s. Over the Gig-E LAN, using NFS v3, I'm
>> getting about 10 MB/s on the NFS-exported PowerVault.
>
> You are writing 1GB. Possibly all of that could be cached and you might
> not be seeing the performance of the controller and disk system. In my
> experience, RAID systems work well writing a single large file but suck
> badly when reading/writing a lot of small files. [...]
The performance of a RAID system depends on many things. First of all
there's the type of RAID. Secondly there's the stripe size. Thirdly
there's the number of disks in the array. Fourthly there's the intended
goal of the RAID array.
Trust me, the fine-tuning of a RAID array for performance is a whole subject
of its own, about which books have been written, and with which many
corporate IT departments are still struggling. ;-)
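As a rough sanity check on the numbers mentioned elsewhere in this thread
(the per-disk figure is an assumption on my part): ten 10k RPM U320 disks
in a 2x(4+1) RAID 50 leave eight data spindles. At a conservative 40-50
MB/sec of sustained throughput per disk, an ideal large streaming write
would land somewhere in the 300-400 MB/sec range, so a measured 56 MB/sec
points at a bottleneck outside the disks themselves.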
> Hello Aragorn,
>
> You know your storage pretty well!
Ehm... I'm just a SCSI fan. ;-)
> [...]
>
> I thought I had done enough research when I purchased my Dell Precision
> and this Adaptec card. I had assumed that the card would run at
> 100 MHz (it should, considering it cost almost $700).
Oh, I know all about the price tag. :-) I'm currently setting up a machine
with Xen and Gentoo, which has an Adaptec SAS RAID PCIe controller in it,
with four Hitachi UltraStar 15k RPM 147 GB SAS drives in RAID 5. ;-)
Yet, the mistake you've made is one I had almost made myself, with regard to
a PCI-X SATA controller I intended to buy, and which would only work at 133
MHz PCI-X - so not even in legacy 66 MHz PCI mode.
> Anyway, the only systems Dell sells with 133 MHz PCI-X slots are their rack
> mount systems. I don't have a rack and putting a 1U box on a desktop looks
> retarded so I opted for the Dell Precision tower instead. Looks like
> I made a bad call.
I stopped purchasing brand-name computers for myself a long time ago. Now
I just scout the internet for components that I like and then I have
someone build me the system I want with those components. At least,
everything will then be made to *my* preferences, and in addition I won't
get any Microsoft license shoved down my throat. ;-)
This said, our organization does use (second-hand) Dell equipment. A
PowerEdge, to be precise - we had another PowerEdge earlier but it was
given a glass of Coca-Cola to drink by a careless admin and it didn't
appear to like that. :p
> The motivation for this purchase was to improve my current NFS
> performance. I've currently got a Dell Precision 490 (2P Xeon @ 3 GHz,
> 1 GB RAM, Open SuSE 10.2) with a Dell PERC320 SCSI Controller (64 MB, 2
> Channel, PCI-X) connected to a PowerVault 221S.
If my memory serves me right, then the Dell PERC320 is actually a
Dell-branded Adaptec, LSI or QLogic controller.
> In this configuration, the PERC320 is also running at 66 MHz. The
> storage is currently configured for RAID50 utilizing 10 drives (10k
> RPM), 5 drives in each bay (with the storage tray set up in a 2x7
> configuration, instead of a 1x14). I can only hope that the system is
> concatenating two 4+1 stripe sets, with each set contained in its own bay.
> In this configuration I'm getting 56 MB/s. I'm getting this number
> using:
>
> time dd if=/dev/zero of=/root/file.out bs=1MB count=1000
>
> By comparison, my internal 7,200 RPM SATA drives (software mirrored
> with LVM) are getting 80 MB/s. Over the Gig-E LAN, using NFS v3, I'm
> getting about 10 MB/s on the NFS-exported PowerVault.
>
> My hope with this new Dell Precision T5400 and Adaptec 2230SLP
> controller card was that I'd get more performance by having a faster
> bus speed, 100 MHz, and better write caching with its 128 MB. Reading
> through your post, I'm probably not going to get much, if any, performance
> improvement out of this new box. Does that sound about right?
As I wrote in my other post - in reply to the poster dubbed Whoever - RAID
performance depends a lot on the number of disks involved, the RAID type,
the intended usage - e.g. small files versus large streams or databases -
and even on the filesystem used. /ext3/ is not exactly the best performer
among Linux filesystems - in fact, it's about the worst.
The bigger cache on your RAID device will add somewhat to your performance,
and of course having the device operate at 133 MHz and PCI-X specification
would yield better performance than when operating at 66 MHz PCI specs.
Everything depends on your budget and on how far you want to push this, but
if you have any PCIe slots available and you have the cash to spend, then I
would recommend buying a PCIe RAID adapter - from Adaptec, why not? - and
then you would really be getting good performance out of your gear.
However, there's also one other thing to keep in mind. SAS and SATA are
advertised as allowing 3.0 Gbit/sec transfer - roughly 300 MB/sec of actual
payload once the 8b/10b line encoding is taken into account - per attached
disk, while Ultra 320 SCSI advertises 320 MB/sec per SCSI channel. That is
quite a bit lower in practice (since it concerns the combined transfer of
information from all devices on the channel), but still fairly impressive.
What most people overlook however is that no single hard disk can ever give
you that performance simply because of the way a hard disk works. The SAS
disks I have are about the fastest disks available today, and still their
sustained throughput per disk is only about 120-130 MB/sec - I'm not sure
on the exact numbers, but that should be close enough.
If you do go hunting for a PCIe SCSI RAID controller, then I would recommend
a SAS controller. SAS disks are not the same as parallel SCSI disks, but a
SAS controller allows you to connect both SAS disks and the much more
affordable SATA disks, and mix them into a single RAID array.
Of course, all of the above depends on how far you want to push it, as I've
already said. ;-) Your mileage and bank account balance may vary... ;-)
I contacted Adaptec's support and they e-mailed me the following info:
---------------------------------------------------------------------
Yes, the controller supports 100Mhz clock rates. No you do not need
to have the controller replaced, the slot negotiates the transfer
rate. If both 64bit PCI slots are populated, it's highly likely only
one will support the 100Mhz transfer rate. If not, try the other slot
or check for a BIOS update from Dell.
Thank you for using Adaptec's ASK US Support Service.
---------------------------------------------------------------------
So, I moved this card to another slot and the system still shows 66 MHz
using "lspci -v". Does the lspci command accurately display bus
speeds? Is there any other tool I can use to see what speed this card
is running at?
Again, this card is all by itself on the PCI-X bus. The ONLY other
card I have installed is a PCIe video card.
Thanks again,
Rob Ramsey
> Hello Aragorn,
>
> I contacted Adaptec's support and they e-mailed me the following info:
>
> ---------------------------------------------------------------------
> Yes, the controller supports 100Mhz clock rates. No you do not need
> to have the controller replaced, the slot negotiates the transfer
> rate. If both 64bit PCI slots are populated, it's highly likely only
> one will support the 100Mhz transfer rate. If not, try the other slot
> or check for a BIOS update from Dell.
>
> Thank you for using Adaptec's ASK US Support Service.
> ---------------------------------------------------------------------
>
> So, I moved this card to another slot and the system still shows 66 MHz
> using "lspci -v". Does the lspci command accurately display bus
> speeds? Is there any other tool I can use to see what speed this card
> is running at?
Possibly /lshw/ can show you more or other details, but I've never used that
myself. You could also get more output from /lspci/, if you want. See
the /man/ page. ;-)
For instance,...
lspci -vvv
... gives you a whole lot more verbosity. ;-)
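With the bus address from your earlier listing, you can narrow the output
down to just the controller:

lspci -vvv -s 07:05.0

That should expand the PCI-X capability into Command and Status lines.
One caveat, as far as I know the frequency bits there - like the plain
"66MHz" flag you saw before - describe what the device is *capable* of,
not a direct readout of the clock the slot is actually running at.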
If you really want to make this card run at 100 MHz, then check with Dell
for a BIOS upgrade, as the Adaptec guys have told you. ;-)