Check out www.icp-vortex.com for more info-- and buy them online 24x7 at
www.icp-order.com
We've been running them with Red Hat here for 3 years (and some NT too,
actually... they work great with any major OS). They supported Linux
long before any of the other manufacturers did.
-Leo
_____ . .
' \\ . . |>>
O// . . |
\_\ . . |
| | . . . |
/ | . mailto:prost...@home.net . . . |
/ .| 800-324-6711 / 800-720-7618 fax . . . |
/ . | http://www.geocities.com/scsiperipherals . .. . o
----------------------------------------------------------------------
Authorized - DIRECT VAR/VAD/Distributor for new SCSI/FC-AL peripherals
from: IBM, Seagate, Quantum, NAS / SAN / RAID, QLogic, JNI, ATL, etc.
In article <cKKh6.49832$49.89...@news1.rdc2.pa.home.com>, le...@yahoo.com
says...
>sort of true
>but Adaptec has come out with good Linux drivers for their new
>PCI-Ultra-160M RAID controllers, as has Mylex
>[...]
Really? I just spent an hour at Adaptec's web site and didn't find a
driver for the 3200S for Linux kernel 2.4.2... Are you sure that
they have one besides their old Red Hat 6.X and SuSE 6.X/7.0 drivers?
--nung
> In <sXdi6.73438$Ti5.1...@news1.alsv1.occa.home.com> prost...@home.com (Prostorage) writes:
> Really? I just spent an hour at Adaptec's web site and didn't find a
> driver for the 3200S for Linux kernel 2.4.2... Are you sure that
> they have one besides their old Red Hat 6.X and SuSE 6.X/7.0 drivers?
We do server manufacturing.
We use ICP Vortex SCSI RAID controllers with great success.
ICP fully supports Linux directly as the manufacturer, including
warranty and all drivers, utilities ...
It leaves nothing to be desired ... Great thing.
http://www.icp-vortex.com
Compared to Mylex, Adaptec, et al., they can't match ICP Vortex.
We have sold servers with >4 terabytes ...
Best regards,
Jojo
--
Jürgen Sauer - AutomatiX GmbH, +49-4209-4699, jo...@automatix.de
http://www.automatix.de to Mail me: remove: -not-for-spawm-
*snip*
I've started using the 3Ware IDE RAID controller and been
very impressed with it so far. Using a pair of IBM 75GXP drives
in mirroring mode, disk reads are very fast. The mirroring
works as advertised. I can "fail" a drive by unplugging it
and it keeps running. Rebuilding onto the failed drive
to return to full mirroring functionality can be done while
the system is running. Sweet.
Only weird thing I note is that if I put my Tekram D390U3W card
in with a Seagate X15 drive it seems I cannot access it. I think
the 3Ware and the Tekram card conflict.
--
"Who needs horror movies when we have Microsoft"?
-- Christine Comaford, PC Week, 27/9/95
I can only say as a consultant who has tried a lot of controllers in
multiple OS's (Linux, and Novell/NT) ICP has been the best. I'm certainly
not slamming others, I've successfully deployed many other array controllers
including Adaptec, Compaq SMART Array Controllers, HP NetRAID, etc. But,
ICP has been the best overall. Best performing, best support, and best ease
of setup our consulting team has experienced.
Since we started www.icp-order.com, I've asked customers where they heard of
ICP and why they made an ICP decision. One of the latest responses I
received was "We saw ICP at Linuxworld and were impressed. After having
numerous issues with Adaptec (and he mentioned driver related issues) we
decided to try ICP."
Actually, ICP is the ONLY Tier 1 RAID controller approved by Redhat. Check
out http://www.redhat.com/support/hardware/intel/61/rh6.1-hcl-i.ld-5.html
and scroll to section 5.6.
Sincerely,
Leo J. Squire
www.icp-order.com
High Availability, Now Highly Available
"Vincent Fox" <vin...@cad.gatech.edu> wrote in message
news:97e0ko$s6b$1...@news-int.gatech.edu...
On 25 Feb 2001 21:18:29 GMT, Juergen Sauer <jo...@automatix.de> wrote:
We are struggling to get a 3ware 6400 to work properly. Three IBM 75GXP
drives in raid5 configuration. The kernel keeps spitting out errors about
DMA buffers being low (with 512mb ram?!) and then locks up with scsi
timeouts. When it doesn't lock up, we get read speeds of 4mbyte/sec and
write speeds of 5mbyte/sec.
The drives work fine (38mbyte/sec) on the motherboard builtin IDE.
o) Tried two different 3ware 6400's (original we bought, and a replacement
that was sent)
o) Tried four different motherboards with intel and via chipsets,
athlon and pentium CPUs.
o) Tried different PCI slots.
o) Tried different video cards.
o) Pulled the ethernet NIC to rule it out.
o) Disabled everything in the motherboard BIOS (isapnp, acpi, serial, parallel,
usb, etc) as well as in the kernel, to rule those out.
o) Tried kernel 2.2 and kernel 2.4 both with stock kernel 3ware drivers
and with 3ware-supplied binary drivers, redhat 6.2 and 7.0
o) Used the 3ware bios that came with the card as well as the latest bios
from 3ware support.
o) Tried with different drives, different power supplies.
o) Tried with 128m instead of 512m, and tried with different memory, to rule
out dimm and memory problems.
o) Disabled swap, to rule out VM problems.
In the end, the thing still won't work properly. We are pretty close to
giving up and sending the thing back.
Anyone know of a hardware IDE raid solution that *works*?
-Dan
(To reply in email replace blort dot invalid with anime dot net)
Yeah, a Ford Escort and a Mercedes both have 4 wheels and a steering wheel.
Both will get you from point A to point B. Are they the same class of product?
Hell no.
"Hubba Bubba" <hu...@bubba.com> wrote in message
news:0gtq9t0tjiej7vuae...@4ax.com...
> Anyone know of a hardware IDE raid solution that *works*?
Look at the products from Zero-D and Syneraid -- external IDE RAID
enclosures that talk SCSI to the host.
--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
*snip*
>We are struggling to get a 3ware 6400 to work properly. Three IBM 75GXP
>drives in raid5 configuration. The kernel keeps spitting out errors about
>DMA buffers being low (with 512mb ram?!) and then locks up with scsi
>timeouts. When it doesn't lock up, we get read speeds of 4mbyte/sec and
>write speeds of 5mbyte/sec.
*snip*
Ummm, dunno about your problems, I am only running a 2-port.
However, in reading the reviews on www.storagereview.com they
were of the opinion the RAID 5 stuff was bolted on at the last
minute to be competitive, and didn't seem so great. Your experiences
would seem to line up with this.
My experience with the plain old mirroring stuff is it works great.
A set of 4 drives in a RAID 10 setup would be the ideal IMHO.
On Thu, 01 Mar 2001 13:20:08 GMT, "Leo" <le...@yahoo.com> wrote:
>GDT6513RS
On 1 Mar 2001 14:03:14 GMT, Joshua Baker-LePain
So? *ALL* raid controllers do it via software. If the CPU isn't burdened w/
handling the raid software then it can be considered a hardware solution.
First, please stop top quoting. It makes it very hard to follow the flow of
the conversation. Second, could you give examples of hardware raid
(IDE or SCSI) solutions which do have hardware XOR, and exactly what you
mean by that?
> Given that these little lunchboxes only do Raid 0 and 1, you
> get what you pay for.
Sorry, but you are completely wrong here. Our group has a Syneraid-800
(aka Brownie Raid from Axus Microsystems). The controller in this unit
(which is the same controller used in Zero-D's G-Force series)
is based on the Intel i960RN (64-bit) processor, which is also used in
SCSI RAID products, e.g. the Adaptec 3200S and 3400S and the Mylex
AcceleRAID 352. The system supports RAID levels 0, 1, 3, 5, and 0+1, as
well as hot swap, hot spare, automatic drive rebuilds, and 2 redundant,
hot-swappable power supplies. The channel to the host is U2W SCSI.
Our unit is configured with 8 80GB 5400RPM Maxtor drives and 128MB of cache
RAM. It is set up as a RAID 5 with no hot spare (560GB usable space) and
a stripe size of 128 blocks (per disk). We consistently get 25-30MB/s
reading *and* writing to the system. It cost ~$US6000. That's what I
call price/performance.
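As a quick sanity check on those figures (the drive count, capacity, and price are taken from the post above; the cost-per-GB line is my own back-of-envelope addition):

```python
drives = 8
drive_gb = 80
raw_gb = drives * drive_gb           # 640 GB raw, as quoted
usable_gb = (drives - 1) * drive_gb  # RAID 5 gives up one drive's worth to parity
cost_usd = 6000                      # approximate system cost as quoted
print(raw_gb, usable_gb, round(cost_usd / usable_gb, 2))
```

That works out to roughly $11 per usable gigabyte, which is the price/performance claim being made.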
> On Mon, 09 Apr 2001 22:47:18 GMT,
> The.Central.Scru...@invalid.pobox.com () wrote:
>>On Mon, 09 Apr 2001 21:30:48 GMT, Hubba Bubba <hu...@bubba.com> wrote:
>>> Both of the below referenced external raid boxes have
>>>*software* raid logic built in, not hardware (no XOR onboard).
>>
>>
>>So? *ALL* raid controllers do it via software. If the CPU isn't burdened w/
>>handling the raid software then it can be considered a hardware solution.
>>
>>>
>>>
>>>
>>>On 1 Mar 2001 14:03:14 GMT, Joshua Baker-LePain
>>><jl...@duke.spam.begone.edu> wrote:
>>>
>>>>goe...@blort.invalid wrote:
>>>>
>>>>> Anyone know of a hardware IDE raid solution that *works*?
>>>>
>>>>Look at the products from Zero-D and Syneraid -- external IDE RAID
>>>>enclosures that talk SCSI to the host.
>>>
> First, please stop top quoting. It makes it very hard to follow the flow of
> the conversation. Second, could you give examples of hardware raid
> (IDE or SCSI) solutions which do have hardware XOR, and exactly what you
> mean by that?
I understand that Infortrend does XOR in their ASIC. Don't know how it
would help, unless major parts of the data flow also are through
hardware, which is of course possible. The issue is then likely not so
much of whether the XOR is hardware, but how DMA based the architecture
is. For RAID5, a single block write needs to be put in nvram twice. The
original data is read and XORed over a copy, and the original parity
block is also read and XORed over that copy. Both buffers are then
scheduled for a write.
If you want to make use of DMA, it is very helpful to do the XOR in
hardware. But it is more of an extension of the DMA engine than of a
thing in itself, I feel.
Thomas
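The read-modify-write sequence Thomas describes can be sketched in Python (names are illustrative; a real controller would do these XORs over DMA buffers in NVRAM as the data streams through):

```python
def raid5_block_update(new_data, old_data, old_parity):
    """Compute the new parity for a single-block RAID 5 write.

    Per the description above: the original data block and the original
    parity block are each read and XORed into a copy along with the new
    data. Both the new data and the new parity are then scheduled for
    writing back to their disks.
    """
    new_parity = bytes(nd ^ od ^ op
                       for nd, od, op in zip(new_data, old_data, old_parity))
    return new_data, new_parity  # both buffers go out to disk
```

Note that only the old data and old parity need to be read, not the whole stripe, so a small write costs two reads and two writes regardless of array width; hardware XOR just lets those XORs ride along with the DMA transfers instead of burdening a CPU.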
Hubba
On 12 Apr 2001 17:02:21 GMT, Joshua Baker-LePain
Still top quoting, huh? *sigh*
> You can get U3-160 solutions for the *same* price: well, ok,
> about $500 more. So, how exactly is that price/performance?
Excuse me? Did you see the specs? 640GB of raw disk space. The
cheapest price for a 73GB, U160 drive on pricewatch is $900. Eight of
those run you $7200, already $1200 more than my entire system, and get
you 56GB less raw space. And you still have to buy the chassis (which,
for my system, cost $3300). So you're looking at *at least* $4500 more.
I'd really like to know where you got your numbers.
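Spelling out the arithmetic behind that comparison (all prices as quoted in the thread, circa 2001):

```python
u160_drive = 900              # cheapest 73GB U160 drive on pricewatch
u160_drives = 8 * u160_drive  # $7200 for eight drives
u160_chassis = 3300           # chassis cost quoted for a comparable system
u160_total = u160_drives + u160_chassis

ide_system = 6000             # the Syneraid system described earlier
raw_gap_gb = 8 * 80 - 8 * 73  # SCSI route also has less raw space
print(u160_total - ide_system, raw_gap_gb)
```

So the U160 route comes out at least $4500 dearer while holding 56GB less raw capacity.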
I'll leave some of the quoted material below, for the specs.
> On 12 Apr 2001 17:02:21 GMT, Joshua Baker-LePain
> <jl...@duke.spam.begone.edu> wrote:
>>Sorry, but you are completely wrong here. Our group has a Syneraid-800
>>(aka Brownie Raid from Axus Microsystems). The controller in this unit
>>(which is the same controller used in Zero-D's G-Force series)
>>is based on the Intel i960RN (64-bit) processor, which is also used in
>>SCSI RAID products, e.g. the Adaptec 3200S and 3400S and the Mylex
>>AcceleRAID 352. The system supports RAID levels 0, 1, 3, 5, and 0+1, as
>>well as hot swap, hot spare, automatic drive rebuilds, and 2 redundant,
>>hot-swappable power supplies. The channel to the host is U2W SCSI.
>>
>>Our unit is configured with 8 80GB 5400RPM Maxtor drives and 128MB of cache
>>RAM. It is set up as a RAID 5 with no hot spare (560GB usable space) and
>>a stripe size of 128 blocks (per disk). We consistently get 25-30MB/s
>>reading *and* writing to the system. It cost ~$US6000. That's what I
>>call price/performance.
--
On 19 Apr 2001 21:26:34 GMT, Joshua Baker-LePain
"Is headed", as in, "Is not there yet". Yep, serial ATA looks quite good.
But some of us need storage now.
> After reviewing said loaf of turd for 30 days, I can tell you
> that it is horrible. Performance sucks, the box is extremely
27-30MB/s on an 80MB/s bus? In RAID5? Sucks? Hmm. OK. And ours
has been rock solid.
> temperamental. No TCP support, no GUI support, and no remote
> communications whatsoever, unless you call a serial cable to a dumb
> terminal remote. Where is the GUI support? Where is the intelligent
> management?
I believe a GUI management app is available for 'Doze platforms, but who
cares? The front-side LCD/serial connection does all I need.
The point you seem to be missing is that this is not an enterprise (or even
high end) solution. Had I wanted SCSI, 24/7/365 uptime, redundant
host and internal channels, remote/GUI management, etc, I could have gone
with, e.g. a RocketRAID from Zzyzx. I quoted those out as well. Those
are absolutely sweet systems (U2W, the RocketSTOR is U160) from a company
I trust. They're also pricey -- $15K for a system with 438GB usable.
They are overkill for my needs.
I believe that this thread started with somebody looking for inexpensive
solutions to storing lots of data. This is one suggestion. Yes, SCSI
is better if you're going to have scores of people pounding on the system
at all times. But it's expensive. These solutions are far cheaper and,
in this case, very easy to integrate into an existing network.
> Oh, wait, I see you are a Dukie. I rest my case. Oh, and sorry
> little internet nazi, I can top quote all I want to.
Ah, and now we resort to name calling and the timeless "I don't care about
standards" motif. I'm done with this thread.
I certainly don't aim to keep up this war (er... thread). But SCSI
doesn't necessarily have to be terribly expensive. When I initially set up
the servers for my company, we were on a shoestring budget, and we bought
some used RAID cards, and found the drives that offered the best
price/performance, and it turned out very nicely. It seems from your earlier
post that you got about 640 GB of space for around $6,000. Let me see what
I can come up with on PriceWatch for a decently-sized RAID array....
Mylex 170 U160 RAID controller, 64 MB cache, ~ $500.
IBM UltraStar 36 gig $249
So, 15 drives and the controller, in a RAID 5 configuration, gives you 500
gigs for $4200. Add another $250 for external chassis and cables, and that
gives you 400 GB for $3750. Still not there.... so...
Mylex 352 $730
18 drives $4482
2 9-bay enclosures $550
Two VHDCI cables $50
---------------------
That totals up to about $5800, right around what you spent, and now you'd
have a dual-channel controller for more bandwidth, and 64 megs of cache on
the board to speed things along. You might need some converters to hook
the SCA drives to the 68-pin cables, but the extra $200 would more than
cover that. Now, I'm not saying that's what you should have done, and I'm
not saying that's the ideal setup. I'm just saying that SCSI doesn't
necessarily have to be extremely expensive in all situations.
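Totting up the parts list above (prices as quoted; the usable-space figure assumes a single RAID 5 set across all 18 drives, which is just one possible configuration):

```python
controller = 730           # Mylex 352
drives = 18 * 249          # 18 IBM UltraStar 36GB drives
enclosures = 550           # two 9-bay enclosures
cables = 50                # two VHDCI cables
total = controller + drives + enclosures + cables
usable_gb = (18 - 1) * 36  # RAID 5: one drive's worth of parity
print(total, usable_gb)
```

That lands at $5812 for roughly 612GB usable, which is indeed in the same ballpark as the ~$6000 IDE system discussed earlier.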
steve
> I certainly don't aim to keep up this war (er... thread). But SCSI
And I never intended a war. I really appreciate reasoned debate (thank
you!), which is why I am posting in response to you (and won't in response
to hubba anymore).
> Mylex 352 $730
> 18 drives $4482
> 2 9-bay enclosures $550
> Two VHDCI cables $50
> ---------------------
> That totals up to about $5800, right around what you spent, and now you'd
> have a dual-channel controller for more bandwidth, and 64 megs of cache on
> the board to speed things along. You might need some converters to hook
> the SCA drives to the 68-pin cables, but the extra $200 would more than
> cover that. Now, I'm not saying that's what you should have done, and I'm
> not saying that's the ideal setup. I'm just saying that SCSI doesn't
> necessarily have to be extremely expensive in all situations.
Very nice configuration, and I absolutely agree with your conclusions.
I looked at such configurations, and ran into one problem. We're a research
group chock full of workstations. As such, we really don't have any server
hardware, and thus no boxen with 64bit PCI slots. Anything above the
cheapest entry-level SCSI RAID cards requires 64bit PCI slots, and would
have required us to buy a server-class system. Now, an entry level server
isn't expensive, but it *does* add to the price. A self-contained RAID
system with a SCSI channel to the host allows me to just hook it up to
any box with a SCSI card. One of our old dual PII-450, 512MB RAM
workstations lacks 64bit PCI, but works just fine NFS serving our RAID
array.
Thanks again.
Au contraire, mon frère. : )
64-bit PCI cards will work in 32-bit slots. I've used both 64-bit RAID
and gigabit ethernet controllers in 32-bit slots when I needed to, they work
great. I'm not 100% positive, but I'm fairly sure that it's part of the
spec that 64-bit cards must also work in 32-bit mode.
steve
On Thu, 26 Apr 2001 12:02:55 -0600, "Steve Wolfe" <s...@codon.com>
wrote:
> Au contraire, mon frère. : )
> 64-bit PCI cards will work in 32-bit slots. I've used both 64-bit RAID
> and gigabit ethernet controllers in 32-bit slots when I needed to, they work
> great. I'm not 100% positive, but I'm fairly sure that it's part of the
> spec that 64-bit cards must also work in 32-bit mode.
Ah, ha. Excellent. I knew that some would, but I didn't know that it
(probably) was part of the 64-bit spec. Thanks!