-John
--
~~~~~~~~~~~~~
email: john AT o s s c DOT net
What is iSCSI client or server software?
Access to devices (primarily disks, though potentially other device types)
over a TCP connection.
The server would offer storage (a target) and a client would access it
(initiator).
The IP network takes the place of the SCSI bus, in effect.
At least one of the locals has an iSCSI client running on OpenVMS Alpha.
Dunno if he's got a server running. The code is a port of the Intel
iSCSI software, IIRC.
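(To make 'the IP network takes the place of the SCSI bus' concrete: every
iSCSI PDU starts with a fixed 48-byte Basic Header Segment carried over the
TCP connection. A rough sketch in C, simplified from the draft spec -- the
field names and layout here are illustrative, not authoritative:)

    #include <stdint.h>

    /* Simplified sketch of the iSCSI Basic Header Segment (48 bytes)
     * that fronts every PDU on the TCP connection.  See the iSCSI
     * draft spec for the real, opcode-dependent layout. */
    typedef struct iscsi_bhs {
        uint8_t  opcode;            /* e.g. SCSI Command / SCSI Response   */
        uint8_t  flags;             /* F (final) bit plus opcode flags     */
        uint8_t  spec1[2];          /* opcode-specific                     */
        uint8_t  total_ahs_len;     /* extra header segments, 4-byte words */
        uint8_t  data_seg_len[3];   /* 24-bit data length, big-endian      */
        uint8_t  lun[8];            /* logical unit (or opcode-specific)   */
        uint32_t init_task_tag;     /* ties responses back to commands     */
        uint8_t  spec2[28];         /* remainder is opcode-specific        */
    } iscsi_bhs_t;                  /* 48 bytes total                      */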
---------------------------- #include <rtfaq.h> -----------------------------
For additional information, please see the OpenVMS FAQ -- www.hp.com/go/openvms/faq
--------------------------- pure personal opinion ---------------------------
Hoff (Stephen) Hoffman OpenVMS Engineering hoff[at]hp.com
Hi John,
I can see that there is a market for a software based solution but
I would have thought it was small. Surely HP must eventually produce
iSCSI products based on one of the host-based adapters. Several are
mentioned in the FAQ at http://technomagesinc.com/iscsi_faq.html. When
that happens, they will take the serious users of iSCSI and you will
be left with the folk who are using it on an ad-hoc basis.
I think iSCSI is going to be an important technology. It seems to
offer the speed of fibre channel (almost) with the price of network
attached storage (almost). But I do think the HBA approach will
prevail.
- Jim
-----------------------------------------------------------------------------
Jim Brankin
Brankin at nildram dot co dot uk
Strictly Personal Opinion
-----------------------------------------------------------------------------
You really need a Gigabit card that supports TOE (TCP offload engine) for
iSCSI to be performant; otherwise the TCP stack ends up being too much of
a performance drag.
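(For what it's worth, the usual back-of-envelope behind that claim is the
old rule of thumb of roughly 1 Hz of CPU per bit/s of TCP throughput. A
quick sketch -- the numbers are illustrative, not measured:)

    #include <stdio.h>

    int main(void)
    {
        double wire_bits_per_sec = 1.0e9;    /* gigabit ethernet         */
        double hz_per_bit        = 1.0;      /* classic rule of thumb    */
        double cpu_hz            = 667.0e6;  /* e.g. a 667 MHz EV6 Alpha */

        /* At wire speed the stack alone would want ~1 GHz of CPU,
         * i.e. roughly 150% of this machine -- hence the TOE argument. */
        printf("estimated stack load: %.0f%% of one CPU\n",
               100.0 * wire_bits_per_sec * hz_per_bit / cpu_hz);
        return 0;
    }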
regards
Andrew Harrison
Couple of points:

The protocol is still emerging, and purchasing a card soon may yield
headaches. For redundancy you may have to purchase additional cards (akin
to redundant NICs) in case a problem should arise. I also don't believe
that those cards will be cheap; but if you have the dough for a lot of
storage, you'll have it for the adapters. But then, if you have the dough
for your big servers, what about your other systems?

You are also assuming that the iSCSI targets are not going to be on any of
your servers. For those still running VAXes, you are assuming that someone
is making an adapter for them.

You forget that most Windows systems are used on an ad-hoc basis. VMS is
just one version of the product. I am confident that my phone will ring.
> You really need a Gigabit card that supports TOE for iSCSI to
> be performant; otherwise the TCP stack ends up being too much of
> a performance drag.
Is this measured, or theoretical?
I assume that's theoretical. Here are the measurements:

With an AS255 4/300 reading a disk (COPY) from an XP1000 with a 6/667, I
was able to transfer about 1.2 MBytes/second. Using the same 6/667 writing
a disk on an ES40 6/1000 (BACKUP/IMAGE/INIT) I get 2.3 MBytes/second. In
both cases the systems certainly DO NOT run out of CPU time. The EV6's
spend only a few percentage points on the interrupt stack (the ES40 is at
about 2-3%, which REALLY astonished me). The AS255 has 10 Mbit through a
10/100 switch to the XP1000 with 100 Mbit. The XP's 100 Mbit goes through
switches and eventually through fiber to the ES40.
On an AS1200 with 2 x 5/533's, using localhost (client and server on the
same machine), I am transferring 6.2 MBytes/second at about 62% CPU
utilization. The same local copy gives 6.8 MBytes/second. Keep in mind
that with the loopback interface there is more data copying going on, and
the host carries both the client and the server overhead.
These tests allow unsolicited data to be transferred, up to 64K-512 bytes
per PDU, over a single connection. Multiple connections will be added if
enabled.
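(Those limits are the sort of thing the iSCSI login phase negotiates as
plain text key=value pairs. As an illustration only -- I'm guessing at
which keys this particular code uses -- the spec's relevant keys look
like:)

    InitialR2T=No
    ImmediateData=Yes
    MaxRecvDataSegmentLength=65024
    MaxConnections=1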
The above is only caching on the client side (XQP and XFC/VCC). I am
working on adding caching to the server right now, to get a feel for how
write performance would differ.
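(For anyone who wants to reproduce this kind of number, the measurement
itself is nothing fancy: time a bulk sequential read and divide bytes
moved by elapsed seconds. A minimal sketch -- the file name and buffer
size are placeholders:)

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        static char buf[64 * 1024];             /* 64 KB per read         */
        FILE *f = fopen("testfile.dat", "rb");  /* hypothetical test file */
        double bytes = 0.0;
        time_t t0, t1;
        double secs;
        size_t n;

        if (f == NULL) { perror("fopen"); return EXIT_FAILURE; }

        t0 = time(NULL);
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            bytes += (double)n;                 /* count what we moved    */
        t1 = time(NULL);
        fclose(f);

        secs = difftime(t1, t0);
        if (secs < 1.0) secs = 1.0;             /* coarse 1 s resolution  */
        printf("%.2f MBytes/second\n", bytes / 1.0e6 / secs);
        return 0;
    }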
By the way, does MSCP require a gigabit card with TOE to be performant? I
think you'll get a lot of feedback on that one.
Thank you both for your input.
A number of articles cite TOE performance.
http://www.iscsistorage.com/iscsidevices.htm
http://www.trebia.com/pdf/Tech_note_44.pdf
> I assume that's theoretical. Here are the measurements:
>
> With an AS255 4/300 reading a disk (COPY) from an XP1000 with a 6/667, I
> was able to transfer about 1.2 MBytes/second. Using the same 6/667
> writing a disk on an ES40 6/1000 (BACKUP/IMAGE/INIT) I get 2.3
> MBytes/second. In both cases the systems certainly DO NOT run out of
> CPU time. The EV6's spend only a few percentage points on the interrupt
> stack (the ES40 is at about 2-3%, which REALLY astonished me). The AS255
> has 10 Mbit through a 10/100 switch to the XP1000 with 100 Mbit. The
> XP's 100 Mbit goes through switches and eventually through fiber to the
> ES40.
>
1.2-2.3 MBytes/second, however, is very slow compared with what you can
achieve either with directly attached storage or with SAN-based storage.
Based on your results, pushing ~60 MBytes/second over Gigabit Ethernet
using a standard NIC could result in a CPU load of between 50 and 75%.
That would be much less acceptable when the alternative, DAS or a SAN,
would have a much lower overhead.
The server serves up a container file; the other program is a test client
that exercises and verifies the server's operation.
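(To make 'serves a container file' concrete: the target just maps a SCSI
logical block address onto a byte offset in an ordinary file. A minimal
sketch, with made-up names -- this is not code from the actual port:)

    #include <stdio.h>

    #define BLOCK_SIZE 512L    /* SCSI logical block size */

    /* Resolve a SCSI READ of "nblocks" blocks at logical block address
     * "lba" into a seek-and-read on the container file.  Returns the
     * number of whole blocks read, or -1 on a seek error. */
    long container_read(FILE *container, long lba, long nblocks, void *buf)
    {
        if (fseek(container, lba * BLOCK_SIZE, SEEK_SET) != 0)
            return -1;
        return (long)fread(buf, BLOCK_SIZE, (size_t)nblocks, container);
    }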
This is a mostly direct compile-and-go of the Intel iSCSI software from
SourceForge, with some minor changes to fix syntax issues with the
HP/Compaq C compiler. As with the Intel source, the names are still
hardcoded.
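(I'm guessing at the flavor of those changes -- this is illustrative, not
a diff from the actual port -- but gcc-specific extensions are a typical
stumbling block for other compilers, and tend to get hidden behind a
macro:)

    /* Hypothetical example of a portability shim: hide a gcc-only
     * extension from compilers, such as HP/Compaq C, that don't
     * understand __attribute__. */
    #ifdef __GNUC__
    #define PACKED __attribute__((packed))
    #else
    #define PACKED    /* non-gcc: fall back to natural struct layout */
    #endif

    typedef struct PACKED example_hdr {
        unsigned char  opcode;
        unsigned char  flags;
        unsigned short length;
    } example_hdr_t;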
I have not tried anything other than the loopback interface for access.
Building it currently requires OpenVMS 7.3-2 FT or later.
-John
malm...@dskwld.zko.dec.compaq.hp
Personal Opinion Only
And that's all that they do. They cite what the adapters and the storage
(which aren't cheap) will do. One of them says that at 245 I/Os per second
they have offloaded 99% of the load; at this moment I am running a fair
mix on ONE DEVICE of about 180 I/Os per second at 65% (of one CPU), with
the client and server on the same system. A good guess is that splitting
the client and server onto separate systems would cut that roughly in
half, to just about 35%, and this is with old SCSI hardware. The SANs
also implement some more special features which I have not yet gotten to,
such as caching of the data.
>
> > I assume that's theoretical. Here are the measurements:
> >
> > With an AS255 4/300 reading a disk (COPY) from an XP1000 with a 6/667,
> > I was able to transfer about 1.2 MBytes/second. Using the same 6/667
> > writing a disk on an ES40 6/1000 (BACKUP/IMAGE/INIT) I get 2.3
> > MBytes/second. In both cases the systems certainly DO NOT run out of
> > CPU time. The EV6's spend only a few percentage points on the
> > interrupt stack (the ES40 is at about 2-3%, which REALLY astonished
> > me). The AS255 has 10 Mbit through a 10/100 switch to the XP1000 with
> > 100 Mbit. The XP's 100 Mbit goes through switches and eventually
> > through fiber to the ES40.
>
> 1.2-2.3 MBytes/second, however, is very slow compared with what you can
> achieve either with directly attached storage or with SAN-based storage.
That first number is over 10 Mbit Ethernet, so 1.2 MBytes/second (close to
wire speed) really isn't that bad. I have also been improving the
performance, to the point where I can actually get better performance
using the iSCSI driver against the local SCSI disk than I can against the
disk directly.
>
> Based on your results, pushing ~60 MBytes/second over Gigabit Ethernet
> using a standard NIC could result in a CPU load of between 50 and 75%.
> That would be much less acceptable when the alternative, DAS or a SAN,
> would have a much lower overhead.
If you take the current (first trial) code base and extrapolate the
results, then you get those numbers. There are a LOT of sites using MSCP
right now. Are you saying that they are doing it all wrong?

The storage (and adapter) businesses are working on iSCSI at the moment.
My question is: who is willing to invest BIG BUX in an immature
technology? I would rather play with the technology in software and move
to hardware when it has proven itself. But then, that's just me. It also
lets me play with the pieces that I have now ... VMS systems with disks on
them ... client and server on the same hardware BEFORE I SPEND THE BUX ON
THE BIG IRON.
You are right that with TOE you will get the best performance. iSCSI is an
emerging technology and, as I said in my last post, at this moment in time
it's a leap of faith, with headaches in the wings, until they get the
kinks out and prove that you can mix and match hardware.
Thank you for your post, and if there are people interested in testing the
software, please do contact me.
-John