Performance


khul...@gmail.com

Jul 25, 2006, 11:40:11 AM
to Core-iSCSI
I'm using Core-iSCSI on the client host and IET as the target on another
system. Both systems are beefy servers with 15K RPM SCSI drives, connected
over gigabit Ethernet to a Cisco 4500-series switch on a private network. I'm
wondering what kind of performance tweaks are recommended when using
Core-iSCSI and IET. I've seen people talk about getting 30-40 MB/s out of it,
but I'm only seeing around 11 MB/s. I'm exporting a ~7G disk image via
losetup on the target machine, and the client machine is running Xen 3.0,
with the Core-iSCSI initiator built against the Xen source kernel (2.6.16).
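
For reference, the export on the target side is roughly the following (the
loop device, image path, and IQN here are just placeholders for my actual
names):

losetup /dev/loop0 /srv/images/domU1.img

and in /etc/ietd.conf:

Target iqn.2006-07.org.example:xen.disk1
        Lun 0 Path=/dev/loop0,Type=fileio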

Using hdparm to measure the performance, here is what I get:

iSCSI target:
/dev/sda:
Timing cached reads: 3136 MB in 2.00 seconds = 1566.98 MB/sec
Timing buffered disk reads: 196 MB in 3.02 seconds = 64.86 MB/sec
backuppc iscsi-target # hdparm -Tt /dev/sda

/dev/sda:
Timing cached reads: 3136 MB in 2.00 seconds = 1568.58 MB/sec
Timing buffered disk reads: 214 MB in 3.01 seconds = 71.01 MB/sec
backuppc iscsi-target # hdparm -Tt /dev/sda

/dev/sda:
Timing cached reads: 3140 MB in 2.00 seconds = 1569.40 MB/sec
Timing buffered disk reads: 194 MB in 3.01 seconds = 64.43 MB/sec

iSCSI client:

/dev/sdb:
Timing cached reads: 1204 MB in 2.00 seconds = 601.69 MB/sec
Timing buffered disk reads: 34 MB in 3.06 seconds = 11.10 MB/sec
hypervisor1:/usr/local/src/core-iscsi-tools-v3.5/core-iscsi/scripts#
hdparm -Tt /dev/sdb

/dev/sdb:
Timing cached reads: 1208 MB in 2.00 seconds = 604.07 MB/sec
Timing buffered disk reads: 34 MB in 3.06 seconds = 11.10 MB/sec
hypervisor1:/usr/local/src/core-iscsi-tools-v3.5/core-iscsi/scripts#
hdparm -Tt /dev/sdb

/dev/sdb:
Timing cached reads: 1232 MB in 2.00 seconds = 615.67 MB/sec
Timing buffered disk reads: 34 MB in 3.06 seconds = 11.10 MB/sec


My ultimate goal is to be able to reliably mount the remote drives via iSCSI
in dom0 and boot my domU vservers from them, without the domU guests being
aware of iSCSI at all. I'll be running heartbeat and DRBD over the vservers,
and using several storage servers and client servers to build a
high-availability solution.
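
In other words, dom0 logs in to the target and hands the resulting block
device to the guest as a plain disk. The domU config would be something like
this (the device names, guest name, and kernel path are made up for
illustration):

kernel = "/boot/vmlinuz-2.6.16-xenU"
memory = 256
name   = "vserver1"
disk   = [ 'phy:/dev/sdb1,sda1,w' ]    # /dev/sdb1 sits on the iSCSI LUN in dom0
root   = "/dev/sda1 ro"

The guest only ever sees /dev/sda1; all of the iSCSI handling stays in dom0.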

I'm also looking at implementing a true SAN solution down the road
(sometime in the next two years) using SANRAD iSCSI switches, so if anyone
has experience with those, it would be nice to hear about it as well.

Any performance woes/tips/tweaks would be helpful, thanks.

is...@digitaltadpole.com

Jul 25, 2006, 3:36:36 PM
to Core-iSCSI
1. Regarding using Xen with an initiator, you may want to look at this
post:
http://groups.google.com/group/Core-iSCSI/browse_thread/thread/79f59d8b32934402/?hl=en#

At least you'll know of some other people who are in roughly the same place
and have similar interests. I'm not quite 'there' yet myself, so I can't
offer suggestions/advice/woes/etc. with Xen (but it's coming...).

2. I like the direction you are heading, and would suggest that you post
some performance metrics here:
http://linux-iscsi.org/index.php/Main_Page
(similar to what you've done already, only maybe in a table that can be
duplicated by others with their own metrics). Performance is useful enough
to deserve its own category there, alongside News, Interoperability,
Diskless Boot, Next Gen Projects, etc.

3. Regarding your results - this may be too much trouble to set up, but I'd
suggest the following to narrow down what you are seeing:

A. Get rid of Xen temporarily and see if you get similar results from a
straight Linux install. This may just mean rebooting into a kernel that
doesn't include Xen, with the Xen startup scripts commented out. It may be
easier to do this on a different disk if you can "hot plug" what you have.
B. Try the open-iscsi initiator and see if the results are similar
(historically the two have had similar performance, within 5% of each
other). The syntax between initiators is similar/analogous but not the
same; the upside is that neither is especially complex to deal with (it's
just one more thing to have to deal with).
C. Set up md software RAID on the target (maybe RAID 0 if you've got the
disks for it), export the resulting device, and see whether the metrics
roughly double for a two-disk RAID 0 on the target and whether the increase
carries over to the initiator(s). This will depend on your setup, of
course - you may be hitting what a single SCSI channel can deliver. A rough
sketch follows this list.
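
Here's that sketch for C, with the device names and IQN as examples only
(adjust to your disks):

# on the target: build a two-disk RAID 0 set
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# then export it by adding this to /etc/ietd.conf and restarting ietd:
Target iqn.2006-07.org.example:storage.md0
        Lun 0 Path=/dev/md0,Type=fileio

Re-run your hdparm/dd numbers from the initiator and compare against the
single-disk case.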

I'm dealing with similar issues, but have elected to get into Ethernet
bonding and md RAID across multiple storage servers with diskless
initiators, and I'm having reliability issues that I haven't gotten to the
bottom of yet. (I'll try out Xen next, but I'm not quite ready - I'm
working through a procedure similar to the above to figure out my own
problems.)
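
For reference, the bonding side of my setup is roughly the following (the
interface names, address, and mode are examples, not a recommendation):

# /etc/modprobe.conf
alias bond0 bonding
options bonding mode=balance-rr miimon=100

# bring the bond up and enslave two NICs
ifconfig bond0 192.168.10.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1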

4. I don't have direct experience with SANRAD storage switches, but I'm
becoming more and more convinced that the major benefit of going with iSCSI
is the promise of running commodity hardware that is generally available
and known to work. You have to start asking yourself "why not Fibre
Channel?" if you end up with specialized hardware at the initiator and in
the switches. I would be inclined to use a specialized solution if someone
could provide a TOTAL iSCSI solution that included the target, the
initiators (h/w or s/w), the switches, etc., had already qualified/tested
it, and provided support (as long as it was someone else's money and my job
on the line). You are definitely on the "bleeding edge" of technology when
you are trying to set up a Linux iSCSI storage solution - the promise of
increased performance, simplified management, increased reliability, and
reduced cost is very appealing. But I don't think very many people are
playing in this space yet, so not all the bugs have been worked out and not
all scenarios have been explored. For instance, there are only around 90
people subscribed to this mailing list, and not that many more on the IET
and open-iscsi lists (plus the same people are cross-subscribed and read
each other's posts on any of the lists).

I find your ultimate goal of using heartbeat and DRBD interesting, and it's
one of the directions I'll need to explore next as well. It would be really
interesting to get Xen dom0 working with heartbeat/DRBD to set up an LVS
cluster for web pages. I'd also be interested in any ideas you have on how
to manage all this stuff (monitoring/measuring/changing). (For instance,
something I haven't tried but have thought about is setting up specialized
NIS maps as a 'low-rent' substitute for iSNS.)

Good luck!! I'll get back to you once I get to where you are (and what I
find out may be useful in your next iteration, since I'm covering some
ground you'll want to head into next). When you're on the "bleeding edge",
getting help for your 'wounds' can be like walking in mud - very slow going.

Mike Mazarick

Ming Zhang

Jul 25, 2006, 3:40:18 PM
to Core-...@googlegroups.com
A blind guess: that 11 MB/s figure looks suspicious. Any chance you're
running the network at 100 Mb/s?

Ming

is...@digitaltadpole.com

Jul 25, 2006, 8:57:40 PM
to Core-iSCSI
I don't think it's as big a problem with gigabit networks, but 100 Mb/s was
plagued with autonegotiation problems, where everything looked good and
reported one setting but was in reality set differently. The only way to
fix a 100 Mb/s autonegotiation problem was to remove power and re-initialize
the interfaces (including the switch). With his readings this far off, it
looks a lot like the old autonegotiation problems we've seen in the past
(half vs. full duplex, mismatched speeds, etc.).
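
A quick way to check what each end actually negotiated (the interface name
is an example):

ethtool eth0       # look at the reported Speed: and Duplex: lines
mii-tool -v eth0   # older alternative for 10/100 NICs
ethtool -r eth0    # restart autonegotiation if the settings look wrong

It's worth checking the switch port as well, since a mismatch can sit on
either side of the link.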

Ming Zhang

Jul 25, 2006, 9:13:37 PM
to Core-...@googlegroups.com
I just feel the 11 MB/s number looks strange.

A quick test is to export a NULLIO-mode target via IET and run some speed
tests with dd. If you still get 11 MB/s, I'm right; if not, something else
is going on.
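
Something along these lines, with the IQN, sector count, and device name as
examples (NULLIO discards writes and fabricates reads, so it measures the
network and iSCSI stack rather than the disks):

In /etc/ietd.conf on the target:

Target iqn.2006-07.org.example:nullio.test
        Lun 0 Sectors=20000000,Type=nullio

On the initiator, once the new LUN appears (say as /dev/sdc):

dd if=/dev/sdc of=/dev/null bs=1M count=1000
dd if=/dev/zero of=/dev/sdc bs=1M count=1000

Double-check the device name before the write test; the NULLIO LUN itself
holds no data, but writing to the wrong disk would destroy some.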

Ming
