My write speed
dd if=/dev/zero of=zero bs=4096 count=1572864
1572864+0 records in
1572864+0 records out
6442450944 bytes (6.4 GB) copied, 150.675 seconds, 42.8 MB/s
My read speed
dd if=zero of=/dev/null bs=4096
1572864+0 records in
1572864+0 records out
6442450944 bytes (6.4 GB) copied, 200.923 seconds, 32.1 MB/s
I'm using the default config for iscsid and am at my wit's end. Can
anyone on this list point me in the right direction?
Thanks,
Josh.
OK, I was just told that dd does not have that problem any more (it
used to on old systems), and it does wait for a sync. So the only thing
would be if you have lots of caching on the AX150.
Otherwise it is weird; I usually get about the same scores.
The default is 256 sectors.
Try:
#blockdev --setra /dev/<device> 4096
And re-run your test.
Also make sure that flowcontrol is enabled.
#ethtool -a ethX
You can then try larger values and see if that helps your application. If
your requirement is highly random then a high value will likely hurt
performance.
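As a rough sketch of those two knobs (the device name is an assumption; substitute your own iSCSI disk):

```shell
#!/bin/sh
# Hypothetical device name; substitute your iSCSI block device.
DEV=/dev/sdb

# Read-ahead is counted in 512-byte sectors, so 4096 sectors = 2 MiB.
RA_SECTORS=4096
RA_KIB=$(( RA_SECTORS * 512 / 1024 ))
echo "requested read-ahead: ${RA_SECTORS} sectors (${RA_KIB} KiB)"

# Apply only if the device actually exists on this box.
if [ -b "$DEV" ]; then
    blockdev --getra "$DEV"                 # current value (default 256)
    blockdev --setra "$DEV" "$RA_SECTORS"   # raise it for sequential reads
fi
```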
Don
-----Original Message-----
From: open-...@googlegroups.com [mailto:open-...@googlegroups.com] On
I think there is a bug in 2.6.18-8.1.10.el5 + any version of iscsi,
where in some setups the read speed is much slower than writes. So with
bs 4096, we may get some slower IO numbers, but if you do
echo noop > /sys/block/sdXYZ/queue/scheduler
dd if=/dev/zero of=zero bs=128k count=1572864
and
dd if=zero of=/dev/null bs=128k count=1572864
what numbers do you get?
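A minimal sketch of that scheduler switch (the device name sdb is an assumption; the noop scheduler must be available in your kernel):

```shell
#!/bin/sh
# Hypothetical device; substitute the sdXYZ of your iSCSI disk.
DEV=sdb
SCHED=noop
SYSFS="/sys/block/${DEV}/queue/scheduler"

# The sysfs file lists all schedulers with the active one bracketed,
# e.g. "noop anticipatory deadline [cfq]" on RHEL5-era kernels.
if [ -w "$SYSFS" ]; then
    cat "$SYSFS"              # before
    echo "$SCHED" > "$SYSFS"
    cat "$SYSFS"              # after: noop should now be bracketed
else
    echo "device ${DEV} not present; skipping"
fi
```

The idea behind noop is that the initiator passes requests straight through and lets the target's own cache and elevator do the reordering.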
With IET I get:
I am getting 112 - 119 MB/s for writes (119 with jumbo frames on both
the initiator box and IET box and both boxes are running e1000).
And only 35 - 71 MB/s for reads.
With different bs values the write speed is always the same, but the
read speed is always slow: sometimes 26 MB/s with smaller bs, and up to
90 MB/s with 256k, but never as fast as writes.
Then with another box (same specs: quad Xeon, 4 GB of memory, and
e1000s) and the same version of RHEL5, I get 112 - 119 MB/s for reads
and writes with any bs size.
I am not sure what the problem is yet. I just hit it while trying to
debug the RHEL4/CentOS4/linux-iscsi issue you reported and while
debugging a write perf issue someone reported to this list. The weird
thing is that with RHEL4/CentOS4 and linux-iscsi I get 112 MB/s on both
boxes.
-----Original Message-----
From: open-...@googlegroups.com [mailto:open-...@googlegroups.com]
On Behalf Of Don Williams
Sent: 20 September 2007 18:48
To: open-...@googlegroups.com
Subject: RE: Slow iSCSI
For sequential read performance you can increase the read ahead value.
The default is 256 sectors.
Try:
#blockdev --setra /dev/<device> 4096
And re-run your test.
Also make sure that flowcontrol is enabled.
#ethtool -a ethX
You can then try larger values and see if that helps your application.
If your requirement is highly random then a high value will likely hurt
performance.
Don
-----Original Message-----
From: open-...@googlegroups.com [mailto:open-...@googlegroups.com]
On Behalf Of jo...@moonfruit.com
Sent: Thursday, September 20, 2007 7:21 AM
To: open-iscsi
Subject: Slow iSCSI
This is weird. I have a Dell (EMC) AX150i running CentOS 5
2.6.18-8.1.10.el5
iscsi-initiator-utils.i386-6.2.0.742-0.6.el5
My write speed
dd if=/dev/zero of=zero bs=4096 count=1572864
1572864+0 records in
1572864+0 records out
6442450944 bytes (6.4 GB) copied, 150.675 seconds, 42.8 MB/s
My read speed
dd if=zero of=/dev/null bs=4096
1572864+0 records in
1572864+0 records out
6442450944 bytes (6.4 GB) copied, 200.923 seconds, 32.1 MB/s
I'm using the default config for iscsid and am at my wit's end. Can
anyone on this list point me in the right direction?
Write 24.5MB/s
Read 19MB/s =(
Do you have any more info on the 2.6.18 bug?
-----Original Message-----
From: open-...@googlegroups.com [mailto:open-...@googlegroups.com]
On Behalf Of Mike Christie
Sent: 20 September 2007 20:35
To: open-...@googlegroups.com
Subject: Re: Slow iSCSI
jo...@moonfruit.com wrote:
> This is weird, I have Dell (EMC) Ax150i running centos5
> 2.6.18-8.1.10.el5
> iscsi-initiator-utils.i386-6.2.0.742-0.6.el5
>
> My write speed
> dd if=/dev/zero of=zero bs=4096 count=1572864
> 1572864+0 records in
> 1572864+0 records out
> 6442450944 bytes (6.4 GB) copied, 150.675 seconds, 42.8 MB/s
>
> My read speed
> dd if=zero of=/dev/null bs=4096
> 1572864+0 records in
> 1572864+0 records out
> 6442450944 bytes (6.4 GB) copied, 200.923 seconds, 32.1 MB/s
>
I think there is a bug in 2.6.18-8.1.10.el5 + any version of iscsi,
No, I am right in the middle of trying to hunt it down now.
But it looks like your problem is not the same since you have slow
speeds for both reads and writes.
Here are my benchmarks:
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine       Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
svn             1G 14604  65 18248  40  6406  11 19791  55 24722  15 283.8   3
                   ------Sequential Create------ --------Random Create--------
                   -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
             files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                16  8128  88 +++++ +++  8946  80  7959  89 +++++ +++ 12198  86
Miguel
Pretty sure that was me.
I've had to let that project drop, so I haven't hit the box with any of
the more recent versions, but my issue was a hardware/software problem
in the AX150 itself, and I didn't solve it; my VAR did. It turned out
that the box was in some sort of limp mode because it was incorrectly
seeing some hardware as bad or non-existent. To fix it, the VAR accessed
a Windows XP GUI on the thing (I knew it was XP-based, but I was
horrified to see the GUI ;'), messed around with some settings, rebooted
it once or twice, and my speed was up to acceptable if not fast.
I've got it off in the corner of the lab now but should be dusting it
off and getting back to it soon if you want me to try and get duplicate
results.
I can't remember if I mentioned it in my other mails to the list but I
was running Debian unstable with various custom kernels and whatever the
latest version of open-iscsi was at the time.
Dan
This is somewhat off-topic, but unless thinking in the storage world
(which I'm relatively new to) is substantially different, Ethernet flow
control is almost always a bad idea. Search for head-of-line blocking to
read about the common breakage that flow control causes.
Rather than using flow-control, buy non-blocking switches that use
shared-memory for input frames. This will make flow-control a
non-issue, since the situations that would lead to pause frames can
never come about if the switch isn't oversubscribed.
If the switch is oversubscribed, then use QoS to classify packets into
queues so latency sensitive traffic can be switched across congested
links first.
--
Ross Vandegrift
ro...@kallisti.us
"The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell."
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37
And while it's holding those frames at the head of the queue, remember
that the paused ports can't do anything else.
> Storage Area Networks are what I do. I can state unequivocally that
> Flowcontrol is essential for well performing and stable iSCSI SANs.
> QoS won't help when a server becomes the bottleneck.
Let me give an example of why this network guy thinks iSCSI should go
without flow control as well. Please let me know where I've incorrectly
assumed something, or made an error!
Consider a network with two iSCSI targets, and a host that will access
both via a single interface.
The host initiates a connection with each target for two volumes and
begins reading from both. The read pattern to the first is bursty,
lots of small data files. The pattern to the second is streaming
large volumes of data. Say that 80% of the packets are from this bulk
data source. When the link to the host reaches saturation, two conditions
could obtain.
If flow control is enabled:
1) The host sends a PAUSE frame to the connected station.
2) The switch blocks data transmission for both disks.
3) Depending on the implementation and config, the switch may
propagate PAUSE frames back to the MACs that are sending to the
congested port.
4) Your SAN is now blocked and not transmitting to anyone
because one host had a congested link.
If flow control is disabled:
1) The saturated host link begins to drop packets.
2) TCP on both targets notices this and begins to back off
until packet loss subsides.
3) Through this, performance on other devices using the SAN is
unaffected.
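On Linux, the pause-frame behaviour discussed here can be inspected per NIC with ethtool. A sketch (the interface name is an assumption, and the -A line needs root, so it is left commented):

```shell
#!/bin/sh
IFACE=eth0   # hypothetical interface carrying the iSCSI traffic

# Show current pause-frame settings (autoneg/RX/TX).
if command -v ethtool >/dev/null 2>&1; then
    ethtool -a "$IFACE" || true
fi

# To disable flow control in both directions (requires root):
#   ethtool -A "$IFACE" rx off tx off
echo "checked flow control on ${IFACE}"
```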
QoS can be used to make the second case even better. Use diffserv or
IP precedence to indicate that traffic to one target is higher-priority.
Most decent switches can convert those to 802.1p tags, putting the
higher-priority traffic into a separate queue that gets first service.
Now, the bulk queue is only serviced after the higher priority queue
is empty. This ensures that all dropped frames are from your bulk
disk. TCP on that target now sees the packet loss and backs off. The
more critical disk didn't even notice the slowdown.
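As a sketch of that classification step on a Linux iSCSI target (port 3260 is the standard iSCSI port; the CS4 class is an arbitrary assumption to be aligned with whatever your switches map to an 802.1p queue):

```shell
#!/bin/sh
ISCSI_PORT=3260   # standard iSCSI target port
DSCP_CLASS=cs4    # assumed high-priority class; match your switch config

# Mark outgoing iSCSI data so switches can place it in a preferred queue.
RULE="-t mangle -A OUTPUT -p tcp --sport ${ISCSI_PORT} -j DSCP --set-dscp-class ${DSCP_CLASS}"
if [ "$(id -u)" -eq 0 ] && command -v iptables >/dev/null 2>&1; then
    iptables $RULE || true
else
    echo "would run: iptables ${RULE}"
fi
```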
Jeez, xp =(
-----Original Message-----
From: open-...@googlegroups.com [mailto:open-...@googlegroups.com]
On Behalf Of Dan Reagan
Sent: 21 September 2007 20:01
To: open-...@googlegroups.com
Subject: Re: Slow iSCSI
I used dd to copy over 10G worth of zeros and timed it.
I think the bonnie output shows that it's your CPU holding things back,
not the disks. I have no idea why it says +++++ on the reads bit.
This is worth a read:
http://www.textuality.com/bonnie/advice.html
-----Original Message-----
From: open-...@googlegroups.com [mailto:open-...@googlegroups.com] On Behalf Of Miguel Gonzalez Castaños
Sent: 21 September 2007 19:17
To: open-...@googlegroups.com
Subject: Re: Slow iSCSI
Thanks,
Miguel
Hiren Joshi wrote:
> The debate over flow control aside, I turned flow control off on the
> eth and:
> Write
> dd if=/dev/zero of=zero bs=4096 count=1572864
> 1572864+0 records in
> 1572864+0 records out
> 6442450944 bytes (6.4 GB) copied, 154.027 seconds, 41.8 MB/s
> Read
> dd if=zero of=/dev/null bs=4096
> 1572864+0 records in
> 1572864+0 records out
> 6442450944 bytes (6.4 GB) copied, 171.52 seconds, 37.6 MB/s
>
> Better......
>
>
> ------------------------------------------------------------------------
> *From:* open-...@googlegroups.com
> [mailto:open-...@googlegroups.com] *On Behalf Of *Don Williams
> *Sent:* 22 September 2007 20:50
> *To:* open-...@googlegroups.com
> *Subject:* Re: Slow iSCSI
The bs equals the block size of my filesystem; the count gives me a 6G file (twice the size of my RAM).
Read test:
dd if=/mnt/virtualdata1/zero of=/dev/null bs=4096
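The arithmetic behind those dd parameters, spelled out as a sketch:

```shell
#!/bin/sh
BS=4096          # one filesystem block per transfer
COUNT=1572864    # number of blocks
BYTES=$(( BS * COUNT ))
GIB=$(( BYTES / 1024 / 1024 / 1024 ))
echo "${BYTES} bytes = ${GIB} GiB"   # matches dd's "6442450944 bytes"
```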
-----Original Message-----
From: open-...@googlegroups.com [mailto:open-...@googlegroups.com] On Behalf Of Miguel Gonzalez Castaños
Sent: 24 September 2007 16:36
To: open-...@googlegroups.com
Subject: Re: Slow iSCSI
I tried it on my virtual machine (it has only 256 MB of memory right
now, but that can be increased at will):
svn:~# dd if=/dev/zero of=/mnt/virtualdata1/zero bs=4096 count=1572864
1572864+0 records in
1572864+0 records out
6442450944 bytes (6.4 GB) copied, 486.128 seconds, 13.3 MB/s
svn:~#
svn:~# dd if=/mnt/virtualdata1/zero of=/dev/null bs=4096
1572864+0 records in
1572864+0 records out
6442450944 bytes (6.4 GB) copied, 505.934 seconds, 12.7 MB/s
Should I start increasing the memory assigned to the VM to see how it
affects performance, or should I look somewhere else? By the way, is
there any doc on tuning performance?
Miguel
Hiren Joshi wrote:
I only suggested using a file size that's at least twice your memory so you *know* the file is coming from the disk, not the RAM. So I don't think giving your VM more RAM will help.
Bonnie++ is a good benchmarking tool,
http://sourceforge.net/projects/bonnie/
This will (at the very least) show you if it's the disk or the processor that's slowing you down.
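A sketch of a bonnie++ run sized the same way as the dd test (the mount point, user, and 2 GB RAM figure are assumptions):

```shell
#!/bin/sh
RAM_MB=2048                 # assumed RAM of the box under test
SIZE_MB=$(( RAM_MB * 2 ))   # file size: twice RAM so reads come from disk
CMD="bonnie++ -d /mnt/virtualdata1 -s ${SIZE_MB} -u nobody"
if command -v bonnie++ >/dev/null 2>&1; then
    $CMD || true            # -u needs root; ignore failure in a sketch
else
    echo "would run: ${CMD}"
fi
```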