Infortrend + "iSCSI: detected conn error (1011)" + "TCP Dup ACK"

Santi Saez

Nov 2, 2009, 1:16:38 PM
to open-...@googlegroups.com

Hi,

We randomly get Open-iSCSI "conn errors" when connecting to an
Infortrend A16E-G2130-4 storage array. We discussed this earlier on
the list, see:

http://tr.im/DVQm
http://tr.im/DVQp

Open-iSCSI logs this:

===============================================
Nov 2 18:34:02 vz-17 kernel: ping timeout of 5 secs expired, last rx
408250499, last ping 408249467, now 408254467
Nov 2 18:34:02 vz-17 kernel: connection1:0: iscsi: detected conn
error (1011)
Nov 2 18:34:03 vz-17 iscsid: Kernel reported iSCSI connection 1:0
error (1011) state (3)
Nov 2 18:34:07 vz-17 iscsid: connection1:0 is operational after
recovery (1 attempts)
Nov 2 18:34:52 vz-17 kernel: ping timeout of 5 secs expired, last rx
408294833, last ping 408299833, now 408304833
Nov 2 18:34:52 vz-17 kernel: connection1:0: iscsi: detected conn
error (1011)
Nov 2 18:34:53 vz-17 iscsid: Kernel reported iSCSI connection 1:0
error (1011) state (3)
Nov 2 18:34:57 vz-17 iscsid: connection1:0 is operational after
recovery (1 attempts)
===============================================

Running on CentOS 5.4 with "iscsi-initiator-utils-6.2.0.871-0.10.el5";
I don't think it's an Open-iSCSI bug, as Mike suggested at:

http://groups.google.com/group/open-iscsi/msg/fe37156096b2955f

I only get this error when connecting to the Infortrend storage, and
not with NetApp, Nexsan, etc. *connected to the same SAN*.

Using Wireshark I see a lot of "TCP Dup ACK", "TCP ACKed lost
segment", etc., and the iSCSI session finally ends in a timeout; see a
screenshot here:

http://tinyurl.com/ykpvckn

Using Wireshark IO graphs I get this strange report about TCP/IP errors:

http://tinyurl.com/ybm4m8x

And this is another report in the same SAN connecting to a NetApp:

http://tinyurl.com/ycgc8ul
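
For reference, the problem packets can be isolated in Wireshark with a
display filter along these lines (all three fields are standard Wireshark
TCP-analysis flags):

tcp.analysis.duplicate_ack || tcp.analysis.retransmission || tcp.analysis.lost_segment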

Those TCP/IP errors only occur when connecting to the Infortrend
storage, and not with other targets in the same SAN (using the same
switch infrastructure); is there any way to deal with this using
Open-iSCSI? From what I see on the Internet, a lot of Infortrend users
are suffering from this behavior.

Thanks!

P.S.: speed and duplex configuration is correct at every point, and
there are no CRC errors on the switch.

--
Santi Saez
http://woop.es

James A. T. Rice

Nov 2, 2009, 1:43:49 PM
to open-...@googlegroups.com
On Mon, 2 Nov 2009, Santi Saez wrote:

> Randomly we get Open-iSCSI "conn errors" when connecting to an
> Infortrend A16E-G2130-4 storage array. We had discussed about this
> earlier in the list, see:

> Nov 2 18:34:02 vz-17 kernel: ping timeout of 5 secs expired, last rx 408250499, last ping 408249467, now 408254467
> Nov 2 18:34:02 vz-17 kernel: connection1:0: iscsi: detected conn error (1011)
> Nov 2 18:34:03 vz-17 iscsid: Kernel reported iSCSI connection 1:0 error (1011) state (3)
> Nov 2 18:34:07 vz-17 iscsid: connection1:0 is operational after recovery (1 attempts)
> Nov 2 18:34:52 vz-17 kernel: ping timeout of 5 secs expired, last rx


That looks vaguely familiar, although I think mine was a nop-out timeout
(it might be reported in another log file). Does it mostly happen when you do
long sequential reads from the Infortrend unit? In my case it turned out
to be a very low level of packet drops caused by a Cisco 2960G when
'mls qos' was enabled (which, due to an IOS bug, didn't increment the drop
counter). I'm not sure if the loss when 'mls qos' is enabled is by design
as part of WRED, or a function of the port buffers being divided up into
smaller-than-optimal chunks.

Having TCP window scaling enabled made the problem an order of magnitude
worse; try disabling it and see if you still have the same problem
(I suggest something like dd if=/dev/sdc of=/dev/null bs=1048576 count=10 to
see if that triggers it, assuming it is the same problem I was
suffering from).
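
A minimal way to test that on the initiator (a sketch; /dev/sdc is just a
placeholder for whatever device the Infortrend LUN shows up as, and since
window scaling is only negotiated at connection setup, the iSCSI session has
to be logged out and back in for the change to take effect):

echo 0 > /proc/sys/net/ipv4/tcp_window_scaling
iscsiadm -m node -T your_target -u
iscsiadm -m node -T your_target -l
dd if=/dev/sdc of=/dev/null bs=1048576 count=10240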

Every other iSCSI target I've tried recovered pretty gracefully from this,
but not the Infortrend; I suspect their TCP retransmit algorithm needs a
lot of love, and that it's pathologically broken when window scaling is
enabled.

Sadly, when I opened a ticket with Infortrend, enclosing tcpdumps and
analysis, they were no more useful than to let me know that they don't
support Debian (despite having instructions for Debian iSCSI on their
website) and that they don't support Western Digital drives in redundant
controller configurations (it would have been nice to have that documented
somewhere public).

Hope that might be somewhat useful to you. Please let me / the list
know how you get on; there was sadly little information on this when I was
tearing my hair out over it.

Best wishes
James

--
James Rice
Jump Networks Ltd

Mike Christie

Nov 2, 2009, 6:52:32 PM
to open-...@googlegroups.com

You can turn off ping/nops by setting

node.conn[0].timeo.noop_out_interval = 0
node.conn[0].timeo.noop_out_timeout = 0

(set that in iscsid.conf and then re-discover the target, or run "iscsiadm -m
node -T your_target -o update -n name_of_param_above -v 0")
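
For example, something like this sketch (iqn.example:target0 is only a
placeholder target name) updates both parameters and logs the session out
and back in so they take effect:

iscsiadm -m node -T iqn.example:target0 -o update -n node.conn[0].timeo.noop_out_interval -v 0
iscsiadm -m node -T iqn.example:target0 -o update -n node.conn[0].timeo.noop_out_timeout -v 0
iscsiadm -m node -T iqn.example:target0 -u
iscsiadm -m node -T iqn.example:target0 -l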

Or you might want to set them higher.

This might just work around the problem. What might happen is that you will
not see the nop/ping and conn errors, and instead will just see a slowdown in
the workloads being run.

If you guys can get hold of any Infortrend people, let me know, because
I would be happy to work with them on this.

Ulrich Windl

Nov 3, 2009, 2:45:34 AM
to open-...@googlegroups.com
Hi!

I would guess that your network/storage system is overloaded occasionally, and
then network packets are significantly (>5s) delayed. It sounds unlikely, but that
would explain a duplicate ACK IMHO.

Here with SLES10 SP2 on x86_64 with a HP EVA 6000 + iSCSI connectivity option
(MPX100) there is no such problem, even though there are about 20 hosts connected
to the storage system, most of them directly through a fabric switch.

However the iSCSI load is light. Here is what one of the two MPX boxes reports
(the load will rise; it's still early morning):
#> show perf byte

WARNING: Valid data is only displayed for port(s) that are not
associated with any configured FCIP routes.

Displaying bytes/sec (total)... (Press any key to stop display)

GE1 GE2 FC1 FC2
--------------------------------
53K 606K 43K 620K
774 774 1K 1K
651K 868 17K 636K
2K 26K 18K 10K
49K 0 22K 26K
55K 805K 799K 66K
29K 17K 40K 7K
597K 7K 534K 72K
172 196K 150K 47K
49K 516 25K 28K
294K 43K 29K 313K
774 774 1K 1K
228K 868 65K 171K
18K 20K 528 43K
6K 6K 0 12K
145K 516 101K 45K
15K 29K 15K 30K

(GEx are the gigabit Ethernet ports, FCx are the FibreChannel ports that are
connected with the storage system via a fabric switch)

Some more statistics (for those who care or like inspiration):
#> show stats

FC Port Statistics
--------------------
FC Port 1
Interrupt Count 82812963
Target Command Count 0
Initiator Command Count 83526964
Link Failure Count 0
Loss of Sync Count 0
Loss of Signal Count 0
Primitive Sequence Error Count 0
Invalid Transmission Word Count 0
Invalid CRC Error Count 0

FC Port 2
Interrupt Count 81551444
Target Command Count 0
Initiator Command Count 82221649
Link Failure Count 0
Loss of Sync Count 0
Loss of Signal Count 0
Primitive Sequence Error Count 0
Invalid Transmission Word Count 0
Invalid CRC Error Count 0

iSCSI Port Statistics
-----------------------
iSCSI Port 1
Interrupt Count 190227478
Target Command Count 100974368
Initiator Command Count 0
MAC Xmit Frames 122882662
MAC Xmit Byte Count 28445475500
MAC Xmit Multicast Frames 0
MAC Xmit Broadcast Frames 0
MAC Xmit Pause Frames 0
MAC Xmit Control Frames 0
MAC Xmit Deferrals 0
MAC Xmit Late Collisions 0
MAC Xmit Aborted 0
MAC Xmit Single Collisions 0
MAC Xmit Multiple Collisions 0
MAC Xmit Collisions 0
MAC Xmit Dropped Frames 0
MAC Xmit Jumbo Frames 1702770
MAC Rcvd Frames 156585933
MAC Rcvd Byte Count 187107689124
MAC Rcvd Unknown Control Frames 0
MAC Rcvd Pause Frames 0
MAC Rcvd Control Frames 0
MAC Rcvd Dribbles 0
MAC Rcvd Frame Length Errors 0
MAC Rcvd Jabbers 0
MAC Rcvd Carrier Sense Errors 0
MAC Rcvd Dropped Frames 0
MAC Rcvd CRC Errors 0
MAC Rcvd Encoding Errors 0
MAC Rcvd Length Errors Large 0
MAC Rcvd Length Errors Small 0
MAC Rcvd Multicast Frames 178239
MAC Rcvd Broadcast Frames 48

iSCSI Port 2
Interrupt Count 182729151
Target Command Count 97555857
Initiator Command Count 0
MAC Xmit Frames 120067355
MAC Xmit Byte Count 27948414986
MAC Xmit Multicast Frames 0
MAC Xmit Broadcast Frames 0
MAC Xmit Pause Frames 0
MAC Xmit Control Frames 0
MAC Xmit Deferrals 0
MAC Xmit Late Collisions 0
MAC Xmit Aborted 0
MAC Xmit Single Collisions 0
MAC Xmit Multiple Collisions 0
MAC Xmit Collisions 0
MAC Xmit Dropped Frames 0
MAC Xmit Jumbo Frames 1718670
MAC Rcvd Frames 156126093
MAC Rcvd Byte Count 196255090216
MAC Rcvd Unknown Control Frames 0
MAC Rcvd Pause Frames 0
MAC Rcvd Control Frames 0
MAC Rcvd Dribbles 0
MAC Rcvd Frame Length Errors 0
MAC Rcvd Jabbers 0
MAC Rcvd Carrier Sense Errors 0
MAC Rcvd Dropped Frames 0
MAC Rcvd CRC Errors 0
MAC Rcvd Encoding Errors 0
MAC Rcvd Length Errors Large 0
MAC Rcvd Length Errors Small 0
MAC Rcvd Multicast Frames 178236
MAC Rcvd Broadcast Frames 34

iSCSI Shared Statistics
-----------------------
PDUs Xmited 324033312
Data Bytes Xmited 29836097572
PDUs Rcvd 198783508
Data Bytes Rcvd 35739418624
I/O Completed 165710975
Unexpected I/O Rcvd 0
iSCSI Format Errors 0
Header Digest Errors 0
Data Digest Errors 0
Sequence Errors 0
IP Xmit Packets 242949995
IP Xmit Byte Count 47161789220
IP Xmit Fragments 0
IP Rcvd Packets 312354406
IP Rcvd Byte Count 371426357904
IP Rcvd Fragments 0
IP Datagram Reassembly Count 0
IP Error Packets 0
IP Fragment Rcvd Overlap 0
IP Fragment Rcvd Out of Order 0
IP Datagram Reassembly Timeouts 0
TCP Xmit Segment Count 242949995
TCP Xmit Byte Count 38654705673
TCP Rcvd Segment Count 312354406
TCP Rcvd Byte Count 361430272728
TCP Persist Timer Expirations 0
TCP Rxmit Timer Expired 0
TCP Rcvd Duplicate Acks 644
TCP Rcvd Pure Acks 4091830
TCP Xmit Delayed Acks 13648891
TCP Xmit Pure Acks 31445514
TCP Rcvd Segment Errors 101
TCP Rcvd Segment Out of Order 306
TCP Rcvd Window Probes 0
TCP Rcvd Window Updates 0
TCP ECC Error Corections 0


Regards,
Ulrich

Santi Saez

Nov 4, 2009, 10:24:13 AM
to open-...@googlegroups.com
On 02/11/09 19:43, James A. T. Rice wrote:

Dear James,

> That looks vaguely familiar, although I think mine was nop-out timeout
> (might be reported in another log file). Does it mostly happen when you do
> long sequential reads from the Infortrend unit? In my case it turned out
> to be a very low level of packet drops being caused by a cisco 2960G when
> 'mls qos' was enabled (which due to an IOS bug, didn't increment the drop
> counter). I'm not sure if the loss when 'mls qos' is enabled is by design
> as part of WRED, or a function of the port buffers being divided up into
> things smaller than optimal.
>
> Having TCP window scaling enabled made the problem an order of magnitude
> worse, try disabling it and seeing if you have the same problem still?
> (suggest something like dd if=/dev/sdc of=/dev/null bs=1048576 count=10 to
> see if that triggers it, assuming it was the same problem I was
> suffering).
>
> Every other iSCSI target I've tried recovered pretty gracefully from this,
> but not the Infortrend, I suspect their TCP retransmit algorithm needs a
> lot of love. I suspect it's pathologically broken when window scaling is
> enabled.

Disabling TCP window scaling [1] on Linux solves the nop-out problem: we no
longer get "iscsi: detected conn error" and performance improves :)

It's very strange: we have 3 Cisco 2960G switches in the SAN and this
behavior only occurs on two of them; we're looking into this problem in depth.

The nop-out issue has been solved, but we still see a lot of "duplicate ACKs"
on all machines. I will update this thread with more info. James, thanks a
lot for the help.

Regards,

[1] # echo 0 > /proc/sys/net/ipv4/tcp_window_scaling
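
If this keeps helping, the setting can also be made persistent across
reboots (assuming a standard RHEL/CentOS sysctl setup) with:

# echo "net.ipv4.tcp_window_scaling = 0" >> /etc/sysctl.conf
# sysctl -p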

Santi Saez

Nov 4, 2009, 10:38:20 AM
to open-...@googlegroups.com
On 03/11/09 0:52, Mike Christie wrote:

Dear Mike,

> You can turn off ping/nops by setting
>
> node.conn[0].timeo.noop_out_interval = 0
> node.conn[0].timeo.noop_out_timeout = 0
>
> (set that in iscsid.conf then rediscovery the target or run "iscsiadm -m
> node -T your_target -o update -n name_of_param_above -v 0"

Thanks!! As I said to James in the previous email, disabling TCP window
scaling *partially solves* this problem, so we keep the nop pings in the
configuration. But we still have too many "TCP Dup ACKs" on the network :-S


> This might just work around. What might happen is that you will not see
> the nop/ping and conn errors and instead would just see a slow down in
> the workloads being run.

I have sent your contact details to the Infortrend developers; an engineer
will contact you, thanks!

Regards,

Matthew Dickinson

Nov 5, 2009, 8:41:00 PM
to open-...@googlegroups.com
(sorry for possible dupe.)

I'm also seeing similar issues on a RedHat 5.4 server, iscsi-initiator-utils-6.2.0.871-0.10.el5.

I've got a Dell PowerEdge 2950 with a 2-port MLT using Broadcom NICs to a Nortel 5510 switch. My Infortrend S16E-R1130 unit is connected using 8 gigabit ports - it's all on the same switch and the same private IP subnet.

The storage is configured to use multipathd for load balancing.

I'm seeing the same TCP "weirdness" the other posts mention, as well as the errors below.

Nov  2 08:15:14 backup kernel:  connection28:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4295808782, last ping 4295813782, now 4295818782
Nov  2 08:15:14 backup kernel:  connection28:0: detected conn error (1011)
Nov  2 08:15:14 backup kernel:  connection27:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4295808790, last ping 4295813790, now 4295818790
Nov  2 08:15:14 backup kernel:  connection27:0: detected conn error (1011)
Nov  2 08:15:14 backup kernel:  connection33:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4295808796, last ping 4295813796, now 4295818796
Nov  2 08:15:14 backup kernel:  connection33:0: detected conn error (1011)


The performance isn’t what I’d expect:

[root@backup backup]# dd if=/dev/zero of=test count=5000000
5000000+0 records in
5000000+0 records out
2560000000 bytes (2.6 GB) copied, 280.271 seconds, 9.1 MB/s
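
As an aside, with no bs= argument dd writes in 512-byte blocks and goes
through the page cache, so the figure above mostly reflects small buffered
writes; a sketch of a more representative sequential test (oflag=direct
bypasses the page cache) would be:

dd if=/dev/zero of=test bs=1M count=2500 oflag=direct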

Matthew

mdaitc

Nov 6, 2009, 8:10:26 AM
to open-iscsi

(sorry for a possible dupe - Google Groups didn't seem to be accepting
any posts via email, so I had to use the web interface)

I'm also seeing similar issues on a RedHat 5.4 server, iscsi-initiator-
utils-6.2.0.871-0.10.el5.

I've got a Dell PowerEdge 2950 with a 2-port MLT using Broadcom NICs
to a Nortel 5510 switch. My Infortrend S16E-R1130 unit is connected
using 8 gigabit ports - it's all on the same switch and the same private IP
subnet.

The storage is configured to use multipathd for load balancing.

I'm seeing the same TCP "weirdness" the other posts mention, as well
as the errors below.

Nov 2 08:15:14 backup kernel: connection28:0: ping timeout of 5 secs
expired, recv timeout 5, last rx 4295808782, last ping 4295813782, now
4295818782
Nov 2 08:15:14 backup kernel: connection28:0: detected conn error
(1011)
Nov 2 08:15:14 backup kernel: connection27:0: ping timeout of 5 secs
expired, recv timeout 5, last rx 4295808790, last ping 4295813790, now
4295818790
Nov 2 08:15:14 backup kernel: connection27:0: detected conn error
(1011)
Nov 2 08:15:14 backup kernel: connection33:0: ping timeout of 5 secs
expired, recv timeout 5, last rx 4295808796, last ping 4295813796, now
4295818796
Nov 2 08:15:14 backup kernel: connection33:0: detected conn error
(1011)


The performance isn’t what I’d expect:

[root@backup backup]# dd if=/dev/zero of=test count=5000000
5000000+0 records in
5000000+0 records out
2560000000 bytes (2.6 GB) copied, 280.271 seconds, 9.1 MB/s

thanks,

Mike Christie

Nov 6, 2009, 4:08:13 PM
to open-...@googlegroups.com
Matthew Dickinson wrote:
> (sorry for possible dupe.)
>
> I'm also seeing similar issues on a RedHat 5.4 server,
> iscsi-initiator-utils-6.2.0.871-0.10.el5.
>
> I've got a Dell Poweredge 2950 with a 2 port MLT using broadcom NICs to a

> Nortel 5510 swtich. My Infortrend S16E-R1130 unit is connected using 8 Gbig
> ports - it's all on the same switch and same private IP subnet.

>
> The storage is configured using the multipathd to load balance.
>
> I'm seeing similar TCP "weirdness" as the other posts mention as well as

> the below errors.
>
> Nov 2 08:15:14 backup kernel: connection28:0: ping timeout of 5 secs
> expired, recv timeout 5, last rx 4295808782, last ping 4295813782, now
> 4295818782
> Nov 2 08:15:14 backup kernel: connection28:0: detected conn error (1011)
> Nov 2 08:15:14 backup kernel: connection27:0: ping timeout of 5 secs
> expired, recv timeout 5, last rx 4295808790, last ping 4295813790, now
> 4295818790
> Nov 2 08:15:14 backup kernel: connection27:0: detected conn error (1011)
> Nov 2 08:15:14 backup kernel: connection33:0: ping timeout of 5 secs
> expired, recv timeout 5, last rx 4295808796, last ping 4295813796, now
> 4295818796
> Nov 2 08:15:14 backup kernel: connection33:0: detected conn error (1011)
>

Could you send more of the log? Do you see a message like


connection1:0 is operational after recovery (1 attempts)

after you see the conn errors (how many attempts)?

Try disabling nops by setting

node.conn[0].timeo.noop_out_interval = 0
node.conn[0].timeo.noop_out_timeout = 0

Matthew Dickinson

Nov 6, 2009, 4:39:14 PM
to open-...@googlegroups.com
On 11/6/09 3:08 PM, "Mike Christie" <mich...@cs.wisc.edu> wrote:

>
> Could you send more of the log? Do you see a message like
> connection1:0 is
> operational after recovery (1 attempts)
> after you see the conn errors (how many attempts)?

Here's one particular connection:

Nov 4 05:12:14 backup kernel: connection22:0: ping timeout of 5 secs
expired, recv timeout 5, last rx 4321648393, last ping 4321653393, now
4321658393
Nov 4 05:12:14 backup kernel: connection22:0: detected conn error (1011)
Nov 4 05:12:21 backup iscsid: connection22:0 is operational after recovery
(1 attempts)
Nov 4 05:12:46 backup kernel: connection22:0: ping timeout of 5 secs
expired, recv timeout 5, last rx 4321680691, last ping 4321685691, now
4321690691
Nov 4 05:12:46 backup kernel: connection22:0: detected conn error (1011)
Nov 4 05:12:58 backup iscsid: connection22:0 is operational after recovery
(1 attempts)
Nov 4 07:46:03 backup kernel: connection22:0: ping timeout of 5 secs
expired, recv timeout 5, last rx 4330877890, last ping 4330882890, now
4330887890
Nov 4 07:46:03 backup kernel: connection22:0: detected conn error (1011)
Nov 4 07:46:10 backup iscsid: connection22:0 is operational after recovery
(1 attempts)
Nov 4 07:46:27 backup kernel: connection22:0: ping timeout of 5 secs
expired, recv timeout 5, last rx 4330901733, last ping 4330906733, now
4330911733
Nov 4 07:46:27 backup kernel: connection22:0: detected conn error (1011)
Nov 4 07:46:32 backup iscsid: connection22:0 is operational after recovery
(1 attempts)
Nov 4 07:47:21 backup kernel: connection22:0: ping timeout of 5 secs
expired, recv timeout 5, last rx 4330955414, last ping 4330960414, now
4330965414
Nov 4 07:47:21 backup kernel: connection22:0: detected conn error (1011)
Nov 4 07:47:28 backup iscsid: connection22:0 is operational after recovery
(1 attempts)
Nov 4 07:48:28 backup kernel: connection22:0: ping timeout of 5 secs
expired, recv timeout 5, last rx 4331023213, last ping 4331028213, now
4331033213
Nov 4 07:48:28 backup kernel: connection22:0: detected conn error (1011)
Nov 4 07:48:35 backup iscsid: connection22:0 is operational after recovery
(1 attempts)

FWIW:

[root@backup ~]# cat /var/log/messages | grep "after recovery" | awk '{print
$11" "$12}' | sort | uniq
(113 attempts)
(1 attempts)
(24 attempts)
(2 attempts)
(3 attempts)
(4 attempts)
(5 attempts)
(66 attempts)
(68 attempts)
(6 attempts)
(7 attempts)
(8 attempts)
(9 attempts)
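
A small variation on that pipeline also shows how often each retry count
occurs (same log, just adding a count):

grep "after recovery" /var/log/messages | awk '{print $11, $12}' | sort | uniq -c | sort -rn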

>
> Try disabling nops by setting
>
> node.conn[0].timeo.noop_out_interval = 0
> node.conn[0].timeo.noop_out_timeout = 0

Ok, I'll let you know how it pans out.

Thanks,

Matthew


Santi Saez

Nov 9, 2009, 4:33:07 AM
to open-...@googlegroups.com
On 06/11/09 14:10, mdaitc wrote:

Hi mdaitc,

> I’m seeing similar TCP “weirdness” as the other posts mention as well
> as the below errors.

(..)

> Nov 2 08:15:14 backup kernel: connection33:0: detected conn error

> The performance isn’t what I’d expect:

(..)

What happens if you disable the TCP window scaling option on the RHEL servers?

# echo 0 > /proc/sys/net/ipv4/tcp_window_scaling

In our case, the iSCSI "conn errors" stopped after disabling it, but we still
have a lot of TCP "weirdness" on the network, mainly dup ACK packets.

Hoot, Joseph

Nov 9, 2009, 6:01:18 AM
to open-...@googlegroups.com
What version of OiS are you using? I had lots of weirdness and the
same types of disconnects to our Dell EqualLogic when we were using
868 code (actually, we still are in production). I'm now using open-
iscsi 871 code plus a sendwait patch and haven't had the issue. I've
now been slamming my storage for a week and a half with multiple
threads of dt.
===========================
Joseph R. Hoot
Lead System Programmer/Analyst
(w) 716-878-4832
(c) 716-759-HOOT
joe....@itec.suny.edu
GPG KEY: 7145F633
===========================

Gopu Krishnan

Nov 7, 2009, 9:08:43 AM
to open-...@googlegroups.com
Hi all,

I am working on iSCSI Enterprise Target (IET). Could someone please explain the performance of IET: how was the performance measured, and what throughput was achieved?

Thanks,
Gopala Krishnan Varatharajan

Mike Christie

Nov 9, 2009, 2:31:00 PM
to open-...@googlegroups.com
Hoot, Joseph wrote:
> What version of OiS are you using? I had lots of weirdness and the
> same types of disconnects to our Dell EqualLogic when we were
> (actually still are in production) using 868 code. I'm now using open-
> iscsi-871 code plus a sendwait patch and haven' had the issue. I've

What is the sendwait patch? Is it a patch for open-iscsi or to the
kernel network code?

Hoot, Joseph

Nov 9, 2009, 2:41:34 PM
to open-...@googlegroups.com
It was for OiS 871 code prior to the RHEL 5.4 release (not sure if the
release includes it or not). I'm not sure who came up with it. I was
working with Don Williams from Dell EqualLogic; he got hold of it
somehow. I applied it and it seemed to improve things.

Mike Christie

Nov 9, 2009, 6:18:57 PM
to open-...@googlegroups.com
Hoot, Joseph wrote:
> it was for OiS 871 code prior to RHEL 5.4 release (not sure if the
> release include it or not). I'm not sure who came up with it. I was
> working with Don Williams from Dell EqualLogic. He got ahold of it
> somehow. I applied it and it seemed to improve things.
>

Ah ok. I think it was the patch I sent to Don.

If you just used 871 without the patch (or what is in the stock RHEL 5.4
kernel), does it work OK? There were a couple of changes from 868 to 871
that I thought would also fix the problem, so I was waiting for Don and
his team to retest just 871 and get back to me.

Hoot, Joseph

Nov 9, 2009, 8:13:07 PM
to open-...@googlegroups.com
I've had about 3 threads of dt (kicking off a bit randomly) on (3) separate volumes for over a week and haven't had a single disconnect yet. I am currently using whatever RPM is distributed with Oracle VM v2.2. I know for sure that they have included the 871 base, plus I believe at least one one-off patch. I can get more details if you'd like.

But so far, so good.

Mike Christie

Nov 10, 2009, 12:17:52 PM
to open-...@googlegroups.com
Hoot, Joseph wrote:
> I've had about 3 threads of dt (kicking off a bit randomly) on (3) separate volumes for over a week and haven't had a single disconnect yet. I am currently using whatever rpm is distributed with Oracle VM v2.2. I know for sure that they have included the 871 base, plus I believe at least a one off patch. I can get more details if you'd like.
>
> But so far so good for now
>

I think I have the source they are using. Could you do a "uname -r" so I
can see what kernel they are using?

Hoot, Joseph

Nov 10, 2009, 12:21:08 PM
to open-...@googlegroups.com
[root@storage ~]# uname -r
2.6.18-164.el5
[root@storage ~]# rpm -qa | grep iscsi
iscsi-initiator-utils-6.2.0.868-0.18.el5_3.1
[root@storage ~]#

Matthew Dickinson

Nov 10, 2009, 12:28:58 PM
to open-...@googlegroups.com
On 11/6/09 3:39 PM, "Matthew Dickinson" <matt-op...@alpha345.com> wrote:

>
>
>>
>> Try disabling nops by setting
>>
>> node.conn[0].timeo.noop_out_interval = 0
>> node.conn[0].timeo.noop_out_timeout = 0
>

I'm still getting errors:

Nov 10 09:08:04 backup kernel: connection12:0: detected conn error (1011)
Nov 10 09:08:05 backup iscsid: Kernel reported iSCSI connection 12:0 error
(1011) state (3)
Nov 10 09:08:08 backup iscsid: connection12:0 is operational after recovery
(1 attempts)
Nov 10 09:09:43 backup kernel: connection11:0: detected conn error (1011)
Nov 10 09:09:43 backup kernel: connection12:0: detected conn error (1011)
Nov 10 09:09:44 backup kernel: connection11:0: detected conn error (1011)
Nov 10 09:09:44 backup iscsid: Kernel reported iSCSI connection 11:0 error
(1011) state (3)
Nov 10 09:09:44 backup iscsid: Kernel reported iSCSI connection 12:0 error
(1011) state (3)
Nov 10 09:09:44 backup iscsid: Kernel reported iSCSI connection 11:0 error
(1011) state (1)
Nov 10 09:09:46 backup kernel: session11: target reset succeeded
Nov 10 09:09:47 backup iscsid: connection11:0 is operational after recovery
(1 attempts)
Nov 10 09:09:47 backup iscsid: connection12:0 is operational after recovery
(1 attempts)
Nov 10 09:09:56 backup kernel: sd 18:0:0:2: SCSI error: return code =
0x000e0000
Nov 10 09:09:56 backup kernel: end_request: I/O error, dev sdv, sector
60721248
Nov 10 09:09:56 backup kernel: device-mapper: multipath: Failing path 65:80.
Nov 10 09:09:56 backup kernel: sd 18:0:0:2: SCSI error: return code =
0x000e0000
Nov 10 09:09:56 backup kernel: end_request: I/O error, dev sdv, sector
60727648
Nov 10 09:09:56 backup kernel: sd 18:0:0:2: SCSI error: return code =
0x000e0000
Nov 10 09:10:31 backup kernel: device-mapper: multipath: Failing path
65:112.

Interestingly, I tried a Windows Server 2008 R2 machine talking over a single
connection to the storage unit, configured to access it via just one
interface, and it was able to sustain 20MB/s - so it would "appear" to be a
Linux-related issue; I'm only able to get 9MB/s out of Linux even when
using 8 interfaces on both controllers.

Matthew


Mike Christie

Nov 10, 2009, 12:39:30 PM
to open-...@googlegroups.com

What version of open-iscsi were you using and what kernel, and were you
using the iscsi kernel modules with open-iscsi.org tarball or from the
kernel?


It looks like we are sending more IO than the target can handle. In one
of those cases it took more than 30 or 60 seconds (depending on your
timeout value).

What is the value of

cat /sys/block/sdXYZ/device/timeout

?

If it is 30 or 60 could you increase it to 360? After you login to the
target do

echo 360 > /sys/block/sdXYZ/device/timeout

And what is the value of:

iscsiadm -m node -T your_target | grep node.session.cmds_max

If that is 128, then could you decrease that to 32 or 16?

Run

iscsiadm -m node -T your_target -u
iscsiadm -m node -T your_target -o update -n node.session.cmds_max -v 32
iscsiadm -m node -T your_target -l


And if those prevent the IO errors, then could you do

echo noop > /sys/block/sdXYZ/queue/scheduler

to see if performance increases with a different scheduler.
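
Purely as a quick check (sdXYZ and your_target as above), the new values
can be confirmed afterwards with:

cat /sys/block/sdXYZ/queue/scheduler
cat /sys/block/sdXYZ/device/timeout
iscsiadm -m node -T your_target | grep node.session.cmds_max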

Mike Christie

Nov 10, 2009, 2:30:18 PM
to open-...@googlegroups.com
Hoot, Joseph wrote:
> [root@storage ~]# uname -r
> 2.6.18-164.el5
> [root@storage ~]# rpm -qa | grep iscsi
> iscsi-initiator-utils-6.2.0.868-0.18.el5_3.1
> [root@storage ~]#
>

Weird.

Is 2.6.18-164.el5 the kernel being used in the virtual machine/domU? Is
that where you are using iscsi? It looks like the Oracle Enterprise
Linux kernel is 2.6.18-164.el5, which looks like it is based on RHEL
5.4. The iscsi code in there is the same as RHEL/upstream. No sendwait
patch.

However, it looks like there is a 2.6.18-128.2.1.4.9 kernel (it comes with
the Oracle VM RPMs). In there we have a different iscsi version. It looks
a little older than what is in 2.6.18-164.el5, but it has the sendwait
patch I sent to Dell. Do you use this kernel in the dom0? Are you using
that kernel with iscsi?

Hoot, Joseph

Nov 12, 2009, 10:15:11 AM
to open-...@googlegroups.com
Sorry, that was the wrong information; here is the correct information. I was doing some testing in VMware Fusion VMs for a presentation that I'm giving. The "storage" server is CentOS 5.3, which dishes out IETD targets for my OVM servers. The OVM 2.2 environment is as follows:

[root@ovm1 ~]# uname -r
2.6.18-128.2.1.4.9.el5xen
[root@ovm1 ~]# rpm -qa | grep iscsi
iscsi-initiator-utils-6.2.0.871-0.7.el5
[root@ovm1 ~]#

Hoot, Joseph

Nov 12, 2009, 10:28:17 AM
to open-...@googlegroups.com
Oh, also: the "ovm1" server is the dom0. That system is connecting to IETD targets from within the dom0 using iscsi-initiator-utils-6.2.0.871-0.7. There are two ifaces set up to connect to two portal IPs on my CentOS 5.3 ("storage") system. The result is this:

[root@ovm1 ~]# iscsiadm -m session
tcp: [1] 192.168.30.1:3260,1 iqn.2009-09.com.itec.storage:tgtd
tcp: [2] 192.168.30.1:3260,1 iqn.2009-09.com.itec.storage:tgtd
tcp: [3] 192.168.30.2:3260,1 iqn.2009-09.com.itec.storage:tgtd
tcp: [4] 192.168.30.2:3260,1 iqn.2009-09.com.itec.storage:tgtd
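
For anyone reproducing this kind of setup, the ifaces are typically created
with open-iscsi's iface mode, roughly along these lines (a sketch; iface0 and
eth0 are placeholders, and the portal/target names are the ones shown above):

iscsiadm -m iface -I iface0 --op=new
iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth0
iscsiadm -m discovery -t st -p 192.168.30.1 -I iface0
iscsiadm -m node -T iqn.2009-09.com.itec.storage:tgtd -I iface0 -l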

In my REAL test and production OVM 2.2 environments, we have 2 ifaces set up to connect to one load-balanced Dell EqualLogic group IP, which results in something like this:

[root@oir7102901 log]# iscsiadm -m session | grep lun0
tcp: [2] 192.168.0.19:3260,1 iqn.2001-05.com.equallogic:0-8a0906-781989902-5a9000000114ae5e-ovm-t-lun0
tcp: [5] 192.168.0.19:3260,1 iqn.2001-05.com.equallogic:0-8a0906-781989902-5a9000000114ae5e-ovm-t-lun0
[root@oir7102901 log]#

In our current TEST environment (Dell EqualLogic + Dell PowerConnect switch running MTU=9126 + Dell R710 with OVM 2.2 installed) we've had some strange server issues (possibly hardware problems; I'm still diagnosing this). However, with regards to iSCSI, I haven't seen any issues and can see (3) dt threads going against my ovm-t-lun0, ovm-t-lun1, and ovm-t-lun2 volumes. The overall throughput is only about 70MB/s TOTAL (that includes metrics from all three of those LUNs) for reads and writes. The iowait% is my bottleneck at the moment, though. And we have not had any "iscsi disconnect" messages.
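
If iowait is the suspected bottleneck, watching per-device utilization with
sysstat's iostat while the dt threads run (for example, iostat -xk 5) usually
shows which LUNs are saturating.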

In our PROD2 environment (Dell M610 blades + Cisco switch modules in the blade chassis, I believe M1000e, + Dell PowerConnect switch + Dell EqualLogic members) we've had absolutely no issues thus far. The dt tests show the same throughput and have been running for about 8-9 days so far without a single disconnect.

Matthew Dickinson

Nov 19, 2009, 10:50:29 AM
to open-...@googlegroups.com



On 11/10/09 11:39 AM, "Mike Christie" <mich...@cs.wisc.edu> wrote:
>
> What version of open-iscsi were you using and what kernel, and were you
> using the iscsi kernel modules with open-iscsi.org tarball or from the
> kernel?

iscsi-initiator-utils-6.2.0.871-0.10.el5
kernel-2.6.18-164.2.1.el5

RedHat RPMs

>
>
> It looks like we are sending more IO than the target can handle. In one
> of those cases it took more than 30 or 60 seconds (depending on your
> timeout value).
>
> What is the value of
>
> cat /sys/block/sdXYZ/device/timeout
>
> ?
>
> If it is 30 or 60 could you increase it to 360? After you login to the
> target do
>
> echo 360 > /sys/block/sdXYZ/device/timeout

I've tried setting this, but it appears to have no effect - it was 60, and I
increased it to 360.

>
> And what is the value of:
>
> iscsiadm -m node -T your_target | grep node.session.cmds_max
>
> If that is 128, then could you decrease that to 32 or 16?
>
> Run
>
> iscsiadm -m node -T your_target -u
> iscsiadm -m node -T your_target -o update -n node.session.cmds_max -v 32
> iscsiad-m node -T your_target -l

I've tried setting it to both 16 and 32, but it behaves about the same.

>
>
> And if those prevent the io errors then could you do
>
> echo noop > /sys/block/sdXYZ/queue/scheduler
>
> to see if performance increases with a difference scheduler.


I really think I'm back to the duplicate ACK problem - see the attached
packet dump - at one point there are 30 duplicate ACKs... Interestingly, the
storage has "worked" for the past week - I'm using it as D2D backup. This
morning (about 7 days later), it's giving all these duplicate ACKs.

I'm currently running into messages such as:

Nov 19 09:46:58 backup iscsid: Kernel reported iSCSI connection 2:0 error
(1011) state (3)
Nov 19 09:47:00 backup kernel: session2: target reset succeeded
Nov 19 09:47:01 backup iscsid: connection2:0 is operational after recovery
(1 attempts)
Nov 19 09:47:10 backup kernel: sd 9:0:0:0: SCSI error: return code =
0x000e0000
Nov 19 09:47:10 backup kernel: end_request: I/O error, dev sdf, sector 8856
Nov 19 09:47:10 backup kernel: device-mapper: multipath: Failing path 8:80.
Nov 19 09:47:10 backup kernel: sd 9:0:0:0: SCSI error: return code =
0x000e0000
Nov 19 09:47:10 backup kernel: end_request: I/O error, dev sdf, sector 74424
Nov 19 09:47:10 backup kernel: sd 9:0:0:2: SCSI error: return code =
0x000e0000
Nov 19 09:47:10 backup kernel: end_request: I/O error, dev sdm, sector
8845240
Nov 19 09:47:10 backup kernel: device-mapper: multipath: Failing path 8:192.
Nov 19 09:47:10 backup kernel: sd 9:0:0:2: SCSI error: return code =
0x000e0000
Nov 19 09:47:10 backup kernel: end_request: I/O error, dev sdm, sector
62915456
Nov 19 09:47:10 backup kernel: sd 9:0:0:2: timing out command, waited 300s
Nov 19 09:47:10 backup multipathd: /sbin/mpath_prio_alua exitted with 1
Nov 19 09:47:10 backup multipathd: error calling out /sbin/mpath_prio_alua
/dev/sdm
Nov 19 09:47:10 backup multipathd: 3600d0230ffffffff061d4479bfb83902: switch
to path group #2

This is also interesting:

Nov 18 01:48:30 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 8
Nov 18 20:16:29 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 7
Nov 18 20:16:29 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 6
Nov 18 20:16:29 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 5
Nov 18 20:16:29 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 4
Nov 18 20:16:34 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 5
Nov 18 20:32:09 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 6
Nov 18 20:43:05 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 7
Nov 18 20:48:08 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 8
Nov 18 20:53:36 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 7
Nov 18 20:53:36 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 6
Nov 18 20:53:36 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 5
Nov 18 20:53:36 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 4
Nov 18 20:53:36 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 3
Nov 18 20:53:36 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 2
Nov 18 20:53:36 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 1
Nov 18 20:53:36 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 0
Nov 18 20:53:41 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 1
Nov 18 20:59:09 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 2
Nov 18 21:04:37 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 3
Nov 18 21:10:05 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 4
Nov 18 21:15:33 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 5
Nov 18 21:20:35 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 6
Nov 18 21:26:03 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 7
Nov 18 21:31:06 backup multipathd: 3600d0230ffffffff061d4479bfb83902:
remaining active paths: 8

Matthew


Mike Christie

Nov 19, 2009, 3:07:55 PM
to open-...@googlegroups.com
I did not get the attachment.

> storage has "worked" for the past week - I'm using it as D2D backup. This
> morning (about 7 days later), it's giving all these duplicate ACKs.
>
> I'm currently running into messages such as:
>
> Nov 19 09:46:58 backup iscsid: Kernel reported iSCSI connection 2:0 error
> (1011) state (3)
> Nov 19 09:47:00 backup kernel: session2: target reset succeeded

If you are using Red Hat RPMs, file a Red Hat bugzilla at
https://bugzilla.redhat.com/. CC mchr...@redhat.com on the bugzilla, or
email me at that address once you have filed it. I will then
add some network people to it. Attach your trace to the bugzilla.

Matthew Dickinson

Nov 19, 2009, 3:32:48 PM
to open-...@googlegroups.com



On 11/19/09 2:07 PM, "Mike Christie" <mich...@cs.wisc.edu> wrote:
>> I really think I'm back to the duplicate ACK problem - see the attached
>> packet dump - at one point there's 30 duplicate ACKs... Interestingly, the
>
> I did not get the attachement.
>

Apologies, I've attached it this time :-)

Matthew

itrend.broke.pcap