Broken pipe on my target. Is there any option on my initiator to fix it?

Felipe Gutierrez

May 9, 2015, 1:05:19 PM
to open-...@googlegroups.com
Hi, I am using the jscsi.org target and the open-iscsi initiator. Through NFS I can copy a bunch of files and it seems OK. But when I run a virtual machine from VMware (VMware -> NFS -> open-iscsi -> jSCSI target), the target sometimes throws a broken pipe. The initiator re-establishes the connection, but the broken pipe is corrupting my VM's file system.

In a good exchange my target sends a SCSIResponseParser PDU and then receives a SCSICommandParser PDU from the initiator. When the broken pipe is about to happen, the target sends the SCSIResponseParser PDU but does not receive a SCSICommandParser PDU. Instead, after 5 seconds the target receives a NOPOutParser PDU and replies with a NOPInParser PDU. After 60 seconds my target receives a TaskManagementFunctionRequestParser PDU with OpCode 0x2, which means abort the task, so the target does what the initiator asks. The broken pipe then happens and a new connection is established.

My question is: why does the initiator not continue the communication after the SCSIResponseParser PDU sent by the target? Is there any way to check whether this message is malformed? Or any initiator error log?
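For reference, one way to capture the initiator's side of this conversation is to run iscsid in the foreground with verbose debugging (flags as in stock open-iscsi; check the man pages on your distribution, since service names vary):

```shell
# Stop the managed daemon first (the exact command depends on
# your distro/init system).
/etc/init.d/open-iscsi stop

# Run iscsid in the foreground at maximum debug level (0-8); its
# output shows the NOP-Out/NOP-In traffic and task aborts as seen
# from the initiator side.
iscsid -f -d 8

# In another terminal, dump full session state, including the
# negotiated timeouts and error-recovery settings.
iscsiadm -m session -P 3
```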
Here is the target debug.

(228)19:19:01 DEBUG [main] fullfeature.WriteStage - PDU sent 4:   ParserClass: SCSIResponseParser
  ImmediateFlag: false
  OpCode: 0x21
  FinalFlag: true
  TotalAHSLength: 0x0
  DataSegmentLength: 0x0
  InitiatorTaskTag: 0x28000010
  Response: 0x0
  SNACK TAG: 0x0
  StatusSequenceNumber: 0xc8a
  ExpectedCommandSequenceNumber: 0xc6e
  MaximumCommandSequenceNumber: 0xc6e
  ExpDataSN: 0x0
  BidirectionalReadResidualOverflow: false
  BidirectionalReadResidualUnderflow: false
  ResidualOverflow: false
  ResidualUnderflow: false
  ResidualCount: 0x0
  Bidirectional Read Residual Count: 0x0

(273)19:19:06 DEBUG [main] connection.TargetSenderWorker - Receiving this PDU:
  ParserClass: NOPOutParser
  ImmediateFlag: true
  OpCode: 0x0
  FinalFlag: true
  TotalAHSLength: 0x0
  DataSegmentLength: 0x0
  InitiatorTaskTag: 0x29000010
  LUN: 0x0
  Target Transfer Tag: 0xffffffff
  CommandSequenceNumber: 0xc6e
  ExpectedStatusSequenceNumber: 0xc8b

(144)19:19:06 DEBUG [main] connection.TargetSenderWorker - connection.getStatusSequenceNumber: 3211
(167)19:19:06 DEBUG [main] connection.TargetSenderWorker - Sending this PDU:
  ParserClass: NOPInParser
  ImmediateFlag: false
  OpCode: 0x20
  FinalFlag: true
  TotalAHSLength: 0x0
  DataSegmentLength: 0x0
  InitiatorTaskTag: 0x29000010
  LUN: 0x0
  Target Transfer Tag: 0xffffffff
  StatusSequenceNumber: 0xc8b
  ExpectedCommandSequenceNumber: 0xc6e
  MaximumCommandSequenceNumber: 0xc6e

(228)19:19:11 DEBUG [main] connection.TargetSenderWorker - Receiving this PDU:
  ParserClass: NOPOutParser
  ImmediateFlag: true
  OpCode: 0x0
  FinalFlag: true
  TotalAHSLength: 0x0
  DataSegmentLength: 0x0
  InitiatorTaskTag: 0x2a000010
  LUN: 0x0
  Target Transfer Tag: 0xffffffff
  CommandSequenceNumber: 0xc6e
  ExpectedStatusSequenceNumber: 0xc8c


...
(228)19:20:02 DEBUG [main] connection.TargetSenderWorker - Receiving this PDU:
  ParserClass: TaskManagementFunctionRequestParser
  ImmediateFlag: true
  OpCode: 0x2
  FinalFlag: true
  TotalAHSLength: 0x0
  DataSegmentLength: 0x0
  InitiatorTaskTag: 0x36000010
  LUN: 0x0
  Referenced Task Tag: 0x6b000010
  CommandSequenceNumber: 0xc6e
  ExpectedStatusSequenceNumber: 0xc98
  RefCmdSN: 0xab6
  ExpDataSN: 0x0


Thanks, Felipe

Felipe Gutierrez

May 13, 2015, 10:24:48 AM
to open-...@googlegroups.com
I am using the async option to export my NFS disk: http://unixhelp.ed.ac.uk/CGI/man-cgi?exports

This makes all writes to the disk very fast.
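For context, sync vs. async is set per export in /etc/exports; a sketch of such an entry (the path and network here are made-up placeholders), followed by `exportfs -ra` to re-export:

```
# /etc/exports -- 'async' lets the server acknowledge writes before
# they reach stable storage (fast, but data can be lost on a server
# crash); the default 'sync' waits for the commit first.
/srv/nfs/vmstore  192.168.0.0/24(rw,async,no_subtree_check)
```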

Donald Williams

May 13, 2015, 5:11:42 PM
to open-...@googlegroups.com
Hello Felipe, 

 I'm not sure about anyone else, but I wouldn't expect that tweaking the iSCSI settings you've been talking about will improve this. 

 Have you tested just connecting from the server to the storage via iSCSI? Take NFS out of the picture. iSCSI is very dependent on the network. What kind of switch are you using? Is flow control enabled? Have you configured MPIO?

 With just iSCSI you can potentially get better triage data from the iscsid logs.
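For anyone tuning this later: the 5-second NOP interval and the eventual abort in Felipe's log match open-iscsi's per-node defaults, which live in /etc/iscsi/iscsid.conf. A sketch of the relevant entries (parameter names from stock open-iscsi; the values shown are the usual defaults, listed only so you know what to look for, and take effect on newly created node records):

```
# How often the initiator sends iSCSI NOP-Out pings, and how long it
# waits for the NOP-In reply before declaring the connection bad.
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5

# How long a failed session may stay down before queued I/O is errored
# back to the upper layers.
node.session.timeo.replacement_timeout = 120

# How long to wait for a response to an ABORT TASK request before
# escalating recovery.
node.session.err_timeo.abort_timeout = 15
```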

 don 

 

--
You received this message because you are subscribed to the Google Groups "open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email to open-iscsi+...@googlegroups.com.
To post to this group, send email to open-...@googlegroups.com.
Visit this group at http://groups.google.com/group/open-iscsi.
For more options, visit https://groups.google.com/d/optout.

Felipe Gutierrez

May 13, 2015, 5:39:06 PM
to open-...@googlegroups.com
Hi Donald, thanks for reply,

I realized that my problem was NFS. The default mode is sync; now I am exporting my NFS share async in /etc/exports. It is dangerous because I can corrupt my data, but I am going to put a firewall in front of these two machines.

Thanks again!
Felipe
