Logical Unit Not Ready, Initializing Cmd. Required


sinysee

May 16, 2008, 1:07:32 AM5/16/08
to open-iscsi
Hello,

I am trying to use open-iscsi to connect to FC storage via an ATTO
iPBridge 2700C. This is not a planned setup, but an emergency attempt
to regain access to the data after an FC switch failure.

I am using SLES 10.1 on the host. When trying to connect to the target
I get through the discovery stage successfully. When I log in,
iscsiadm returns success, but there are messages like

sd 17:0:0:1: Device not ready: <6>: Current: sense key: Not Ready
Additional sense: Logical unit not ready, cause not reportable

in the kernel log, and the device never becomes available.

I captured a tcpdump on the host, and it shows that the host repeatedly
sends "Test Unit Ready" requests and receives "Logical Unit Not Ready,
Initializing Cmd. Required (0x0402)" in response. "Mode Sense" and
"Read Capacity" get meaningful responses. Then the host issues a "Read"
request and gets "Logical Unit Not Ready, Cause Not Reportable
(0x0400)".

I am running Open-iSCSI packaged by Novell (open-iscsi-2.0.707-0.32).
# iscsiadm -V
iscsiadm version 2.0-754

# uname -ri
2.6.16.46-0.12-smp x86_64

I have tried open-iscsi 869 from the Open-iSCSI repository, but the
picture does not change much.

Having very little prior experience with iSCSI, I am not sure what
"Initializing Cmd. Required" is supposed to mean.

Thank you,
Dima

Konrad Rzeszutek

May 16, 2008, 9:49:47 AM5/16/08
to open-...@googlegroups.com
On Thu, May 15, 2008 at 10:07:32PM -0700, sinysee wrote:
>
> Hello,
>
> I am trying to use open-iscsi to connect to an FC storage via an ATTO
> IPBridge 2700C. This is not a planned setup, but an emergency attempt
> to regain access to the data after an FC switch failure.
>
> I am using SLES 10.1 on the host. When trying to connect to the target
> I get through the discovery stage successfully. When I log in,
> iscsiadm returns success, but there are messages like
>
> sd 17:0:0:1: Device not ready: <6>: Current: sense key: Not Ready
> Additional sense: Logical unit not ready, cause not reportable
>
> in the kernel log and the device never gets available.

What do you mean by "never gets available"? Can you attach the full dmesg? Is it
the block device (/dev/sdX) that is unavailable, or the multipath device (/dev/dm-XX)?

Is it all of them or just one or two?


>
> I got the tcpdump from the host and it shows that the host repeatedly
> sends "Test Unit Ready" requests and receives "Logical Unit Not Ready,
> Initializing Cmd. Required (0x0402)" in response. "Mode Sense" and
> "Read Capacity" get meaningful responses. Then the host does a "Read"
> request and gets "Logical Unit Not Ready, Cause Not Reportable
> (0x0400)".

Is that for each of the block disks that showed up when you logged in,
or just for some of them? If it is the latter, that is expected with
Active-Passive targets, which you might have.

.. snip..


> Having very little prior experience with iSCSI I am not sure what
> Initializing Cmd. Required is supposed to mean.

This looks more like a multipath configuration issue, or the lack of
multipath altogether.

sinysee

May 16, 2008, 11:28:43 AM5/16/08
to open-iscsi
Hello,

> What do you mean by "never gets available"? Can you attach the full dmesg? Is it
> the block device (/dev/sdX) that is unavailable, or the multipath device (/dev/dm-XX)?

I am looking to set up an initiator-target configuration with an
FC-iSCSI bridge in between. As I understand it, the bridge should
transparently present an FC disk as an iSCSI target.

After
# iscsiadm -m node -p <target portal> -T <target> -l
I get errors in dmesg and finally
sd 6:0:0:1: Attached scsi disk sdb
However,
# fdisk /dev/sdb
fails with
Unable to read /dev/sdb
Finally,
# iscsiadm -m node -p <target portal> -T <target> -u
returns successfully.

I am attaching the relevant part of the dmesg log. I hope it clarifies
the problem.

Konrad, I find it strange that the host does not send a "Request
Sense" after receiving an error for "Test Unit Ready". Is that
legitimate SCSI behaviour?

Thank you


***********************

Loading iSCSI transport class v2.0-754.
iscsi: registered transport (tcp)
scsi6 : iSCSI Initiator over TCP/IP
Vendor: ATTO Model: iPBridge 2700 Rev: 4.00
Type: Processor ANSI SCSI revision: 05
6:0:0:0: Attached scsi generic sg1 type 3
Vendor: COMPAQ Model: MSA1000 VOLUME Rev: 5.10
Type: Direct-Access ANSI SCSI revision: 04
SCSI device sdb: 976639545 512-byte hdwr sectors (500039 MB)
sdb: Write Protect is off
sdb: Mode Sense: 83 00 00 08
SCSI device sdb: drive cache: write back
SCSI device sdb: 976639545 512-byte hdwr sectors (500039 MB)
sdb: Write Protect is off
sdb: Mode Sense: 83 00 00 08
SCSI device sdb: drive cache: write back
sdb:<6>sd 6:0:0:1: Device not ready: <6>: Current: sense key: Not Ready
Additional sense: Logical unit not ready, cause not reportable
end_request: I/O error, dev sdb, sector 0
Buffer I/O error on device sdb, logical block 0
sd 6:0:0:1: Device not ready: <6>: Current: sense key: Not Ready
Additional sense: Logical unit not ready, cause not reportable
end_request: I/O error, dev sdb, sector 1
Buffer I/O error on device sdb, logical block 1
Buffer I/O error on device sdb, logical block 2
Buffer I/O error on device sdb, logical block 3
Buffer I/O error on device sdb, logical block 4
Buffer I/O error on device sdb, logical block 5
Buffer I/O error on device sdb, logical block 6
Buffer I/O error on device sdb, logical block 7
sd 6:0:0:1: Device not ready: <6>: Current: sense key: Not Ready
Additional sense: Logical unit not ready, cause not reportable
end_request: I/O error, dev sdb, sector 0
Buffer I/O error on device sdb, logical block 0
Buffer I/O error on device sdb, logical block 1
sd 6:0:0:1: Device not ready: <6>: Current: sense key: Not Ready
Additional sense: Logical unit not ready, cause not reportable
end_request: I/O error, dev sdb, sector 0
unable to read partition table
sd 6:0:0:1: Attached scsi disk sdb
sd 6:0:0:1: Attached scsi generic sg2 type 0
sd 6:0:0:1: Device not ready: <6>: Current: sense key: Not Ready
Additional sense: Logical unit not ready, cause not reportable
end_request: I/O error, dev sdb, sector 0
printk: 14 messages suppressed.
Buffer I/O error on device sdb, logical block 0
sd 6:0:0:1: Device not ready: <6>: Current: sense key: Not Ready
Additional sense: Logical unit not ready, cause not reportable
end_request: I/O error, dev sdb, sector 1
Buffer I/O error on device sdb, logical block 1
Buffer I/O error on device sdb, logical block 2
Buffer I/O error on device sdb, logical block 3
Buffer I/O error on device sdb, logical block 4
Buffer I/O error on device sdb, logical block 5
Buffer I/O error on device sdb, logical block 6
Buffer I/O error on device sdb, logical block 7
Buffer I/O error on device sdb, logical block 8
Buffer I/O error on device sdb, logical block 9
sd 6:0:0:1: Device not ready: <6>: Current: sense key: Not Ready
Additional sense: Logical unit not ready, cause not reportable
end_request: I/O error, dev sdb, sector 0
Synchronizing SCSI cache for disk sdb:

***********************

Konrad Rzeszutek

May 16, 2008, 11:41:48 AM5/16/08
to open-...@googlegroups.com
On Fri, May 16, 2008 at 08:28:43AM -0700, sinysee wrote:
>
> Hello,
>
> > What do you mean by "never gets available"? Can you attach the full dmesg? Is it
> > the block device (/dev/sdX) that is unavailable, or the multipath device (/dev/dm-XX)?
>
> I am looking to set up an initiator-target configuration with an FC-
> iSCSI bridge in between.
> As I understand the bridge should be transparently packaging an FC
> disk as an iSCSI target.
>
> After
> # iscsiadm -m node -p <target portal> -T <target> -l
> I get errors in dmesg and finally
> sd 6:0:0:1: Attached scsi disk sdb

So the block disks show up fine. That means iSCSI works just fine.

> However
> # fdisk /dev/sdb
> Returns with
> Unable to read /dev/sdb
> Finally,
> # iscsiadm -m node -p <target portal> -T <target> -u
> returns.
>
> I am attaching the relevant part of the dmesg log. Hope it can clarify
> the problem

You only see one disk? Either way, are you running multipath on your machine?
If you are, what is the multipath -ll output?

>
> Dear Konrad, I find it strange that the host does not send "Request
> Sense" after receiving an error
> for "Test Unit Ready". Is it a legit SCSI behaviour?

The kernel interrogates the disks, and that is what it gets; it reports
these errors to you. There is no need to send a Request Sense command,
because the Test Unit Ready response already carries the sense data,
including the ASC/ASCQ values, as you can see:

> sd 6:0:0:1: Device not ready: <6>: Current: sense key: Not Ready
> Additional sense: Logical unit not ready, cause not reportable

The next step for enterprise storage arrays, such as the HP StorageWorks
family, is to issue a SCSI START STOP UNIT command - which is what you
should see if you are using multipath. The path checker would start the LUN.
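[For reference: the "start" command mentioned here is START STOP UNIT, opcode 0x1B, a 6-byte CDB in which the START bit is set. A minimal sketch of that CDB layout per the SBC specification; this is illustrative only, not the actual hp_sw handler code.]

```python
def start_stop_unit_cdb(start: bool = True, immed: bool = False) -> bytes:
    """Build a 6-byte START STOP UNIT CDB (opcode 0x1B, per SBC).

    Byte 1 bit 0 is IMMED (return status before the operation completes);
    byte 4 bit 0 is START, bit 1 is LOEJ (load/eject, left clear for arrays).
    """
    return bytes([
        0x1B,                     # operation code: START STOP UNIT
        0x01 if immed else 0x00,  # IMMED bit
        0x00, 0x00,               # reserved
        0x01 if start else 0x00,  # START bit (LOEJ left clear)
        0x00,                     # control byte
    ])

# The "start the LUN" case a path checker would send:
cdb = start_stop_unit_cdb(start=True)
```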

sinysee

May 18, 2008, 7:21:49 AM5/18/08
to open-iscsi
On May 16, 10:41 pm, Konrad Rzeszutek <kon...@virtualiron.com> wrote:
>
> So the block disks show up fine. That means iSCSI works just fine.
>

Starting multipath solved the problem.

> You only see one disk? Either way, are you running multipath on your machine?
> If you are, what is the multipath -ll output?

Anyway, here is the multipath output in the configuration that works:

# multipath -ll
3600508b300920350a640cb720bce000e dm-0 COMPAQ,MSA1000 VOLUME
[size=838G][features=1 queue_if_no_path][hwhandler=1 hp_sw]
\_ round-robin 0 [prio=4][active]
\_ 13:0:0:1 sdb 8:16 [active][ready]

Sometimes, depending mainly on the path_checker setting, the device
fails. Once I even had a segfault in multipathd:

May 17 19:41:40 <hostname> multipathd: sdb: tur checker reports path is down
May 17 19:41:51 <hostname> multipathd: sdb: tur checker reports path is down
May 17 19:42:01 <hostname> multipathd: sdb: tur checker reports path is down
May 17 19:42:07 <hostname> multipathd: 3600508b3009203503b2fc2c200040011: stop event checker thread
May 17 19:42:07 <hostname> kernel: multipathd[13092]: segfault at 0000000000000012 rip 00002b0b549d541d rsp 00007fff5682d1b0 error 4
May 17 19:42:07 <hostname> multipathd: error calling out /sbin/scsi_id -g -u -s /block /sda
May 17 19:42:07 <hostname> multipathd: sdb: checker msg is "readsector0 checker reports path is down"
May 17 19:42:07 <hostname> multipathd: 3600508b3009203503b2fc2c200040011: event checker started

In the present configuration the device sometimes goes away but
returns quickly:

May 18 17:52:14 <hostname> kernel: sd 13:0:0:1: Device not ready: <6>: Current: sense key: Not Ready
May 18 17:52:14 <hostname> kernel: Additional sense: Logical unit not ready, cause not reportable
May 18 17:52:14 <hostname> kernel: end_request: I/O error, dev sdb, sector 4775
May 18 17:52:14 <hostname> kernel: device-mapper: dm-multipath: Failing path 8:16.
May 18 17:52:14 <hostname> multipathd: 8:16: mark as failed
May 18 17:52:14 <hostname> multipathd: 3600508b300920350a640cb720bce000e: remaining active paths: 0
May 18 17:52:19 <hostname> multipathd: sdb: hp_sw checker reports path is ghost
May 18 17:52:19 <hostname> kernel: device-mapper: hp-sw: Flushing I/O
May 18 17:52:19 <hostname> kernel: device-mapper: hp-sw: sending START_STOP_UNIT command
May 18 17:52:19 <hostname> multipathd: 8:16: reinstated
May 18 17:52:19 <hostname> multipathd: 3600508b300920350a640cb720bce000e: remaining active paths: 1

> The next step for enterprise storages, such as the HP StorageWorks
> is to issue a START SCSI command - which is what you should see if you
> are using multipath. The path checker would start the LUN.

Yes, that was the original problem. There was no multipath running, so
the HP StorageWorks array would not start. When multipath was started,
the problem went away.

Thank you for the advice.

I have multipath 0.4.7 running with the default configuration (no
/etc/multipath.conf).

Konrad Rzeszutek

May 19, 2008, 9:26:13 AM5/19/08
to open-...@googlegroups.com
> # multipath -ll
> 3600508b300920350a640cb720bce000e dm-0 COMPAQ,MSA1000 VOLUME
> [size=838G][features=1 queue_if_no_path][hwhandler=1 hp_sw]
> \_ round-robin 0 [prio=4][active]
> \_ 13:0:0:1 sdb 8:16 [active][ready]
>
> Sometimes, depending mainly on the path_checker setting, the device fails.

That is expected. The checker set in the .conf file by your vendor, or
the built-in default, should be used.

These are the built-in values:

{
        /* MSA 1000/MSA1500 EVA 3000/5000 with old firmware */
        .vendor        = "(COMPAQ|HP)",
        .product       = "(MSA|HSV)1.0.*",
        .getuid        = DEFAULT_GETUID,
        .features      = "1 queue_if_no_path",
        .hwhandler     = "1 hp-sw",
        .selector      = DEFAULT_SELECTOR,
        .pgpolicy      = GROUP_BY_PRIO,
        .pgfailback    = FAILBACK_UNDEF,
        .rr_weight     = RR_WEIGHT_NONE,
        .no_path_retry = 12,
        .minio         = DEFAULT_MINIO,
        .checker_name  = HP_SW,
        .prio_name     = PRIO_HP_SW,
},
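[For comparison, expressing that built-in entry as an explicit /etc/multipath.conf override might look roughly like the following. This is a sketch in 0.4.7-era syntax; only the values visible in the struct above are carried over, and the handler/checker spellings follow the multipath -ll output earlier in the thread.]

```
devices {
        device {
                vendor                  "(COMPAQ|HP)"
                product                 "(MSA|HSV)1.0.*"
                features                "1 queue_if_no_path"
                hardware_handler        "1 hp_sw"
                path_grouping_policy    group_by_prio
                path_checker            hp_sw
                no_path_retry           12
        }
}
```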

> May 17 19:42:07 <hostname> multipathd: 3600508b3009203503b2fc2c200040011: stop event checker thread
> May 17 19:42:07 <hostname> kernel: multipathd[13092]: segfault at 0000000000000012 rip 00002b0b549d541d rsp 00007fff5682d1b0 error 4

You might want to file a bug with the vendor for that.

.. snip ..


> Thank you for the advice.

You are welcome.


>
> I have multipath 0.4.7 running the default configuration (no /etc/
> multipath.conf)

Good. That should use the built-in default values.
