Fail to create VMFS datastore but can see disk - new setup with ESOS on ML110 G6 with FC HBAs


Lulu62

May 11, 2018, 4:40:39 PM
to esos-users
Hi All,

I'm pulling my hair out:

(copy-paste of my original post in the VMware community)

 

I'm building my VMware home lab and I'm having an issue when it comes to creating datastores:

 

The host is a Dell Optiplex 7010 running unmodified ESXi 6.7.0 on a USB stick.

The SAN storage server is an HP ProLiant ML110 G6 running the latest version of ESOS; the disks are standard SATA spinners.

Both are connected with Fibre Channel via a switch: a QLE2462 on the host and a QLE2460 on the SAN storage server.

 

The zoning has been done and seems to work; I can see the 'ATA Fibre Channel Disk (naa...)' on the host.

 

Now when I try to create a datastore on the disk, I get the following error message:

 

Failed to create VMFS datastore test - Operation failed, diagnostics report: Unable to create Filesystem, please see VMkernel log for more details: Failed to create VMFS on device naa.5000c50051394f65:1

 

I looked at the VMkernel log, and this is what I found:

 

(showing all events between first mention of naa and last mention)

 

2018-05-10T12:33:49.592Z cpu2:2097564)ScsiDeviceIO: 9297: Get VPD 86 Inquiry for device "naa.5000c50051394f65" from Plugin "NMP" failed. Not supported

2018-05-10T12:33:49.592Z cpu0:2097564)ScsiDeviceIO: 7998: QErr is correctly set to 0x0 for device naa.5000c50051394f65.

2018-05-10T12:33:49.592Z cpu1:2097564)ScsiDeviceIO: 8495: Could not detect setting of sitpua for device naa.5000c50051394f65. Error Not supported.

2018-05-10T12:33:49.595Z cpu1:2097564)WARNING: NFS: 1227: Invalid volume UUID naa.5000c50051394f65:1

2018-05-10T12:33:49.601Z cpu3:2097564)FSS: 6092: No FS driver claimed device 'naa.5000c50051394f65:1': No filesystem on the device

2018-05-10T12:33:49.601Z cpu3:2097564)ScsiEvents: 300: EventSubsystem: Device Events, Event Mask: 40, Parameter: 0x4302c7adbac0, Registered!

2018-05-10T12:33:49.601Z cpu3:2097564)ScsiEvents: 300: EventSubsystem: Device Events, Event Mask: 200, Parameter: 0x4302c7adbac0, Registered!

2018-05-10T12:33:49.601Z cpu3:2097564)ScsiEvents: 300: EventSubsystem: Device Events, Event Mask: 800, Parameter: 0x4302c7adbac0, Registered!

2018-05-10T12:33:49.601Z cpu3:2097564)ScsiEvents: 300: EventSubsystem: Device Events, Event Mask: 400, Parameter: 0x4302c7adbac0, Registered!

2018-05-10T12:33:49.601Z cpu3:2097564)ScsiEvents: 300: EventSubsystem: Device Events, Event Mask: 8, Parameter: 0x4302c7adbac0, Registered!

2018-05-10T12:33:49.601Z cpu3:2097564)ScsiDevice: 4936: Successfully registered device "naa.5000c50051394f65" from plugin "NMP" of type 0

2018-05-10T12:33:49.601Z cpu0:2097564)NMP: nmp_DeviceUpdateProtectionInfo:747: Set protection info for device 'naa.5000c50051394f65', Enabled: 0 ProtType: 0x0 Guard: 0x0 ProtMask: 0x0

2018-05-10T12:33:49.601Z cpu0:2097564)ScsiEvents: 300: EventSubsystem: Device Events, Event Mask: 180, Parameter: 0x430661162010, Registered!

2018-05-10T12:35:07.531Z cpu0:2097547)INFO (ne1000): false RX hang detected on vmnic0

2018-05-10T12:35:28.531Z cpu0:2097547)INFO (ne1000): false RX hang detected on vmnic0

2018-05-10T12:35:31.950Z cpu1:2098532 opID=5e847b04)World: 11942: VC opID 0f90c609 maps to vmkernel opID 5e847b04

2018-05-10T12:35:31.950Z cpu1:2098532 opID=5e847b04)NVDManagement: 1478: No nvdimms found on the system

2018-05-10T12:35:38.182Z cpu1:2099660)WARNING: NFS: 1227: Invalid volume UUID naa.5000c50051394f65:1

2018-05-10T12:35:38.189Z cpu0:2099660)FSS: 6092: No FS driver claimed device 'naa.5000c50051394f65:1': No filesystem on the device

2018-05-10T12:35:38.197Z cpu3:2097178)ScsiDeviceIO: 3015: Cmd(0x459a40b53a40) 0x1a, CmdSN 0x352 from world 0 to dev "t10.SanDisk00Ultra00000000000000000000004C530001311229109510" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-05-10T12:35:38.225Z cpu0:2098648 opID=c86f69bb)World: 11942: VC opID 0f90c616 maps to vmkernel opID c86f69bb

2018-05-10T12:35:38.225Z cpu0:2098648 opID=c86f69bb)VC: 4616: Device rescan time 38 msec (total number of devices 4)

2018-05-10T12:35:38.225Z cpu0:2098648 opID=c86f69bb)VC: 4619: Filesystem probe time 50 msec (devices probed 4 of 4)

2018-05-10T12:35:38.225Z cpu0:2098648 opID=c86f69bb)VC: 4621: Refresh open volume time 0 msec

2018-05-10T12:36:01.982Z cpu0:2099252 opID=b22879f6)World: 11942: VC opID 0f90c64e maps to vmkernel opID b22879f6

2018-05-10T12:36:01.982Z cpu0:2099252 opID=b22879f6)NVDManagement: 1478: No nvdimms found on the system

2018-05-10T12:36:14.372Z cpu0:2099251 opID=8b2af442)World: 11942: VC opID 0f90c65c maps to vmkernel opID 8b2af442

2018-05-10T12:36:14.372Z cpu0:2099251 opID=8b2af442)LVM: 12674: LVMProbeDevice failed with status "Device does not contain a logical volume".

2018-05-10T12:36:14.374Z cpu0:2097701)ScsiDeviceIO: 3029: Cmd(0x459a40b72580) 0x16, CmdSN 0x484 from world 0 to dev "naa.5000c50051394f65" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.

2018-05-10T12:36:14.375Z cpu0:2099251 opID=8b2af442)LVM: 10419: LVMProbeDevice failed on (3444132192, naa.5000c50051394f65:1): Device does not contain a logical volume

2018-05-10T12:36:14.375Z cpu0:2099251 opID=8b2af442)FSS: 2350: Failed to create FS on dev [naa.5000c50051394f65:1] fs [test] type [vmfs6] fbSize 1048576 => Not supported

 

 

It looks like a VMFS partition has been created when I look at the disk properties again, but there isn't any datastore:

 

esxi-disk.PNG

 

esxi-datastore.PNG

 

 

I found lots of articles about clearing existing partitions on the disk, which I did multiple times, but it's not helping.

 

I tried to create a datastore over iSCSI, but it fails just like with FC.

I can see the disk, but datastore creation doesn't work.

What am I missing?

Is it the onboard SATA controller of my ML110 G6?

In the past I made this work with my Synology over iSCSI and the same host.


Can someone help?

Marc Smith

May 11, 2018, 10:49:47 PM
to esos-...@googlegroups.com
Can you share your SCST configuration file (/etc/scst.conf) from your
ESOS machine? At first glance, it sounds like you're using the SCST
pass-through handler (dev_disk), but I could be wrong.

--Marc
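
(For anyone following along, the difference between the two handlers shows up directly in /etc/scst.conf. The fragment below is a hypothetical sketch — the device address, device name, backing disk path, and target WWN are made up for illustration, not taken from this setup.)

```
# Hypothetical /etc/scst.conf fragment -- names, paths, and WWNs are examples.

# Pass-through handler: SCST forwards SCSI commands to the real disk, so the
# initiator sees the raw ATA device (the "ATA Fibre Channel Disk" above).
HANDLER dev_disk {
        DEVICE 4:0:0:0
}

# Block I/O handler: SCST itself presents the block device as a SCSI disk.
HANDLER vdisk_blockio {
        DEVICE disk01 {
                filename /dev/sdb
        }
}

# Export one of the devices as LUN 0 on an FC target.
TARGET_DRIVER qla2x00t {
        TARGET 50:01:43:80:12:34:56:78 {
                enabled 1
                LUN 0 disk01
        }
}
```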

Lucien Courcol

May 12, 2018, 3:25:50 AM
to esos-...@googlegroups.com
Hello Marc,

You're absolutely right, and that was the problem.
I actually found the solution last night: I was messing around with all the options in ESOS, and at some point I realized that I was always using the same type of device when going through the config: dev_disk.
I'm really not familiar with ESOS, still learning, and for days I was stuck because I thought the ESOS config I had was correct since I could see the disk in ESXi...
So last night I changed the device type to vdisk_blockio and I was able to create a datastore :)
It was very frustrating in the end, but I now know this important difference when creating devices, which I suppose also exists when working with other SAN storage systems...

Lucien


h...@energy.dk

May 12, 2018, 7:44:38 AM
to esos-users
Hi Marc
I had a similar problem, but in my case I had set up ZFS to use a block size different from 4K (128K).
You get an error: cannot create a datastore.
You get an error in vmkernel.log saying that only 4K or 512 bytes is allowed.
So I changed it accordingly, but set SCST to use 4K. That did not work; you have to set it to 512 bytes.

So in conclusion: ZFS at 4K and SCST at 512 bytes.
Have you seen something similar?

I use ESXi 6.7.


Regards
Henning

Marc Smith

May 12, 2018, 2:04:31 PM
to esos-...@googlegroups.com
Setting the block size for a SCST device controls the "logical block
size" as seen by the initiators. The "physical block size" comes from
that attribute of a block device when using vdisk_blockio... so if you
create a ZFS volume that has a volume block size of "128K" then the
block device will have the same physical block size value, and this is
passed down to the initiators.

ESXi is complaining about the physical block size value. You could try
using a ZFS volume that has a 128K volume block size, and then use the
vdisk_fileio handler for the SCST device, which will always report a
physical block size of 4K to the initiators (at least it should, test
to confirm). This differs from the behavior of vdisk_blockio as
described above.
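
(As a hypothetical illustration of the two behaviors — the zvol path and device names below are made up. What the backing block device itself reports can be checked in sysfs, as noted in the comments.)

```
# Hypothetical /etc/scst.conf fragments -- pick one handler per device.
# What the kernel reports for the zvol can be checked with (zdX = your zvol):
#   cat /sys/block/zdX/queue/logical_block_size
#   cat /sys/block/zdX/queue/physical_block_size

# blockio: the physical block size reported to the initiator follows the
# backing device, so a 128K volblocksize zvol is presented as 128K physical
# blocks -- which ESXi rejects.
HANDLER vdisk_blockio {
        DEVICE disk01 {
                filename /dev/zvol/tank/esxi-vol
        }
}

# fileio: goes through the page cache and should report a 4K physical block
# size to the initiator regardless of volblocksize (test to confirm).
HANDLER vdisk_fileio {
        DEVICE disk01 {
                filename /dev/zvol/tank/esxi-vol
        }
}
```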

See this thread for more information:
https://groups.google.com/d/msg/esos-users/PdUldJ_7DTc/4IXayE1BBQAJ

--Marc

Henning Svane

May 12, 2018, 5:05:13 PM
to esos-...@googlegroups.com
Hi Marc
From what I have read, vdisk_blockio has better write speed, so that is why I chose it.

The thread you directed me to has the same conclusion.

Have you seen any improvement with vdisk_fileio?
Setting ZFS up with 128K instead of 4K gives a lot of extra space, but if it comes at the expense of write speed then it is not good.

Regards
Henning

Allen Underdown

May 14, 2018, 9:55:48 AM
to esos-...@googlegroups.com

So please share your final config –

 

I’m attempting to do the same thing.

 

Allen

