NVMe and ESOS


Alasdair Smith

unread,
Nov 24, 2016, 6:18:38 PM11/24/16
to esos-users
Hi Esos-users,

Looking for some help: I've got a couple of shiny new Intel PCIe NVMe SSDs, but unfortunately I'm struggling to find the best way to set them up. Has anyone had any experience with these? I'm using the latest master version.

Regards

Alasdair Smith

Marc Smith

unread,
Nov 24, 2016, 7:02:27 PM11/24/16
to esos-...@googlegroups.com
Hi Alasdair,

Can you send the output from the dmesg command? Do you see them as block devices?


Marc


Alasdair Smith

unread,
Nov 24, 2016, 7:20:53 PM11/24/16
to esos-users
Hi Marc,

Thanks for the quick response. I did an in-place upgrade between master versions and now seem to have issues with SSH login, so I'm struggling to get files off the box. So far I have removed all the keys from /etc/ssh and rebooted. Bit of a strange one... any ideas? Once I have that working I should be able to get the dmesg output.

Cheers

Alasdair

Marc Smith

unread,
Nov 24, 2016, 7:54:41 PM11/24/16
to esos-...@googlegroups.com
Yeah, we did a round of package updates recently, and in OpenSSH,
"PermitRootLogin no" is now the default. Edit /etc/ssh/sshd_config and
set "PermitRootLogin yes" then restart sshd (/etc/rc.d/rc.sshd).
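For reference, a minimal sketch of that change (run as root; the "restart" argument to the init script is an assumption based on the usual rc-script convention):

```shell
# Re-enable root logins (newer OpenSSH refuses them by default)
sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
# Restart sshd via the ESOS init script so the change takes effect
/etc/rc.d/rc.sshd restart
```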

--Marc

Stuart Hopkins

unread,
Nov 25, 2016, 4:11:28 AM11/25/16
to esos-users
I have a few of the Intel 750s (PCIe-based) running in my storage box (running master) and they work great. I didn't carve mine up using LVM, as I didn't want any performance overhead, so they are presented as vdisk_block through to my ESXi servers. You should see the base devices appear under /dev as nvme[0-9]. As an example:

crw-------    1 root     root      248,   0 Oct 21 07:00 nvme0
brw-rw----    1 root     disk      259,   0 Oct 21 07:00 nvme0n1
brw-rw----    1 root     disk      259,   2 Oct 21 07:00 nvme0n1p1
crw-------    1 root     root      248,   1 Oct 21 07:00 nvme1
brw-rw----    1 root     disk      259,   1 Oct 21 07:00 nvme1n1
brw-rw----    1 root     disk      259,   3 Oct 21 07:00 nvme1n1p1
crw-------    1 root     root      248,   2 Oct 21 07:00 nvme2
brw-rw----    1 root     disk      259,   4 Oct 21 07:00 nvme2n1
brw-rw----    1 root     disk      259,   5 Oct 21 07:00 nvme2n1p1

The character device (nvme[0-9]) is what you use the nvme CLI tool against (it's not present in my ESOS build, though I am using a version from a few months ago, so I'm not sure if it's been added). Assuming your device(s) come with a sensible initial layout, you should have nvme0n1, meaning the block device portion can be used. While with a SCSI device you use /dev/sd[a-z], NVMe devices are slightly different in that you can't partition /dev/nvme[0-9] itself, as that's the character device (for configuration/in-band management) rather than a block device.

If you run 'parted -a optimal /dev/nvme0n1' you should be able to partition the device like a regular disk, and depending on the number of partitions you create you should see /dev/nvme0n1p[1-4] etc.
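As a concrete sketch (device name is just an example, and mklabel is destructive, so double-check the target first):

```shell
# Create a GPT label, then one partition covering the whole namespace
parted -s -a optimal /dev/nvme0n1 mklabel gpt
parted -s -a optimal /dev/nvme0n1 mkpart primary 0% 100%
# NVMe partitions get a "p" separator in the name
ls -l /dev/nvme0n1p1
```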

If you do use LVM on it make sure you configure LVM for thin-provisioning and to allow discards, otherwise no discard/trim command will ever be performed on the underlying device. While these devices are very fast, they do still need good housekeeping to keep them running at full speed.
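A sketch of what that LVM setup might look like (VG/pool/LV names and sizes here are purely examples):

```shell
# In /etc/lvm/lvm.conf, "issue_discards = 1" in the devices section lets
# LVM pass discards down to the SSD when LVs are removed or shrunk.
pvcreate /dev/nvme0n1p1
vgcreate vg_nvme /dev/nvme0n1p1
# Thin pool: blocks unmapped from thin volumes can be discarded on the device
lvcreate -l 90%FREE --thinpool pool0 vg_nvme
lvcreate -V 500G --thin -n vdisk1 vg_nvme/pool0
```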

I have had mine running for months now and they haven't missed a beat. Make sure your server has PCIe slots fast enough to keep them busy, and enjoy the performance :-)

@Marc, are the nvme tools part of the latest master? I haven't had time to check

Marc Smith

unread,
Nov 25, 2016, 10:51:28 AM11/25/16
to esos-...@googlegroups.com
Wow, thanks for the great information! Very useful.


>
> @Marc, are the nvme tools part of the latest master? I haven't had time to
> check

No, but I'll add it in today... is this the project you're referring to: https://github.com/linux-nvme/nvme-cli ?

--Marc


Stuart Hopkins

unread,
Nov 25, 2016, 10:57:01 AM11/25/16
to esos-users
That's the project. When I set mine up initially they didn't have the NVMe namespace defined (what gives you the n1/n2/n3 etc.), so I had to sort this out on a separate server. With the tool you can configure the device (namespaces, firmware, overall settings) and retrieve health values as well (potentially a candidate for any check script to make sure things are healthy).
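A couple of nvme-cli invocations along those lines (device name assumed; output fields vary by drive, so treat this as a sketch):

```shell
# Health/wear counters -- a good source for a monitoring check
nvme smart-log /dev/nvme0
# Namespaces currently defined on the controller
nvme list-ns /dev/nvme0
```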

Alasdair Smith

unread,
Nov 29, 2016, 7:16:06 AM11/29/16
to esos-users

Hi Stuart, 

Thank you for the information; that's probably my issue, as these cards haven't been set up at all, just unboxed and plugged into the server. I would love to use these as vdisk_block.

Hi Marc,

Thanks for moving quickly on this again. Can you let me know when the version with nvme-cli is ready to be tested? Very happy to be a guinea pig for this.

Thank you both for your time and support.

Regards

Alasdair.

Alasdair Smith

unread,
Nov 29, 2016, 8:18:45 AM11/29/16
to esos-users

Hi Marc,

dmesg attached!

Cheers

Alasdair

san-02-dmesg.txt

Marc Smith

unread,
Nov 29, 2016, 3:57:00 PM11/29/16
to esos-...@googlegroups.com
Looks like you have (4) NVMe drives (from your kernel log):
[ 1.374942] nvme nvme0: pci function 0000:07:00.0
[ 1.375356] nvme nvme1: pci function 0000:08:00.0
[ 1.375745] nvme nvme2: pci function 0000:44:00.0
[ 1.376217] nvme nvme3: pci function 0000:45:00.0

The new ESOS "master" package just posted which contains the "nvme" utility:
http://download.esos-project.com/packages/master/esos-master_64294ed_dgvs.zip

I think it's like Stuart said: you just need to use the nvme CLI tool
to enable/expose the block device (e.g., /dev/nvmeXn1) for the NVMe
drives.

Let us know how it goes.


--Marc

Alasdair Smith

unread,
Nov 30, 2016, 4:44:23 PM11/30/16
to esos-users
Hi Marc,

The upgrade process went a lot smoother on this one, it automatically recognised the drive that needed to be upgraded and didn't ask me for raid drivers.

Thank you for adding the CLI commands. I can see the cards:
[root@abd2-eq1-h9-gsve-san-02 ~]# nvme list 
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev  
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     CVF85381003U4P0DGN-1 INTEL SSDPECME040T4                      1           2.00  TB /   2.00  TB    512   B +  0 B   8DV10171
/dev/nvme1n1     CVF85381003U4P0DGN-2 INTEL SSDPECME040T4                      1           2.00  TB /   2.00  TB    512   B +  0 B   8DV10171
/dev/nvme2n1     CVF8549000264P0DGN-1 INTEL SSDPECME040T4                      1           2.00  TB /   2.00  TB    512   B +  0 B   8DV10171
/dev/nvme3n1     CVF8549000264P0DGN-2 INTEL SSDPECME040T4                      1           2.00  TB /   2.00  TB    512   B +  0 B   8DV10171

I can read from them at speed:
/dev/nvme0n1:
Timing buffer-cache reads: hdparm: HDIO_DRIVE_CMD: Inappropriate ioctl for device
 9172 MB in 0.50 seconds = 18429633 kB/s
Timing buffered disk reads: 8074 MB in 3.00 seconds = 2755634 kB/s
hdparm: HDIO_DRIVE_CMD: Inappropriate ioctl for device
[root@abd2-eq1-h9-gsve-san-02 ~]# hdparm -tT /dev/nvme0n2
hdparm: can't open '/dev/nvme0n2': No such file or directory
[root@abd2-eq1-h9-gsve-san-02 ~]# hdparm -tT /dev/nvme1n1 

/dev/nvme1n1:
Timing buffer-cache reads: hdparm: HDIO_DRIVE_CMD: Inappropriate ioctl for device
 9128 MB in 0.50 seconds = 18339422 kB/s
Timing buffered disk reads: 8066 MB in 3.00 seconds = 2752945 kB/s
hdparm: HDIO_DRIVE_CMD: Inappropriate ioctl for device
[root@abd2-eq1-h9-gsve-san-02 ~]# hdparm -tT /dev/nvme2n1 

/dev/nvme2n1:
Timing buffer-cache reads: hdparm: HDIO_DRIVE_CMD: Inappropriate ioctl for device
 9106 MB in 0.50 seconds = 18293570 kB/s
Timing buffered disk reads: 8064 MB in 3.00 seconds = 2752410 kB/s
hdparm: HDIO_DRIVE_CMD: Inappropriate ioctl for device

However, ESOS isn't able to see the cards... it only lists the RAID controller and SAS disks. Is this something I need to manually configure in SCST?

Regards

Alasdair

Marc Smith

unread,
Nov 30, 2016, 4:51:50 PM11/30/16
to esos-...@googlegroups.com
Ah, yeah, I probably need to update the TUI code so it displays those
as choices in the block-device-selection dialog. Can you verify that
the /sys/block/nvmeXnN entries (symlinks) exist? And if so, can
you run this for just one of the devices and provide the output:

find /sys/block/nvme0n1 -type f | xargs grep .

Also, what was the command you used with "nvme" to enable the block
device? Can you provide that here so we have an example.


Thanks,

Marc

Alasdair Smith

unread,
Nov 30, 2016, 5:10:43 PM11/30/16
to esos-users
Hi Marc,

To my shame, the block devices seem to have been there when I booted the system... I had just panicked because the TUI hadn't picked them up. I formatted only one of the devices, and it doesn't seem to have made a difference.

Commands that I used:
* nvme list
* nvme format /dev/nvme0 -n 1


The command doesn't seem to return anything...

[root@abd2-eq1-h9-gsve-san-02 ~]# find /sys/block/nvme0n1 -type f | xargs grep .
[root@abd2-eq1-h9-gsve-san-02 ~]# find /sys/block/nvme0n1 -type f | xargs grep .
[root@abd2-eq1-h9-gsve-san-02 ~]# cd /sys/block/
[root@abd2-eq1-h9-gsve-san-02 block]# ls
loop0    loop2    loop4    loop6    nvme0n1  nvme2n1  ram0     ram10    ram12    ram14    ram2     ram4     ram6     ram8     sda      sdc      sr1
loop1    loop3    loop5    loop7    nvme1n1  nvme3n1  ram1     ram11    ram13    ram15    ram3     ram5     ram7     ram9     sdb      sr0
[root@abd2-eq1-h9-gsve-san-02 block]# find ./nvme0n1 -type f | xargs grep .

ls -lSh shows:
lrwxrwxrwx    1 root     root           0 Nov 30 22:03 nvme0n1 -> ../devices/pci0000:00/0000:00:03.0/0000:05:00.0/0000:06:01.0/0000:07:00.0/nvme/nvme0/nvme0n1
lrwxrwxrwx    1 root     root           0 Nov 30 22:04 nvme1n1 -> ../devices/pci0000:00/0000:00:03.0/0000:05:00.0/0000:06:02.0/0000:08:00.0/nvme/nvme1/nvme1n1
lrwxrwxrwx    1 root     root           0 Nov 30 22:04 nvme2n1 -> ../devices/pci0000:40/0000:40:02.0/0000:42:00.0/0000:43:01.0/0000:44:00.0/nvme/nvme2/nvme2n1
lrwxrwxrwx    1 root     root           0 Nov 30 22:04 nvme3n1 -> ../devices/pci0000:40/0000:40:02.0/0000:42:00.0/0000:43:02.0/0000:45:00.0/nvme/nvme3/nvme3n1
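(A likely cause of the empty output: /sys/block/nvme0n1 is a symlink, and find does not follow symlinks by default, so it never descends into the directory. Either of these variations should work:)

```shell
# -L tells find to follow symlinks; a trailing slash on the path
# also forces the top-level link to be resolved
find -L /sys/block/nvme0n1 -type f | xargs grep .
find /sys/block/nvme0n1/ -type f | xargs grep .
```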

Regards

Alasdair

Marc Smith

unread,
Dec 4, 2016, 12:26:47 AM12/4/16
to esos-...@googlegroups.com
I've updated the TUI to support NVMe block devices, and pushed these
changes (bdf679d). The new package should be posted within the next
few hours.

--Marc