I saw the good news about persistent reservation being implemented in
the trunk. Do we want to roll out a new version with that?
Cheers.
------------------------------------------------------------------------------
All the data continuously generated in your IT infrastructure
contains a definitive record of customers, application performance,
security threats, fraudulent activity, and more. Splunk takes this
data and makes sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-novd2d
_______________________________________________
Iscsitarget-devel mailing list
Iscsitar...@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/iscsitarget-devel
Well, it would really be great to see a stable release with persistent
reservation implemented!
p.s.: for those of us using VMware, it would also be great to implement the
currently unsupported command used by vSphere 5:
kernel: iscsi_trgt: scsi_cmnd_start(1045) Unsupported 93
kernel: iscsi_trgt: cmnd_skip_pdu(459)
How much effort would be needed to achieve this?
Kind regards, Marko Kobal
> kernel: iscsi_trgt: scsi_cmnd_start(1045) Unsupported 93
This one writes the same block several times at once. It should be
quite straightforward to implement but frankly, I have no idea; I
haven't looked at the IET code for years :)
> kernel: iscsi_trgt: cmnd_skip_pdu(459)
What's this?
--
------------------------------------------------------------------------
Emmanuel Florac | Direction technique
| Intellique
| <efl...@intellique.com>
| +33 1 78 94 84 02
------------------------------------------------------------------------
> On Wed, 23 Nov 2011 09:25:10 +0100, you wrote:
>
> > kernel: iscsi_trgt: scsi_cmnd_start(1045) Unsupported 93
>
> This one writes the same block several times at once. It should be
> quite straightforward to implement but frankly, I have no idea; I
> haven't looked at the IET code for years :)
It looks deceptively easy. The problem is that next it'll probably try to
send XCOPY, which is a bit more complicated. I think these commands are
connected to VAAI.
When I read that, I was wondering: if you go to VMware Advanced Settings
and change the value of /DataMover/HardwareAcceleratedInit to 0, does it stop
filling the log with those Unsupported 93 messages? It shouldn't send WRITE
SAME (93) after that. Likewise, changing /DataMover/HardwareAcceleratedMove to
0 should stop the XCOPY commands.
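For reference, the same two toggles can also be flipped from the ESXi 5 shell with esxcli; this is only a sketch, so please verify the option paths and syntax on your own host before relying on it:

```shell
# Disable the block-zeroing offload (WRITE SAME, opcode 0x93)
esxcli system settings advanced set --option /DataMover/HardwareAcceleratedInit --int-value 0

# Disable the full-copy offload (XCOPY, opcode 0x83)
esxcli system settings advanced set --option /DataMover/HardwareAcceleratedMove --int-value 0
```

Setting the values back to 1 re-enables the offloads once the target supports them.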
Anyway, implementing these might otherwise be simple, but they can be
long-running operations and might cause lots of side effects. If I
understood correctly, XCOPY can be used to make a copy of a virtual disk. If
that's tens or hundreds of gigabytes, it will take a while, and without some
throttling it'll probably have a pretty big effect on performance. Then
again, doing the copy on the storage side instead of bouncing data around the
network is a huge time saver. The same goes for WRITE SAME, which is used to
wipe disks etc.
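To make the WRITE SAME discussion concrete, here is a minimal, illustrative Python sketch of the command's semantics; the CDB field offsets follow the SCSI block command layout, and this is not IET or istgt code:

```python
import struct

def emulate_write_same16(cdb: bytes, data_block: bytes,
                         backing: bytearray, block_size: int) -> None:
    """Sketch of WRITE SAME (16), opcode 0x93: replicate a single block
    of initiator-supplied data across a contiguous LBA range."""
    assert cdb[0] == 0x93, "not a WRITE SAME (16) CDB"
    assert len(data_block) == block_size
    lba, = struct.unpack_from(">Q", cdb, 2)       # bytes 2-9: starting LBA
    nblocks, = struct.unpack_from(">I", cdb, 10)  # bytes 10-13: number of blocks
    for i in range(nblocks):
        off = (lba + i) * block_size
        backing[off:off + block_size] = data_block
```

A real target would also have to honor the flag bits in the CDB and throttle very large ranges, which is exactly the side-effect concern raised above.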
Juhani
--
Juhani Rautiainen jra...@iki.fi
Are you using IET with ESXi 5 now? Which version, and how well does it
perform?
Cheers.
2011/11/23 Marko Kobal <marko...@arctur.si>:
Sorry for the late reply, I was travelling last week.
As pointed out before on several occasions: we rely very much on test
feedback from users for the PR code in trunk, so if anyone wants to
help us by taking it for a test drive in a non-production
environment and reporting issues (or success :)) to the list, it would
speed things up considerably and would be very much appreciated.
Cheers,
Arne
Marcel, Marko,
This basic testing is already very helpful, if only to make sure that
no regressions slip in. One specific test that I know of, but don't
have here, is the Windows cluster validation. So if you happen to have
that, I'd be very interested in the results.
Thanks,
Arne
The test passes, with warnings from the SCSI page 83h VPD check. I had
different ScsiId values for the two disks, as can be seen from sdparm --inquiry:
/dev/sde: IET VIRTUAL-DISK 0
Device identification VPD page:
Addressed logical unit:
designator type: T10 vendor identification, code set: Binary
vendor id: IET
vendor specific: ClusDisk1
/dev/sdf: IET VIRTUAL-DISK 0
Device identification VPD page:
Addressed logical unit:
designator type: T10 vendor identification, code set: Binary
vendor id: IET
vendor specific: ClusDisk2
I don't know what it reads from there that makes it decide they are the same.
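For context, the T10 vendor-ID designator that sdparm printed lives inside the Device Identification VPD page, and the cluster validation compares those descriptors across nodes. Here is a minimal, illustrative Python sketch of that descriptor layout; the helper names are hypothetical and this is not IET code:

```python
def t10_descriptor(vendor_id: bytes, vendor_specific: bytes) -> bytes:
    """Build one T10 vendor-ID designation descriptor (ASCII code set)."""
    designator = vendor_id.ljust(8)[:8] + vendor_specific  # 8-byte vendor id + id suffix
    # byte 0: protocol/code set (2 = ASCII); byte 1: association/type (1 = T10 vendor ID)
    return bytes([0x02, 0x01, 0x00, len(designator)]) + designator

def vpd_page_83(*descriptors: bytes) -> bytes:
    """Wrap descriptors in a Device Identification VPD page (page code 0x83)."""
    body = b"".join(descriptors)
    return bytes([0x00, 0x83]) + len(body).to_bytes(2, "big") + body

def parse_vpd_83(page: bytes):
    """Return (designator_type, code_set, designator) tuples from a page 83h."""
    assert page[1] == 0x83, "not a Device Identification VPD page"
    length = int.from_bytes(page[2:4], "big")
    out, off = [], 4
    while off < 4 + length:
        code_set = page[off] & 0x0F
        dtype = page[off + 1] & 0x0F
        dlen = page[off + 3]
        out.append((dtype, code_set, page[off + 4:off + 4 + dlen]))
        off += 4 + dlen
    return out
```

With distinct vendor-specific strings (ClusDisk1 vs. ClusDisk2) the designators are distinct, which is what the uniqueness check should see.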
Anyway, here are the relevant sections from the report:
---------
Validate SCSI device Vital Product Data (VPD)
Validate that storage supports necessary inquiry data (SCSI page
83h VPD descriptors) and that they are unique.
Validating that for each cluster disk supporting SCSI page 83h
VPD descriptors, all nodes return the same descriptors.
Getting SCSI page 83h VPD descriptors for cluster disk 0 from
node clusbdc.clustest.local
Storage does not support SCSI page 83h VPD descriptors for cluster disk 0
Getting SCSI page 83h VPD descriptors for cluster disk 1 from
node clusbdc.clustest.local
Storage does not support SCSI page 83h VPD descriptors for cluster disk 1
Validating that for each cluster disk supporting SCSI page 83h
VPD descriptors, the descriptors are globally unique.
Getting SCSI page 83h VPD descriptors for cluster disk 0 from
node clusbdc.clustest.local
Getting SCSI page 83h VPD descriptors for cluster disk 1 from
node clusbdc.clustest.local
Getting SCSI page 83h VPD descriptors for cluster disk 0 from
node cluspdc.clustest.local
Getting SCSI page 83h VPD descriptors for cluster disk 1 from
node cluspdc.clustest.local
Validate SCSI-3 Persistent Reservation
Validate that storage supports the SCSI-3 Persistent Reservation commands.
Validating Cluster Disk 0 for Persistent Reservation support
Registering PR key for cluster disk 0 from node clusbdc.clustest.local
Putting PR reserve on cluster disk 0 from node clusbdc.clustest.local
Attempting to read PR on cluster disk 0 from node clusbdc.clustest.local.
Attempting to preempt PR on cluster disk 0 from unregistered node
cluspdc.clustest.local. Expecting to fail
Registering PR key for cluster disk 0 from node cluspdc.clustest.local
Putting PR reserve on cluster disk 0 from node cluspdc.clustest.local
Unregistering PR key for cluster disk 0 from node cluspdc.clustest.local
Trying to write to sector 11 on cluster disk 0 from node
clusbdc.clustest.local
Trying to read sector 11 on cluster disk 0 from node cluspdc.clustest.local
Attempting to read drive layout of Cluster disk 0 from node
cluspdc.clustest.local while the disk has PR on it
Trying to write to sector 11 on cluster disk 0 from node
cluspdc.clustest.local
Registering PR key for cluster disk 0 from node cluspdc.clustest.local
Trying to write to sector 11 on cluster disk 0 from node
cluspdc.clustest.local
Trying to read sector 11 on cluster disk 0 from node cluspdc.clustest.local
Unregistering PR key for cluster disk 0 from node cluspdc.clustest.local
Releasing PR reserve on cluster disk 0 from node clusbdc.clustest.local
Attempting to read PR on cluster disk 0 from node clusbdc.clustest.local.
Unregistering PR key for cluster disk 0 from node clusbdc.clustest.local
Registering PR key for cluster disk 0 from node cluspdc.clustest.local
Putting PR reserve on cluster disk 0 from node cluspdc.clustest.local
Attempting to read PR on cluster disk 0 from node cluspdc.clustest.local.
Attempting to preempt PR on cluster disk 0 from unregistered node
clusbdc.clustest.local. Expecting to fail
Registering PR key for cluster disk 0 from node clusbdc.clustest.local
Putting PR reserve on cluster disk 0 from node clusbdc.clustest.local
Unregistering PR key for cluster disk 0 from node clusbdc.clustest.local
Trying to write to sector 11 on cluster disk 0 from node
cluspdc.clustest.local
Trying to read sector 11 on cluster disk 0 from node clusbdc.clustest.local
Attempting to read drive layout of Cluster disk 0 from node
clusbdc.clustest.local while the disk has PR on it
Trying to write to sector 11 on cluster disk 0 from node
clusbdc.clustest.local
Registering PR key for cluster disk 0 from node clusbdc.clustest.local
Trying to write to sector 11 on cluster disk 0 from node
clusbdc.clustest.local
Trying to read sector 11 on cluster disk 0 from node clusbdc.clustest.local
Unregistering PR key for cluster disk 0 from node clusbdc.clustest.local
Releasing PR reserve on cluster disk 0 from node cluspdc.clustest.local
Attempting to read PR on cluster disk 0 from node cluspdc.clustest.local.
Unregistering PR key for cluster disk 0 from node cluspdc.clustest.local
Cluster Disk 0 supports Persistent Reservation
Validating Cluster Disk 1 for Persistent Reservation support
Registering PR key for cluster disk 1 from node clusbdc.clustest.local
Putting PR reserve on cluster disk 1 from node clusbdc.clustest.local
Attempting to read PR on cluster disk 1 from node clusbdc.clustest.local.
Attempting to preempt PR on cluster disk 1 from unregistered node
cluspdc.clustest.local. Expecting to fail
Registering PR key for cluster disk 1 from node cluspdc.clustest.local
Putting PR reserve on cluster disk 1 from node cluspdc.clustest.local
Unregistering PR key for cluster disk 1 from node cluspdc.clustest.local
Trying to write to sector 11 on cluster disk 1 from node
clusbdc.clustest.local
Trying to read sector 11 on cluster disk 1 from node cluspdc.clustest.local
Attempting to read drive layout of Cluster disk 1 from node
cluspdc.clustest.local while the disk has PR on it
Trying to write to sector 11 on cluster disk 1 from node
cluspdc.clustest.local
Registering PR key for cluster disk 1 from node cluspdc.clustest.local
Trying to write to sector 11 on cluster disk 1 from node
cluspdc.clustest.local
Trying to read sector 11 on cluster disk 1 from node cluspdc.clustest.local
Unregistering PR key for cluster disk 1 from node cluspdc.clustest.local
Releasing PR reserve on cluster disk 1 from node clusbdc.clustest.local
Attempting to read PR on cluster disk 1 from node clusbdc.clustest.local.
Unregistering PR key for cluster disk 1 from node clusbdc.clustest.local
Registering PR key for cluster disk 1 from node cluspdc.clustest.local
Putting PR reserve on cluster disk 1 from node cluspdc.clustest.local
Attempting to read PR on cluster disk 1 from node cluspdc.clustest.local.
Attempting to preempt PR on cluster disk 1 from unregistered node
clusbdc.clustest.local. Expecting to fail
Registering PR key for cluster disk 1 from node clusbdc.clustest.local
Putting PR reserve on cluster disk 1 from node clusbdc.clustest.local
Unregistering PR key for cluster disk 1 from node clusbdc.clustest.local
Trying to write to sector 11 on cluster disk 1 from node
cluspdc.clustest.local
Trying to read sector 11 on cluster disk 1 from node clusbdc.clustest.local
Attempting to read drive layout of Cluster disk 1 from node
clusbdc.clustest.local while the disk has PR on it
Trying to write to sector 11 on cluster disk 1 from node
clusbdc.clustest.local
Registering PR key for cluster disk 1 from node clusbdc.clustest.local
Trying to write to sector 11 on cluster disk 1 from node
clusbdc.clustest.local
Trying to read sector 11 on cluster disk 1 from node clusbdc.clustest.local
Unregistering PR key for cluster disk 1 from node clusbdc.clustest.local
Releasing PR reserve on cluster disk 1 from node cluspdc.clustest.local
Attempting to read PR on cluster disk 1 from node cluspdc.clustest.local.
Unregistering PR key for cluster disk 1 from node cluspdc.clustest.local
Cluster Disk 1 supports Persistent Reservation
Validating that PR clear command works for cluster disk 0 from
node clusbdc.clustest.local
Validating that PR clear command works for cluster disk 0 from
node cluspdc.clustest.local
Validating that PR clear command works for cluster disk 1 from
node clusbdc.clustest.local
Validating that PR clear command works for cluster disk 1 from
node cluspdc.clustest.local
---------
Juhani
--
Juhani Rautiainen jra...@iki.fi
Currently I'm running istgt with ESXi 5 with reasonable stability and
speed, but I do hope I can switch back to IET someday.
If anyone is interested, you can download istgt here:
http://packages.debian.org/unstable/main/istgt ; it runs well on
Ubuntu/Debian. It's implemented in pure user mode and supports PR and
VAAI, and the code size is rather small. I for one hope it can get merged
with IET somehow. http://www.peach.ne.jp/archives/istgt/
[ 2265.666402] iscsi_trgt: scsi_cmnd_start(1084) Unsupported 93
[ 2265.690431] iscsi_trgt: cmnd_skip_pdu(472) 77e70100 1c 93 512
That's the WRITE_SAME command; harmless, though.
0x89 COMPARE AND WRITE
0x93 WRITE SAME (16)
0x83 EXTENDED COPY
The first one is hardware-assisted locking, which IMO is the most
important performance improvement for ESXi. I can probably spend some
time looking at porting the istgt implementation to IET, but I can't
promise anything.
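The semantics of that locking primitive are simple to state, even though a real target must make the check-then-write atomic with respect to all other initiators. An illustrative Python sketch (not istgt or IET code):

```python
def emulate_compare_and_write(backing: bytearray, lba: int, block_size: int,
                              verify_data: bytes, write_data: bytes) -> bool:
    """Sketch of COMPARE AND WRITE (0x89): compare the on-disk range
    against verify_data and, only if it matches byte for byte, write
    write_data. Returns False (MISCOMPARE) without touching the blocks
    otherwise. A real target must serialize this against other I/O."""
    assert len(verify_data) == len(write_data)
    off = lba * block_size
    current = bytes(backing[off:off + len(verify_data)])
    if current != verify_data:
        return False  # MISCOMPARE: blocks left untouched
    backing[off:off + len(write_data)] = write_data
    return True
```

This compare-and-swap on disk blocks is what lets ESXi update VMFS metadata without taking a whole-LUN reservation.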
Cheers.
ESXi doesn't do PR.
> p.s.1: it would be nice to have the "Unsupported 93" fixed ;)
> kernel: iscsi_trgt: scsi_cmnd_start(1084) Unsupported 93
> kernel: iscsi_trgt: cmnd_skip_pdu(472) 4bd50200 1c 93 512
Supporting the VMware hardware acceleration commands is a
nice thing to have, but it's not high on the list.
We would definitely accept code contributions that provide
this support, though!
> p.s.2 : it would be even nicer if someone would implement the
> "hardware assisted locking" feature ;)
Same as above: hardware acceleration is a plus, but not a
must. That isn't to say we wouldn't look at what it would
take to implement, but probably in 1.6.
> Otherwise the trunk version seems to be stable; it should
> really make its way to a stable release soon
We just need to make sure it meets the SPC-3 minimum requirements
and gets some more testing before release.
Plus there are a couple of performance-related things I would
like to get in before it's released.
-Ross
> ESXi doesn't do PR.
Well... obviously I've mixed up "SCSI reservations" and "SCSI persistent
reservations". ESXi definitely needs SCSI reservations; see
http://kb.vmware.com/kb/1005009.
I'm not a storage expert... can anybody explain to me (in "human" terms ;))
the difference between SCSI reservations and SCSI persistent reservations?
Thanks!
Kind regards, Marko Kobal