That functionality is in a separate package called 'multipath'.
For the more traditional block-based multipathing that you are probably
used to, you use dm-multipath; you can search the internet for info on
that or read the multipath-tools docs. All you should have to do is log
into all the iSCSI portals, and the multipath tools will create the
multipath device for you (you do have to edit /etc/multipath.conf,
though). If you search for multipath in the current iscsi README you
should find some info on setting iSCSI params when using dm-multipath.
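The params in question are typically the session timeouts. A minimal sketch of the kind of tweak the README describes; the target IQN, portal address, and timeout value below are placeholders, not values from this thread:

```shell
# When dm-multipath handles path failures, lower open-iscsi's
# replacement timeout so a dead path is reported to multipath quickly
# instead of iscsid queueing I/O for the default 120 seconds.
iscsiadm -m node -T iqn.2001-05.com.example:vol0 -p 192.168.0.10:3260 \
    --op update -n node.session.timeo.replacement_timeout -v 15
```

The same setting can be made the default for new sessions by editing `node.session.timeo.replacement_timeout` in /etc/iscsi/iscsid.conf before discovery.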
If you are asking about iSCSI's multiple connections per session, then
you will not find anything, because we do not support it.
--- On Mon, 7/21/08, Donald Williams <don.e.w...@gmail.com> wrote:
> Are you asking about how to create multiple logins to the
> same target within
> Open-iSCSI?
Yes, multiple logins to single target.
> In the /etc/iscsi/ifaces directory there is an example
> file on how to
> configure the egress port for iSCSI connections. By
> creating a file for
> each GbE interface you want to use, you'll have
> multiple logins that point
> to the same target. (e.g. /dev/sdb, /dev/sdc) Then you
> can layer on Linux
> dm-multipath to create an MPIO device.
This sounds like we're getting into the right area.
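The iface setup quoted above can be sketched with iscsiadm's iface mode; the iface names and portal address here are assumptions for illustration, not taken from the thread:

```shell
# Create one iface record per egress NIC (record names are arbitrary)
# and bind each record to a network interface.
iscsiadm -m iface -I iface-eth0 --op=new
iscsiadm -m iface -I iface-eth0 --op=update -n iface.net_ifacename -v eth0
iscsiadm -m iface -I iface-eth1 --op=new
iscsiadm -m iface -I iface-eth1 --op=update -n iface.net_ifacename -v eth1

# Discover the target through both ifaces, then log in.
# Each iface yields its own session, so the same LUN shows up twice
# (e.g. /dev/sdb and /dev/sdc) ready for dm-multipath to group.
iscsiadm -m discovery -t sendtargets -p 192.168.0.10:3260 \
    -I iface-eth0 -I iface-eth1
iscsiadm -m node --login
```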
> What iSCSI target are you trying to connect to?
I'm using a Stardom 4 bay unit.
Is that because it is not implemented, or are there other design
considerations inhibiting its support/implementation?
Thanks,
Shyam
E.g. here's one of my targets. There are two connections to that volume,
going to different GbE ports on the iSCSI array.
Target:
iqn.2001-05.com.equallogic:6-8a0900-8d1cb3401-c2f0002decc46730-dw-open-iscsi-backup
Current Portal: 172.23.10.242:3260,1
Persistent Portal: 172.23.10.240:3260,1
**********
Interface:
**********
Iface Name: iface.eth0
Iface Transport: tcp
Iface Initiatorname: iqn.2005-03.org.open-iscsi:2009610cb1b
Iface IPaddress: 172.23.49.170
Iface HWaddress: 00:04:23:C7:E1:5A
Iface Netdev: default
SID: 3
iSCSI Connection State: LOGGED IN
iSCSI Session State: Unknown
Internal iscsid Session State: NO CHANGE
Current Portal: 172.23.10.246:3260,1
Persistent Portal: 172.23.10.240:3260,1
**********
Interface:
**********
Iface Name: iface.eth1
Iface Transport: tcp
Iface Initiatorname: iqn.2005-03.org.open-iscsi:2009610cb1b
Iface IPaddress: 172.23.49.171
Iface HWaddress: 00:04:23:C7:E1:5B
Iface Netdev: default
SID: 4
iSCSI Connection State: LOGGED IN
iSCSI Session State: Unknown
Internal iscsid Session State: NO CHANGE
And here's the multipath view of the devices.
#multipath -ll
dw-open-iscsi-vol0 (30690a0284060efffe05b746ecd3ac236) dm-1 EQLOGIC ,100E-00
[size=250G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 6:0:0:0 sdd 8:48 [active][ready]
\_ 7:0:0:0 sde 8:64 [active][ready]
backup-vol (30690a01840b31c8d3067c4ec2d00f0c2) dm-0 EQLOGIC ,100E-00
[size=200G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 9:0:0:0 sdb 8:16 [active][ready]
\_ 8:0:0:0 sdc 8:32 [active][ready]
So I can lose a path and still function, and the IO load is split across
both paths when they're running.
Then there's MC/S, Multiple Connections per Session, which I don't
believe is supported.
Basically, the steps are: define the egress ports, discover and log in to
the targets, then configure multipath using the /etc/multipath.conf file
and you're done. I also create persistent device names; e.g. 'backup-vol'
is /dev/mapper/backup-vol-par1, for consistency and easier admin.
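The persistent-naming step above can be sketched with a multipath.conf alias; the WWID below is the one shown in the `multipath -ll` output earlier in this message, and the heredoc approach is just one way to add the stanza:

```shell
# Give the multipath device a stable name by aliasing its WWID.
# After this, the map appears as /dev/mapper/backup-vol regardless of
# which sdX nodes the underlying sessions land on.
cat >> /etc/multipath.conf <<'EOF'
multipaths {
    multipath {
        wwid  30690a01840b31c8d3067c4ec2d00f0c2
        alias backup-vol
    }
}
EOF
multipath -r   # reload the maps so the alias takes effect
```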
Regards,
Don
I have no idea if the OiS community has any interest or plan in adding this
feature.
Regards,
Don
Just that multipath is not iSCSI-specific: it also works with SCSI JBODs,
FC, etc. That is why it is implemented in a separate package. And there isn't a
need to include this support in iSCSI, as pretty much any distro includes multipath
(along with Open-iSCSI), so you get both of them out of the box.
Hmm.. do I remember correctly that there was some advantage in command
ordering (?) for load balancing that was better/easier with MC/S than with
multiple separate sessions plus multipath?
-- Pasi
You can configure how you want multiple targets to be logged in. I am not sure
what is meant by Multiple Connections, but what Open-iSCSI does is make
one session per target. That means that if the portal advertises four targets,
each with 10 LUNs, you end up having four TCP connections (perhaps
over four NICs, depending on your configuration) with a total of 40 SCSI disks showing up.
Multipath then iterates over all of those, figures out which ones return the same
World Wide Name (WWN), groups them together, and runs a path priority program (there are
different types of those) to figure out which of the 'set' of SCSI disks has a higher
priority. The priority program (this also depends on the priority policy in use) is run over
all of the disks; those that have the same priority are grouped together, and when data
is written, all of those are written to. This means the target decides which of those
SCSI disks, over a specific TCP connection, should be written to.
Here is an example (I have one LUN allocated):
360060160752f1b01d1a088eb97f9dc11 dm-45 DGC,RAID 5
[size=20G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=2][active]
\_ 14:0:0:0 sdn 8:208 [active][ready]
\_ 16:0:0:0 sdp 8:240 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 13:0:0:0 sdm 8:192 [active][ready]
\_ 15:0:0:0 sdo 8:224 [active][ready]
And there are four sessions:
tcp: [3] 192.168.1.243:3260,4 iqn.1992-04.com.emc:ax.fcnpr064600172.b1
tcp: [4] 192.168.1.242:3260,3 iqn.1992-04.com.emc:ax.fcnpr064600172.a1
tcp: [5] 192.168.1.241:3260,2 iqn.1992-04.com.emc:ax.fcnpr064600172.b0
tcp: [6] 192.168.1.240:3260,1 iqn.1992-04.com.emc:ax.fcnpr064600172.a0
Which means a total of 4 SCSI disks (1 LUN * 4 sessions).
And when reading/writing to /dev/dm-45, the data is cloned to go to
sdn and sdp (the higher-priority paths).
So the load-balancing is decided on the target side which can figure out which of its
disks are heavily used and switch over to a different session.
In regards to command ordering, you've got me there. I don't think the kernel
groups the commands in any fashion explicitly; that is, unless you use the
SCSI generic interface and build the SCSI commands yourself. But my knowledge
in that area is in its infancy; I need to find some good SCSI command books
and take the time to read the SCSI specs before I can answer that question
truthfully.
Yep.. I'm using this kind of multipath configuration myself too :)
Both with software open-iscsi and qla4xxx iSCSI HBA's.
> In regards to command ordering, you've got me there. I don't think the kernel
> groups the commands in any fashion explicitly; that is, unless you use the
> SCSI generic interface and build the SCSI commands yourself. But my knowledge
> in that area is in its infancy; I need to find some good SCSI command books
> and take the time to read the SCSI specs before I can answer that question
> truthfully.
>
I remember reading something about this subject earlier.. maybe on this list, or on the ietd list..
Let's try with an example:
You have 2 sessions to the same target/disk. So you have /dev/sda and
/dev/sdb on your initiator host. You multipath them together, with
round-robin loadbalancing, both paths active/ready.. and now you have let's
say /dev/mapper/mpath0.
Now, how is the data on the disk (on the target) kept consistent when IO
commands are split over both paths? That is, how is command execution order
preserved? Does the multipath software (on the initiator) take care of that?
Or does the target handle it somehow? Block until all the commands coming
from the first session/path are executed, then execute those from the second
session/path? That doesn't sound very clever..
If I remember correctly, there was some advantage to MC/S related to this
situation..
Hopefully someone knows this better and can comment :)
(And I'm not trying to say multipath is bad.. after all, that's what I'm
using myself:)
-- Pasi
Some links:
It seems you might be able to get better performance with MC/s than with multipath:
http://osdir.com/ml/kernel.device-mapper.devel/2005-04/msg00014.html
Some discussion about MC/s and ERL=2 etc:
http://osdir.com/ml/iscsi.iscsi-target.devel/2006-02/msg00183.html
Nice presentation about various iSCSI (opensource) implementations:
http://www.usenix.org/event/lsf08/tech/IO_bellinger.pdf
Blog entry about different choices for connectivity:
http://www.meta.net.nz/~daniel/blog/2008/06/05/linux-iscsi-stacks-and-multiple-initiators-per-target-lun/
-- Pasi
Yep, I just read about ERL=2 etc.; it would be a nice feature to have. I
understood that it also provides faster error recovery.
> Tradeoff between a cross transport Multipath and iSCSI optimized MC/s
> implementation should be best left to the user's discretion.
>
open-iscsi doesn't support MC/S, so basically you can't use it in Linux :)
So you're left with multipath..
-- Pasi
Hmm..
http://www.usenix.org/event/lsf08/tech/IO_bellinger.pdf
"ISCSI provides Command Sequence Number (CmdSN)
ordering to ensure delivery of tasks from Initiator to Target
Port."
I wonder if these CmdSN numbers are per session?
-- Pasi
http://osdir.com/ml/iscsi.iscsi-target.devel/2006-02/msg00183.html
"One of the areas of work recently done with Core-iSCSI has been getting
the more advanced features (namely MC/S) validated with 3rd party iSCSI
targets. With mailing list members help, I believe Core-iSCSI has
become the world's first Linux Initiator to prove MC/S interopt across
multiple iSCSI Target implementations."
Then there's the iSCSI target from the same people, which also supports MC/S
and ERL=2:
http://linux-iscsi.org/index.php/LIO-Target
Info also here:
http://www.usenix.org/event/lsf08/tech/IO_bellinger.pdf
-- Pasi
Thanks.
There's some discussion about command ordering etc on this mail:
http://linux.derkeiler.com/Mailing-Lists/Kernel/2008-07/msg04931.html
Doesn't seem to be a very simple subject after all :)
-- Pasi
Don
-----Original Message-----
From: open-...@googlegroups.com [mailto:open-...@googlegroups.com] On
Behalf Of Shyam...@Dell.com
Sent: Thursday, July 24, 2008 4:38 AM
To: open-...@googlegroups.com
I guess not many of the commercial targets support it..
It seems the core-iscsi initiator in Linux supports it, and their target
(lio-target) supports it as well..
http://www.linux-iscsi.org/index.php/LIO-Target
Too bad core-iscsi is not included in the upstream Linux kernel..
One of the reasons it cannot be merged upstream is that it supports
MC/S :) The linux-scsi developers will not allow that feature upstream.
As far as the different error recovery levels go, ERL=1 should get done
soon, because it would be very nice for iSCSI tape.
Mostly a lurker on this thread, but I work for a small company that
sells a commercial iSCSI target supporting multiple connections per
session and ERL=2 (with a strong focus on iSCSI tape support), so I
thought I'd throw in my two cents.
It would be very nice to see support in Linux for both MC/S and ERL=1.
The benefits of ERL=1 to people deploying iSCSI tape are obvious, but
MC/S is also useful. The latest generation of tape drives (LTO-4, DLT-S4,
etc.) support ordered tagged commands, so commands could be load-balanced
over multiple links; to do this, however, ordered command delivery over
the SCSI transport is vital, as multiple sessions cannot provide
transport-level command ordering.
Steve
Thanks! This was exactly what I was thinking.. there was some good reason
for MC/S related to command ordering :)
So you can't do that at all when you use multiple sessions instead of MC/S?
-- Pasi
And this is a great feature to have in virtual environments, where different
guest OSes can just have a TCP connection and not worry about iSCSI.
I believe we should encourage implementing something that is already in the
RFC, because not implementing it always causes interoperability challenges
and also inhibits innovative use cases that could be envisioned in this space.
You seem to be familiar with this stuff, so could you please explain in more
detail how round-robin load-balanced multipath works when using two separate
iSCSI sessions to the same target/LUN?
Is there some performance hit because multiple sessions cannot provide
command ordering, or?
Just trying to understand how things work for real :)
Thanks!
-- Pasi