Multi-Path Connections


Mail Man

Jul 19, 2008, 2:36:55 PM
to open-iscsi
I've been looking through the documentation and can find no reference
for multi-path. Is this feature available under a different name?

Konrad Rzeszutek

Jul 21, 2008, 9:36:46 AM
to open-...@googlegroups.com
On Sat, Jul 19, 2008 at 11:36:55AM -0700, Mail Man wrote:
>
> I've been looking through the documentation and can find no reference
> for multi-path. Is this feature available under a different name?

That functionality is in a separate package called 'multipath'.

Mike Christie

Jul 21, 2008, 1:13:52 PM
to open-...@googlegroups.com
Mail Man wrote:
> I've been looking through the documentation and can find no reference
> for multi-path. Is this feature available under a different name?

For the more traditional block-based multipath you are probably used to,
you use dm-multipath; you can search the internet for info on that or
read the multipath-tools docs. All you should have to do is log into
all the iscsi portals, and then the multipath tools will create the
multipath device for you (you do have to edit /etc/multipath.conf,
though). If you search for multipath in the current iscsi README you
should find some info on setting iscsi params when using dm-multipath.
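As a rough sketch of that workflow (the portal address here is a made-up
example, not from this thread):

```shell
# Discover the targets behind a portal (substitute your array's address).
iscsiadm -m discovery -t sendtargets -p 192.168.0.100:3260

# Log in to every discovered portal; each login creates one session,
# and each session surfaces its LUNs as separate SCSI disks.
iscsiadm -m node --loginall=all

# With multipathd running (and /etc/multipath.conf edited), the duplicate
# paths to the same LUN are coalesced into a single dm device:
multipath -ll
```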

If you are asking about iSCSI's multiple connections per session, then you
will not find anything, because we do not support it.

Donald Williams

Jul 21, 2008, 5:12:52 PM
to open-...@googlegroups.com
Are you asking about how to create multiple logins to the same target within Open-iSCSI?
 
 In the /etc/iscsi/ifaces directory there is an example file showing how to configure the egress port for iSCSI connections.  By creating a file for each GbE interface you want to use, you'll get multiple logins that point to the same target (i.e. /dev/sdb, /dev/sdc).  Then you can layer Linux dm-multipath on top to create an MPIO device.
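 A sketch of what Don describes (the interface names and portal IP are
assumptions for illustration); the iface records can also be created with
iscsiadm rather than editing the files by hand:

```shell
# One iface record per GbE NIC used for iSCSI:
iscsiadm -m iface -I iface.eth0 --op=new
iscsiadm -m iface -I iface.eth0 --op=update -n iface.net_ifacename -v eth0
iscsiadm -m iface -I iface.eth1 --op=new
iscsiadm -m iface -I iface.eth1 --op=update -n iface.net_ifacename -v eth1

# Discovery bound to both ifaces creates one node record per (target, iface),
# so login yields two sessions to the same target (e.g. /dev/sdb and /dev/sdc):
iscsiadm -m discovery -t sendtargets -p 192.168.0.100:3260 \
        -I iface.eth0 -I iface.eth1
iscsiadm -m node --loginall=all
```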
 
 What iSCSI target are you trying to connect to?
 
 Regards,
 
 Don

No Spam

Jul 21, 2008, 7:14:43 PM
to open-...@googlegroups.com

Thanks for the answers.

--- On Mon, 7/21/08, Donald Williams <don.e.w...@gmail.com> wrote:
> Are you asking about how to create multiple logins to the
> same target within
> Open-iSCSI?

Yes, multiple logins to single target.

> In the /etc/iscsi/ifaces directory there is an example
> file on how to
> configure the egress port for iSCSI connections. By
> creating a file for
> each GbE interface you want to use, you'll have
> multiple logins that point
> to the same target. (I.e. /dev/sdb, /dev/sdc) Then you
> can layer on Linux
> dm-multipath to create an MPIO device.

This sounds like we're getting into the right area.

> What iSCSI target are you trying to connect to?

I'm using a Stardom 4 bay unit.


Shyam...@dell.com

Jul 22, 2008, 2:42:46 AM
to open-...@googlegroups.com

> If you are asking about iscsi's multiple connection sessions then you
> will not find anything, because we do not support it.

Is that because it is not implemented or there are other design
considerations inhibiting the support/implementation?

Thanks,
Shyam

Don Williams

Jul 22, 2008, 9:22:13 AM
to open-...@googlegroups.com
We need to make sure that we're talking about the same thing. There is
MPIO, which you absolutely can do with Open-iSCSI (OiS).

I.e. Here's one of my targets. There are two connections to that volume
going to different GbE ports on the iSCSI array.

Target:
iqn.2001-05.com.equallogic:6-8a0900-8d1cb3401-c2f0002decc46730-dw-open-iscsi-backup
Current Portal: 172.23.10.242:3260,1
Persistent Portal: 172.23.10.240:3260,1
**********
Interface:
**********
Iface Name: iface.eth0
Iface Transport: tcp
Iface Initiatorname: iqn.2005-03.org.open-iscsi:2009610cb1b
Iface IPaddress: 172.23.49.170
Iface HWaddress: 00:04:23:C7:E1:5A
Iface Netdev: default
SID: 3
iSCSI Connection State: LOGGED IN
iSCSI Session State: Unknown
Internal iscsid Session State: NO CHANGE
Current Portal: 172.23.10.246:3260,1
Persistent Portal: 172.23.10.240:3260,1
**********
Interface:
**********
Iface Name: iface.eth1
Iface Transport: tcp
Iface Initiatorname: iqn.2005-03.org.open-iscsi:2009610cb1b
Iface IPaddress: 172.23.49.171
Iface HWaddress: 00:04:23:C7:E1:5B
Iface Netdev: default
SID: 4
iSCSI Connection State: LOGGED IN
iSCSI Session State: Unknown
Internal iscsid Session State: NO CHANGE

And here's the multipath view of the devices.

#multipath -ll
dw-open-iscsi-vol0 (30690a0284060efffe05b746ecd3ac236) dm-1 EQLOGIC ,100E-00
[size=250G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 6:0:0:0 sdd 8:48 [active][ready]
 \_ 7:0:0:0 sde 8:64 [active][ready]
backup-vol (30690a01840b31c8d3067c4ec2d00f0c2) dm-0 EQLOGIC ,100E-00
[size=200G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 9:0:0:0 sdb 8:16 [active][ready]
 \_ 8:0:0:0 sdc 8:32 [active][ready]

So I can lose a path and still function, and the IO load is split across
both paths when they're running.


Then there's MC/S, Multiple Connections per Session, which I don't
believe is supported.

Basically, the steps are: define the egress ports, then discover and log in
to the targets.

Configure multipath using the /etc/multipath.conf file and you're done. I
also create persistent device names, i.e. 'backup-vol' is
/dev/mapper/backup-vol-par1, for consistency and easier admin.
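A minimal sketch of the /etc/multipath.conf piece of this; the WWID is the
one shown for 'backup-vol' in the multipath -ll output above, and the alias
is just illustrative:

```
multipaths {
        multipath {
                wwid  30690a01840b31c8d3067c4ec2d00f0c2
                alias backup-vol
        }
}
```

After reloading the maps (e.g. 'multipath -r'), the device appears as
/dev/mapper/backup-vol regardless of which /dev/sdX nodes happen to back it.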

Regards,

Don

Shyam...@dell.com

Jul 22, 2008, 9:27:24 AM
to open-...@googlegroups.com
> Then there's MC/S. Multiple Connections in the same Session. Which I
> don't believe is supported.

I am talking about this scenario. I want to know if there are any design
considerations inhibiting this support.

Don Williams

Jul 22, 2008, 9:36:13 AM
to open-...@googlegroups.com
None that I'm aware of. However, MC/S is more of a Microsoft thing. *I*
don't see it anywhere else, and even with MS you can use standard MPIO and
achieve the same result. You end up logging into the target once for each
GbE interface you want to use. I don't know of any benefit to MC/S over
MPIO. When you Google it, they're mentioned in the same breath, since
ultimately they provide the same result: redundant connections to a volume.

I have no idea if the OiS community has any interest in, or plans for,
adding this feature.

Regards,

Don

Konrad Rzeszutek

Jul 22, 2008, 11:25:08 AM
to open-...@googlegroups.com
On Tue, Jul 22, 2008 at 06:57:24PM +0530, Shyam...@Dell.com wrote:
>
> > Then there's MC/S. Multiple Connections in the same Session. Which I
> > don't believe is supported.
>
> I am talking about this scenario. Want to know if there any design
> considerations inhibiting this support.

Just that multipath is not iSCSI-specific. It can work on SCSI JBOD, FC, etc.
Therefore the implementation has been done in a separate package. And there
isn't a need to include this support in iSCSI, as pretty much any distro
includes multipath (along with Open-iSCSI), so you get both out of the box.

Pasi Kärkkäinen

Jul 23, 2008, 7:53:26 AM
to open-...@googlegroups.com

Hmm.. do I remember correctly that there was some advantage in command
ordering (?) for load-balancing that was better/easier with MC/s than with
multiple separate sessions under multipath?

-- Pasi

Konrad Rzeszutek

Jul 23, 2008, 10:06:25 AM
to open-...@googlegroups.com

You can configure how you want multiple targets to be logged in. I am not
sure what is meant by Multiple Connections, but what Open-iSCSI does is make
one session per target. That means that if the portal advertises four
targets, each with 10 LUNs, you end up with four TCP connections (perhaps
over four NICs, depending on your configuration) and a total of 40 SCSI
disks showing up.

Multipath then iterates over all of those, figures out which ones return the
same World Wide Name (WWN), and groups them together. It then runs a path
priority program (there are different types of those) to figure out which of
the 'set' of SCSI disks have a higher priority. The priority program (this
also depends on the priority policy in use) is run over all of the disks;
those with the same priority are grouped together, and I/O is spread over
every path in the active group. This means the target decides which of those
SCSI disks, over a specific TCP connection, should be written to.

Here is an example (I have one LUN allocated):
360060160752f1b01d1a088eb97f9dc11 dm-45 DGC,RAID 5
[size=20G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=2][active]
 \_ 14:0:0:0 sdn 8:208 [active][ready]
 \_ 16:0:0:0 sdp 8:240 [active][ready]
\_ round-robin 0 [prio=0][enabled]
 \_ 13:0:0:0 sdm 8:192 [active][ready]
 \_ 15:0:0:0 sdo 8:224 [active][ready]

And there are four sessions
tcp: [3] 192.168.1.243:3260,4 iqn.1992-04.com.emc:ax.fcnpr064600172.b1
tcp: [4] 192.168.1.242:3260,3 iqn.1992-04.com.emc:ax.fcnpr064600172.a1
tcp: [5] 192.168.1.241:3260,2 iqn.1992-04.com.emc:ax.fcnpr064600172.b0
tcp: [6] 192.168.1.240:3260,1 iqn.1992-04.com.emc:ax.fcnpr064600172.a0

Which means a total of 4 SCSI disks (1 LUN * 4 sessions)

And when reading/writing to /dev/dm-45, the I/O goes to sdn and sdp (the
higher-priority group).

So the load-balancing is decided on the target side, which can figure out
which of its disks are heavily used and switch over to a different session.
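The WWN grouping described above can be checked by hand. On the udev of that
era, scsi_id prints the identifier multipath keys on, and two paths to the
same LUN return the same string (the device names refer to the example
output above; the exact scsi_id flags vary between versions):

```shell
# Same WWN from both paths => multipath groups them into one dm device.
/sbin/scsi_id -g -u -s /block/sdn
/sbin/scsi_id -g -u -s /block/sdp
```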

In regards to command ordering, you got me there. I don't think the kernel
groups the commands in any fashion explicitly - that is, unless you use the
SCSI generic interface and issue the SCSI commands yourself. But my knowledge
in that area is in its infancy; I need to find some good SCSI command books
and take the time to read the SCSI specs before I can answer this question
truthfully.

Pasi Kärkkäinen

Jul 24, 2008, 4:28:07 AM
to open-...@googlegroups.com

Yep.. I'm using this kind of multipath configuration myself too :)

Both with software open-iscsi and qla4xxx iSCSI HBAs.

> In regards to command ordering you got me there. I don't think the kernel groups the commands
> in any fashion explicitly - that is unless you use the SCSI generic interface and set the
> SCSI commands yourself. But my knowledge in that area is at infancy period - I need to
> find some good SCSI command books and take the time to the read the SCSI specs before I can
> answer this little question truthfully.
>

I remember reading something about this subject earlier.. maybe on this list, or then on ietd list..

Let's try with an example:

You have 2 sessions to the same target/disk. So you have /dev/sda and
/dev/sdb on your initiator host. You multipath them together, with
round-robin loadbalancing, both paths active/ready.. and now you have let's
say /dev/mapper/mpath0.

Now how is the data on disk (on target) kept consistent when IO commands
are split over both paths? Meaning how is the command execution order preserved?

Does multipath software (on initiator) take care of that?

Or does the target handle that somehow? Block until all the commands coming
from first session/path are executed, then execute from the second
session/path? This doesn't sound very clever..

If I remember correctly, there was some advantage from MC/s related to this
situation..

Hopefully someone knows this better and can comment :)

(And I'm not trying to say multipath is bad.. after all, that's what I'm
using myself:)

-- Pasi

Shyam...@dell.com

Jul 24, 2008, 4:38:25 AM
to open-...@googlegroups.com

> (And I'm not trying to say multipath is bad.. after all, that's what
I'm using myself:)

True. Also, in specific scenarios where the user is simply interested in
iSCSI traffic, MC/s has an advantage in exploiting ERL levels > 0
for better error recovery.

The tradeoff between cross-transport multipath and an iSCSI-optimized MC/s
implementation is best left to the user's discretion.

-Shyam Iyer



Pasi Kärkkäinen

Jul 24, 2008, 4:46:17 AM
to open-...@googlegroups.com

Some links:

It seems you might be able to get better performance with MC/s than with multipath:
http://osdir.com/ml/kernel.device-mapper.devel/2005-04/msg00014.html

Some discussion about MC/s and ERL=2 etc:
http://osdir.com/ml/iscsi.iscsi-target.devel/2006-02/msg00183.html

Nice presentation about various iSCSI (opensource) implementations:
http://www.usenix.org/event/lsf08/tech/IO_bellinger.pdf

Blog entry about different choices for connectivity:
http://www.meta.net.nz/~daniel/blog/2008/06/05/linux-iscsi-stacks-and-multiple-initiators-per-target-lun/

-- Pasi

Pasi Kärkkäinen

Jul 24, 2008, 4:48:20 AM
to open-...@googlegroups.com
On Thu, Jul 24, 2008 at 02:08:25PM +0530, Shyam...@Dell.com wrote:
>
>
> > (And I'm not trying to say multipath is bad.. after all, that's what
> I'm using myself:)
>
> True. Also, in specific scenarios where the user is simply interested in
> iSCSI traffic MC/s have an advantage in exploiting the ERL levels > 0
> for better error recovery.
>

Yep, I just read about ERL=2 etc.. it would be a nice feature to have. I
understood that it also provides faster error recovery.

> Tradeoff between a cross transport Multipath and iSCSI optimized MC/s
> implementation should be best left to the user's discretion.
>

open-iscsi doesn't support MC/s, so basically you can't use it in Linux :)

So you're left with multipath..

-- Pasi

Pasi Kärkkäinen

Jul 24, 2008, 4:50:38 AM
to open-...@googlegroups.com
On Thu, Jul 24, 2008 at 11:28:07AM +0300, Pasi Kärkkäinen wrote:
>
> > In regards to command ordering you got me there. I don't think the kernel groups the commands
> > in any fashion explicitly - that is unless you use the SCSI generic interface and set the
> > SCSI commands yourself. But my knowledge in that area is at infancy period - I need to
> > find some good SCSI command books and take the time to the read the SCSI specs before I can
> > answer this little question truthfully.
> >
>
> I remember reading something about this subject earlier.. maybe on this list, or then on ietd list..
>
> Let's try with an example:
>
> You have 2 sessions to the same target/disk. So you have /dev/sda and
> /dev/sdb on your initiator host. You multipath them together, with
> round-robin loadbalancing, both paths active/ready.. and now you have let's
> say /dev/mapper/mpath0.
>
> Now how is the data on disk (on target) kept consistent when IO commands
> are split over both paths? Meaning how is the command execution order preserved?
>
> Does multipath software (on initiator) take care of that?
>
> Or does the target handle that somehow? Block until all the commands coming
> from first session/path are executed, then execute from the second
> session/path? This doesn't sound very clever..
>
> If I remember correctly, there was some advantage from MC/s related to this
> situation..
>
> Hopefully someone knows this better and can comment :)
>

Hmm..

http://www.usenix.org/event/lsf08/tech/IO_bellinger.pdf

"ISCSI provides Command Sequence Number (CmdSN)
ordering to ensure delivery of tasks from Initiator to Target
Port."

I wonder if these CmdSN numbers are per session?

-- Pasi

Pasi Kärkkäinen

Jul 24, 2008, 4:58:11 AM
to open-...@googlegroups.com
On Tue, Jul 22, 2008 at 09:36:13AM -0400, Don Williams wrote:
>
> None that I'm aware of. However, MC/S is more of Microsoft thing. *I*
> don't see it anywhere else and even with MS you can use standard MPIO and
> achieve the same result. You end up logging into the target for each GbE
> interface you want to use. I don't know of any benefit to MC/S over MPIO.
> When you Google it, they're mentioned in the same breath. Since ultimately
> they provide the same result. Redundant connections to a volume.
>
> I have no idea if the OiS community has any interest or plan in adding this
> feature.
>

http://osdir.com/ml/iscsi.iscsi-target.devel/2006-02/msg00183.html

"One of the areas of work recently done with Core-iSCSI has been getting
the more advanced features (namely MC/S) validated with 3rd party iSCSI
targets. With mailing list members help, I believe Core-iSCSI has
become the world's first Linux Initiator to prove MC/S interopt across
multiple iSCSI Target implementations."


Then there's the iSCSI target from the same people also supporting MC/s and
ERL=2:

http://linux-iscsi.org/index.php/LIO-Target

Info also here:
http://www.usenix.org/event/lsf08/tech/IO_bellinger.pdf

-- Pasi

Vladislav Bolkhovitin

Jul 24, 2008, 5:09:54 AM
to open-...@googlegroups.com, pa...@iki.fi
You forgot about this thread:
http://lkml.org/lkml/2008/7/14/273

> -- Pasi
>
> >
>

Pasi Kärkkäinen

Jul 24, 2008, 5:12:42 AM
to Vladislav Bolkhovitin, open-...@googlegroups.com

Thanks.

There's some discussion about command ordering etc on this mail:
http://linux.derkeiler.com/Mailing-Lists/Kernel/2008-07/msg04931.html

Doesn't seem to be a very simple subject after all :)

-- Pasi

Vladislav Bolkhovitin

Jul 24, 2008, 5:26:09 AM
to open-...@googlegroups.com
That's the same thread :)

Don Williams

Jul 24, 2008, 9:51:23 AM
to open-...@googlegroups.com
Re: ERL > 0: except that basically no one supports it. Both the initiator
and the target must support it.

Don

-----Original Message-----
From: open-...@googlegroups.com [mailto:open-...@googlegroups.com] On
Behalf Of Shyam...@Dell.com
Sent: Thursday, July 24, 2008 4:38 AM
To: open-...@googlegroups.com

Pasi Kärkkäinen

Jul 24, 2008, 10:54:08 AM
to open-...@googlegroups.com
On Thu, Jul 24, 2008 at 09:51:23AM -0400, Don Williams wrote:
>
> Re: ERL > 0 Except that basically no one supports it. The initiator and
> target both must support it.
>

I guess not many of the commercial targets support it..

It seems the core-iscsi initiator in Linux supports it, and their target
(lio-target) supports it as well..

http://www.linux-iscsi.org/index.php/LIO-Target

Too bad core-iscsi is not included in upstream Linux kernel..

Donald Williams

Jul 24, 2008, 1:13:25 PM
to open-...@googlegroups.com
Hello,
 
 I was referring more to commercial iSCSI targets and initiators.
 
 Thx.
 
 Don

Mike Christie

Jul 24, 2008, 1:13:39 PM
to open-...@googlegroups.com
Pasi Kärkkäinen wrote:
> On Thu, Jul 24, 2008 at 09:51:23AM -0400, Don Williams wrote:
>> Re: ERL > 0 Except that basically no one supports it. The initiator and
>> target both must support it.
>>
>
> I guess not many of the commercial targets support it..
>
> It seems core-iscsi initiator in Linux supports it, and their target
> (lio-target) supports that aswell..
>
> http://www.linux-iscsi.org/index.php/LIO-Target
>
> Too bad core-iscsi is not included in upstream Linux kernel..
>

One of the reasons it cannot be merged upstream is because it supports
MC/s :) The linux-scsi developers will not allow that feature upstream.

As far as different error recovery levels goes, erl1 should get done
soon, because it would be very nice for iscsi tape.

Steven Hayter

Aug 6, 2008, 3:45:29 PM
to open-...@googlegroups.com

I'm mostly a lurker on this thread, but I work for a small company which
sells a commercial iSCSI target supporting multiple connections per session
and ERL2 (with a strong focus on iSCSI tape support), so I thought I'd
throw my two cents in.

It would be very nice to see support in Linux for both MC/s and ERL1.

The benefits of ERL1 to people deploying iSCSI tape are obvious, but MC/s
is also useful. The latest generation of tape drives (LTO4, DLT-S4, etc.)
support ordered tagged commands, so commands could be load-balanced over
multiple links. To do this, however, ordered command delivery over the SCSI
transport is vital, and multiple sessions cannot provide transport-level
command ordering.

Steve

Pasi Kärkkäinen

Aug 6, 2008, 4:39:52 PM
to open-...@googlegroups.com

Thanks! This was exactly what I was thinking.. there was some good reason
for MC/s related to command ordering :)

So you can't do that at all when you use multiple sessions instead of MC/s?

-- Pasi

Shyam...@dell.com

Aug 7, 2008, 3:32:42 AM
to open-...@googlegroups.com

And this is a great feature to have in virtual environments, where different guest OSes can just have a TCP connection and not have to worry about iSCSI.

I believe we should encourage implementing something that is already in the RFC, because not implementing it causes interoperability challenges and inhibits innovative use cases that could be envisioned in this space.

Pasi Kärkkäinen

Aug 7, 2008, 9:50:11 AM
to open-...@googlegroups.com

You seem to be familiar with this stuff, so could you please explain in more
detail how round-robin load-balanced multipath works when using two separate
iSCSI sessions to the same target/LUN?

Is there some performance hit because multiple sessions cannot provide
command ordering?

Just trying to understand how things work for real :)

Thanks!

-- Pasi
