iSCSI Multiqueue


Bobby

Jan 15, 2020, 10:16:48 AM
to open-iscsi

Hi all,

I have a question regarding multi-queue in iSCSI. AFAIK, scsi-mq has been functional since kernel 3.17, after the block layer was converted from the single-queue path to multi-queue blk-mq. So current kernels have full-fledged multi-queue support.

The question is:

How does an iSCSI initiator use multi-queue? Does it mean having multiple connections? I would like
to see where exactly that is achieved in the code, if someone can give me a hint. Thanks in advance :)

Regards

The Lee-Man

Jan 23, 2020, 4:51:49 PM
to open-iscsi
open-iscsi does not use multi-queue specifically, though the whole block layer has now been converted to multi-queue. If I understand correctly, there is no single-queue path any more, but there is glue that allows existing single-queue drivers to carry on, mapping their use onto multi-queue. (Someone please correct me if I'm wrong.)
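To illustrate the glue described above, here is a toy model of blk-mq's default queue mapping (illustrative only, not kernel code; the function name and numbers are made up). Each CPU gets a software queue, and software queues are spread over however many hardware queues the driver exposes. A legacy single-queue driver is handled the same way: it simply exposes one hardware queue, so every software queue funnels into hw queue 0.

```python
# Toy model of blk-mq's software-queue -> hardware-queue mapping.
# Not kernel code; just a sketch of the idea.

def map_queues(nr_cpus, nr_hw_queues):
    """Return {cpu: hw_queue}, spreading CPUs evenly over the hw queues."""
    return {cpu: cpu % nr_hw_queues for cpu in range(nr_cpus)}

# A real multi-queue device: one hardware queue per CPU, no sharing.
multiqueue_dev = map_queues(nr_cpus=4, nr_hw_queues=4)

# A legacy single-queue driver: everything maps onto hw queue 0.
legacy_driver = map_queues(nr_cpus=4, nr_hw_queues=1)
```

The point is that single-queue drivers need no special code path: they are just the degenerate `nr_hw_queues=1` case of the same mapping.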

The only time multi-queue might be useful for open-iscsi would be for MCS -- multiple connections per session. But the implementation of multi-queue makes using it for MCS problematic: because each queue lives on a different CPU, open-iscsi would have to coordinate the multiple connections across CPUs, making things like keeping sequence numbers correct difficult.

Hope that helps. I _believe_ there is still an effort to map open-iscsi MCS to multi-queue, but nobody has tried to actually do it yet that I know of. The goal, of course, is better throughput using MCS.

Donald Williams

Jan 23, 2020, 7:51:31 PM
to open-...@googlegroups.com
Hello  

Thanks for sending this. I too believe this is how it works. Given the current performance of open-iscsi, it is certainly not single-threaded per iSCSI session, and with multiple iSCSI sessions over different NICs feeding into multipathd, the performance and redundancy needs of the vast majority of SAN applications are met.

Often the bottleneck is the backend storage, given the interface speeds available today for iSCSI, especially as you add more hosts, since the I/O load as seen by the storage is typically very random.

Regards,
Don

Vladislav Bolkhovitin

Jan 24, 2020, 3:43:55 AM
to open-...@googlegroups.com, The Lee-Man

On 1/23/20 1:51 PM, The Lee-Man wrote:
> ...
> Hope that helps. I _believe_ there is still an effort to map open-iscsi
> MCS to multi-queue, but nobody has tried to actually do it yet that I
> know of. The goal, of course, is better throughput using MCS.

From my old iSCSI target development days, MCS is fundamentally
unfriendly to multi-queue, because the iSCSI spec requires preserving
the order of commands within a session across its multiple connections.
Command serialization => shared lock or atomic => no multi-queue
benefits.

Hence, using MCS with multi-queue would only be beneficial if you drop
(i.e. violate) this iSCSI spec requirement.
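The serialization point in the argument above can be sketched as follows (a hypothetical toy model, not open-iscsi code; the class and names are made up). The spec orders commands by a single per-session CmdSN, so even with one connection per CPU, every submission must pass through one shared counter -- a lock or atomic that all queues contend on.

```python
# Sketch of the serialization MCS would impose: 4 "connections", one per
# CPU, all forced through one shared per-session CmdSN counter.
import threading

class Session:
    def __init__(self):
        self.cmdsn = 0
        self.lock = threading.Lock()   # the shared serialization point
        self.sent = []

    def submit(self, conn_id, op):
        with self.lock:                # every connection funnels through here
            sn = self.cmdsn
            self.cmdsn += 1
            self.sent.append((sn, conn_id, op))
        return sn

session = Session()
threads = [threading.Thread(
               target=lambda c=c: [session.submit(c, "READ") for _ in range(100)])
           for c in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# CmdSNs come out unique and gapless -- ordering is preserved, but only
# because all four queues serialized on the same lock, which is exactly
# what defeats the per-CPU independence multi-queue is built on.
assert sorted(sn for sn, _, _ in session.sent) == list(range(400))
```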

Just a small reminder: I have not looked at the updated iSCSI spec in a
while, but I don't remember this requirement being eased there in any way.

In any case, multiple iSCSI sessions per block-level "session" will
always be an alternative that requires virtually zero changes in
open-iscsi and the in-kernel iSCSI driver[1], as opposed to the complex
changes required to start supporting MCS in it as well as in the many
iSCSI targets that currently do not[2]. If I were working on iSCSI MQ,
I would consider this the first and MUCH more preferable option.

Vlad

1. Most likely, completely zero.
2. Where the requirement to preserve command order would similarly kill
all the MQ performance benefits.

Vladislav Bolkhovitin

Jan 24, 2020, 3:49:07 AM
to open-...@googlegroups.com, The Lee-Man
Oops, that should read 'MCS' everywhere instead of 'MS'. Something
"corrected" this "for me" behind my back.

Sorry,
Vlad

Paul Koning

Jan 24, 2020, 9:29:25 AM
to open-...@googlegroups.com


> On Jan 24, 2020, at 3:43 AM, Vladislav Bolkhovitin <v...@vlnb.net> wrote:
>
>
>> ...
>
> From my old iSCSI target development days, MCS is fundamentally
> unfriendly to multi-queue, because the iSCSI spec requires preserving
> the order of commands within a session across its multiple connections.
> Command serialization => shared lock or atomic => no multi-queue
> benefits.
>
> ...

My reaction, from a similar background, matches yours. iSCSI makes things quite hard by requiring ordering across the connections that make up a session. That discourages implementation of multi-connection support in targets (it's optional). In some cases, it entirely rules it out; for example, in the EqualLogic storage arrays it would be pretty useless to support multi-connection since the connections could not be spread over multiple arrays, and for that reason we ruled out that feature.

By contrast, MPIO (several independent sessions used by the storage stack as a wider and/or more fault tolerant pipe to the storage) requires essentially no work at the target and gives at least as much benefit as MCS for a lot less work.
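The MPIO alternative described above can be sketched the same way (again a hypothetical toy model; the names are made up): several independent single-connection sessions, each with its own private CmdSN counter. No cross-session ordering is required, so there is no shared lock -- each queue/CPU can drive its own session, which is what makes MPIO fit multi-queue naturally.

```python
# Sketch of MPIO: two independent sessions (paths) to the same LUN,
# with a round-robin path selector on top, as multipathd might use.
import itertools

class Session:
    def __init__(self, name):
        self.name = name
        self.cmdsn = 0                 # private counter, no contention

    def submit(self, op):
        sn = self.cmdsn
        self.cmdsn += 1
        return (self.name, sn, op)

paths = [Session("path0"), Session("path1")]   # two sessions, e.g. two NICs
rr = itertools.cycle(paths)                    # round-robin path selector
ios = [next(rr).submit("READ") for _ in range(6)]

# Each path numbers its own commands independently (CmdSN 0..2 on each);
# no counter is shared between paths, so nothing serializes across CPUs.
```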

paul

