Inefficient grouping of targets and hosts?


Oskar

May 19, 2023, 1:04:31 PM
to esos-users
Hi!

I'm setting up ESOS on 4 servers at work and have come to the point where I need to configure targets and initiators. I can't get over how tedious this is to set up once you have a few servers and multiple FC connections on each server for redundancy.

At the moment I have 4 ESOS servers with 2 FC cards each, and each of these cards has 4 links, so each ESOS server has 8 targets. I also have 4 initiators/hosts with 2 FC cards each, but these cards have only 2 links each instead of 4, so 4 initiator addresses per server, except for one host that has 8 links.

These servers are all connected to 2 fabric switches for redundancy.

So in total I have 32 target addresses and 20 initiator addresses, which should all be able to reach each other.

As far as I understand it, if I want to set up multipath on these servers I need to pair up each target address with each initiator address, which means I would have to configure 640 links/connections (32 * 20). Doing that by hand isn't really realistic... So I assume I have to write a script or something similar to generate these commands? Or is there already a more automatic way to do this than what I've found so far?

- - -

I was also thinking a bit about why grouping is done the way it is. Instead of creating a group for each target and then adding initiators and devices to that group, couldn't we move the group concept one level higher? That is, create groups and then add all targets, initiators and devices to that group instead? That way we would only have to do this once instead of once per target.

I assume the current behavior is straight-up legacy from how the scstadmin binary is set up? But I can't see why anyone would prefer that over this approach. Am I missing something?

Looking forward to hearing your thoughts on this! :)

Best regards
Oskar

Marc Smith

May 19, 2023, 11:59:48 PM
to esos-...@googlegroups.com
On Fri, May 19, 2023 at 1:04 PM Oskar <os...@stahls.se> wrote:
>
> Hi!
>
> I'm setting up ESOS on 4 servers at work and have come to the point where I need to configure targets and initiators. And I can't comprehend how tedious it is to set this up if you have a few servers and multiple FC connections on each server for redundancy.
>
> At the moment I have 4 ESOS servers with 2 FC cards each and each of these cards have 4 links. So each ESOS server have 8 targets. And then I have 4 initiators/hosts with 2 FC cards each too, but these have only 2 links each instead of 4. So 4 initiator addresses per server. Except for one that has 8 links.
>
> These servers are all connected to 2 fabric switches for redundancy.
>
> So in total I have 32 target addresses and 20 initiator addresses. That should all be able to reach each other.
>
> And as far as my knowledge goes, if I want to setup multi-path on these servers I need to pair up each target address with each initiator address. which means that I have to configure 640 links/connections (32*20) And doing that by hand isn't really realistic... So I assume that I have to write a script or something similar to generate these commands? Or is there already some way to do this in a more automatic manner than what I've found so far?

You zone the initiator ports with the desired target ports on your FC
switches, then they can communicate with each other on the FC fabric.
Using the common MPIO stack for your initiator handles the work of
utilizing all available paths to the target; these typically use block
device and/or SCSI volume information that the initiator receives to
figure out paths (eg, based on disk, and then all paths to that
device).

Are the initiators Linux? If so, then using dm-multipath is good.
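
If the initiators are Linux, a minimal /etc/multipath.conf starting point might look like the fragment below. This is only an illustrative sketch: the right defaults and blacklist entries depend on your distribution, HBAs, and the SCST device product strings, so check it against the multipath.conf man page.

```
defaults {
    user_friendly_names yes
    find_multipaths     yes
}

blacklist {
    devnode "^sda$"    # example: keep the local boot disk out of multipath
}
```

After editing, reload the daemon (e.g. `systemctl reload multipathd`) and verify that all paths to each LUN show up with `multipath -ll`.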


>
> - - -
>
> I was thinking a little bit about it and i can't seem to get why grouping is done the way it is? I mean if instead of creating a group for each target and then adding initiators and devices to that group. Couldn't we instead move the group concept one level higher? So that we create groups and then add all targets, initiators and devices to that group instead? that way we would only have to do this once instead of doing it for each target?

Are you referring to the host/initiator/security group stuff in the TUI?


>
> I assume that the way this is handled at the moment is a straight up legacy from how the scstadmin binary is setup? But I can't see why anyone would want to do it that way over my way? Am I missing something?

Not sure I understand this either? You can use 'scstadmin' from the
root shell, or the ESOS TUI handles the SCSI target configuration. It
uses block devices, files, etc. for the back-end storage (in ESOS).


>
> Looking forward to hear your thoughts on this! :)
>
> Best regards
> Oskar
>
> --
> You received this message because you are subscribed to the Google Groups "esos-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to esos-users+...@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/esos-users/82ad24d4-2575-4a18-8dea-e16f5e015a1cn%40googlegroups.com.

Oskar

May 20, 2023, 12:21:38 PM
to esos-users
On Saturday, May 20, 2023 at 05:59:48 UTC+2, Marc Smith wrote:
On Fri, May 19, 2023 at 1:04 PM Oskar <os...@stahls.se> wrote:
>
> Hi!
>
> I'm setting up ESOS on 4 servers at work and have come to the point where I need to configure targets and initiators. And I can't comprehend how tedious it is to set this up if you have a few servers and multiple FC connections on each server for redundancy.
>
> At the moment I have 4 ESOS servers with 2 FC cards each and each of these cards have 4 links. So each ESOS server have 8 targets. And then I have 4 initiators/hosts with 2 FC cards each too, but these have only 2 links each instead of 4. So 4 initiator addresses per server. Except for one that has 8 links.
>
> These servers are all connected to 2 fabric switches for redundancy.
>
> So in total I have 32 target addresses and 20 initiator addresses. That should all be able to reach each other.
>
> And as far as my knowledge goes, if I want to setup multi-path on these servers I need to pair up each target address with each initiator address. which means that I have to configure 640 links/connections (32*20) And doing that by hand isn't really realistic... So I assume that I have to write a script or something similar to generate these commands? Or is there already some way to do this in a more automatic manner than what I've found so far?

You zone the initiator ports with the desired target ports on your FC
switches, then they can communicate with each other on the FC fabric.
Using the common MPIO stack for your initiator handles the work of
utilizing all available paths to the target; these typically use block
device and/or SCSI volume information that the initiator receives to
figure out paths (eg, based on disk, and then all paths to that
device).

Are the initiators Linux? If so, then using dm-multipath is good.
 

I'm aware of the zoning settings on the switch. I just merged all the links I would have to set up on each server into one number, which might have made it unclear, sorry for that!

I skimmed through the wiki page on dm-multipath, and from what I found there, dm-multipath is just the kernel part of the multipath stack, right?
 

>
> - - -
>
> I was thinking a little bit about it and i can't seem to get why grouping is done the way it is? I mean if instead of creating a group for each target and then adding initiators and devices to that group. Couldn't we instead move the group concept one level higher? So that we create groups and then add all targets, initiators and devices to that group instead? that way we would only have to do this once instead of doing it for each target?

Are you referring to the host/initiator/security group stuff in the TUI?
 

Yes! I'm talking about the TUI! This whole post was basically about the TUI. :)


>
> I assume that the way this is handled at the moment is a straight up legacy from how the scstadmin binary is setup? But I can't see why anyone would want to do it that way over my way? Am I missing something?

Not sure I understand this either? You can use 'scstadmin' from the
root shell, or the ESOS TUI handles the SCSI target configuration. It
uses block devices, files, etc. for the back-end storage (in ESOS).


Alright, I'll try to make this clearer! :)

Today the TUI (and scstadmin) lets you create groups on each target, and then add initiators and devices to each group, like below:

Target: XX:XX:XX:XX:XX:XX:XX:XX
        Group: group_name
                Initiator: YY:YY:YY:YY:YY:YY:YY:YY
                Initiator: YY:YY:YY:YY:ZZ:ZZ:ZZ:ZZ
                Initiator: YY:YY:YY:YY:XX:XX:XX:XX
                LUN: 0 (block_device)

Target: XX:XX:XX:XX:ZZ:ZZ:ZZ:ZZ
        Group: group_name
                Initiator: YY:YY:YY:YY:YY:YY:YY:YY
                Initiator: YY:YY:YY:YY:ZZ:ZZ:ZZ:ZZ
                Initiator: YY:YY:YY:YY:XX:XX:XX:XX
                LUN: 0 (block_device)

Target: XX:XX:XX:XX:YY:YY:YY:YY
        Group: group_name
                Initiator: YY:YY:YY:YY:YY:YY:YY:YY
                Initiator: YY:YY:YY:YY:ZZ:ZZ:ZZ:ZZ
                Initiator: YY:YY:YY:YY:XX:XX:XX:XX
                LUN: 0 (block_device)
 
Notice that I have the exact same configuration for each target, which is necessary to make the device available through every target.
What I'm suggesting is that we swap the positions of the group and the target in this tree, so that it would look something like this instead:

Group: group_name
        Target: XX:XX:XX:XX:XX:XX:XX:XX
        Target: XX:XX:XX:XX:ZZ:ZZ:ZZ:ZZ
        Target: XX:XX:XX:XX:YY:YY:YY:YY
        Initiator: YY:YY:YY:YY:YY:YY:YY:YY
        Initiator: YY:YY:YY:YY:ZZ:ZZ:ZZ:ZZ
        Initiator: YY:YY:YY:YY:XX:XX:XX:XX
        LUN: 0 (block_device)

This way we abstract away the need to create the same config for each target, since this would expand to the same configuration as the first example.
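
To make the idea concrete, here is a rough sketch (placeholder WWNs and names, not real ESOS/SCST code) of how such a higher-level group could be expanded into today's per-target layout:

```python
# Sketch only: expand one higher-level group definition into the
# per-target shape that SCST actually stores. All WWNs, group names,
# and device names below are placeholders.

group = {
    "name": "group_name",
    "targets": ["XX:XX:XX:XX:XX:XX:XX:XX",
                "XX:XX:XX:XX:ZZ:ZZ:ZZ:ZZ",
                "XX:XX:XX:XX:YY:YY:YY:YY"],
    "initiators": ["YY:YY:YY:YY:YY:YY:YY:YY",
                   "YY:YY:YY:YY:ZZ:ZZ:ZZ:ZZ",
                   "YY:YY:YY:YY:XX:XX:XX:XX"],
    "luns": {0: "block_device"},
}

def expand(group):
    """Replicate the group's initiators and LUNs under every target,
    producing the same config as the first example above."""
    per_target = {}
    for target in group["targets"]:
        per_target[target] = {
            "group": group["name"],
            "initiators": list(group["initiators"]),
            "luns": dict(group["luns"]),
        }
    return per_target

config = expand(group)
```

So one group definition expands into three identical per-target entries, which is exactly the repetition I'm typing in by hand today.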

While we're at it, we could add the possibility of putting targets and initiators into their own groups, and then add those groups to the main group. That could look something like this:

Target_group: this_esos_server (could be a standard group, but might need a better name! ;) )
        Target: XX:XX:XX:XX:XX:XX:XX:XX
        Target: XX:XX:XX:XX:YY:YY:YY:YY
        Target: XX:XX:XX:XX:ZZ:ZZ:ZZ:ZZ

Initiator_group: host_1
        Initiator: YY:YY:YY:YY:XX:XX:XX:XX
        Initiator: YY:YY:YY:YY:YY:YY:YY:YY
        Initiator: YY:YY:YY:YY:ZZ:ZZ:ZZ:ZZ

Initiator_group: host_2
        Initiator: ZZ:ZZ:ZZ:ZZ:XX:XX:XX:XX
        Initiator: ZZ:ZZ:ZZ:ZZ:YY:YY:YY:YY
        Initiator: ZZ:ZZ:ZZ:ZZ:ZZ:ZZ:ZZ:ZZ

Group: group_name
        Target_group: this_esos_server
        Initiator_group: host_1
        Initiator_group: host_2
        LUN: 0 (block_device)
        LUN: 1 (block_device_2)

This way we don't have to remember the IDs of all initiators and targets, which makes for a self-documenting setup!
And if we have different SANs etc. to take into consideration, we could just create more initiator groups with names reflecting the different networks, e.g. host01_san01, host01_san02, host02_san01, etc.

If we still want to see the IDs of the initiators, we could display them below each group, like:

Group: group_name
        Target_group: this_esos_server
            XX:XX:XX:XX:XX:XX:XX:XX
            XX:XX:XX:XX:YY:YY:YY:YY
            XX:XX:XX:XX:ZZ:ZZ:ZZ:ZZ
        Initiator_group: host_1
            YY:YY:YY:YY:XX:XX:XX:XX
            YY:YY:YY:YY:YY:YY:YY:YY
            YY:YY:YY:YY:ZZ:ZZ:ZZ:ZZ
        Initiator_group: host_2
            ZZ:ZZ:ZZ:ZZ:XX:XX:XX:XX
            ZZ:ZZ:ZZ:ZZ:YY:YY:YY:YY
            ZZ:ZZ:ZZ:ZZ:ZZ:ZZ:ZZ:ZZ
        LUN: 0 (block_device)
        LUN: 1 (block_device_2)
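
Resolving the named groups back into the flat per-target layout would just be one extra lookup step. Another rough sketch, again with placeholder WWNs and names:

```python
# Sketch only: resolve named target/initiator groups into the flat
# per-target layout SCST uses today. "T1"/"I1a" etc. stand in for WWNs.

target_groups = {"this_esos_server": ["T1", "T2", "T3"]}
initiator_groups = {
    "host_1": ["I1a", "I1b", "I1c"],
    "host_2": ["I2a", "I2b", "I2c"],
}

group = {
    "name": "group_name",
    "target_groups": ["this_esos_server"],
    "initiator_groups": ["host_1", "host_2"],
    "luns": {0: "block_device", 1: "block_device_2"},
}

def resolve(group):
    """Look up the named groups, then replicate the combined initiator
    list and the LUNs under every member target port."""
    initiators = [init for g in group["initiator_groups"]
                  for init in initiator_groups[g]]
    return {tgt: {"group": group["name"],
                  "initiators": initiators,
                  "luns": dict(group["luns"])}
            for tg in group["target_groups"]
            for tgt in target_groups[tg]}

flat = resolve(group)
```

The names double as documentation: "host_1" tells you far more than a WWN does.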

Best regards
Oskar

Marc Smith

May 22, 2023, 4:11:33 PM
to esos-...@googlegroups.com
On Sat, May 20, 2023 at 12:21 PM Oskar <os...@stahls.se> wrote:
>
>
>
> On Saturday, May 20, 2023 at 05:59:48 UTC+2, Marc Smith wrote:
>
> On Fri, May 19, 2023 at 1:04 PM Oskar <os...@stahls.se> wrote:
> >
> > Hi!
> >
> > I'm setting up ESOS on 4 servers at work and have come to the point where I need to configure targets and initiators. And I can't comprehend how tedious it is to set this up if you have a few servers and multiple FC connections on each server for redundancy.
> >
> > At the moment I have 4 ESOS servers with 2 FC cards each and each of these cards have 4 links. So each ESOS server have 8 targets. And then I have 4 initiators/hosts with 2 FC cards each too, but these have only 2 links each instead of 4. So 4 initiator addresses per server. Except for one that has 8 links.
> >
> > These servers are all connected to 2 fabric switches for redundancy.
> >
> > So in total I have 32 target addresses and 20 initiator addresses. That should all be able to reach each other.
> >
> > And as far as my knowledge goes, if I want to setup multi-path on these servers I need to pair up each target address with each initiator address. which means that I have to configure 640 links/connections (32*20) And doing that by hand isn't really realistic... So I assume that I have to write a script or something similar to generate these commands? Or is there already some way to do this in a more automatic manner than what I've found so far?
>
> You zone the initiator ports with the desired target ports on your FC
> switches, then they can communicate with each other on the FC fabric.
> Using the common MPIO stack for your initiator handles the work of
> utilizing all available paths to the target; these typically use block
> device and/or SCSI volume information that the initiator receives to
> figure out paths (eg, based on disk, and then all paths to that
> device).
>
> Are the initiators Linux? If so, then using dm-multipath is good.
>
>
>
> I'm aware of the zoning settings on the switch. I just merged all links I would have to setup on each server into one number, which might have made it unclear, sorry for that!
>
> Skimmed through the wiki link on dm-multipath and from what I found there, the dm-multipath is just the kernel part of the multipath stack right?

Yes, kernel DM driver and multipath-tools (multipathd).


>
>
>
> >
> > - - -
> >
> > I was thinking a little bit about it and i can't seem to get why grouping is done the way it is? I mean if instead of creating a group for each target and then adding initiators and devices to that group. Couldn't we instead move the group concept one level higher? So that we create groups and then add all targets, initiators and devices to that group instead? that way we would only have to do this once instead of doing it for each target?
>
> Are you referring to the host/initiator/security group stuff in the TUI?
>
>
>
> Yes! I'm talking about the TUI! This whole post was basically about the TUI. :)
>
>
> >
> > I assume that the way this is handled at the moment is a straight up legacy from how the scstadmin binary is setup? But I can't see why anyone would want to do it that way over my way? Am I missing something?

Using either should yield the same configuration... you have SCST
"security" (host/initiator) groups. You make a group for each target
port (the groups are per target port) -- it can be the same group name
on every target port (it doesn't really matter; the name is for your
benefit and isn't visible to initiators). Then add SCST devices
(e.g., vdisk_blockio devices) as LUNs to each group, and add initiators
to each group. I usually add a group named "default" to each target
port, add a wildcard initiator name ('*') to it, then add all SCST
devices mapped as LUNs to it (starting LUN numbering at 0). Repeat
for each target port.
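
That per-port recipe is also easy to script. A hypothetical Python sketch that just prints the scstadmin commands (the qla2x00t driver name, WWNs, and device names are placeholders; double-check the options against your scstadmin version before running anything):

```python
# Sketch only: emit scstadmin commands for a "default" group with a
# wildcard initiator on every target port. Driver, WWNs, and device
# names are placeholders for your environment.

DRIVER = "qla2x00t"                            # assumed FC target driver
TARGETS = ["50:01:43:80:xx:xx:xx:01",          # one entry per target port
           "50:01:43:80:xx:xx:xx:02"]
DEVICES = ["block_device", "block_device_2"]   # SCST device names

def default_group_cmds(driver, targets, devices, group="default"):
    cmds = []
    for tgt in targets:
        base = f"-driver {driver} -target {tgt}"
        # Create the security group on this target port.
        cmds.append(f"scstadmin -add_group {group} {base}")
        # Allow any initiator via the wildcard name.
        cmds.append(f"scstadmin -add_init '*' {base} -group {group}")
        # Map every device as a LUN, numbering from 0.
        for lun, dev in enumerate(devices):
            cmds.append(f"scstadmin -add_lun {lun} {base} "
                        f"-group {group} -device {dev}")
    return cmds

for cmd in default_group_cmds(DRIVER, TARGETS, DEVICES):
    print(cmd)
```

Review the printed commands, then run them (or feed them to a shell) on the ESOS host.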
This would require a change in SCST and how it handles groups (right
now they are per target interface).

That said, we could abstract this in the ESOS TUI and just
automatically create the same group on all target interfaces, etc. But
then obviously we lose the flexibility of using specific ports for
specific LUNs being exposed / LUN masking for target-initiator combos.
Yes, if just wanting to display it this way in the TUI, that wouldn't
be too much of a lift (maybe separate dialogs for different views).

--Marc


> Best regards
> Oskar
>