> I started using/studying open-iscsi recently. I was wondering if there
> is any hard-coded limit to the number of target devices that can be
> discovered per iSCSI target? Is there any hard-coded limit on the
> number of LUNs supported per target device (assuming that somehow the
> HBA on the iscsi target doesn't present any limit)?
It is a module parameter, check modinfo.
Stefan
Mmm... I wanted to know this last week. The SCSI module documentation
says you can go up to 16k. But if you go above 4k, I suggest reviewing
the time needed to rescan ;)
I don't know about targets, I only checked for luns :)
Stefan
Do you actually have 100 SCSI devices on your target?
> this was pushed into the /etc/ietd.conf file and service iscsi-target
> was restarted. (Yup, I am using iscsi-target-0.4.16 on the target
> side.)
> 3. This gave me 100 targets, each having 10 LUNs - starting from
> mydisk1 to mydisk100 (as target names).
>
> On the initiator:
> 1. I set the discovery on this target machine's IP and tried to log in.
>
> result:
> Discovery itself is not able to add more than 89 (twice I got 90)
> devices. It simply doesn't get any information about the top 11
> devices from the ietd.conf file.
Meaning the first targets wouldn't show up, right?
> On the target, the message log contained something like "Dropping key
> (target ___ )" which, I found, was appearing from text_key_add()
> function from the iscsi-target package.
>
> The code is something like:
> ...
> if (conn->rsp.datasize + len > INCOMING_BUFSIZE) {
>         log_warning("Dropping key (%s=%s)", key, value);
>         return;
> }
Interesting. That does seem like a limitation in that code. Did you
try changing INCOMING_BUFSIZE to a higher value? Or decreasing the
length of the target names? So: iqn.2008.com:t1, iqn.2008.com:t2,
and so on?
> ...
>
> Initiator doesn't show any error, and in fact the node list (iscsiadm -
> m node) doesn't display any of the devices mydevice1 -
> mydevice11 in the list.
Which would imply that the target didn't send that IQN to the initiator.
>
> I tried this experiment many times by
> 1. changing the number of LUNs,
> 2. changing the device being pointed to by the soft links,
> 3. changing the number of target devices defined in ietd,
So adding more targets didn't change it, and when you lowered the
target count, fewer block devices showed up on the initiator side?
> 4. making half the links point to one device, and other half to other
> 5. Shuffling the order of ietd-defined target names (and each time
> the starting 10-11, according to the definition in ietd.conf, are not
> sent by the target, whatever their names are).
Yeah, it probably sorts them.
>
> Each time the result was the same (at least) - only 89-90 could be added.
>
> I am still trying to find the possible bottleneck that
> is preventing me from adding more devices - till then, I would really
Looks like a bug in iscsi-target. You should probably post this question
on its mailing list as well.
When you say devices, you mean targets, right?
open-iscsi-2.0-869 should work. I believe iscsi-target-0.4.16 does not
support lots of targets. You should post to the IET list. I think there
are threads on this already, so you should actually search their list
first to make sure it is not already fixed in the svn tree.
As others pointed out, there is a module parameter for the max LUN for
iscsi_tcp, but that is actually bounded by a libiscsi/scsi-ml
limit. libiscsi uses a function named int_to_scsilun in
drivers/scsi/scsi_scan.c, and as you can see, it only implements 2
levels. libiscsi/iscsi_tcp also does not do the device scanning (the
iscsi layer only finds the targets), so we are limited by scsi_scan.c. To
get the limits for that, run "modinfo scsi_mod".
There is also a limit on the number of targets: because we allocate a
scsi_host per session, and the scsi layer uses an unsigned short for the
host number, the number of targets is capped at 2^16.
If you are using 869.* then the problem is most likely on the IET side.
Like I said before, I think you need to upgrade. I had the same problem
when I was testing my fixes for lots-of-targets support.
Oh yeah, I said this in some other mail, but I will say it here so people
can search for it. There is another target limit: the number of files a
process can have open. We open a TCP socket for each session, and we make
a session to each target portal. We also open other files, so the exact
number is not known; I think it is probably around 4000. I do not have a
setup that supports that many targets yet, so I cannot say exactly. Let
us know what you find out.
> Are some parameters which I am missing which I need to configure or
> take care of?
> I tried tinkering "node.conn[0].iscsi.MaxRecvDataSegmentLength =
> 65536" but there was no difference. This actually is making me think
> that the target is acting as a bottleneck.
>
That is just for normal sessions. You would want to set
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength
but like I said that will not completely fix the problem.
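For searchers: on a typical open-iscsi setup this setting lives in /etc/iscsi/iscsid.conf (the path and the value below are just examples; adjust for your install):

```
# /etc/iscsi/iscsid.conf -- example value, raise as needed
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 65536
```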
Thanks for the reply.
Hmm, that sounds like quite a large value compared to what I am
experiencing. I will try to find out what is happening in my case -
probably the iscsi-target is acting as some kind of bottleneck (the
max size of the PDU is my initial guess) - or maybe I am missing
something.
Nevertheless, thanks for the info - I really appreciate your help.
>
>> Are some parameters which I am missing which I need to configure or
>> take care of?
>> I tried tinkering "node.conn[0].iscsi.MaxRecvDataSegmentLength =
>> 65536" but there was no difference. This actually is making me think
>> that the target is acting as a bottleneck.
>>
>
> That is just for normal sessions. You would want to set
> discovery.sendtargets.iscsi.MaxRecvDataSegmentLength
Oops - I should have noticed. Thanks once again. I will try tinkering with this.
>
> but like I said that will not completely fix the problem.
Well, this should give me something very good to start with. I will
post whatever my findings are (as soon as I find something ;) ). Thanks
all.
Shreyansh