We had a similar problem. Not specifically with the QPADEV* devices,
though they definitely make life more interesting (hint: IBM, please
give us some control over the mapping of IP address to device name ...
both for this sort of thing and simply for identifying the user's
workstation!)

Originally we all had sessions in QINTER, but we decided to split the
programmer jobs out into QPGMR for performance reasons. We first did
this by adding workstation entries to the QPGMR subsystem. That was not
very satisfactory, as once varied on, a device seems to stay allocated
to whatever subsystem it ends up in (yes, you have to use TFRJOB to
move the job).
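
If it helps anyone picture it, the workstation-entry approach boils
down to something like this (DSP01 and PGMRJOBQ are just example names
for illustration, not our actual ones):

   /* Claim a named display device for QPGMR at sign-on */
   ADDWSE SBSD(QPGMR) WRKSTN(DSP01) AT(*SIGNON)

   /* Move an interactive job that QINTER has already grabbed; the
      target subsystem needs a job queue entry for the queue named */
   TFRJOB JOBQ(QGPL/PGMRJOBQ)
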
Additionally, for the workstation entries to work as expected, you
can't have any conflicts. That is, if QINTER has an entry of *ALL,
then QINTER is just as likely to try to allocate the device when it's
varied on as QPGMR is (there are NO guarantees about which subsystem
will get the device, unless you feel like naming each device
specifically, and we have far too many for that to be convenient).
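
For what it's worth, the shipped QINTER normally has a catch-all entry
roughly like the first line below, and that is exactly what competes
with a named entry in QPGMR (device name again just an example):

   ADDWSE SBSD(QINTER) WRKSTNTYPE(*ALL) AT(*SIGNON)
   ADDWSE SBSD(QPGMR) WRKSTN(DSP01) AT(*SIGNON)
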
I decided to change the setup, and added the memory pool used by QPGMR
to QINTER as (in our case) pool 3. I then added a routing entry to the
QINTER subsystem with a compare value of PROGRAMMER, routing those jobs
into pool 3 (the "QPGMR" pool). All that was then required was to
change our JOBDs to specify PROGRAMMER as the routing data. This means
that wherever I sign on to the system, I end up in the desired pool,
and if anyone else borrows my terminal, they don't automatically end up
in the "QPGMR" memory pool either.
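
In case anyone wants to try the same thing, it came down to something
like the following (pool size, activity level, sequence number and the
JOBD name are examples only; note too that CHGSBSD replaces the whole
POOLS list, so include whatever pools your QINTER already has):

   /* Add a third pool to QINTER for the programmers */
   CHGSBSD SBSD(QSYS/QINTER) POOLS((1 *BASE) (2 *INTERACT) (3 4000 5))

   /* Route any job whose routing data is PROGRAMMER into pool 3 */
   ADDRTGE SBSD(QSYS/QINTER) SEQNBR(500) CMPVAL('PROGRAMMER') +
             PGM(QSYS/QCMD) CLS(QGPL/QINTER) POOLID(3)

   /* Point the programmers' job description at that routing data */
   CHGJOBD JOBD(QGPL/PGMRJOBD) RTGDTA('PROGRAMMER')
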
Now, if you want programmers in the QPGMR subsystem for some other
reason (an obvious one would be control ... you might want to terminate
the QINTER subsystem so you can ensure that users are not online when
doing a backup of user data, but at the same time leave the QPGMR
subsystem up), then the above strategy won't help ... maybe someone
else could come up with an elegant solution for that. My only idea
would be some sort of initial job which routes the job into the
correct subsystem before doing anything else.

I experimented a little some time ago with replacing the normal routing
entry in the subsystem for the purpose of routing jobs into specific
pools (this was for batch) and found that I could do this, then TFRCTL
to QCMD (I think this is what I did), and the job could then receive
the request message as per normal. We had an overloaded machine at the
time, and I was wondering whether we could get more throughput from the
batch work (where the bottleneck was) by isolating jobs into their own
memory pools and dynamically assigning each incoming job to a free
pool ... we had quite a few JOBQs, all feeding QBATCH. In any case, I
never found out, as we ended up getting a major upgrade brought forward
and the need sort of went away ...
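
A minimal sketch of that routing-program idea, in case it's of
interest: each routing entry names a tiny CL program instead of QCMD
(the message below is purely illustrative, and the pool itself still
comes from the POOLID on whichever routing entry matched); the TFRCTL
at the end is what lets the job carry on receiving its request
messages as normal:

   PGM
     /* Any per-job bookkeeping or setup logic would go here */
     SNDPGMMSG MSG('Job routed via custom routing program') TOPGMQ(*EXT)
     /* Hand control to QCMD so the request messages are processed */
     /* exactly as they would be with the default routing entry    */
     TFRCTL PGM(QSYS/QCMD)
   ENDPGM
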
----------------------------------------
Ian Stewart i...@jigsaw.southern.co.nz