
JES2 not purging output on PURGE queue


Peter Hunkeler

Jan 6, 2015, 10:32:37 AM


I'm about to open a PMR, but I thought I might be overlooking something.


This is happening on a z/OS V2.1 JES2 parallel sysplex with 4 systems, two work systems and two GDPS controlling images. Both work systems have been IPLed recently (December 30 and 31).

We need to take spool volume SPL902 offline and have drained it. We're purging output older than 21 days by issuing $POJQ,ALL,DAYS>21 on a daily basis. Today I noticed that the spool volume was still quite full, despite the fact that I can see only a few small outputs in the hold and output queues (SDSF). Then I found that more than 1000 jobs are still in the PURGE queue, awaiting purge. JES2 does not seem to be purging them, and I cannot find out why. When I try to purge jobs on the purge queue, some are actually purged, some are not. I cannot find out why some jobs stay on the queue.

I could go on purging the jobs manually, but I would rather understand why purging is stuck. I'd appreciate any hint on what to look at or what to try. Below is the output of some commands (display and purge) I issued, with some comments in between.
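For reference, the two commands behind this, as a sketch from memory (SPL902 and the 21-day threshold are our values):

```
$P SPL(SPL902)
$POJQ,ALL,DAYS>21
```

The first drains the spool volume; the second is our daily purge of output older than 21 days.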


--
Peter Hunkeler

Status of the spool volumes as shown by $DSPL,LONG                                                      

$HASP893 VOLUME(SPL901)                                          
$HASP893 VOLUME(SPL901)  STATUS=ACTIVE,DSNAME=SYS1.HASPACE,      
$HASP893                 SYSAFF=(ANY),TGNUM=327275,              
$HASP893                 TGINUSE=18901,TRKPERTGB=3,PERCENT=5,    
$HASP893                 RESERVED=NO,MAPTARGET=NO                
$HASP893 VOLUME(SPL902)                                          
$HASP893 VOLUME(SPL902)  STATUS=DRAINING,AWAITING(JOBS),        
$HASP893                 DSNAME=SYS1.HASPACE,SYSAFF=(ANY),      
$HASP893                 TGNUM=349335,TGINUSE=65497,TRKPERTGB=3,
$HASP893                 PERCENT=18,RESERVED=NO,MAPTARGET=NO    
$HASP646 5.7752 PERCENT SPOOL UTILIZATION                                                                                                                                                                                        



The status of the PURGE PCEs is as follows:
                                                                                                                                                             
$DPCE(PURGE),LONG,DET                                          

$HASP653 PCE(PURGE)                                            
$HASP653 PCE(PURGE)     NAME=PURGE,WAIT=PURGE,INHIBIT=NO,      
$HASP653                MOD=HASPTRAK,SEQ=17246200,              
$HASP653                TIME=(2015.006,13:24:06.264660),        
$HASP653                ACTIVE=0,I/O=0,                        
$HASP653                NAME=PURGE,WAIT=PURGE,INHIBIT=NO,      
$HASP653                MOD=HASPTRAK,SEQ=17246200,              
$HASP653                TIME=(2015.006,13:24:06.264700),        
$HASP653                ACTIVE=0,I/O=0                          

It seems the PCEs are waiting for work, but none is assigned even though the purge queue is over 1000 jobs long.

Here is the result of a $D command for two jobs on the purge queue. Some jobs show the additional information PURGE=YES,CANCEL=YES, some do not. (What does this tell me?)

$DS(489149)                                                    
$HASP890 JOB(BMCDBC)                                            
$HASP890 JOB(BMCDBC)    STATUS=(AWAITING PURGE),CLASS=STC,      
$HASP890                PRIORITY=1,SYSAFF=(IND,ANY),HOLD=(NONE)

$DS(496983)                                                    
$HASP890 JOB(C2PACMON)                                          
$HASP890 JOB(C2PACMON)  STATUS=(AWAITING PURGE),CLASS=STC,      
$HASP890                PRIORITY=1,SYSAFF=(IND,ANY),          
$HASP890                HOLD=(NONE),PURGE=YES,CANCEL=YES        


A $DS...,LONG does not show anything surprising:

$DS(496983),LONG                                                
$HASP890 JOB(C2PACMON)                                          
$HASP890 JOB(C2PACMON)  STATUS=(AWAITING PURGE),CLASS=STC,      
$HASP890                PRIORITY=1,SYSAFF=(IND,ANY),          
$HASP890                HOLD=(NONE),PURGE=YES,CANCEL=YES,      
$HASP890                CMDAUTH=(LOCAL),OFFS=(),SECLABEL=,      
$HASP890                USERID=C2PSUSER,SPOOL=(VOLUMES=(SPL901,
$HASP890                2),TGS=2,PERCENT=0.0002),ARM_ELEMENT=NO,
$HASP890                CARDS=2,REBUILD=NO,CC=(COMPLETED,RC=0),
$HASP890                DELAY=(),CRTIME=(2014.342,12:21:43)    

$DS(493612),LONG                                                
$HASP890 JOB(BMCDBC)                                            
$HASP890 JOB(BMCDBC)    STATUS=(AWAITING PURGE),CLASS=STC,      
$HASP890                PRIORITY=1,SYSAFF=(IND,ANY),HOLD=(NONE),
$HASP890                CMDAUTH=(LOCAL),OFFS=(),SECLABEL=,      
$HASP890                USERID=TECBMC01,SPOOL=(VOLUMES=(SPL902),
$HASP890                TGS=5,PERCENT=0.0007),ARM_ELEMENT=NO,  
$HASP890                CARDS=2,REBUILD=NO,CC=(COMPLETED,RC=0),
$HASP890                DELAY=(),CRTIME=(2014.337,11:32:39)    


Lizette Koehler

Jan 6, 2015, 10:52:20 AM
Why purge Spool? Why not use the MIGRATION of spool volumes?

I hear it works very well for moving data from one spool volume to another.

Check out $MSPL commands

http://www-03.ibm.com/systems/z/os/zos/features/jes2/


Lizette



Lizette Koehler

Jan 6, 2015, 10:54:03 AM
Also check out

$da,xeq

You may have jobs still running on the spool. They only get removed when
they are no longer active.

Lizette

Peter Hunkeler

Jan 6, 2015, 12:01:14 PM


> Why purge Spool? Why not use the MIGRATION of spool volumes? 


There was no hurry, so I just drained the volume to be emptied. The daily $POJQ... for output older than 21 days would empty the volume over time.


> You may have jobs still running on the spool. They only get removed when they are no longer active. 


True, but I drained the volume two weeks ago, and the systems were IPLed at the end of December. Therefore no executing jobs should have spool space allocations on the drained volume.


Anyway, I would expect JES2 to get rid of output that is on the PURGE queue (oh, BTW, there are also jobs on the purge queue which have spool space allocations on volumes other than the drained one).


--

Peter Hunkeler

Tom Wasik

Jan 6, 2015, 12:59:03 PM
I noticed that the jobs are in independent mode ... SYSAFF=(IND,ANY). Is your member set to independent mode (i.e., does $D MEMBER show IND=YES)? If not, this is likely your problem. Try a $TJ command to remove independent mode from the job (SYSAFF=(-IND)) and see if the job purges. Jobs that are marked independent can only be processed on members that are in independent mode.

You could change the member to independent mode, but then any work that enters the system while it is in independent mode will also be marked as independent (creating more problem jobs).
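In command terms, a sketch (the job number here is taken from the earlier display):

```
$D MEMBER
$TJ(489149),SYSAFF=(-IND)
```

The first shows whether any member has IND=YES; the second removes independent mode from one job. If the job purges after that, independent mode was what held it.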

Tom Wasik
JES2 Development

Peter Hunkeler

Jan 6, 2015, 1:12:41 PM

> I noticed that the jobs are in independent mode ... SYSAFF=(IND,ANY). Is your member set to independent mode (i.e., does $D MEMBER show IND=YES)? If not, this is likely your problem. Try a $TJ command to remove independent mode from the job (SYSAFF=(-IND)) and see if the job purges. Jobs that are marked independent can only be processed on members that are in independent mode.


I saw the independent mode indicator and thought I should have a look at whether this might be the clue I was looking for. Obviously my aging brain forgot to remind me ;-)


I'll check tomorrow and will post the result. Thanks for the hint.


--

Peter Hunkeler 

Peter Hunkeler

Jan 7, 2015, 2:43:01 AM

> I noticed that the jobs are in independent mode ... SYSAFF=(IND,ANY). Is your member set to independent mode (i.e., does $D MEMBER show IND=YES)? If not, this is likely your problem.


No member currently is in independent mode. I did a $TJOBQ,Q=PURGE,SYSAFF=-IND and surprise, surprise, the output is being purged. Thanks Tom.


I'll have to find out why those jobs were assigned SYSAFF=(IND,ANY). Since the dates are widely spread, it must happen every now and then... Something is wrong in our setup.


--


Peter Hunkeler


Peter Hunkeler

Jan 7, 2015, 11:19:44 AM

> I'll have to find out why those jobs were assigned SYSAFF=(IND,ANY). Since the dates are widely spread, it must happen every now and then... Something is wrong in our setup.


It turned out that nothing is wrong in our setup; we've just implemented what TWS recommends as one of three options to solve the problem that TWS will miss job events when the TWS tracker is stopped while jobs are still running. See below for details.


This is bad TWS design and a misuse of the JES2 independent mode option.


--

Peter Hunkeler



The text below is from "IBM Tivoli Workload Scheduler for z/OS Version 8.6 - Managing the Workload (SC32-1263-07)". We seem to have chosen option 1.


Chapter "Overview of job tracking on z/OS" - Topic "How to make sure that events are not lost" 

Use one of these methods to ensure that events are not missed between the time 
Tivoli Workload Scheduler for z/OS is taken down and JES (JES2 commands are 
given in the example) is taken down: 

Method 1 
1. Remove the system being stopped from the JES2 MAS by placing it into 
independent mode (issue $T MEM,IND=Y). 
2. Allow the jobs currently running on this system to complete. 
3. Stop the tracker (P OPCx). 
4. Stop JES. 
5. Re-IPL. 
6. Restart JES. 
7. Restart the tracker. 
8. Resume normal work (issue $T MEM,IND=N). 

Method 2 
1. Abend JES2 ($PJES2,ABEND). 
2. Stop the tracker (P OPCx). 
3. Re-IPL. 
4. Restart JES. This will be a hot start. 
5. Restart the tracker. 

Method 3 
1. Take the tracker down (P OPCx). 
2. Bring it up again under the master scheduler (S OPCx,SUB=MSTR). 
Remember that a tracker cannot use JES services (SYSOUT data sets, 
JCC) if it runs under the master scheduler. 
3. Bring down JES. 
4. Take the tracker down (P OPCx). 



Leonard D Woren

Jan 7, 2015, 3:00:35 PM
> This is bad TWS design and a misuse of the JES2 independent mode option.

Yeah, and their suggested workarounds are horrid.  Wow.  I don't think I'd survive where I work now if I put out a bad design with such a terrible work-around.  It appears that the only reasonable solution would be to always run TWS under MSTR.  Disclaimer:  I don't know anything at all about TWS.  And I'm fine with keeping it that way.

BTW, with Method 3, wouldn't you still lose events if jobs end in the window between steps 1 and 2?

/Leonard

Robert A. Rosenberg

Jan 7, 2015, 3:08:52 PM
At 08:42 +0100 on 01/07/2015, Peter Hunkeler wrote about AW: Re: JES2
not purging output on PURGE queue:

> > I noticed that the jobs are in independent mode ... SYSAFF=(IND,ANY). Is your member set to independent mode (i.e., does $D MEMBER show IND=YES)? If not, this is likely your problem.
>
> No member currently is in independent mode. I did a $TJOBQ,Q=PURGE,SYSAFF=-IND and surprise, surprise, the output is being purged. Thanks Tom.
>
> I'll have to find out why those jobs were assigned SYSAFF=(IND,ANY). Since the dates are widely spread, it must happen every now and then... Something is wrong in our setup.

If you have no reason why a job should be marked SYSAFF=(IND), a quick and dirty fix is to add a command to the start-up deck to issue a periodic $TJ1-9999,SYSAFF=(-IND). This will prevent the issue from recurring while you track down why IND jobs are being submitted. Knowing which jobs this occurred with can help, since you can go to the JCL that was used for the submission to see if IND is coded there; if so, you can fix it there. Another cause might be the INTRDR used for submission (which might be a dynamic allocation) being defined to set that attribute (assuming that this is possible).
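If the JES2 automatic command facility were used for the periodic reissue, it might look roughly like this (syntax from memory, including the interval units; verify against the JES2 Commands book before relying on it):

```
$TA,I=3600,'$TJ1-9999,SYSAFF=(-IND)'
```

That is, reissue the affinity reset about once an hour until the source of the IND jobs is found.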

Davidson, Ivan E., RET-DAY

Jan 7, 2015, 4:53:18 PM
No, you would not lose events. The SMF exits and JES exits are still collecting the data in ECSA. The TWS tracker just picks them up and writes them to the event dataset.


Robert A. Rosenberg

Jan 7, 2015, 10:39:14 PM
At 21:42 +0000 on 01/07/2015, Davidson, Ivan E. (RET-DAY) wrote about
Re: AW: Re: JES2 not purging output on PURGE queue:

>No, you would not lose events. The SMF exits and JES exits are still
>collecting the data in ECSA. The TWS tracker just picks them up and
>writes them to the event dataset.

Both methods 1 and 2 involve a re-IPL, which would lose the information in the ECSA. Only method 3 would leave the data accessible for the tracker.

Peter Hunkeler

Jan 8, 2015, 3:11:23 PM

> If you have no reason why a job should be marked as SYSAFF=(IND) a quick and dirty fix is to add a command to the start-up deck to issue a periodic $TJ1-9999,SYSAFF=(-IND).


Yes, I thought about this solution for a moment. But, I decided against it. 


I'm pretty sure that any job (STC, TSU) that got the IND flag set during the last IPL's independent mode phase will be purged during the next IPL phase. After all, we didn't find any such jobs on the production plex. The more than 1000 "IND" jobs I discussed initially were all on the HOLD queue on the maint plex before we purged them. They most probably would have been purged during the next IPL. I simply wasn't patient enough.

--
Peter Hunkeler