I am trying to optimize a JCL and cut down its running time (the job
runs for more than an hour now). Are there any specific points I need
to keep in mind while looking for bottlenecks in the JCL? Is there any
clue I should be looking for in the job's output spool to figure out
whether any optimization is required? For example:
1) Should I follow any rules while allocating a dataset? If it is on
tape, can I do something to speed up the loading?
2) Are there any rules for sorting files? That is, if the file is
huge, will it do any good to split it up and sort the pieces?
If any of you guys follow certain basic rules while writing JCL,
please let me in on your secret.
I have so far aimed at writing JCL that works. Perhaps your thoughts
on this will be an eye-opener for me toward writing more efficient jobs.
Thanks in advance,
Arun
:>I am trying to optimize a JCL and cut down its running time (the job
:>runs for more than an hour now). Are there any specific points I need
:>to keep in mind while looking for bottlenecks in the JCL? Is there any
:>clue I should be looking for in the job's output spool to figure out
:>whether any optimization is required? For example:
You are trying to optimize the job, not the JCL.
:>1) Should I follow any rules while allocating a dataset? If it is on
:>tape, can I do something to speed up the loading?
Depends on how your operations group is set up.
:>2) Are there any rules for sorting files? That is, if the file is
:>huge, will it do any good to split it up and sort the pieces?
SORT does that automagically.
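A minimal sketch of what that looks like: the sort product splits the
input across its work datasets and merges the sorted pieces back on
its own; all you supply is the work space (or let it allocate that
dynamically). The dataset names and key positions here are made up:

//SORTSTEP EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MY.HUGE.FILE,DISP=SHR
//SORTOUT  DD DSN=MY.HUGE.SORTED,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(300,100),RLSE)
//* Work files: SORT splits the input across these and merges
//* the sorted pieces back together automatically
//SORTWK01 DD UNIT=SYSDA,SPACE=(CYL,(200,50))
//SORTWK02 DD UNIT=SYSDA,SPACE=(CYL,(200,50))
//SORTWK03 DD UNIT=SYSDA,SPACE=(CYL,(200,50))
//SYSIN    DD *
  SORT FIELDS=(1,10,CH,A)
/*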
:>If any of you guys follow certain basic rules while writing JCL,
:>please let me in on your secret.
:>I have so far aimed at writing JCL that works. Perhaps your thoughts
:>on this will be an eye-opener for me toward writing more efficient jobs.
What are the issues?
Too short a batch window?
Too much CPU, or too much elapsed time?
--
Binyamin Dissen <bdi...@dissensoftware.com>
http://www.dissensoftware.com
Director, Dissen Software, Bar & Grill - Israel
As Binyamin correctly points out, it is the job which you wish to
optimise; changes to the JCL may or may not help, depending upon what
the issues are.
Without knowing the precise situation, the only thing I can comment on
is what typically causes delay, and probably the biggest culprit is
file BLKSIZE for non-VSAM files, and CI size for VSAM. Changing these
should only be done, however, with due regard to other possible uses
of the data set; there are fundamental differences between online and
batch requirements.
Any change to BLKSIZE, CI size or the number of buffers will increase
virtual storage usage, which may increase paging rates, so if storage
is a problem these changes could make that worse.
There may be conflict with other jobs within the system.
Having said all of the above, reviewing BLKSIZE (or its equivalent)
and the number of buffers would be a good starting point; the less I/O
performed, the faster the job will run. But be aware of the other
issues mentioned.
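As a concrete illustration, and assuming a reasonably current system
where BLKSIZE=0 asks the system to choose an optimal (typically
half-track) block size, the two knobs look like this; dataset names
and the buffer count are made up:

//* Let the system pick the block size
//OUTFILE  DD DSN=MY.BATCH.OUTPUT,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(100,50),RLSE),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=0)
//* More buffers mean fewer physical I/Os, at the cost of the
//* virtual storage noted above
//INFILE   DD DSN=MY.BATCH.INPUT,DISP=SHR,DCB=BUFNO=30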
Monitor any change not just in respect of this job, but for any impact
elsewhere.
Regards - Terry
I was given a job written by someone else and asked to check for
bottlenecks it may have. The job is essentially a number of sort steps
interspersed with program steps. I wanted to have a sort of (no pun
intended :-) ) checklist before I plunged in. Your pointers will
certainly help.
Thanks again,
Arun
If the DD is writing to two or more tape volumes, then you
can code the following:

//ddname  DD DSN=......,DISP=(NEW,...),   <<== start of the DD statement
//           UNIT=(????,2,DEFER)          <<== insert in the DD statement

Where ???? is the ESOTERIC name for the tape units (e.g., TAPE,
CART, SQTP), or the GENERIC (e.g., 3490, 3480).
What this will do, assuming that you have enough tape units available
at the time, is that the first tape will be mounted and ready to be
written to while the second tape is being mounted. You do not have to
wait for the first tape to rewind and unload before you start writing
on the second volume. This alone can cut 5-10 minutes out of your
*elapsed* time for processing 2 tapes!
Now if you are READING a file, unless you know for sure that
the file is on tape, things can get ugly coding the UNIT
parameter (it all depends on SMS, the ACS rules in place,
etc.).
<snip>
> 2) Are there any rules for sorting files? That is, if the file is
> huge, will it do any good to split it up and sort the pieces?
<snip>
IF and ONLY IF you are doing a MAXSORT (SYNCSORT terminology
here), get the SYNCSORT manual and read how to optimize this
type of sort. However, in the shop I just did a lot of
tuning for, SYNCSORT on the LARGEST VSAM files was running
in less than 1 minute of ELAPSED time (KSDS/ESDS files > 6GB).
Otherwise, find out which sort product you are using (e.g.,
DFSORT from IBM) and read its manuals for how to tune things
for LARGE [read that: GIGANTIC] files.
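To give a flavor of the kind of thing those manuals cover, here is a
rough sketch of a sort step for a large file using DFSORT-style
control statements (SYNCSORT accepts much the same). Dataset names,
space figures and the key are hypothetical, and REGION=0M assumes
your shop's exits permit it:

//BIGSORT  EXEC PGM=SORT,REGION=0M
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MY.GIANT.INPUT,DISP=SHR
//SORTOUT  DD DSN=MY.GIANT.OUTPUT,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(1000,200),RLSE)
//SYSIN    DD *
  SORT FIELDS=(1,10,CH,A)
* Dynamically allocate up to 8 work areas and use as much
* storage as the region allows
  OPTION DYNALLOC=(SYSDA,8),MAINSIZE=MAX
/*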
<snip>
> If any of you guys follow certain basic rules while writing JCL,
> please let me in on your secret.
<snip>
As someone else said, become good friends with your systems
programmers. They can help you with all kinds of tuning
items. Ask if you are using PAVs (Parallel Access Volumes),
or if you can (some shops could use them, but have not
installed them because of $$$, or your files are on the
wrong "DASD" devices). These allow multiple simultaneous
I/Os to the same volume from the same LPAR/MVS image.
Then there are things such as HyperBuf (a CA product) and
BLSR (Batch LSR, which is set up from the JCL for VSAM
files). REGION size and BUFNO settings, assuming that you
have sufficient REAL (C-store) storage, will also help you out.
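For what the BLSR and BUFNO items can look like in the JCL,
here is a rough sketch. It assumes BLSR is installed as a
subsystem at your shop (your systems programmers will know
its actual name), and the ddnames, dataset names, buffer
counts and REGION value are all made up for illustration:

//STEP1    EXEC PGM=MYPGM,REGION=64M
//* The program opens ddname VSAMIN; the BLSR subsystem
//* routes it through VSAM local shared resources buffering
//VSAMIN   DD SUBSYS=(BLSR,'DDNAME=VSAMIN1','BUFND=50','BUFNI=20')
//VSAMIN1  DD DSN=MY.VSAM.KSDS,DISP=SHR
//* For a sequential (QSAM) file, extra buffers via DCB=BUFNO
//SEQIN    DD DSN=MY.SEQ.INPUT,DISP=SHR,DCB=BUFNO=30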
Then there is how your JOBs are defined to WLM... Does SMS
have your files compressed? How about striped?
All of these things are what your systems programmers work
with on a regular basis. Make them happy and they can make
your life pleasant. Piss them off and life gets real
difficult (how do I know? Because I is one!).
--
Steve Thompson
www.vsstrategies.com
330/335-7228 office
Notice: By sending UCE/BCE (a/k/a, spam) to this address,
you are accepting and agreeing to our charging a $1000 fee,
per spam item, for handling and processing, and you agree to
pay any and all costs incurred (including court costs and/or
attorney's fees) for collecting this fee.
Analysis may help you here. I have seen the same file sorted the same
way in different steps of a job. I have seen sorts that don't exclude
unwanted data, which then has to be read in by the program.
Figure out how to get the smallest files you can for each program; this
may involve changing the order of runs (if that is allowable). Do a
business analysis of your needs. Then test your findings.
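On the point about sorts that don't exclude unwanted data: a one-line
INCLUDE (or OMIT) in the sort control statements drops the records
before the program ever sees them. The record positions and the value
here are hypothetical:

//SYSIN    DD *
  SORT FIELDS=(1,10,CH,A)
* Keep only type-01 records; everything else is filtered out
* during the sort instead of being read and skipped by the program
  INCLUDE COND=(21,2,CH,EQ,C'01')
/*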
1 Determine purpose of each job step.
2 Check the production schedule. Determine when the job runs.
3 Check the job dependencies. Determine when prior jobs run
which create required input files and/or update required databases.
4 In particular, consider breaking earlier jobs up into smaller jobs,
if that will permit the earlier release of a required database.
That is a common bottleneck - large jobs tying up files unnecessarily.
5 In addition, consider breaking this job up into smaller jobs,
if that will permit the earlier release of any database.
This job may be the cause of a bottleneck, as well as the victim of one.
6 Review database access calls for efficiency, particularly Get Uniques
being used where Get Nexts would be more efficient.
7 Consider extracting one or more databases to flat files, and possibly sorting them,
possibly in a new job, possibly run earlier in the batch window.
This may be more efficient, depending on the structure of the
database, and on whether access is update or read-only.
8 Consider creating print files instead of printouts,
and extracting those print steps to a new job or jobs.
9 The production support analysts and the on-call developers
will all thank you if you remove as many forced abends as you can
in favor of exception reports and/or emails to a support group.
10 THINK.
The above is by no means comprehensive; it is just to make you aware
of, and get you thinking about, some of the possibilities.
You need to analyze and ask questions about !every! aspect of the process,
such as "what does this do, why is it done, and can it be eliminated or
restructured?", and "when is this done, and can it be rescheduled,
earlier or later?"