
QSAM file buffering questions


Alan Hinds 312-738-1121

Apr 25, 1994, 6:15:08 PM
Harold Mains <CO1....@TS3.TEALE.CA.GOV> asks:
>
> ENVIRONMENT: MVS/ESA version 3.1.3
>
> I am trying to tune some batch job steps to achieve maximum performance
> --least elapsed time to execute the steps.
>
> I am looking at buffering the QSAM files: both read and write files.
>
> The default number of buffers is 5.
>
> -- much deleted --

I don't know how COBOL manages files, and whether you can open and close
files at will, or how much of the file structure is specified in code and
how much in JCL. However, in my experience (assembler, Fortran, PL/I):

* Choice of blocksize is the most important factor for I/O performance.
Usually, the best blocksize is near half a track. Optimal blocking
will minimize the number of rotations necessary to read or write a file,
and minimize the amount of disk space wasted in interblock gaps (thus
also minimizing the number of tracks required to hold the data).
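
For example, assuming a 3390 device (half-track capacity 27998 bytes) and
80-byte fixed records, the DD statement might look like this (dataset name
is hypothetical):

```jcl
//* Half-track blocking on a 3390: the largest multiple of
//* LRECL=80 that fits in 27998 bytes is 27920 (349 records).
//OUTFILE  DD  DSN=MY.OUTPUT.FILE,DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(CYL,(50,10),RLSE),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
```

On a 3380 the half-track figure is 23476 bytes, so the comparable
blocksize for 80-byte records would be 23440.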

* After optimizing blocksize, increasing buffers will further improve
performance. Whether you need to divide your memory between different
files depends on whether they are concurrently open. When you close a
file, the space devoted to the file's buffers is released, and can be
reallocated to another file's buffers. Many programmers have ignored
this fact, and leave files open after the files have been processed.
Adding strategic OPENs and CLOSEs to a program can substantially
reduce its memory requirements by minimizing the number of files that
are concurrently open. Note that if files must be opened repeatedly,
your CPU time could increase, since OPEN and CLOSE are not free.
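
You can also raise the buffer count from the JCL, without touching the
program, via the DCB BUFNO subparameter; a sketch (dataset name
hypothetical):

```jcl
//* Raise QSAM buffers from the default 5 to 20 for this step.
//INFILE   DD  DSN=MY.INPUT.FILE,DISP=SHR,DCB=BUFNO=20
```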

* File placement can significantly affect performance. If your files are
on heavily used devices or channels, you will suffer from I/O contention.

* If you have more than one file concurrently open, say because you are
writing one as you read the other, you will contend with yourself
unless the files are on different devices. It is not as easy as it once
was to force files to allocate on different devices (the JCL SEP
parameter is no longer supported), but you can still force separation
by coding specific volume serial numbers.
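
A sketch of forcing separation by volser (dataset names and the volser
WORK02 are hypothetical; the input is assumed to reside on some other
volume already):

```jcl
//* Input is read from wherever the catalog says it lives;
//* the new output is forced onto a specific, different volume.
//INFILE   DD  DSN=MY.INPUT.FILE,DISP=SHR
//OUTFILE  DD  DSN=MY.OUTPUT.FILE,DISP=(NEW,CATLG),
//             UNIT=SYSDA,VOL=SER=WORK02,
//             SPACE=(CYL,(50,10),RLSE)
```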

* If your files are fragmented (split into many non-contiguous allocations
on disk), you will get poor elapsed times due to excessive seek time.
You can fix this by copying to a CONTIG allocation (usually impractical
for large files) or to a device with large contiguous extents. Usually
there are devices set aside for scratch use, which do have large
contiguous extents. If you must process a permanent dataset which is
badly fragmented, copy it to a scratch device at the beginning of your
job.
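
A hedged sketch of such a preparatory copy step using IEBGENER (dataset
names are hypothetical; CONTIG asks for the primary allocation in one
extent):

```jcl
//* Copy a fragmented dataset to a contiguous scratch
//* allocation before the main processing step.
//DEFRAG   EXEC PGM=IEBGENER
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  DUMMY
//SYSUT1   DD  DSN=PROD.FRAGMENTED.FILE,DISP=SHR
//SYSUT2   DD  DSN=&&WORK,DISP=(NEW,PASS),
//             UNIT=SYSDA,SPACE=(CYL,(100),,CONTIG),
//             DCB=*.SYSUT1
```

The later steps then read &&WORK instead of the fragmented original.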

* If COBOL permits it, you could use disk-striping (splitting a single file
across multiple disks, so multiple I/O streams can be concurrently
active on a single file).

- Alan Hinds U32472@UICVM

F. Clark Jennings

Apr 26, 1994, 9:54:18 AM
> SCO Systems Development Division
> Technical Support


> ENVIRONMENT: MVS/ESA version 3.1.3

> I am trying to tune some batch job steps to achieve maximum performance
> --least elapsed time to execute the steps.

> I am looking at buffering the QSAM files: both read and write files.

(lots of detail deleted...)

Instead of optimizing the I/O by using more buffers, how about using VIO or
Hiperbatch to eliminate I/O?
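
For the VIO route, a temporary dataset can be allocated like this (a
sketch; UNIT=VIO assumes your installation has defined that unit name,
and Hiperbatch itself needs no JCL change, only DLF set up by the
systems programmers):

```jcl
//* Temporary dataset allocated to VIO: the "I/O" becomes
//* paging activity rather than real DASD I/O.
//TEMP     DD  DSN=&&TEMP,DISP=(NEW,PASS),
//             UNIT=VIO,SPACE=(CYL,(50,10)),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
```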

Clark Jennings
Reynolds Metals Company
Richmond, Virginia
