We are running CICS v4.1 and would like to tune our regions some more, if
possible, before starting work on the migration to CICS TS
v1.3. We do not have Omegamon, TMON or any other performance monitor
specifically for CICS so I believe all I'm going to have to work with is
the DFHSTUP Summary Statistics reports to analyze the performance of
our regions. I've looked through the Performance Guide which tells you
what's in the report but doesn't seem to provide any guidelines as to
what to look for that may indicate a problem. I've also checked the
Redbooks site and it doesn't appear that there's any kind of reference
that would provide this information there either.
Can anyone point me somewhere that would provide rule of thumb info for
the 4.1 Summary Statistics report? TIA.
Dave Spring
Technical Support Specialist
VIPS, Inc.
Towson, MD.
Here are some performance tuning notes. They're for TS 1.3 but a lot of the
stuff still applies for 4.1 :-
Kevin
Section 1 : Introduction
- For performance tuning use :
- CICS statistics including interval, end-of-day, requested and
unsolicited
- CICS monitoring data including STAT transaction
- CICS internal and auxiliary trace
- Interval statistics. The default is 3 hours. It should be set to 5 or 10
minutes. Use a batch job or the STAT transaction to access these.
- Don't rely on the CICS end-of-day stats for your tuning decisions.
- STAT transaction. This is a 'must have' for CICS TS. It needs assembling.
The information from the STAT transaction is better than from the batch
report.
- There are 3 different kinds of TRACE facilities :
1. Internal trace - table in main storage above 16 MB line
2. Auxiliary - usually BSAM datasets on disk
3. GTF - see Operations Guide. Note that GTF is cheaper than Aux. trace
- Controlling the cost of Trace :
- see SIT parms STNTR and SPCTR
- STNTR parm controls the standard level of tracing for CICS as a whole
- The default for tracing is to record trace information for level 1
(i.e. all CICS components). Reduce the cost of producing Trace stats by
setting STNTR = OFF (i.e. standard
tracing = no), then use override parms to turn on tracing only for the
components that you are really interested in. Specify STNTRxx where xx is the
component code.
e.g. STNTREI=1 turns on trace for Exec Interface
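- As an illustrative SIT excerpt along these lines (which components to keep,
and at what level, is a site decision; EI and FC here are just placeholders):
      * standard tracing off for all components,
      * then re-enable just EI and FC at level 1
      STNTR=OFF,
      STNTREI=1,
      STNTRFC=1,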
- SPCTR parm in SIT specifies the level of standard tracing for all CICS
components used by a transaction, terminal or both selected for special
tracing. The default (1,2)
specifies special tracing for levels 1 and 2 for all CICS components. Set
to OFF to disable it. Use SPCTRxx for selective special tracing.
- reduce the cost of trace even further by changing the transaction
definition :
CEDA DEF TRANS (....)
TRACE (YES / NO)
- TRACE (YES) means level 1 (all components). Use TRACE (NO) to suppress
recording.
- You should consider turning off the trace option at transaction level if
a transaction is robust, especially in Production to save CPU cycles.
- If you encounter major problems you can turn on special trace via CETR
transaction :
- use SPCTR = (1,3)
- Don't use special tracing in Production !
- Buffers for CICS aux. trace are allocated dynamically from MVS free
storage below the line. Aux. trace is activated when SIT parm AUXTR is on.
Buffer allocation also takes place at execution time in response to a CETR or
CEMT (SET AUXTRACE) transaction.
- Size of trace table is specified by the TRTABSZ SIT parm.
- Internal tracing controlled via CEMT SET INTRACE
- The cost of tracing depends on the workload used but can be up to 16 % CPU
overhead :
- Exec Interface (EI) = 4.1%
- AP domain = 3 %
- FC domain = 2.5 %
- TC domain = 2.5 %
- BMS = 2.5 %
- DS (dispatcher) = 1 %
- LD = 1 %
- everything else = trivial
- The conclusion to be drawn from the above is to make sure that you only use
TRACE selectively
Section 2 : MVS Interface
- MVS has 16 protect keys
- CICS/ESA uses :
- key 8 for CICS key areas
- key 9 for user key areas if storage protection supported
- key 0 for re-entrant programs that are protected
- Storage protection is turned on via STGPROT = YES / NO
- RENTPGM = PROTECT for Key 0
- Real storage goes up by 9K per task when using storage protection
- Certain features require expanded storage rather than Central - e.g.
hiperspaces
- Paging datasets are specified in IEASYSxx members of SYS1.PARMLIB
- Data Spaces are an alternative to MRO with a file owning region and mirror
transactions. CICS support is implemented by the Shared Data Table feature.
- The problem with Data Tables is that they are restricted to the local MVS
system
- Hiperspaces provide another fast way of retrieving data from expanded
storage. CICS supports Cache only hiperspaces via DFP (VSAM). Hiperspaces
were very useful when Central storage was limited but in the systems in use
today, the answer tends to be to buy more real storage.
- The Coupling Facility provides a dedicated data sharing processor with
high speed links to each MVS image in a Sysplex.
- 3 data models are supported in Coupling Facility storage :
- lock structures - for resource serialisation and data integrity
- cache structures - for cross system buffer management - e.g. DB2 data
sharing
- list structures - for communication and shared data - e.g. VTAM generic
resources, temp. storage, MVS logger, DB2 etc
- CICS Coupling Facility Data Tables (CFDT) are new with CICS/TS 1.3.
Shared data tables are faster but CFDTs are sysplex enabled.
- CFDTs are ideal for low volume short term data which needs to be globally
shared
- Non-recoverable TS queues can be stored in a Coupling Facility list
structure. These are faster than function shipping but not as fast as local
TS queues. The CF doesn't
support recoverable queues.
Section 3 : CICS Initialisation
- With CICS TS, there can be up to 32 initialisation tasks (CSSY) running
concurrently when CICS is loading.
- CSD file, local catalog and global catalog are all traditional VSAM files.
The default buffering for VSAM files is only 1 index buffer and 2 data
buffers.
- Buffers for the CSD are specified via the CSDBUFND and CSDBUFNI parms in
the SIT. You can also specify them in the CICS JCL.
There is also a parm called CSDLSRNO to specify the LSR pool for the CSD.
- Check buffering of the global catalog. This is not controlled by a SIT
parm.
For optimum buffering specify BUFND and BUFNI parameters in your CICS JCL
for the GCD via the AMP= parameter :
AMP=('BUFND=34,BUFNI=33')
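As a fuller JCL sketch (dataset name and disposition are illustrative):
      //DFHGCD   DD DSN=CICSTS13.CICS.DFHGCD,DISP=OLD,
      //            AMP=('BUFND=34,BUFNI=33')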
- Check for message DFHCC0202 in your CICS startup. You should avoid this at
all costs. This error indicates secondary extents and that your space
allocations are wrong.
Ensure that you allocate 1 cyl for the local catalog.
- Global catalog - the number of internal control records depends on whether
TS queues are defined as recoverable or not
- In CICS TS, some of the records in the global catalog are used during a
cold start, so it is not possible to delete / redefine it for a cold start.
- If the GCD is deleted/ redefined, IBM say that an initial (arctic) start
must be performed.
- Speed up cold and initial starts by using Cold-copy. On cold and initial
starts, CICS deletes all the definitions from the global catalog. Instead do
the following :
- Before a cold start run DFHRMUTL utility with
SET_AUTO_START=AUTOCOLD,COLD_COPY. This creates a copy of the GCD
containing only those
records required for a cold start.
- Before an initial start, run DFHRMUTL utility with
SET_AUTO_START=AUTOINIT,COLD_COPY. This creates a copy of the GCD
containing only those
records required for an initial start.
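- A minimal job sketch for the cold-copy step (dataset names are
illustrative; verify the DD names against the utility documentation for your
level of CICS):
      //RMUTL    EXEC PGM=DFHRMUTL,REGION=1M
      //STEPLIB  DD DSN=CICSTS13.CICS.SDFHLOAD,DISP=SHR
      //SYSPRINT DD SYSOUT=*
      //DFHGCD   DD DSN=CICSTS13.CICS.DFHGCD,DISP=OLD
      //NEWGCD   DD DSN=CICSTS13.CICS.NEWGCD,DISP=OLD
      //SYSIN    DD *
      SET_AUTO_START=AUTOCOLD,COLD_COPY
      /*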
- Use Autoinstall for programs to speed up the time for CICS startup.
Specify SIT parm PGAICTLG = NONE so that autoinstalled programs are not
catalogued.
This gives a faster CICS restart (warm and emergency) because CICS does
not reinstall definitions from the global catalog. Definitions are
autoinstalled on first reference.
- The PGAIPGM parm turns on/off the program autoinstall function
- LSR pool processing - note that if the only file within an LSR pool
closes, the LSR pool must be closed and recreated
- The CSD is initially opened with 32 strings. Once it is closed, it is
subsequently reopened with the no. of strings defined in the CSDSTRNO parm.
Therefore ensure that some file in the same LSR pool is defined with a Disp
of open.
- Ensure that DFHVTAM group precedes any TERMINAL / TYPETERM definition in
the GRPLIST to avoid programs used to build the TCT being loaded for each
terminal.
- Do not define more than 100 resources in any group in the CSD, as it causes
unnecessary overhead.
- If you don't intend using the CICS Web interface specify WEB=NO in the SIT
so that the Web domain is not activated.
- If you don't intend using the CICS Web interface or SSL (Secure Sockets
Layer), specify TCPIP=NO in the SIT so that the Sockets domain task is not
activated.
Section 4 : Dispatching
- Input to SRM is specified in SYS1.PARMLIB members :
- IEAIPSxx - used to set dispatching priority and storage isolation for a
performance group
- IEAOPTxx - multiprogramming thresholds
- IEAICSxx - links started task /job to a performance group
- CICS is designated as non-swappable
- User application code traditionally runs under the QR TCB but now also J8
and H8
- QR TCB can only dispatch one task at a time. A CPU-bound task can
monopolise the TCB, therefore such tasks should be given low priority.
- Priorities should only be set in CEDA Define Transaction, in the range 0 -
255.
- Use only a few distinct priorities, as this influences the priority ageing
mechanism.
- A THREADSAFE program is capable of being invoked on multiple TCBs
concurrently.
- When you first move to CICS TS, ensure that FORCEQR = YES and
CONCURRENCY=NO
- MVS dispatching priority recommendations :
GRS
TRACE
LLA
VLF
RASP
IRLM
IMSCTL
DSNMSTR
CICS
DSNDBAS
IMSMPR
TSO
BATCH
- CICS loves MTTW dispatching (Mean Time To wait) as opposed to FP (Fixed
Priority)
- Review the value specified for SIT parm ICV. This specifies the maximum
time for which CICS releases control to the operating system when there are
no transactions ready to resume processing. The default is 1000 millisec.
For systems with low activity this is too low - set it to 3000.
- MXT - Always set to at least 32 because CICS uses 32 concurrent tasks to
initialise.
- MXT - Don't code too large because of the impact of storage used by kernel
tasks and performance blocks used by WLM
- To determine a value for MXT - On a test system change MXT to 999 and then
see what the ceiling is.
- Tuning tip - check your CICS log for the number of AKCC abends - these
indicate that PURGETHRESH was exceeded. PURGETHRESH limits the no. of
transactions queuing in a transaction class.
- Never use PRTYAGE < 400
- PRTYAGE = 32 K has no impact
- Experiment with lower values of PRTYAGE for VSAM systems
- Turning on PRTYAGE also gives additional relief when the Storage Manager is
short (< 256 K) or critical (< 128 K)
- ICVR SIT PARM. Set this as low as possible to trap looping code. This is
a system wide parameter. Use transaction level RUNAWAY parm for high CPU
transactions.
- DTIMOUT deadlock timeout parm. Use this to prevent the system from
stalling. The default is not to use it. You must also specify SPURGE (YES) -
an illustrative definition follows below.
- Check log for AKCS abends
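- An illustrative transaction definition combining these options (transaction
name and timeout value are placeholders; DTIMOUT is in mmss format):
      CEDA DEF TRANS(ORD1)
           DTIMOUT(10)
           SPURGE(YES)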
Section 5 : Storage Management
- LPA contains :
- Fixed LPA (FLPA) IEAFIXxx
- Modified LPA (MLPA) IEALPAxx
- Pageable LPA (PLPA) LPALSTxx
- SIT parm LPA = NO / YES specifies whether any CICS or user modules can be
used from the link pack area
- USERMOD DFH$UMOD contains a list of the CICS modules that are read-only
and therefore eligible for reference in the LPA
- Use RMF to avoid duplicates in the LPA.
- Tune LPA by :
- removing unused components
- minimise what goes into MLPA as you get 2 copies in virtual storage
- Beware of the 1 Mb boundary in LPA ---> if you go 1 byte over the limit
it results in an extra 1 Mb chunk being allocated. This can reduce the size
of the Private area and cause 822 abends.
- Default region size is 32 Mb and is specified in the exit IEFUSI
- Ways to reduce LSQA consumption :
- use new initiator for CICS. If you run CICS as a batch job, drain the
initiator to maximise the address space before loading CICS
- run CICS as a started task (saves 1 TCB) - this also ensures that a
clean address space is used (no S822 abend)
- limit the number of DB2 / DBCTL threads by reducing the number of TCBs
- limit the number of MAXOPENTCBS and SSLTCBS
- Ways to reduce SWA :
- use dynamic allocation for files (rather than specifying in CICS JCL) to
use less storage
- code fewer DD statements
- if possible, specify SWA = ABOVE to get virtual storage constraint
relief. Note that you may need PTFs to achieve this and it will not be
possible if you have 24-bit code.
- Solutions to DSA problems :
- shorten task residency time (eliminate waits, I/O etc)
- reduce storage requirements for a task
- minimise use of resident programs
- limit MXT
- use TRANCLASS for resource hungry transactions
- monitor subpool usage
- increase the size of DSA if there is sufficient storage available
- CICS/LE Support
- New SIT parm RUWAPOOL specifies the option for allocating a storage pool
the first time an LE program runs in a task. Set to YES to create a pool of
storage that reduces the need to GETMAIN and FREEMAIN run-unit work areas
for every EXEC CICS LINK request
Section 6 : Loader
- Storage short occurs when there is less than 256 K of free storage in DSA
excluding the storage cushion
- Storage Critical occurs when there is less than 128 K of free storage in a
DSA excluding the storage cushion
- When no. of free pages available is less than the cushion size for a DSA,
the storage cushion is released and CICS goes SOS
- CICS does dynamic self-tuning of program storage to keep popular programs
in memory
- Programs link-edited with the RENT attribute go into the read only DSAs
(RDSA and ERDSA)
- Don't use VLF (Virtual Lookaside Facility) unless you are having severe
problems with program storage compression and you can't make DSAs bigger
- Speed up program loading by :
- specifying 28K blksize for RPL datasets (two blocks per track on a 3390)
- ensuring that RPL libraries are not in secondary extents
- placing popular RPL datasets near to top of the concatenation
- Check for program definitions that specify Transient ----> this should
only be specified for infrequently used programs as it causes program
storage to be released when the use count becomes zero. e.g. use for PLTPI
modules
CEDA Def Prog
Usage = Normal / Transient
Section 7 : Network
- For an immediate 2 % CPU saving, check that SIT parm HPO = YES is specified
in all regions. This turns on the VTAM High Performance Option, which uses a
shorter path length for CICS terminal control operations.
- RAPOOL parm needs tuning - this can cause transaction delays in high
activity systems :
- default is RAPOOL=(2,1)
- 1st value is for Non-HPO, 2nd value is for HPO
- for non-HPO systems, 2 is sufficient for any size network
- for HPO systems, to get good throughput we need multiple RAIA control
blocks that all get picked up on 1 scan by CICS terminal control (CSTP) e.g.
RAPOOL=(100,20)
- set RAPOOL so that the maximum is never reached
- see CICS VTAM statistics
- RAMAX parm. This specifies the size in bytes of the I/O area that is to be
allocated for each VTAM receive-any operation
- ensure it fits into a 4 K page size
- for most systems the default of 256 is adequate
- a high RAMAX value can waste real storage; a low RAMAX value can waste
CPU cycles
- TYPETERM definition
- check SEND and RECEIVE sizes - see RDO manual. These determine the
outbound and inbound chain element size.
- SENDSIZE - if you specify a low CEDA SENDSIZE value, this causes
additional processing and storage to be used to break the single logical
message into multiple parts
- RECEIVESIZE - default of 256 is usually adequate
- to tune, run 5 mins of terminal trace and then check for MIC (middle of
chain) indicators in the messages, which show that messages are being split.
If all messages contain OIC (only message in chain) the settings are ok.
- IOAREALEN parm on TYPETERM defines the size of the TIOA passed to the
application. If you specify ATI(YES), you must specify an IOAREALEN of at
least 1 byte. Recommendation used to be (2K,4K) but this does not fit into
an IBM 4 K page. Therefore specify (2024, 4072).
- Check for IOAREALEN=(0,*) and IOAREALEN=(1,*) ----> these settings are
now obsolete and should not be used anymore ----> change to (2024,4072) to
preallocate
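- e.g. (the TYPETERM name is hypothetical):
      CEDA DEF TYPETERM(T3270A)
           IOAREALEN(2024,4072)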
- OPNDLIM SIT parm is ignored with VTAM 3.2 and above - all storage comes
from subpool 229
- Check Profile definitions in CEDA :
- CEDA Def TRANS ( )
Profile (DFHCICST) - is the IBM default
- CEDA Def
Profile (DFHCICST)
Msginteg(No)
Protect (No)
- MSGINTEG parm changes Terminal Control from using Exception Response
protocol to Definite Response protocol. MSGINTEG (Yes) causes overhead -
the DSA gets bigger because control blocks are held for longer. Use
MSGINTEG (NO), which is the default
- PROTECT provides the use of Definite Response protocol (as per MSGINTEG)
but also includes extra message logging. Use PROTECT (NO), which is the
default
- ICV SIT parm - this determines the longest time that CICS will go to
sleep when it has nothing to do and the longest time that can elapse between
dispatches of TCP task.
The default value of 1000 (millisec) is definitely too low for VTAM only
networks. Optimum value is probably of the order of 3000 or higher.
- Autoinstall
- AIQMAX SIT parm specifies max. no. of VTAM terminals and APPC connections
that can be queued concurrently. This needs to be set high for emergency
restarts (CATA logon trans, CATD logoff trans). To tune, check Autoinstall
stats for 'Peak concurrent attempts' and 'Times the peak was reached'.
- AILDELAY SIT parm specifies delay period before terminal is deleted
after a session has ended. Use AILDELAY = 0.
- AIRDELAY SIT parm - specifies delay period after an emergency restart
before autoinstalled terminal entries not in session are deleted. Use
AIRDELAY=0 (default is 0700). An illustrative SIT excerpt for these parms
follows at the end of this list.
- AIEXIT SIT parm - contains name of your customised Autoinstall program
- control CATA and CATD transactions by putting them in their own TCLASS.
This will stop them flooding the system
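- Pulling the autoinstall parms together, an illustrative SIT excerpt (the
AIQMAX value is a placeholder to be sized from the autoinstall stats;
DFHZATDX is the IBM-supplied default exit program):
      AIQMAX=100,
      AILDELAY=0,
      AIRDELAY=0,
      AIEXIT=DFHZATDX,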
- New SIT parms to introduce a timeout mechanism for hung terminals when
CICS closes down.
- See TCSWAIT and TCSACTN parms
- TCSWAIT specifies how long CICS should wait before assuming a non-shutdown
terminal has hung
- TCSACTN controls how CICS handles hung terminals
- for no action specify TCSACTN = NONE
- for force-close specify TCSACTN=UNBIND (CICS attempts to unbind but
things can still be in session from a VTAM perspective)
- Recommendation - set SIT parm STATRCD=OFF then run the STAT transaction
via PLTSD
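- A sketch of the shutdown PLT, assuming the STAT transaction is driven by
the supplied sample statistics program DFH0STAT (which quiesce phase it runs
in is a site decision; shown here after DFHDELIM, i.e. in the second phase):
      DFHPLT TYPE=INITIAL,SUFFIX=SD
      DFHPLT TYPE=ENTRY,PROGRAM=DFHDELIM
      DFHPLT TYPE=ENTRY,PROGRAM=DFH0STAT
      DFHPLT TYPE=FINAL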
- Unsolicited statistics - use the XSTOUT exit to suppress unwanted
unsolicited terminal statistics (e.g. ones produced every time a user logs on
/ off)
- VTAM 4.3 and above can do its own data compression. Investigate with VTAM
team whether this can be used at your site.
Section 8 : MRO / ISC
- To tune look at the traffic via the session statistics and system entry
statistics
- Don't use same no. of send sessions and receive sessions
- Hard code send and receive size for sessions - should be high - typically
4096, as in the sketch below
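- An illustrative sessions definition (group, session and connection names
are hypothetical):
      CEDA DEF SESSIONS(MROSESS) GROUP(MROGRP)
           CONNECTION(AOR1) SENDSIZE(4096) RECEIVESIZE(4096)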
- Set QUEUELIMIT parm on Connection definitions in the TORs to prevent SOS
(short-on-storage)
- Check default priority of mirror transactions in combined AOR / FOR
regions
- Use MROLRM = YES in FOR where large no. of reads in the same transaction.
This minimises the overhead of attaching new mirror tasks.
- MROBTCH SIT parm - it is dangerous to use any value other than MROBTCH=1
(i.e. no MRO batching of function requests)
Section 9 : TCPIPSERVICES
- CICS expects to be invoked under the following URL :
http://ipname:port/converter/alias/program?token
where ipname - IP address of the MVS system where CICS is running
port - is a TCP/IP port number for CWI to listen on
converter - use CICS
alias - is the alias transaction id (default CWBA)
program - is the CICS application program name to be called
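e.g. (host name and port are illustrative; DFH$WB1A is a supplied sample web
program):
      http://mvsa.example.com:8080/CICS/CWBA/DFH$WB1A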
- WEBDELAY parm - change default. Use something like WEBDELAY=(5,5) to avoid
CPU spike.
- TCPSERVICE definition in RDO
- IPADDRESS - hard code the IP address of MVS here. If there are multiple
TCP/IP stacks and CICS is listening to any of them, it can go into a
recursive loop if any one of the IP stacks is closed.
- TSQPREFIX - specify DFHWEB. This allows you to Sysplex enable web
requests.
- BACKLOG - this parm should be set to something like MXT and also
SOMAXCONN
- specify SSL=YES for encryption and force sign-on via AUTHENTICATE parm
- SSLTCB. Default SSLTCBS = 8. A value of 0 means that secure sockets
cannot be used. You can have up to 255 S8 TCBs. If more are required, you
would need to set up multiple WORs (web owning regions). A combined
TCPIPSERVICE sketch follows.
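- Pulling these attributes together, an illustrative TCPIPSERVICE definition
(service name, port, address and BACKLOG value are placeholders):
      CEDA DEF TCPIPSERVICE(CWISRV1) GROUP(WEBGRP)
           PORTNUMBER(8080) IPADDRESS(10.1.1.1)
           BACKLOG(120) TSQPREFIX(DFHWEB) SSL(NO)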
Section 10 : VSAM
- use small CI sizes for random processing to avoid concurrent lockouts
when updating
- use large CI sizes for sequential processing to get as many records as
possible into storage in 1 I/O operation
- for mixed random and sequential access, use the small CI size with extra
buffering to ensure satisfactory response for both access types (see the
IDCAMS sketch below)
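- An illustrative IDCAMS definition along these lines (names and sizes are
placeholders to be tuned):
      DEFINE CLUSTER (NAME(PROD.CUSTOMER.KSDS) INDEXED) -
             DATA (NAME(PROD.CUSTOMER.KSDS.DATA) -
                   CONTROLINTERVALSIZE(4096)) -
             INDEX (NAME(PROD.CUSTOMER.KSDS.INDEX))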
- check for IMBED on define cluster statements - this should NOT be used
with the storage devices of today
- check for REPLICATE on define cluster statements - this should not be used
with the storage devices of today
- check CI / CA split numbers
- use separate LSR pools for data and indexes
- review end-of-day stats for information on shared buffers
Section 11 : Journal Control and MVS Logger
- check DASD placement of CICS journals
- CICS V4 - AKPFREQ SIT parm specifies the no. of physical I/Os to the
system log before an activity keypoint is taken. Set to a value that allows
activity keypointing every 15 - 20 minutes.
- CICS TS : Log Manager
- each CICS region has one system log which is implemented as 2 log
streams, DFHLOG and DFHSHUNT
- both DFHLOG and DFHSHUNT have primary, secondary and tertiary storage
- duplex logs to staging datasets
- committed log records (completed UOWs) are deleted at activity keypoint.
An activity keypoint forces log tail deletion. Note that a Syncpoint does
not remove Before images from primary storage (although these are redundant
at this point)
- for DFHLOG ensure that a minimal amount of data is offloaded to secondary
storage. The activity keypoint process keeps 2 activity keypoints' worth of
data on the primary - monitor the interval between keypoints on the CSMT
log.
- if the offload dataset is coded too small (or allowed to default - 2
tracks) there may be frequent switches to new extents, which will have a
negative effect on performance (causes DASD shift)
- during the offload process, CICS has a 3 second delay
- look at system logger activity reports - see IXGRPT in SYS1.SAMPLIB. This
reports on SMF 88 records. You may need to amend the sort sequence depending
on the level of OS/390 that you are running. There are PTFs in this area.
- for DASD-only log streams, the main item that should be monitored is the
number of staging dataset full events. These indicate that the logger cannot
write data to the secondary storage quickly enough to keep up with incoming
data, which causes CICS to wait. If this occurs, increase the size of
primary storage or reduce the HIGHOFFLOAD threshold percentage, the point at
which the logger begins offloading data to secondary storage.
Section 12 : Transient Data
- Extrapartition TD Queues - use the QSAM access method. While QSAM can be
reasonably fast, it synchronises CICS entirely, so CICS cannot do anything
during a QSAM I/O. Therefore use Extrapartition TD sparingly, and make the
block size large so as to do logical rather than physical I/O. Another
consideration is that large block sizes can be potentially bad when writing
out, because there is no recovery processing for extrapartition TD queues.
- Intrapartition TD Queues - don't make them recoverable if you can easily
recreate them as this introduces logging / serialisation overhead
- default SIT parm for buffers and strings is TD=(3,3). Set these high
enough that multiple requests can be processed concurrently and you don't
get string waits at peak times.
- maximum no. of buffers has been increased from 255 to 32767. More buffers
mean that more lookaside processing occurs, with more VSAM CIs in storage.
Control blocks for strings are above the line, so use them.
- buffers are obtained from ECDSA during initialisation.
Section 13 : Temporary Storage
- Main TS data resides in ECDSA. It is ideal for small data items which are
of short duration. There is no recovery possible.
- Aux. TS data is not as fast and may require I/O, but it can be used for
small or large data and recovery is possible. It cannot be shared between
MVS systems. It resides on the DFHTEMP dataset, which is a VSAM ESDS managed
by CICS. Buffers reside in ECDSA. CI size affects efficiency: a smaller CI
size is best for random processing and a larger one for sequential. Maximum
CI size is 64K.
- SIT parm TS is used to specify buffers and strings. Look at the CICS
temporary storage stats for tuning buffers. A large no. of buffers can
increase paging.
- SIT parm TSMGSET determines how many control blocks will be built to
manage queue items. Make it quite high (max 100) and divisible by 4. It is
obsolete in CICS TS.
- Non-recoverable TS queues can now be stored in a Coupling Facility list
structure. This offers high availability but is not as fast as local queues.
- For both TS and TD, specifying a large no. of buffers can improve CICS
performance by reducing VSAM I/O. There is a downside in that at CICS
shutdown all non-empty buffers have to be flushed sequentially which can
take a long time.
Section 14 : CICS DB2 Interface
- RCT is obsolete in CICS TS 1.3
- 3 Types of thread :
- command (COMD) - supports commands to DB2 from CICS
- POOL - general purpose transactions - good for Development. Only specify
a low volume in Production.
- ENTRY - dedicated or protected threads. Result in pre-allocated TCBs.
Good in Production.
- TYPE = INIT macro
- DPMODI - specifies default dispatching priority of DB2 subtasks
relative to CICS. Specify EQUAL or LOW. HIGH is dangerous as it pre-empts
CICS including logging.
- THRDMAX - max. no. of threads supported from CICS to DB2. Must tie up
with CTHREAD in DSNZPARM; it must be lower than CTHREAD, which also covers
TSO, batch etc.
- TOKENI - specifies whether a DB2 accounting record will be cut. It
uniquely identifies which transactions in CICS are using DB2 resources, but
has a high CPU overhead (5 or 6% CPU)
- TWAITI - default action when a thread is not available. TWAITI = YES or
POOL are ok; TWAITI = NO causes an abend. It would be better to queue in a
TCLASS if you don't have enough threads
- THRDA - set to average number of threads
- THRDM - set to max. (defaults to THRDA). Don't set too high because
control blocks are allocated from free MVS storage.
- THRDS - specifies no. of DB2 subtasks that are created at attach start
time - avoids DB2 control blocks being constantly deleted / redefined. These
hang around for 2 purge cycles (see PURGECYCLE parm on the INIT macro) but
the trade-off is that they use up space in the EDM pool. Therefore only code
THRDS for high volume (> 1 per min arrival rate) CICS/DB2 transactions - see
the sketch below.
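- An illustrative RCT entry for such a transaction (transaction id and
values are placeholders):
      DSNCRCT TYPE=ENTRY,TXID=ORD1,THRDM=6,THRDA=4,THRDS=2,TWAIT=YES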
- In CICS V4.1, you can use DSNC trans to modify no. of threads for busy
periods e.g. DSNC MODIFY TRANS TXID 8. This is obsolete in CICS TS.
- CICS TS
- DB2CONN is RDO replacement for old RCT INIT, POOL and COMD macros
- equivalent of THRDMAX is TCBLIMIT
- equivalent of DPMODI is PRIORITY
- equivalent of THRDA is THREADLIMIT
- no equivalent of THRDM or THRDS
- DB2ENTRY is RDO replacement for old RCT ENTRY macro
- equivalent of THRDS is PROTECTNUM
- equivalent of THRDA is THREADLIMIT
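- An illustrative RDO equivalent of the old RCT (names and limits are
placeholders):
      CEDA DEF DB2CONN(DB2C) GROUP(DB2GRP) DB2ID(DSN1)
           TCBLIMIT(12) THREADLIMIT(3) PRIORITY(EQUAL)
      CEDA DEF DB2ENTRY(ORD1) GROUP(DB2GRP) TRANSID(ORD1)
           PLAN(ORDPLAN) PROTECTNUM(2) THREADLIMIT(4)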
Section 15 : CICS DBCTL
- DRA runs as an MVS sub-task and controls the connection between CICS and
DBCTL, thread management and PSB scheduling. Each thread is also allocated
as a separate MVS sub-task. Sub-tasks are allocated from LSQA below the
line.
- with DBCTL there is no thread reuse capability
- if MINTHRD is frequently reached, threads may be dynamically created up to
the MAXTHRD value (specified in the DRA). Dynamically creating threads is an
expensive process. MINTHRD should be sufficient for the normal workload.
- if MAXTHRD is frequently reached, consider increasing the maximum number
of threads allowed between CICS and DBCTL. If tasks have to wait for an
available thread, this will result in response time delays. MAXTHRD is for
unexpected overflow workload.
<david_...@my-deja.com> wrote in message
news:8ra8o5$mg8$1...@nnrp1.deja.com...
>We are running CICS v4.1 and looking at trying to tune our regions if
>possible some more before starting to work on the migration to CICS TS
>v1.3.
There is some material on www.share.org (it is about TS 1.3 performance, but
you should be able to use some of the information).
What we have done here is:
1: Check if any transaction using DB2 is performing poorly. If a
transaction is using the wrong index, it can slow everything down.
2: Check if any files are having string waits; check the VSAM buffers
(index and data) - index buffers should be strings + 2(?); check if LSR
could be used when data is read randomly.
3: Check if there are temporary storage buffer/string waits.
We are looking at our I/O subsystem right now to see if there is
anything we can do there.
Kind regards
Frank Allan Rasmussen
Systems Programmer
Fyns Amts EDB-central
Denmark
Thanks again!
Dave Spring
Technical Support Specialist
VIPS, Inc.
Towson, MD.