Since CICS itself detected the storage violation, run the "ANALYZE CICS
DUMP" function of storage dump management on the SM0102 dump, with level
3 options for trace (TR) and storage management (SM) domains. The output
from that should be helpful in identifying the start of the corrupted
storage area - it may be tedious research if it is a large/active CICS
partition, but the info should be in there.
Analyzing the SR0001 dump may be superfluous after reviewing the
preceding SM0102 dump, since it is quite likely that the storage
violation caused the subsequent 0C1/AKEA. But if you're of a mind to
pursue it, run the ANALYZE CICS DUMP with level 3 option for the trace
domain. There will be a trace entry detail item with 0C1/AKEA that
contains the PSW and saved registers from the abend. This information is
also available elsewhere in the dump, but the trace entry may be the
quickest way to extract it - just run the dump and search for the
0C1/AKEA trace entry.
Randy Evans, Viaserv, Inc.
> Subject: [148383] CICS/TS Storage Violation
>
> Based on the two system dumps taken (I don't have any
transaction
> dump), is it possible to determine where the TPDNLOAD program
(transaction
> TPS1) messed up? If so, I need someone to teach me how to do this.
> Thanks.
>
> I2 0124 DFHSM0102 CICSDATA A storage violation (code X'0D11') has been
>         detected by module DFHSMMF.                             08:13:37
> I2 0124 DFHME0116 CICSDATA (Module:DFHMEME) CICS symptom string for
>         message DFHSM0102 is PIDS/564805400 LVLS/411 MS/DFHSM0102
>         RIDS/DFHSMMF PTFS/VSE411 PRCS/00000D11.                 08:13:37
> I2 0124 DFHDU0201 CICSDATA ABOUT TO TAKE SDUMP. DUMPCODE: SM0102 ,
>         DUMPID: 345/0003                                        08:13:37
> I2 0124 0S24I AN SDUMP OR SDUMPX MACRO WAS ISSUED               08:13:37
> I2 0124 0S29I DUMP STARTED                                      08:13:37
> I2 0124 0S30I DUMP STARTED. MEMBER=DI200764.DUMP IN SUBLIB=SYSDUMP.SUBLIB
> I2 0124 1I51I DUMP COMPLETE                                     08:13:43
> I2 0124 DFHDU0202 CICSDATA SDUMPX COMPLETE. SDUMPX RETURN CODE X'00'
>                                                                 08:13:43
> ----------------------------------------------------------------------
> I2 0124 DFHSR0001 CICSDATA An abend (code 0C1/AKEA) has occurred at
>         offset X'FFFFFFFF' in program TPDNLOAD.                 08:13:44
> I2 0124 DFHME0116 CICSDATA (Module:DFHMEME) CICS symptom string for
>         message DFHSR0001 is PIDS/564805400 LVLS/411 MS/DFHSR0001
>         RIDS/DFHSRP PTFS/VSE411 AB/S00C1 AB/UAKEA RIDS/TPDNLOAD
>         ADRS/FFFFFFFF.                                          08:13:44
> I2 0124 DFHDU0201 CICSDATA ABOUT TO TAKE SDUMP. DUMPCODE: SR0001 ,
>         DUMPID: 345/0004                                        08:13:44
> I2 0124 0S24I AN SDUMP OR SDUMPX MACRO WAS ISSUED               08:13:44
> I2 0124 0S29I DUMP STARTED                                      08:13:44
> I2 0124 0S30I DUMP STARTED. MEMBER=DI200765.DUMP IN SUBLIB=SYSDUMP.SUBLIB
> I2 0124 1I51I DUMP COMPLETE                                     08:13:50
> I2 0124 DFHDU0202 CICSDATA SDUMPX COMPLETE. SDUMPX RETURN CODE X'00'
>                                                                 08:13:50
> ----------------------------------------------------------------------
> I2 0124 DFHPEP: TPS1 ABEND ASRA IN TPDNLOAD ON BY CICSUSER
08:13:50
> LAST FN=X'0E08' RC=X'000000' RS=000 R2=000 DS=DAPSYSF 08:13:50
>
> Sincerely,
>
> Dave Clark
>
> WinWholesale Group Services
> 3110 Kettering Boulevard
> Dayton, Ohio 45439 USA
> (937) 294-5331
>
Do you have storage protection turned on?
Tony Thigpen
Maybe. If it has a table with an item length of 16, where the item at
some location has a binary field of 4 or fewer digits followed by a
numeric field of five digits: very likely.
--
Martin
--
XML2PDF - create PDFs from within z/VSE or z/OS; more at
http://www.pi-sysprog.de
> Do you have storage protection turned on?
I bet it is off. Why: the last modification (.?00712) was done to a page
which had no grumple zone. The hardware would have caught that at the
moment it happened, as opposed to when the FREEMAIN for the damaged
piece is issued.
Tony Thigpen
-----Original Message-----
From: indust...@winwholesale.com
Hello Dave,
Since you know the area of storage that was overlaid, you can use INFOANA.
You will have to eyeball what is there. I would think that 15323 is a victim and not the culprit.
It looks as if 15323 was finished and was freemaining the storage.
Eric is right about looking at the storage violation first. Maybe they are related to each other.
// EXEC INFOANA,SIZE=3172K
SELECT DUMP MANAGEMENT
DUMP NAME SYSDUMP.F2.DF200053
RETURN
SELECT DUMP VIEWING
PRINT 600000 601000        (just put in the address range)
RETURN
SELECT END
From: owner-vs...@Lehigh.EDU [mailto:owner-vs...@Lehigh.EDU] On Behalf Of indust...@winwholesale.com
Sent: Wednesday, January 28, 2009 1:47 PM
To: VSE Discussion List
Subject: Re: CICS/TS Storage Violation
I have no idea what hardware you have that can wait.
All I can say is that if you run in USERKEY and try to modify a page
that is unowned (and hence in CICS-KEY) you get a "protection exception".
This is controlled by a single bit in a CR. If you have a case where it
waited until later ....
In the case of no, the damaged area is
> not recovered and the task is canceled.
That is what I want.
Greetings,
We went from VSE 2.6.1 to z/VSE 4.1.0 a while back, and I have noticed that it seems to act differently with regard to using the resources of the physical machine.
We run under z/VM 5.1.
We have 5 z/VSE virtual machines, all running CICS/TS, and if some batch jobs are running within one machine it can slow down the whole system significantly. That didn’t seem to be occurring in the past with VSE 2.6.1. There always seemed to be some “juice” left for other users.
I have the feeling that z/VSE makes more effective use of the resources given to it, which leaves fewer resources available for the other virtual machines and effectively causes slow response time for CMS users.
I wish I could “cut” a VSE machine into separate virtual CPUs with one virtual CPU running VTAM,TCPIP and CICSTS and another virtual CPU running batch. Then it would also be nice if z/VM would allow you to specify SHARE values for each virtual CPU within a VSE virtual machine.
I know I can play with the PRTY command with z/VSE but can anyone think of how to give all interactive users (CICS and CMS) better response time than some batch jobs running in a z/VSE virtual machine.
One idea I had was to create one VSE machine running just a CICS/TS system and another running just batch, but the overhead of running two virtual machines instead of one (both needing access to the same files), as well as the operational issues, doesn’t make it very appealing.
Any ideas?
Mike
--
Rich Smrcina
VM Assist, Inc.
Phone: 414-491-6001
Ans Service: 360-715-2467
http://www.linkedin.com/in/richsmrcina
Catch the WAVV! http://www.wavv.org
WAVV 2009 - Orlando, FL - May 15-19, 2009
> z/VSE Version 4 introduced a requirement for z/VM 5.2,
Not really - only if you have the desire to run CMT (data collection for
SCRT).
I've played with the SHARE, but I use this for a group of batch
partitions. I always put POWER, VTAM, TCPIP, FAQS and CICSTS above the
other partitions.
Mike
-----Original Message-----
From: owner...@Lehigh.EDU [mailto:owner...@Lehigh.EDU] On Behalf
Of Rich Smrcina
Sent: January 29, 2009 11:38 AM
To: VSE Discussion List
-----Original Message-----
From: owner...@Lehigh.EDU [mailto:owner...@Lehigh.EDU] On Behalf
Of Martin Truebner
Sent: January 29, 2009 11:45 AM
To: VSE Discussion List
Subject: Re: z/VSE performance change?
It was very clear in the announcement, and if you bring it up under z/VM 5.1 you get this
little nastygram:
BG 0000 0J86I WARNING: VM RELEASE NOT SUPPORTED BY VSE 4.1 - Z/VM 5.2 OR LATER
REQUIRED
The message apparently wasn't changed for 4.2; the system is definitely z/VSE 4.2.
Also, you may get a little pushback calling for support (maybe a lot).
We use the z/VM (5.3) priority to control the high level resource usage.
(all z/VSE 4.1.2 systems)
PROD gets more than TEST/BETA/TECH/Install and some CMS users.
And then within PROD the PRTY is set to allow more control of some
partitions.
PRTY G,R,FA,BG,F8,F7,F6,F5,C=I=H=F2,F4,L,F9,E,F3,FB,F1
Ed Martin
Aultman Health Foundation
330-588-4723
ext 40441
What sort of z/VM priority do you use? Do you use the SET SHARE?
Regards,
Mike
PROD is RELATIVE 7500 with no limit.
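For anyone following along, a relative share like that is set with the CP SET SHARE command (class A) or fixed in the guest's directory entry; the user ID VSEPROD below is just a stand-in:

```
SET SHARE VSEPROD RELATIVE 7500

* or permanently, as a statement in the guest's directory entry:
SHARE RELATIVE 7500
```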
Yeah, I guess you’re right about that.
Thanks,
Mike
From: owner...@Lehigh.EDU [mailto:owner...@Lehigh.EDU] On Behalf Of Mark Pace
Sent: January 29, 2009 1:42 PM
To: VSE Discussion List
Hello Mike,
Did you change over to NOPDS for your VSE systems?
Yup, a long time ago. We page about 1/sec
Hello Mike,
Yeah, sounds like you are doing well there. The other thing that I learned on the VM side is to separate out
the SPOOL area from the PAGE area from the user disks.
We have a separate 3390 for SPOOLing and another for PAGING only.
It does seem to have helped our system, but I don’t have hard facts.
Yes, we have VM SPOOL and PAGE on different packs.
Thanks for your interest,
Mike
Hello Mike,
And (just to make sure): the SPOOL and PAGE areas should be separate from all other functions.
The way the spool and page CCWs are constructed, they are long-running and tend to keep the track held for longer
than normal I/O would.