
Stopping "dump tran"


goo...@barski.org

Jan 6, 2009, 4:56:37 AM
Hi folks,

I've got a somewhat weird problem with ASE 12.5.0.3:

the transaction log has grown to more than 35%, so I decided to dump
it. We usually do not save the log, for certain reasons (beyond the
scope...), so I issued a "dump tran <dbname> with truncate_only".
Normally this completes in about 30-60 minutes; however, it has now
been running for 25 hours and has accumulated almost 50 million
physical I/Os. Unfortunately this process blocks many db activities
(including the nightly backup with "dump database"), so I really have
to get rid of it.
Bouncing the ASE is unfortunately not an option: primarily because I
cannot afford the downtime (it's a live database), and secondarily
because the ASE would surely UNDO and REDO the transaction log, which
would take at least half a day.
Killing the process with the Sybase "kill" command has no effect, and
killing the client application does not help either.

Is there any way to get rid of this "dump tran" process?

Thanks in advance
Dmitri

Manish Negandhi [TeamSybase]

Jan 6, 2009, 5:20:04 AM

How do you usually reduce the tran log size? Do you dump the tran log
frequently, or do you have the "trunc log on chkpt" option set for
the database?

The first thing to check is whether any error is reported in the ASE
or Backup Server log. Did you try to kill the sybmultbuf process at
the host level before killing the session at the ASE level? If not,
try that: kill the sybmultbuf process first, then the ASE process,
and see if it helps.
Finally, try to recycle the Backup Server and see if that resolves
the issue.

-HTH
Manish Negandhi
[TeamSybase]
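Before killing anything, it can help to confirm which spid is running the dump and whether it is still doing work. A minimal sketch against master..sysprocesses (the 'DUMP%' pattern is an assumption about how ASE reports the command name in that view):

```sql
-- Find the session running the dump and note its I/O counter;
-- if physical_io keeps climbing between runs, the task is still
-- scanning the log rather than being completely hung. kpid can
-- be correlated with host-level diagnostics if needed.
select spid, kpid, status, cmd, physical_io, blocked
from master..sysprocesses
where cmd like 'DUMP%'
```

Running it twice, a minute apart, and comparing physical_io shows whether the dump is progressing or stuck.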

goo...@barski.org

Jan 6, 2009, 5:34:40 AM
On Jan 6, 11:20 am, "Manish Negandhi [TeamSybase]"
<negandhi.man...@gmail.com> wrote:

> How do you usually reduce the tran log size? Do you dump the tran log
> frequently, or do you have the "trunc log on chkpt" option set for
> the database?

It is usually dumped every other hour by a script - that does not
work now, as there is already a "dump tran" running.

> First thing to check is,  if any error is reported in ASE or
> backupserver log .

Nothing relevant to the dump process.

> Did you try to kill sybmultbuf process on host
> level  

I see no "sybmultbuf" at host level (HP-UX PA-RISC box).

> before killing it at ASE level ?, if not ,  try that. Kill  the
> sybmultbuf process first before ASE process , and see if it helps.

I cannot kill the ASE, because that would very likely cause a long
downtime, which I cannot afford. The ASE is actually running (though
with reduced performance); however, I cannot back it up.

> Finally try to recycle backupserver and see if that resolves the issue

Would that not affect the ASE?


TIA
Dmitri

Derek Asirvadem

Jan 6, 2009, 6:27:45 AM
> On 2009-01-06 21:34:40 +1100, goo...@barski.org said:
>
>> How do you usually reduce the tran log size? Do you dump the tran log
>> frequently, or do you have the "trunc log on chkpt" option set for
>> the database?
> It is dumped every other hour usually by a script - it does not work
> now, as I already have a "dump tran" running.

I presume the script did not work, hence the "... more than 35%" and
the need for manual intervention.

Change the script into a threshold stored procedure
(sp_thresholdaction), and improve the code quality. But the real
problem, which places the production database at risk, is the mixed
data & log ("... more than 35%"), which is not allowed for production
dbs: separate them as soon as you get a chance. (The db has been
unrecoverable all this time; you are finding that out the hard way
just now.) After that you will still need the script, a threshold, or
"trunc log on chkpt" set, in order to maintain the separated log.
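A minimal sketch of such a last-chance threshold procedure (the four parameters and their order are those ASE expects for sp_thresholdaction; the dump device name "log_dump_dev" is illustrative):

```sql
-- Hypothetical last-chance threshold action: ASE calls this
-- automatically when free space on the log segment falls below
-- the last-chance threshold.
create procedure sp_thresholdaction
    @dbname      varchar(30),
    @segmentname varchar(30),
    @space_left  int,
    @status      int
as
begin
    -- "log_dump_dev" is an assumed dump device; substitute your own.
    dump transaction @dbname to log_dump_dev

    print "Threshold fired: segment '%1!' in db '%2!', %3! pages left",
        @segmentname, @dbname, @space_left
end
```

Once installed, ASE invokes it when the log segment fills, instead of leaving tasks to suspend or abort against a full log.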

>> Did you try to kill sybmultbuf process on host level  
> I see no "sybmultbuf" at host level (HP-UX PA-RISC box).

Ok, it is gone.

>> before killing it at ASE level ?, if not ,  try that. Kill  the
>> sybmultbuf process first before ASE process , and see if it helps.
> I cannot kill ASE, because it is very likely to cause a long downtime,
> which I cannot afford. Actually the ASE is running (though with a
> reduced performance), however I cannot backup it.

I think Manish meant: kill the sybmultbuf (dump thread) first; then
kill the spid within ASE that was executing "dump tran". No one wants
you to kill ASE, for the reasons you have mentioned. In fact, it
would take the 25 hours times roughly 2.5 to finish the recovery
cycle and make the db available for use.

>> Finally try to recycle backupserver and see if that resolves the issue
> Would it not affect the ASE?

No; dataserver and backupserver are independent, and each can be
started and stopped on its own (although the correct graceful
shutdown sequence is to shut down the backupserver first, then the
dataserver).

Since there is no sybmultbuf, kill backupserver at the UX level.

ASE should return to normal.

Then checkpoint and dump database (now you are safe but still unrecoverable).

Then when you have a window of more than 25 hours, dump tran with
truncate_only. And immediately separate the tran log.
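Spelled out, and assuming the Backup Server has been restarted for the database dump, the sequence above would look something like this (the dump file path is illustrative):

```sql
-- 1. Flush dirty pages once ASE is responsive again.
use <dbname>
go
checkpoint
go
-- 2. Full database dump immediately: until this completes, the
--    db cannot be restored to a recent point in time.
dump database <dbname> to "/dumps/dbname_full.dmp"
go
-- 3. Later, inside a window long enough for the log scan:
dump tran <dbname> with truncate_only
go
```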
--
Regards
Derek Asirvadem
Director / Senior Sybase DBA / Information Architect
Sybase BSA [OEM/VAR] Partner
Copyright © 2008 Software Gems Pty Ltd

Mark K

Jan 6, 2009, 7:16:56 AM
When you use "with truncate_only" there is no interaction with the
Backup Server, and thus no sybmultbuf process. The Backup Server does
not even need to be running.

Can you provide the output from the following:
dbcc traceon(3604)
dbcc stacktrace(<spid of dump tran process>)
dbcc pss(0,<spid of dump tran process>)
go
-- wait 30 seconds
dbcc stacktrace(<spid of dump tran process>)
dbcc pss(0,<spid of dump tran process>)
dbcc traceoff(3604)
go

Have you opened a case with Tech Support?

Mark Kusma



goo...@barski.org

Jan 6, 2009, 8:44:39 AM
On Jan 6, 12:27 pm, Derek Asirvadem <derek.asirva...@gmail.com> wrote:

> I presume the script did not work, hence the "... more than 35%" and
> the need for manual intervention.

Correct.


>
> But the real problem which places the
> production database at risk, is the mixed data & log ("... more than
> 35%"), which is not allowed for production dbs: separate that as soon
> as you get a chance.

It IS separate. I have separate log storage of about 52 GB for the
transaction log; sp_spaceused syslogs tells me that about 20 GB are
in use, which is about 38% of the available space. The data itself is
stored completely independently.
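For reference, those figures can be checked with something like the following, run in the affected database (output formats vary by ASE version):

```sql
-- Pages/KB used by the transaction log itself:
sp_spaceused syslogs
go
-- Total space allocated to the log segment, to compute the
-- used percentage against:
sp_helpsegment logsegment
go
```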

> No, dataserver and backupserver are independent, they can each be
> started/stopped independently (although the correct graceful shutdown
> sequence is to shutdown backupserver first, then dataserver).
>
> Since there is no sybmultbuf, kill backupserver at the UX level.
>
> ASE should return to normal.

I've killed the backupserver: no impact on the dataserver (good),
but none on the running "dump tran" process either (bad!).

Any ideas?

TIA
Dmitri

goo...@barski.org

Jan 6, 2009, 9:01:10 AM
On Jan 6, 1:16 pm, "Mark K" <xxxx> wrote:
> When you use "with truncate_only" there is no interaction with Backup
> Server, and thus no sybmultbuf process. BS doesn't even need to be running.
Indeed - the dump tran process was not affected by bouncing the
Backup Server in any observable way.

> Have you opened a case with Tech Support?
>

No. Actually, I have had some open cases with Sybase tech support
since October and even earlier - not solved yet, though at the
highest priority (we have major impact on the production
environment), so I do not rely on tech support to solve anything at
all. Leaving aside the fact that tech support has repeatedly told me
"your ASE is too old, we are not supporting it well anymore" - and I
cannot upgrade due to certification reasons.

> Can you provide the output from the following:
> dbcc traceon(3604)
> dbcc stacktrace(<spid of dump tran process>)
> dbcc pss(0,<spid of dump tran process>)
> go

(the database name has been replaced by <dbname>)
------------------------ Execute ------------------------
DBCC execution completed. If DBCC printed error messages, contact a
user with System Administrator (SA) role.
Start dbcc stacktrace...
pc: 0x40000000005092a0 upsleepgeneric+0x2b8()
pc: 0x40000000004fe90c bufread+0x18c()
pc: 0x400000000054f560 getpage_with_validation+0x398()
pc: 0x4000000000690184 dmp_send_logchain+0x5ec()
pc: 0x4000000000679e30 dmpx_scanlog+0x120()
[Handler pc: 0x40000000009b6b60 dpdb_handle installed by the following
function:-]pc: 0x400000000060a6ac dmpx__truncatemain+0x394()
pc: 0x40000000009b66ec dmpxact+0x9c()
pc: 0x400000000054ab20 s_execute+0x5350()
[Handler pc: 0x40000000008e8538 s_handle installed by the following
function:-]pc: 0x400000000057de58 sequencer+0x268()
pc: 0x4000000000463fb0 tdsrecv_language+0xb0()
[Handler pc: 0x40000000006eccd8 hdl_backout installed by the following
function:-][Handler pc: 0x400000000088d268 ut_handle installed by the
following function:-][Handler pc: 0x400000000088d268 ut_handle
installed by the following function:-]pc: 0x40000000004d4cfc conn_hdlr
+0x774()
pc: 0x4000000000543b60 proc_stack_switch+0x60()
dbcc stacktrace finished.
DBCC execution completed. If DBCC printed error messages, contact a
user with System Administrator (SA) role.
{

PSS (any state) for suid 0 - spid 308:

PSS at 0xc00000004bcefbd8

PSS Status fields :
pstat=0x10002 (0x00010000 (P_USERPROC), 0x00000002 (P_KILLYOURSELF))
p2stat=0x1010 (0x00001000 (P2_XLATE), 0x00000010 (P2_DEBUG))
p3stat=0x81a (0x00000800 (P3_PSS_ACTIVE), 0x00000010 (P3_CACHELIM),
0x00000008 (P3_NEWRNG), 0x00000002 (P3_CACHERNG))
p4stat=0x0 (0x00000000)
p5stat=0x0 (0x00000000)
HA session handle: sysid =0x0 HA session handle: session id =0x0
PSS Link (plink) : elink_next=0xc00000004a83ec08
elink_prev=0xc00000004a553bd8
pexcptr=0xc00000004bcf7480 pssobjp=0xc00000004bcefbc0

Process Ids & Database context info in PSS :
pkspid=571539823 pspid=308 pclient_kpid=571539823
parent_spid=308
puid=1 pgid=0 psuid=0 porigsuid=0
puid=1 pgid=0 psuid=0 porigsuid=0
punlen=0 puname=0xc00000004bcefc12 pbackground=0 ploginflags=15
ptdsversion=5.0.0.0
pdbtable=0xc000000048855380 pcurdb=4
ptmpdbtable=0xc00000004883fc00 ptmpcurdb=2 pdb_stat=1
porigsrids=0xc00000004bcefc84 psrids=0xc00000004bcefe80
porig_auth=0 pcur_auth=15
poffsets=0 pcmderrs=0 (0x00000000)

pmasterxact=0xc00000004d8a0670 ppss=0xc00000004bcefbd8
pplc=0xc000000048187748 pdeskept=0x0000000000000000

pdetached_xdes=0x0000000000000000, pbegints=0x0000 0x00000000
plastbranch=0x0000000000000000 (unknown)

psdes=0xc00000004bcf00a8
User SDES array: psdesp[] array=0xc00000004bcf00a8
SDES list starting from: 0xc00000004bcfcdd0
[ 0] SDES: (at 0xc00000004bcfcdd0) sdes_next=0xc00000004bcfd310
sdesp=0x0000000000000000
[ 1] SDES: (at 0xc00000004bcfd310) sdes_next=0xc00000004bcfd850
sdesp=0x0000000000000000
[ 2] SDES: (at 0xc00000004bcfd850) sdes_next=0xc00000004bcfdd90
sdesp=0x0000000000000000
[ 3] SDES: (at 0xc00000004bcfdd90) sdes_next=0xc00000004bcfe2d0
sdesp=0x0000000000000000
[ 4] SDES: (at 0xc00000004bcfe2d0) sdes_next=0xc00000004bcfe810
sdesp=0x0000000000000000
[ 5] SDES: (at 0xc00000004bcfe810) sdes_next=0xc00000004bcfed50
sdesp=0x0000000000000000
[ 6] SDES: (at 0xc00000004bcfed50) sdes_next=0xc00000004bcff290
sdesp=0x0000000000000000
[ 7] SDES: (at 0xc00000004bcff290) sdes_next=0xc00000004bcff7d0
sdesp=0x0000000000000000
[ 8] SDES: (at 0xc00000004bcff7d0) sdes_next=0xc00000004bcffd10
sdesp=0x0000000000000000
[ 9] SDES: (at 0xc00000004bcffd10) sdes_next=0xc00000004bd00250
sdesp=0x0000000000000000
[10] SDES: (at 0xc00000004bd00250) sdes_next=0xc00000004bd00790
sdesp=0x0000000000000000
[11] SDES: (at 0xc00000004bd00790) sdes_next=0xc00000004bd00cd0
sdesp=0x0000000000000000
[12] SDES: (at 0xc00000004bd00cd0) sdes_next=0xc00000004bd01210
sdesp=0x0000000000000000
[13] SDES: (at 0xc00000004bd01210) sdes_next=0xc00000004bd01750
sdesp=0x0000000000000000
[14] SDES: (at 0xc00000004bd01750) sdes_next=0xc00000004bd01c90
sdesp=0x0000000000000000
[15] SDES: (at 0xc00000004bd01c90) sdes_next=0xc00000004bd021d0
sdesp=0x0000000000000000
[16] SDES: (at 0xc00000004bd021d0) sdes_next=0xc00000004bd02710
sdesp=0x0000000000000000
[17] SDES: (at 0xc00000004bd02710) sdes_next=0xc00000004bd02c50
sdesp=0x0000000000000000
[18] SDES: (at 0xc00000004bd02c50) sdes_next=0xc00000004bd03190
sdesp=0x0000000000000000
[19] SDES: (at 0xc00000004bd03190) sdes_next=0xc00000004bd036d0
sdesp=0x0000000000000000
[20] SDES: (at 0xc00000004bd036d0) sdes_next=0xc00000004bd03c10
sdesp=0x0000000000000000
[21] SDES: (at 0xc00000004bd03c10) sdes_next=0xc00000004bd04150
sdesp=0x0000000000000000
[22] SDES: (at 0xc00000004bd04150) sdes_next=0xc00000004bd04690
sdesp=0x0000000000000000
[23] SDES: (at 0xc00000004bd04690) sdes_next=0xc00000004bd04bd0
sdesp=0x0000000000000000
[24] SDES: (at 0xc00000004bd04bd0) sdes_next=0xc00000004bd05110
sdesp=0x0000000000000000
[25] SDES: (at 0xc00000004bd05110) sdes_next=0xc00000004bd05650
sdesp=0x0000000000000000
[26] SDES: (at 0xc00000004bd05650) sdes_next=0xc00000004bd05b90
sdesp=0x0000000000000000
[27] SDES: (at 0xc00000004bd05b90) sdes_next=0x0000000000000000
sdesp=0x0000000000000000

System SDES array: psyssdesp[] array=0xc00000004bcf02a8
SDES list starting from: 0xc00000004bcf02a8
[ 0] SDES: (at 0xc00000004bcf02a8) sdes_next=0xc00000004bcf07e8
sdesp=0xc0000000487f4cf0 objuid=1 objid=8 objname=syslogs
[ 1] SDES: (at 0xc00000004bcf07e8) sdes_next=0xc00000004bcf0d28
sdesp=0xc0000000487f4cf0 objuid=1 objid=8 objname=syslogs
[ 2] SDES: (at 0xc00000004bcf0d28) sdes_next=0xc00000004bcf1268
sdesp=0xc0000000487f4cf0 objuid=1 objid=8 objname=syslogs
[ 3] SDES: (at 0xc00000004bcf1268) sdes_next=0xc00000004bcf17a8
sdesp=0xc0000000487f25f0 objuid=0 objid=99 objname=CashflowLog
[ 4] SDES: (at 0xc00000004bcf17a8) sdes_next=0xc00000004bcf1ce8
sdesp=0x0000000000000000
[ 5] SDES: (at 0xc00000004bcf1ce8) sdes_next=0xc00000004bcf2228
sdesp=0x0000000000000000
[ 6] SDES: (at 0xc00000004bcf2228) sdes_next=0xc00000004bcf2768
sdesp=0x0000000000000000
[ 7] SDES: (at 0xc00000004bcf2768) sdes_next=0xc00000004bcf2ca8
sdesp=0x0000000000000000
[ 8] SDES: (at 0xc00000004bcf2ca8) sdes_next=0xc00000004bcf31e8
sdesp=0x0000000000000000
[ 9] SDES: (at 0xc00000004bcf31e8) sdes_next=0xc00000004bcf3728
sdesp=0x0000000000000000
[10] SDES: (at 0xc00000004bcf3728) sdes_next=0xc00000004bcf3c68
sdesp=0x0000000000000000
[11] SDES: (at 0xc00000004bcf3c68) sdes_next=0xc00000004bcf41a8
sdesp=0x0000000000000000
[12] SDES: (at 0xc00000004bcf41a8) sdes_next=0xc00000004bcf46e8
sdesp=0x0000000000000000
[13] SDES: (at 0xc00000004bcf46e8) sdes_next=0xc00000004bcf4c28
sdesp=0x0000000000000000
[14] SDES: (at 0xc00000004bcf4c28) sdes_next=0xc00000004bcf5168
sdesp=0x0000000000000000
[15] SDES: (at 0xc00000004bcf5168) sdes_next=0xc00000004bcf56a8
sdesp=0x0000000000000000
[16] SDES: (at 0xc00000004bcf56a8) sdes_next=0xc00000004bcf5be8
sdesp=0x0000000000000000
[17] SDES: (at 0xc00000004bcf5be8) sdes_next=0xc00000004bcf6128
sdesp=0x0000000000000000
[18] SDES: (at 0xc00000004bcf6128) sdes_next=0xc00000004bcf6668
sdesp=0x0000000000000000
[19] SDES: (at 0xc00000004bcf6668) sdes_next=0x0000000000000000
sdesp=0x0000000000000000

ploglocked=0 premote=0x0000000000000000
pdbstack=0xc00000004bcf6be8 pdbindex=0
pcastcontext=0xc00000004bd060d0
pdone :
donestat=0x0 doneinfo=0 donecount=0

ptext=0x0000000000000000 pntext=36 pcurcolid=0 pcompct=0
ppars=0x0000000000000000 pdbtlock=0x0000000000000000
pdbinfoget_dbt=0x0000000000000000
pckptwriteq=0xc00000004bcf6da8 plogwriteq=0xc00000004bcf6df8

Locks related info in PSS :
plocksleepq_entry :
SLEEPTASK struct at 0xc000000048075e70
next=0xc000000048075e70 prev=0xc000000048075e70
stsemawait=0xc000000048075f10 stpss=0xc00000004bcefbd8 spid 308
stsequence_number=0 stsleep_path=30 stmagic=0x54
stspinlock=0xc000000040ccc200
ststatus=0x0 (0x0000)
stlid=(616 616)
plockwaitq_entry :
WAITTASK at 0xc000000048075e20
next=0xc000000048075e20 prev=0xc000000048075e20
wtsemawait=0x0000000000000000 wtpss=0x0000000000000000 spid 0
wtseqno=0 wtmagic=0x51 waitno=0
wtstatus=0x0 (0x00)
wtlid=(0 0)
plogsema=0xc000000048075f10
pcurloctxp.lcwaitsema=0x0000000000000000 plockstat=0x0 (0x0000)
plockwait : LOCKWAIT struct at 0xc00000004bcf7360
lwaitperiod=0 lwaitstat=0x4 (0x04 (LOCKWAIT_SERVER))
pcurloctxp.lcxactlocks=0xc00000004bcf6eb0
pcurloctxp.lcstmtlocks=0xc00000004bcf6e90
pcurloctxp.lcstmtl3locks=0xc00000004bcf6ea0
pcurloctxp=0xc00000004bcf6e78 &ploctx=0xc00000004bcf6e78
plocksuffclass=0

plock_freelist 0xc00000004bcf6f10 plock_chash 0xc00000004bcf7078
plock_chash[0]=0x0000000000000000
plock_chash[1]=0x0000000000000000
plock_chash[2]=0x0000000000000000
plock_chash[3]=0x0000000000000000
plock_chash[4]=0x0000000000000000
plock_chash[5]=0x0000000000000000
plock_chash[6]=0x0000000000000000
plock_chash[7]=0x0000000000000000
plock_chash[8]=0x0000000000000000
plock_chash[9]=0x0000000000000000
plock_chash[10]=0x0000000000000000
plock_chash[11]=0x0000000000000000
plock_chash[12]=0x0000000000000000
plock_chash[13]=0x0000000000000000
plock_chash[14]=0x0000000000000000
plock_chash[15]=0x0000000000000000

ptranslate=0xc00000004bcf7100 pnumplan=0
pdate: Jan 1 1900 12:00AM
ptimestamp: Jan 5 2009 10:18AM
pdateformat=1 pdatefirst=7
pmaxthread_cache=2 pmaxthread_ts=1
pmaxscanthread_cache=1 pmaxscanthread_ts=1

Sequencer Frame info in PSS :
pprevframe=0x0000000000000000 pnextframe=0x0000000000000000
phdr=0xc00000005da64000
pplan=0xc00000005da64000 pline=1 pcurstepno=1 prowcount=0

poptions=7 (OPT_TRUNCABORT) 8 (OPT_ARITHABORT) 13 (OPT_CONTROL) 40
(OPT_PREFETCH) 41 (OPT_TRIGGERS) 42 (OPT_REPLICATION_1) 43
(OPT_REPLICATION_2) 48 (OPT_TRANSRPC) 58 (OPT_REMOTE_INDEXES)
pcurseq=0x0000000000000000 pcurstep=0xc00000005da770f8
plaststep=0xc00000005da770f8 ptrigdata=0x0000000000000000
pprocedures=0x0000000000000000 pview=0x0000000000000000
pbegintmps=0x0000000000000000

Proc Headers Array: pprochdrs=0xc00000004bcf7258 pmemusage=6
proc_header[0]=0xc00000005b97e800
proc_header[1]=0xc00000005a187800
proc_header[2]=0x0000000000000000
proc_header[3]=0xc00000005da64000
proc_header[4]=0x0000000000000000
proc_header[5]=0x0000000000000000
proc_header[6]=0x0000000000000000
proc_header[7]=0x0000000000000000
proc_header[8]=0x0000000000000000
proc_header[9]=0x0000000000000000
proc_header[10]=0x0000000000000000

Other PROC_HDR pointers off Pss (to match up addresses):
ptext=0x0000000000000000 [-1] phdr=0xc00000005da64000 [3]
pplan=0xc00000005da64000 [3] pdseg=0xc00000005a187800 [1]
pcss_cache=0x0000000000000000 [-1] pnetdiagphdr=0xc00000005b97e800 [0]
pcursfix=0x0000000000000000 [-1] pcmdtext=0x0000000000000000 [-1]
psortproc=0x0000000000000000 [-1]
pcompiler_scratch_space=0x0000000000000000 [-1]
pll_phdr=0x0000000000000000 [-1] plmtcache=0x0000000000000000 [-1]
rtmrngcache=0x0000000000000000 [-1] psqt_text=0x0000000000000000 [-1]


prowcnt=0 pcurcmd=235 (DUMPXACT)
pstatlist=0x0 pseqstat=16 (0x0010 (SEQ_REPSPROC_CHECKED))

plastprocid=0 ptextsize=32768 pdbopened=0 ptabcount=0

pstringsize=50 pbulkbatchsize=5000 pbulkarraysize=50
pretstat=0 pexecstep=0x0000000000000000
pcurstmt=0xc00000005da77000 prepxdes=0x0000000000000000
psqlstatus=2 plocks=0xc00000004bcf72f8 pl3locks=0xc00000004bcf7308

ppllcfg_stat=0 pisolation=1
pexec_maxthread=2 pexec_maxthread_ext=0
pexec_maxscanthread=1 pexec_maxscanthread_ext=0
pcontext_id=0 pdseg=0xc00000005a187800

SQL globals and monitors :
plasterror=0 plangid=0 pclient_csid=-1 pnextcid=7
pprocnest=0 pxactcnt=1 ptextptr=0x00000000000000000000000000000000
ptextts=0x0000 0x00000000 ptextobjid=0 ptextdbid=0 ptextcolid=0
pcputot=2 pcpucur=1027475 pmemusage=6
pbufread=57901139 pbufwrite=1343 ptimeslice=1

precvbuf=0xc00000004bd06118 psendbuf=0xc00000004bd06148
pnetbufp=0xc000000041ad1f3e
pnetid=0 psid=0 pchannel=0 pscn=0
precvfn=0x400000000038fbd8 psendfn=0x40000000003961c8
precvflushfn=0x4000000000396188
psendeomfn=0x40000000003abc28 pintsql_ctx=0x0000000000000000
pintsql_frame=0x0000000000000000
pprots next=0xc00000004a1669f0 prev=0xc00000004a1672f0
pprotcache=0xc00000004bcf7428 pprotstamp=235
pexcproc=0xc00000004bcf7480 pallocrange=16 pallocwork=12
pcrpname=0x0000000000000000
pclaimedbufs=0xc00000004bd063b0 pkeeppss=0x0000000000000000
pkeepcnt=1 pwaitforevent=0
pdropdbt=0x0000000000000000 netdiagbuf=0xc00000005b88c800
pprealloclocks=0xc00000004bcfbf18

psitebuf=0x0000000000000000 ppreverror=0 pattention=0
pextattn=0000000000000000 plkpromattn=0
pfouinfo=0xc00000004bcfbf40 pindreserved=0

pbkout_gen: PBACKOUT at 0xc00000004bcfbfb8
ptaskdata 0xc00000005f80f000, ptaskabtfn 0x400000000038fcf8
pbkout_diskclr: PBACKOUT at 0xc00000004bcfbfa8
ptaskdata 0x0000000000000000, ptaskabtfn 0x0000000000000000
pbkout_coord: PBACKOUT at 0xc00000004bcfbfc8
ptaskdata 0x0000000000000000, ptaskabtfn 0x0000000000000000
pdskchk_notify=0 piotot=4

pcss=0x0000000000000000 pfreecss=0x0000000000000000
pcss_cache=0x0000000000000000
pdefcss=0x0000000000000000 pcurcss=0x0000000000000000
pdeadlock_id=0
ptranstate=0 actx_env=0x0000000000000000 ptds_strmlen=-1
pblocktime: Jan 6 2009 2:52PM
pclient_cap=0xc00000004bcfc08c ptype_map=0xc00000004bcfc090
peed=0x0000000000000000
pextobject=0x0000000000000000 pcursfix=0x0000000000000000

IDENTITY_VAL info (pidtvalinfo) :


IDENTITY_INSERT settings (pidtinsert) :


pidentity=(at 0xc00000004bcf7318):
pstackbound=0xc000000045a22808 pguardbound=0xc000000045a20810
pnetdiagphdr=0xc00000005b97e800
paudflags=240 pnxt_wktabid=449 pprocess_type=00 (PERFORMING
COMMAND)
psetownercnt=0 psseqframes=0xc00000004bcfc128
psaddfrmaes=0xc00000004bcfc2c8
pcmdtext=0x0000000000000000psortproc=0x0000000000000000
pinternalxact=0x0000000000000000 pmasterxact=0xc00000004d8a0670

ptempdev=0 pworktable_cnt=0 pmainroot=0xc0000000510d5000
pfirstcmd=0xc0000000510d5360
pcompiler_scratch_space=0x0000000000000000
plkstats=0xc00000004bcfc938 pcur_lkwait_time=0
pkernel_addr=0xc00000004bcfca5c pkernel_size=0xc00000004bcfca7a
ptmemfrag=0xc00000004d9b5800 pmempool_addr=0xc00000004bcfca98

MEMPOOL 0xc00000004d9b5800
MEMCOM 0xc00000004d9b5800 mc_name Pss Frag Pool 571539823
mc_link 0xc00000004d9b5800 next=0xc00000004d9e4800
prev=0xc00000004d9fb800
mc_pooltype '2' is a 'Frament' pool
mc_common_stat 0x1 mc_lock 0x0000000000000000 mc_state 0
mc_waiting 0 mc_allocfn 0x40000000003ac4a8 mc_freefn
0x40000000003bff18
mc_maxsize_fn 0x40000000003a7958 mc_mlimitfn 0x0000000000000000
mc_monallocs 2982 mc_monfrees 2982 mc_monsleeps 0
mc_monmigrates 0 mc_total 2048 mc_used 464
mc_growsize 2048 mc_diagcb 0x40000000003b3a18 mc_diagcbarg
0x0000000000000000
mc_hwm 0 mc_maxsize 40960 mc_minsize 2048


mes_held 0 mes_sleepcount 0 mes_kpid 0

mp_link 0xc00000004d9b58e8 next=0xc00000004d9b5980
prev=0xc00000004d9b5980
mp_frags 0xc00000004d9b58f8 next=0xc00000004d9b59b0
prev=0xc00000004d9b59b0
mp_curfrag 0xc00000004d9b59b0 mp_status 0x0 mp_ovhd 464
mp_nfrags 1 mp_largest 0 mp_flags 0
mp_minsize_no 0

***** summary for fragment pool Pss Frag Pool 571539823 *****
# of allocs = 2982
# of frees = 2982
# of sleeps = 0
# of blocks = 1
# of frags = 1
min size = 2048
max size = 40960

used memory = 464
(overhead = 464)
free memory = 1584
total memory = 2048

pll_phdr=0x0000000000000000 pll_ctx=0x0000000000000000

psecsrvsreq=0 psecsendbuflen=0 psecrecvbuflen=0
psecsess=0x0000000000000000 psecsendbuf=0x0000000000000000
psecrecvbuf=0x0000000000000000
pcomp_maxthread=2 pcomp_maxscanthread=1
pexeclass=0xc00000004bcfcafc penginegroup=0xc00000004bcfcb1b
pspexecpri=0 pspprimodflg=0 pslstat=0

pexeiocost=0 ptmrngalarmid=0
pcurrangemap=0xc00000004bcfcb58 pnewrangemap=0xc00000004bcfcb78
plmtcache=0x0000000000000000 ptempdb_limit=0x0000000000000000
rfirstrng=0x0000000000000000 rlastrng=0x0000000000000000
rtmrngcache=0x0000000000000000
pcmpiocostlim=0x0000000000000000 pexeiocostlim=0x0000000000000000
pbatchtimelim=0x0000000000000000 ptranstimelim=0x0000000000000000
prowcntlim=0x0000000000000000
psqltext=0xc00000004d918ee0 plan_blocked=0x0000000000000000
pqueryplan=0
pbufs_in_predirty=0
LATCHWAIT struct at 0xc00000004bcfcc30
Latch wait struct (at address=0xc00000004bcfcc30) of Spid 308

platchedbufs array:

applmon_index=-1 pplc_waited=0x0000000000000000
pplc_locked=0x0000000000000000 pdatarpgctr=0 pindexrpgctr=0
psqt_text=0x0000000000000000 psqt_cache=0x0000000000000000
pworkdeslist=0x0000000000000000
pomnistat=0x0 (0x00000000)
pomnipss=0x0000000000000000
pmda_lasterr=0 pmda_lasterrsev=10 pbatch_id=43
pmda_stats=0xc00000004cef4b28 psqls_snap=0xc00000004bd06470
pdbdomain=0 pastc_taskp=0x0000000000000000
prectable=0x0000000000000000
psystaskcln_func: 0x0000000000000000
psystask_hdlr: 0x0000000000000000
psystaskcln_arg 0x0000000000000000 psystask_arg 0x0000000000000000
pdskbuf_mempool=0x0000000000000000, pdskbuf_cnt=0


PHDR proc header:
PROC_HDR at 0xc00000005da64000
p_hdrstep=0xc00000005da77000 p_hdrseq=0xc0000000510d5378
p_hdrcrt=0x0000000000000000
p_hdrpbuf=0x0000000000000000 p_hdrtmps=0x0000000000000000
p_hdrcaller=0x0000000000000000
p_hdrelease=0 p_hdrtabid=0 p_hdrsub=0x0000000000000000
p_hdrsubcnt=0 p_css=0x0000000000000000 p_csscnt=0
p_hdrstatus=1024 ((0x00000400 (SACREATED)))
p_lastpg=0 p_lastoff=1 p_procnum=0 p_hdrdbid=0 p_tmpdbid=0
p_lostcnt=1 p_hdrtype=2

Memory Allocation Map:


0 mempgptr=0xc00000005da64000 pma_chunksize=2048
byte_count=1832 byte_save=1832
1 mempgptr=0xc00000005da77000 pma_chunksize=2048 byte_count=808
byte_save=0


PPLAN proc header:


PTEXT proc header:


PNETBUFP:


T-SQL command (may be truncated): dump tran <databasename> with
truncate_only
}
DBCC execution completed. If DBCC printed error messages, contact a
user with System Administrator (SA) role.
DBCC execution completed. If DBCC printed error messages, contact a
user with System Administrator (SA) role.
------------------------- Done --------------------------

> -- wait 30 seconds
> dbcc stacktrace(<spid of dump tran process>)
> dbcc pss(0,<spid of dump tran process>)
> dbcc traceoff(3604)
> go

------------------------ Execute ------------------------
DBCC execution completed. If DBCC printed error messages, contact a
user with System Administrator (SA) role.
Start dbcc stacktrace...
pc: 0x40000000005092a0 upsleepgeneric+0x2b8()
pc: 0x40000000004fe90c bufread+0x18c()
pc: 0x400000000054f560 getpage_with_validation+0x398()
pc: 0x4000000000690184 dmp_send_logchain+0x5ec()
pc: 0x4000000000679e30 dmpx_scanlog+0x120()
[Handler pc: 0x40000000009b6b60 dpdb_handle installed by the following
function:-]pc: 0x400000000060a6ac dmpx__truncatemain+0x394()
pc: 0x40000000009b66ec dmpxact+0x9c()
pc: 0x400000000054ab20 s_execute+0x5350()
[Handler pc: 0x40000000008e8538 s_handle installed by the following
function:-]pc: 0x400000000057de58 sequencer+0x268()
pc: 0x4000000000463fb0 tdsrecv_language+0xb0()
[Handler pc: 0x40000000006eccd8 hdl_backout installed by the following
function:-][Handler pc: 0x400000000088d268 ut_handle installed by the
following function:-][Handler pc: 0x400000000088d268 ut_handle
installed by the following function:-]pc: 0x40000000004d4cfc conn_hdlr
+0x774()
pc: 0x4000000000543b60 proc_stack_switch+0x60()
dbcc stacktrace finished.
DBCC execution completed. If DBCC printed error messages, contact a
user with System Administrator (SA) role.
{

PSS (any state) for suid 0 - spid 308:

PSS at 0xc00000004bcefbd8

PSS Status fields :
pstat=0x10002 (0x00010000 (P_USERPROC), 0x00000002 (P_KILLYOURSELF))
p2stat=0x1010 (0x00001000 (P2_XLATE), 0x00000010 (P2_DEBUG))
p3stat=0x81a (0x00000800 (P3_PSS_ACTIVE), 0x00000010 (P3_CACHELIM),
0x00000008 (P3_NEWRNG), 0x00000002 (P3_CACHERNG))
p4stat=0x0 (0x00000000)
p5stat=0x0 (0x00000000)
HA session handle: sysid =0x0 HA session handle: session id =0x0
PSS Link (plink) : elink_next=0xc00000004a83ec08
elink_prev=0xc00000004a553bd8
pexcptr=0xc00000004bcf7480 pssobjp=0xc00000004bcefbc0

Process Ids & Database context info in PSS :
pkspid=571539823 pspid=308 pclient_kpid=571539823
parent_spid=308
puid=1 pgid=0 psuid=0 porigsuid=0
puid=1 pgid=0 psuid=0 porigsuid=0
punlen=0 puname=0xc00000004bcefc12 pbackground=0 ploginflags=15
ptdsversion=5.0.0.0
pdbtable=0xc000000048855380 pcurdb=4
ptmpdbtable=0xc00000004883fc00 ptmpcurdb=2 pdb_stat=1
porigsrids=0xc00000004bcefc84 psrids=0xc00000004bcefe80
porig_auth=0 pcur_auth=15
poffsets=0 pcmderrs=0 (0x00000000)

pmasterxact=0xc00000004d8a0670 ppss=0xc00000004bcefbd8
pplc=0xc000000048187748 pdeskept=0x0000000000000000

pdetached_xdes=0x0000000000000000, pbegints=0x0000 0x00000000
plastbranch=0x0000000000000000 (unknown)

psdes=0xc00000004bcf00a8
User SDES array: psdesp[] array=0xc00000004bcf00a8
SDES list starting from: 0xc00000004bcfcdd0
[ 0] SDES: (at 0xc00000004bcfcdd0) sdes_next=0xc00000004bcfd310
sdesp=0x0000000000000000
[ 1] SDES: (at 0xc00000004bcfd310) sdes_next=0xc00000004bcfd850
sdesp=0x0000000000000000
[ 2] SDES: (at 0xc00000004bcfd850) sdes_next=0xc00000004bcfdd90
sdesp=0x0000000000000000
[ 3] SDES: (at 0xc00000004bcfdd90) sdes_next=0xc00000004bcfe2d0
sdesp=0x0000000000000000
[ 4] SDES: (at 0xc00000004bcfe2d0) sdes_next=0xc00000004bcfe810
sdesp=0x0000000000000000
[ 5] SDES: (at 0xc00000004bcfe810) sdes_next=0xc00000004bcfed50
sdesp=0x0000000000000000
[ 6] SDES: (at 0xc00000004bcfed50) sdes_next=0xc00000004bcff290
sdesp=0x0000000000000000
[ 7] SDES: (at 0xc00000004bcff290) sdes_next=0xc00000004bcff7d0
sdesp=0x0000000000000000
[ 8] SDES: (at 0xc00000004bcff7d0) sdes_next=0xc00000004bcffd10
sdesp=0x0000000000000000
[ 9] SDES: (at 0xc00000004bcffd10) sdes_next=0xc00000004bd00250
sdesp=0x0000000000000000
[10] SDES: (at 0xc00000004bd00250) sdes_next=0xc00000004bd00790
sdesp=0x0000000000000000
[11] SDES: (at 0xc00000004bd00790) sdes_next=0xc00000004bd00cd0
sdesp=0x0000000000000000
[12] SDES: (at 0xc00000004bd00cd0) sdes_next=0xc00000004bd01210
sdesp=0x0000000000000000
[13] SDES: (at 0xc00000004bd01210) sdes_next=0xc00000004bd01750
sdesp=0x0000000000000000
[14] SDES: (at 0xc00000004bd01750) sdes_next=0xc00000004bd01c90
sdesp=0x0000000000000000
[15] SDES: (at 0xc00000004bd01c90) sdes_next=0xc00000004bd021d0
sdesp=0x0000000000000000
[16] SDES: (at 0xc00000004bd021d0) sdes_next=0xc00000004bd02710
sdesp=0x0000000000000000
[17] SDES: (at 0xc00000004bd02710) sdes_next=0xc00000004bd02c50
sdesp=0x0000000000000000
[18] SDES: (at 0xc00000004bd02c50) sdes_next=0xc00000004bd03190
sdesp=0x0000000000000000
[19] SDES: (at 0xc00000004bd03190) sdes_next=0xc00000004bd036d0
sdesp=0x0000000000000000
[20] SDES: (at 0xc00000004bd036d0) sdes_next=0xc00000004bd03c10
sdesp=0x0000000000000000
[21] SDES: (at 0xc00000004bd03c10) sdes_next=0xc00000004bd04150
sdesp=0x0000000000000000
[22] SDES: (at 0xc00000004bd04150) sdes_next=0xc00000004bd04690
sdesp=0x0000000000000000
[23] SDES: (at 0xc00000004bd04690) sdes_next=0xc00000004bd04bd0
sdesp=0x0000000000000000
[24] SDES: (at 0xc00000004bd04bd0) sdes_next=0xc00000004bd05110
sdesp=0x0000000000000000
[25] SDES: (at 0xc00000004bd05110) sdes_next=0xc00000004bd05650
sdesp=0x0000000000000000
[26] SDES: (at 0xc00000004bd05650) sdes_next=0xc00000004bd05b90
sdesp=0x0000000000000000
[27] SDES: (at 0xc00000004bd05b90) sdes_next=0x0000000000000000
sdesp=0x0000000000000000

System SDES array: psyssdesp[] array=0xc00000004bcf02a8
SDES list starting from: 0xc00000004bcf02a8
[ 0] SDES: (at 0xc00000004bcf02a8) sdes_next=0xc00000004bcf07e8
sdesp=0xc0000000487f4cf0 objuid=1 objid=8 objname=syslogs
[ 1] SDES: (at 0xc00000004bcf07e8) sdes_next=0xc00000004bcf0d28
sdesp=0xc0000000487f4cf0 objuid=1 objid=8 objname=syslogs
[ 2] SDES: (at 0xc00000004bcf0d28) sdes_next=0xc00000004bcf1268
sdesp=0xc0000000487f4cf0 objuid=1 objid=8 objname=syslogs
[ 3] SDES: (at 0xc00000004bcf1268) sdes_next=0xc00000004bcf17a8
sdesp=0xc0000000487f25f0 objuid=0 objid=99 objname=CashflowLog
[ 4] SDES: (at 0xc00000004bcf17a8) sdes_next=0xc00000004bcf1ce8
sdesp=0x0000000000000000
[ 5] SDES: (at 0xc00000004bcf1ce8) sdes_next=0xc00000004bcf2228
sdesp=0x0000000000000000
[ 6] SDES: (at 0xc00000004bcf2228) sdes_next=0xc00000004bcf2768
sdesp=0x0000000000000000
[ 7] SDES: (at 0xc00000004bcf2768) sdes_next=0xc00000004bcf2ca8
sdesp=0x0000000000000000
[ 8] SDES: (at 0xc00000004bcf2ca8) sdes_next=0xc00000004bcf31e8
sdesp=0x0000000000000000
[ 9] SDES: (at 0xc00000004bcf31e8) sdes_next=0xc00000004bcf3728
sdesp=0x0000000000000000
[10] SDES: (at 0xc00000004bcf3728) sdes_next=0xc00000004bcf3c68
sdesp=0x0000000000000000
[11] SDES: (at 0xc00000004bcf3c68) sdes_next=0xc00000004bcf41a8
sdesp=0x0000000000000000
[12] SDES: (at 0xc00000004bcf41a8) sdes_next=0xc00000004bcf46e8
sdesp=0x0000000000000000
[13] SDES: (at 0xc00000004bcf46e8) sdes_next=0xc00000004bcf4c28
sdesp=0x0000000000000000
[14] SDES: (at 0xc00000004bcf4c28) sdes_next=0xc00000004bcf5168
sdesp=0x0000000000000000
[15] SDES: (at 0xc00000004bcf5168) sdes_next=0xc00000004bcf56a8
sdesp=0x0000000000000000
[16] SDES: (at 0xc00000004bcf56a8) sdes_next=0xc00000004bcf5be8
sdesp=0x0000000000000000
[17] SDES: (at 0xc00000004bcf5be8) sdes_next=0xc00000004bcf6128
sdesp=0x0000000000000000
[18] SDES: (at 0xc00000004bcf6128) sdes_next=0xc00000004bcf6668
sdesp=0x0000000000000000
[19] SDES: (at 0xc00000004bcf6668) sdes_next=0x0000000000000000
sdesp=0x0000000000000000

ploglocked=0 premote=0x0000000000000000
pdbstack=0xc00000004bcf6be8 pdbindex=0
pcastcontext=0xc00000004bd060d0
pdone :
donestat=0x0 doneinfo=0 donecount=0

ptext=0x0000000000000000 pntext=36 pcurcolid=0 pcompct=0
ppars=0x0000000000000000 pdbtlock=0x0000000000000000
pdbinfoget_dbt=0x0000000000000000
pckptwriteq=0xc00000004bcf6da8 plogwriteq=0xc00000004bcf6df8

Locks related info in PSS :
plocksleepq_entry :
SLEEPTASK struct at 0xc000000048075e70
next=0xc000000048075e70 prev=0xc000000048075e70
stsemawait=0xc000000048075f10 stpss=0xc00000004bcefbd8 spid 308
stsequence_number=0 stsleep_path=30 stmagic=0x54
stspinlock=0xc000000040ccc200
ststatus=0x0 (0x0000)
stlid=(616 616)
plockwaitq_entry :
WAITTASK at 0xc000000048075e20
next=0xc000000048075e20 prev=0xc000000048075e20
wtsemawait=0x0000000000000000 wtpss=0x0000000000000000 spid 0
wtseqno=0 wtmagic=0x51 waitno=0
wtstatus=0x0 (0x00)
wtlid=(0 0)
plogsema=0xc000000048075f10
pcurloctxp.lcwaitsema=0x0000000000000000 plockstat=0x0 (0x0000)
plockwait : LOCKWAIT struct at 0xc00000004bcf7360
lwaitperiod=0 lwaitstat=0x4 (0x04 (LOCKWAIT_SERVER))
pcurloctxp.lcxactlocks=0xc00000004bcf6eb0
pcurloctxp.lcstmtlocks=0xc00000004bcf6e90
pcurloctxp.lcstmtl3locks=0xc00000004bcf6ea0
pcurloctxp=0xc00000004bcf6e78 &ploctx=0xc00000004bcf6e78
plocksuffclass=0

plock_freelist 0xc00000004bcf6f10 plock_chash 0xc00000004bcf7078
plock_chash[0]=0x0000000000000000
plock_chash[1]=0x0000000000000000
plock_chash[2]=0x0000000000000000
plock_chash[3]=0x0000000000000000
plock_chash[4]=0x0000000000000000
plock_chash[5]=0x0000000000000000
plock_chash[6]=0x0000000000000000
plock_chash[7]=0x0000000000000000
plock_chash[8]=0x0000000000000000
plock_chash[9]=0x0000000000000000
plock_chash[10]=0x0000000000000000
plock_chash[11]=0x0000000000000000
plock_chash[12]=0x0000000000000000
plock_chash[13]=0x0000000000000000
plock_chash[14]=0x0000000000000000
plock_chash[15]=0x0000000000000000

ptranslate=0xc00000004bcf7100 pnumplan=0
pdate: Jan 1 1900 12:00AM
ptimestamp: Jan 5 2009 10:18AM
pdateformat=1 pdatefirst=7
pmaxthread_cache=2 pmaxthread_ts=1
pmaxscanthread_cache=1 pmaxscanthread_ts=1

Sequencer Frame info in PSS :
pprevframe=0x0000000000000000 pnextframe=0x0000000000000000
phdr=0xc00000005da64000
pplan=0xc00000005da64000 pline=1 pcurstepno=1 prowcount=0

poptions=7 (OPT_TRUNCABORT) 8 (OPT_ARITHABORT) 13 (OPT_CONTROL) 40
(OPT_PREFETCH) 41 (OPT_TRIGGERS) 42 (OPT_REPLICATION_1) 43
(OPT_REPLICATION_2) 48 (OPT_TRANSRPC) 58 (OPT_REMOTE_INDEXES)
pcurseq=0x0000000000000000 pcurstep=0xc00000005da770f8
plaststep=0xc00000005da770f8 ptrigdata=0x0000000000000000
pprocedures=0x0000000000000000 pview=0x0000000000000000
pbegintmps=0x0000000000000000

Proc Headers Array: pprochdrs=0xc00000004bcf7258 pmemusage=6
proc_header[0]=0xc00000005b97e800
proc_header[1]=0xc00000005a187800
proc_header[2]=0x0000000000000000
proc_header[3]=0xc00000005da64000
proc_header[4]=0x0000000000000000
proc_header[5]=0x0000000000000000
proc_header[6]=0x0000000000000000
proc_header[7]=0x0000000000000000
proc_header[8]=0x0000000000000000
proc_header[9]=0x0000000000000000
proc_header[10]=0x0000000000000000

Other PROC_HDR pointers off Pss (to match up addresses):
ptext=0x0000000000000000 [-1] phdr=0xc00000005da64000 [3]
pplan=0xc00000005da64000 [3] pdseg=0xc00000005a187800 [1]
pcss_cache=0x0000000000000000 [-1] pnetdiagphdr=0xc00000005b97e800 [0]
pcursfix=0x0000000000000000 [-1] pcmdtext=0x0000000000000000 [-1]
psortproc=0x0000000000000000 [-1]
pcompiler_scratch_space=0x0000000000000000 [-1]
pll_phdr=0x0000000000000000 [-1] plmtcache=0x0000000000000000 [-1]
rtmrngcache=0x0000000000000000 [-1] psqt_text=0x0000000000000000 [-1]


prowcnt=0 pcurcmd=235 (DUMPXACT)
pstatlist=0x0 pseqstat=16 (0x0010 (SEQ_REPSPROC_CHECKED))

plastprocid=0 ptextsize=32768 pdbopened=0 ptabcount=0

pstringsize=50 pbulkbatchsize=5000 pbulkarraysize=50
pretstat=0 pexecstep=0x0000000000000000
pcurstmt=0xc00000005da77000 prepxdes=0x0000000000000000
psqlstatus=2 plocks=0xc00000004bcf72f8 pl3locks=0xc00000004bcf7308

ppllcfg_stat=0 pisolation=1
pexec_maxthread=2 pexec_maxthread_ext=0
pexec_maxscanthread=1 pexec_maxscanthread_ext=0
pcontext_id=0 pdseg=0xc00000005a187800

SQL globals and monitors :
plasterror=0 plangid=0 pclient_csid=-1 pnextcid=7
pprocnest=0 pxactcnt=1 ptextptr=0x00000000000000000000000000000000
ptextts=0x0000 0x00000000 ptextobjid=0 ptextdbid=0 ptextcolid=0
pcputot=2 pcpucur=1028422 pmemusage=6
pbufread=57951581 pbufwrite=1343 ptimeslice=1

precvbuf=0xc00000004bd06118 psendbuf=0xc00000004bd06148
pnetbufp=0xc000000041ad1f3e
pnetid=0 psid=0 pchannel=0 pscn=0
precvfn=0x400000000038fbd8 psendfn=0x40000000003961c8
precvflushfn=0x4000000000396188
psendeomfn=0x40000000003abc28 pintsql_ctx=0x0000000000000000
pintsql_frame=0x0000000000000000
pprots next=0xc00000004a1669f0 prev=0xc00000004a1672f0
pprotcache=0xc00000004bcf7428 pprotstamp=235
pexcproc=0xc00000004bcf7480 pallocrange=16 pallocwork=12
pcrpname=0x0000000000000000
pclaimedbufs=0xc00000004bd063b0 pkeeppss=0x0000000000000000
pkeepcnt=1 pwaitforevent=0
pdropdbt=0x0000000000000000 netdiagbuf=0xc00000005b88c800
pprealloclocks=0xc00000004bcfbf18

psitebuf=0x0000000000000000 ppreverror=0 pattention=0
pextattn=0000000000000000 plkpromattn=0
pfouinfo=0xc00000004bcfbf40 pindreserved=0

pbkout_gen: PBACKOUT at 0xc00000004bcfbfb8
ptaskdata 0xc00000005f80f000, ptaskabtfn 0x400000000038fcf8
pbkout_diskclr: PBACKOUT at 0xc00000004bcfbfa8
ptaskdata 0x0000000000000000, ptaskabtfn 0x0000000000000000
pbkout_coord: PBACKOUT at 0xc00000004bcfbfc8
ptaskdata 0x0000000000000000, ptaskabtfn 0x0000000000000000
pdskchk_notify=0 piotot=4

pcss=0x0000000000000000 pfreecss=0x0000000000000000
pcss_cache=0x0000000000000000
pdefcss=0x0000000000000000 pcurcss=0x0000000000000000
pdeadlock_id=0
ptranstate=0 actx_env=0x0000000000000000 ptds_strmlen=-1
pblocktime: Jan 6 2009 2:52PM
pclient_cap=0xc00000004bcfc08c ptype_map=0xc00000004bcfc090
peed=0x0000000000000000
pextobject=0x0000000000000000 pcursfix=0x0000000000000000

IDENTITY_VAL info (pidtvalinfo) :


IDENTITY_INSERT settings (pidtinsert) :


pidentity=(at 0xc00000004bcf7318):
pstackbound=0xc000000045a22808 pguardbound=0xc000000045a20810
pnetdiagphdr=0xc00000005b97e800
paudflags=240 pnxt_wktabid=449 pprocess_type=00 (PERFORMING
COMMAND)
psetownercnt=0 psseqframes=0xc00000004bcfc128
psaddfrmaes=0xc00000004bcfc2c8
pcmdtext=0x0000000000000000 psortproc=0x0000000000000000
pinternalxact=0x0000000000000000 pmasterxact=0xc00000004d8a0670

ptempdev=0 pworktable_cnt=0 pmainroot=0xc0000000510d5000
pfirstcmd=0xc0000000510d5360
pcompiler_scratch_space=0x0000000000000000
plkstats=0xc00000004bcfc938 pcur_lkwait_time=0
pkernel_addr=0xc00000004bcfca5c pkernel_size=0xc00000004bcfca7a
ptmemfrag=0xc00000004d9b5800 pmempool_addr=0xc00000004bcfca98

MEMPOOL 0xc00000004d9b5800
MEMCOM 0xc00000004d9b5800 mc_name Pss Frag Pool 571539823
mc_link 0xc00000004d9b5800 next=0xc00000004d9e4800
prev=0xc00000004d9fb800
mc_pooltype '2' is a 'Frament' pool
mc_common_stat 0x1 mc_lock 0x0000000000000000 mc_state 0
mc_waiting 0 mc_allocfn 0x40000000003ac4a8 mc_freefn
0x40000000003bff18
mc_maxsize_fn 0x40000000003a7958 mc_mlimitfn 0x0000000000000000
mc_monallocs 2982 mc_monfrees 2982 mc_monsleeps 0
mc_monmigrates 0 mc_total 2048 mc_used 464
mc_growsize 2048 mc_diagcb 0x40000000003b3a18 mc_diagcbarg
0x0000000000000000
mc_hwm 0 mc_maxsize 40960 mc_minsize 2048


mes_held 0 mes_sleepcount 0 mes_kpid 0

mp_link 0xc00000004d9b58e8 next=0xc00000004d9b5980
prev=0xc00000004d9b5980
mp_frags 0xc00000004d9b58f8 next=0xc00000004d9b59b0
prev=0xc00000004d9b59b0
mp_curfrag 0xc00000004d9b59b0 mp_status 0x0 mp_ovhd 464
mp_nfrags 1 mp_largest 0 mp_flags 0
mp_minsize_no 0

***** summary for fragment pool Pss Frag Pool 571539823 *****
# of allocs = 2982
# of frees = 2982
# of sleeps = 0
# of blocks = 1
# of frags = 1
min size = 2048
max size = 40960

used memory = 464
(overhead = 464)
free memory = 1584
total memory = 2048

pll_phdr=0x0000000000000000 pll_ctx=0x0000000000000000

psecsrvsreq=0 psecsendbuflen=0 psecrecvbuflen=0
psecsess=0x0000000000000000 psecsendbuf=0x0000000000000000
psecrecvbuf=0x0000000000000000
pcomp_maxthread=2 pcomp_maxscanthread=1
pexeclass=0xc00000004bcfcafc penginegroup=0xc00000004bcfcb1b
pspexecpri=0 pspprimodflg=0 pslstat=0

pexeiocost=0 ptmrngalarmid=0
pcurrangemap=0xc00000004bcfcb58 pnewrangemap=0xc00000004bcfcb78
plmtcache=0x0000000000000000 ptempdb_limit=0x0000000000000000
rfirstrng=0x0000000000000000 rlastrng=0x0000000000000000
rtmrngcache=0x0000000000000000
pcmpiocostlim=0x0000000000000000 pexeiocostlim=0x0000000000000000
pbatchtimelim=0x0000000000000000 ptranstimelim=0x0000000000000000
prowcntlim=0x0000000000000000
psqltext=0xc00000004d918ee0 plan_blocked=0x0000000000000000
pqueryplan=0
pbufs_in_predirty=0
LATCHWAIT struct at 0xc00000004bcfcc30
Latch wait struct (at address=0xc00000004bcfcc30) of Spid 308

platchedbufs array:

applmon_index=-1 pplc_waited=0x0000000000000000
pplc_locked=0x0000000000000000 pdatarpgctr=0 pindexrpgctr=0
psqt_text=0x0000000000000000 psqt_cache=0x0000000000000000
pworkdeslist=0x0000000000000000
pomnistat=0x0 (0x00000000)
pomnipss=0x0000000000000000
pmda_lasterr=0 pmda_lasterrsev=10 pbatch_id=43
pmda_stats=0xc00000004cef4b28 psqls_snap=0xc00000004bd06470
pdbdomain=0 pastc_taskp=0x0000000000000000
prectable=0x0000000000000000
psystaskcln_func: 0x0000000000000000
psystask_hdlr: 0x0000000000000000
psystaskcln_arg 0x0000000000000000 psystask_arg 0x0000000000000000
pdskbuf_mempool=0x0000000000000000, pdskbuf_cnt=0


PHDR proc header:
PROC_HDR at 0xc00000005da64000
p_hdrstep=0xc00000005da77000 p_hdrseq=0xc0000000510d5378
p_hdrcrt=0x0000000000000000
p_hdrpbuf=0x0000000000000000 p_hdrtmps=0x0000000000000000
p_hdrcaller=0x0000000000000000
p_hdrelease=0 p_hdrtabid=0 p_hdrsub=0x0000000000000000
p_hdrsubcnt=0 p_css=0x0000000000000000 p_csscnt=0
p_hdrstatus=1024 ((0x00000400 (SACREATED)))
p_lastpg=0 p_lastoff=1 p_procnum=0 p_hdrdbid=0 p_tmpdbid=0
p_lostcnt=1 p_hdrtype=2

Memory Allocation Map:


0 mempgptr=0xc00000005da64000 pma_chunksize=2048
byte_count=1832 byte_save=1832
1 mempgptr=0xc00000005da77000 pma_chunksize=2048 byte_count=808
byte_save=0


PPLAN proc header:


PTEXT proc header:


PNETBUFP:


T-SQL command (may be truncated): dump tran <dbname> with
truncate_only
}
DBCC execution completed. If DBCC printed error messages, contact a
user with System Administrator (SA) role.
------------------------- Done --------------------------

TIA
Dmitri

Mark K

Jan 6, 2009, 11:11:05 AM
Is the spid blocked? It shouldn't be, but just checking.

This part of the code does not check to see if the spid has been killed.
There is a check for attention, so it may be possible to Ctrl-C out. It
looks like the process is waiting on I/O a lot. Based on the release I
believe you are running (from your other open cases), there isn't much I can
suggest. The fact that you have a 52GB log is interesting. 20GB of log data is
going to take a long time to slice through, and I'm guessing more log data
is being constantly written.

Mark Kusma

<goo...@barski.org> wrote in message
news:ff81b338-9c08-4bf5...@35g2000pry.googlegroups.com...

Derek Asirvadem

Jan 7, 2009, 2:42:35 AM
> On 2009-01-07 00:44:39 +1100, goo...@barski.org said:
>
> It IS separate. I have a separate log storage of about 52G for
> transaction log, sp_spaceused syslogs tells me that about 20G are in
> use, which is about 38% of the available space.

In that case, scratch what I said, we have a different problem,
although in similar vein. A 52GB log is most certainly too big. The
log is supposed to be the largest tran size times the no of active
update connections for the window you are holding it for (one hour).
Times two for safety. Of course, you might have poor transaction
design that defines a large tran size, but you will still get MB not GB.

Changing the dump tran code into a threshold and improving the quality
will eliminate/reduce the chances of dump tran failing.

When you recreate the db, use the correct log size; that will prevent
this altogether; when the script/threshold proc fails, everyone will be
hung in LOG SUSPEND until you intervene and dump the log.

BTW, TS has informed us that since 12.0, the only correct method for
identifying space available in the logsegment is (in MB):
select lct_admin("logsegment_freepages", dbid) / 512
For allocated space use:
select size / 512 from master..sysusages where dbid = db_id() and segmap = 4
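
Putting those two queries together (a sketch only: it assumes a 2K-page
server, where 512 pages = 1 MB, and must be run from within the database
being checked):

```sql
-- Sketch: combine the two TS-recommended queries into one
-- percent-used check for the log segment (segmap = 4 is log-only).
declare @free_mb int, @total_mb int
select @free_mb = lct_admin("logsegment_freepages", db_id()) / 512
select @total_mb = sum(size) / 512
  from master..sysusages
 where dbid = db_id()
   and segmap = 4
select @free_mb                           as log_free_mb,
       @total_mb                          as log_total_mb,
       100 - (@free_mb * 100 / @total_mb) as log_pct_used
```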


So what is happening, in identifying your symptom, is that you have
have some very large trans in the log, and a very, very large amount of
active log used (which will get [a] dumped or [b] undone/redone if
rebooted). Even if you have a rogue trans and you find and kill it,
the rollback for that will take substantial time (and effect
performance).

> I've killed the backupserver - no impact on the dataserver (good), and
> neither on the running "dump tran" process (bad!).

(as per Mark Kusma's comments, that was useless advice)

> Any ideas?

Weak suggestion: Find the rogue spids (that have active large trans)
and kill them within ASE. As per warning above, that will have an
effect on others.

There is nothing else you can do ...
- boot backup server
- dump database
- warn your users that there will be a slow-down, schedule a window,
and dump tran
- I would say, do NOT use "with truncate_only", keep the log dumpfiles,
properly named ( you may need them later for recovery which will be
faster and more acceptable than load database)
- schedule downtime and fix your logsegment size
- implement a threshold proc with error checking and escalation

Until you do ALL those tasks, you are in danger of this happening again.

Lesson for everyone: A large logseg is very dangerous; do not keep
adding logseg space to allow large trans to complete (they keep getting
larger); fix the large tran by breaking it into manageable and sociable
batches (eg rowcount 500).
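
As a sketch of what such batching can look like (table name and cutoff
date are invented for illustration; the pattern, not the names, is the
point):

```sql
-- Hypothetical purge rewritten as 500-row batches, so each
-- transaction stays small and the log can be truncated between them.
set rowcount 500
declare @rows int
select @rows = 1
while @rows > 0
begin
    begin tran
    delete audit_history where log_date < '20080101'
    select @rows = @@rowcount
    commit tran
end
set rowcount 0
```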


--
Regards
Derek Asirvadem
Director / Senior Sybase DBA / Information Architect
Sybase BSA [OEM/VAR] Partner

Copyright © 2008 Software Gems Pty Ltd

goo...@barski.org

Jan 19, 2009, 9:13:41 AM
On Jan 7, 8:42 am, Derek Asirvadem <derek.asirva...@gmail.com> wrote:

> In that case, scratch what I said, we have a different problem,
> although in similar vein.  A 52GB log is most certainly too big.  The
> log is supposed to be the largest tran size times the no of active
> update connections for the window you are holding it for (one hour).  
> Times two for safety.  Of course, you might have poor transaction
> design that defines a large tran size, but you will still get MB not GB.

Well... Theoretically - yes. Actually - no. Firstly, because there CAN
be a large number of open transactions in a multiuser system (I'm not
talking about a database server for my homepage here), secondly
because the transactions CAN grow very large due to application design
- which may be poor or not, however a DBA cannot rewrite an
application. And to the 3rd... Have you ever run "Sybase Data
Auditing"? It maintains the "secondary truncation point" in the
transaction log, which restricts truncation of - even dumped and
completed! - transactions, until they are replicated (this actually
applies not only to this piece of Lumigent/Sybase software, but
probably to every other replication agent). If for some reason the
replication runs slower than usual, the transaction log begins
growing, until there is a period where fewer transactions are
generated (for example, in the night, in my case it is actually during
the week except Sunday, when huge batch processing is done), and it can
very well exceed your proposed size.

>
> Changing the dump tran code into a threshold and improving the quality
> will eliminate/reduce the chances of dump tran failing.

This sounds great - however you just cannot go and modify application
code. Remember, I'm a DBA, not an application designer and modifying
the application in that way would actually catapult me out of any tech
support agreement with the application vendor.

> When you recreate the db, use the correct log size, that will prevent
> this altogether; when the script/threshold proc fails, everyone will be
> hung in LOG SUSPEND until you intervene and dump the log.

Sounds great in theory, but unimplementable in practice, see above.

> So what is happening, in identifying your symptom, is that you have
> have some very large trans in the log, and a very, very large amount of
> active log used (which will get [a] dumped or [b] undone/redone if
> rebooted).  Even if you have a rogue trans and you find and kill it,
> the rollback for that will take substantial time (and effect
> performance).

> ...


> Weak suggestion: Find the rogue spids (that have active large trans)
> and kill them within ASE.  As per warning above, that will have an
> effect on others.

Has not helped. The "dump tran" process did not respond to a kill and
killing all of the other processes has not helped.

> There is nothing else you can do ...
> - boot backup server
> - dump database

No dump was possible, due to an already running dump.

> - warn your users that there will be a slow-down, schedule a window,
> and dump tran

See above.


> - I would say, do NOT use "with truncate_only", keep the log dumpfiles,
> properly named ( you may need them later for recovery which will be
> faster and more acceptable than load database)
> - schedule downtime and fix your logsegment size

How do you reduce a log segment size? According to Sybase, there is no
supported way to do it.

> - implement a threshold proc with error checking and escalation

Not applicable, see above.

> Lesson for everyone: A large logseg is very dangerous; do not keep
> adding logseg space to allow large trans to complete (they keep getting
> larger); fix the large tran by breaking it into manageable and sociable
> batches (eg rowcount 500).

Well, this is a very good theory, but is completely useless in some
practical scenarios, especially when you are not allowed to modify
application code (which in turn is very common in business
environments).


Actually, I ended up with scheduling a downtime for a Saturday, then
suicided the log (as instructed by the application vendor who pretends
to provide Sybase support as well), which caused a massive data
corruption (large number of recurring 605 errors, most of them in
indices, not in the tables themselves; however, they recurred even after dbcc
checkalloc reported the database to be clean) and thus caused us to
revert to a "last known clean" backup.

So lesson for everyone: do NOT suicide the log, unless you can afford
reverting to a backup. If you decide to suicide the log nevertheless,
immediately bcp out the whole database after this, recreate it
(including destroying the "devices"), and bcp in the data.

Dmitri

ThanksButNo

Jan 19, 2009, 3:19:26 PM
On Jan 6, 2:34 am, goo...@barski.org wrote:
> On Jan 6, 11:20 am, "Manish Negandhi [TeamSybase]"
>
> <negandhi.man...@gmail.com> wrote:
> > Usually how do you reduce tran log size ? Do you dump tran log
> > frequently or you have "trun log on chkpoint" option set for the
> > database ?
>
> It is dumped every other hour usually by a script - it does not work
> now, as I already have a "dump tran" running.
>

Double-check the script -- my experience has been that
when the transaction log fills up it's sometimes because
the dump script stopped working for whatever reason.

It won't help your current crisis, but may help prevent
a re-occurrence.

\:-\

Bret Halford [Sybase]

Jan 19, 2009, 5:29:35 PM
goo...@barski.org wrote:

>> - schedule downtime and fix your logsegment size
> How do you reduce a log segment size? According to Sybase, there is no
> supported way to do it.

I have to think that there was some miscommunication there.
Reducing the size of the log segment is a simple matter of running

sp_dropsegment logsegment, <dbname>, <device>

Of course, you would then have a device that has no segments
associated with it, so you would probably want to extend the
data segement or a user-defined segment onto it so it can be
used for something.
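
For example (database and device names are hypothetical), the two steps
might look like:

```sql
-- Hypothetical names: free device 'log_dev2' from mydb's log segment,
-- then extend the default and system segments onto it so the space
-- remains usable for data.
exec sp_dropsegment logsegment, mydb, log_dev2
exec sp_extendsegment "default", mydb, log_dev2
exec sp_extendsegment "system", mydb, log_dev2
```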


> Actually, I ended up with scheduling a downtime for a Saturday, then
> suicided the log (as instructed by the application vendor who pretends
> to provide Sybase support as well), which caused a massive data
> corruption (large number of recurring 605 errors, most of them in
> indices, not in tables itself, however they recurred even after dbcc
> checkalloc reported the database to be clean) and thus caused us to
> revert to a "last known clean" backup.
>
> So lesson for everyone: do NOT suicide the log, unless you can afford
> reverting to a backup. If you decide to suicide the log nevertheless,
> immediately bcp out the whole database after this, recreate it
> (including destroying the "devices"), and bcp in the data.

Egad. If you have a backup, don't suicide the log <period>. Even
if you are lucky enough that you don't end up with database structure
corruption, you are giving up transactional consistency of the database.
(imagine your database models a bank, and $5K is being transferred from
one account to another. suicide the log and any of these could result:
a) the $5K disappears, neither account has it b) the $5k stays in the
first account AND is added to the second, c) the transfer happens as
expected.)

The larger the amount of transaction log being ignored, the greater
the chance of problems.

I would only suicide the log if there was no other way to retrieve the
data and everyone was aware that the data was going to be suspect.

goo...@barski.org

Jan 21, 2009, 5:47:43 AM
On 19 Jan., 21:19, ThanksButNo <no.no.tha...@gmail.com> wrote:

> > It is dumped every other hour usually by a script - it does not work
> > now, as I already have a "dump tran" running.
>
> Double-check the script -- my experience has been that
> when the transaction log fills up it's sometimes because
> the dump script stopped working for whatever reason.

There is actually nothing you can do wrong on an unchanged "dump tran
<dbname> to '<filename>'" statement :)
The script has not changed for ages.

> It won't help your current crisis, but may help prevent
> a re-occurrence.

The script is subject to design review anyway, the crisis has been
solved now (after reverting to a clean backup).

Thanks nevertheless.
Dmitri

Derek Asirvadem

Jan 21, 2009, 9:26:56 AM
On 2009-01-20 01:13:41 +1100, goo...@barski.org said:

> On Jan 7, 8:42 am, Derek Asirvadem <derek.asirva...@gmail.com> wrote:
>
>> In that case, scratch what I said, we have a different problem,
>> although in similar vein.  A 52GB log is most certainly too big.  The
>> log is supposed to be the largest tran size times the no of active
>> update connections for the window you are holding it for (one hour).  
>> Times two for safety.  Of course, you might have poor transaction
>> design that defines a large tran size, but you will still get MB not GB.
> Welll...Theoretically - yes. Actuall - noy. Firstly, because there CAN
> be a large number of open transactions in a multiuser system (I'm not
> talking about a database server for my homepage here), secondly
> because the transactions CAN grow very large due to application design
> - which may be poor or not, however a DBA cannot rewrite an
> application. And to the 3rd... Have you ever run "Sybase Data
> Auditing"? It maintains the "secondary truncation point" in the
> transaction log, which restricts truncation of - even dumped and
> completed! - transactions, until they are replicated (this actually
> applies not only to this piece of Lumigent/Sybase software, but
> probably to every other replication agent). If for some reason the
> replication runs slower than usual, the transaction log begins
> growing, until there is a period where fewer transactions are
> generated (for example, in the night, in my case it is actually during
> the week except Sunday, when a huge batch processing is done), and can
> very well exceed your proposed size.

Not quite, you are refusing to accept a small log size. The proposed
size remains the proposed size.

Replication does add complications. Replication of an unnormalised data
heap, or of large transactions, is a dog's breakfast.

Sybaudits is a similar issue. If you have a good security model AND
transactions = sprocs, then you only need to audit sproc executions;
otherwise you have to udit cmdtext from every connection, MB vs GB.
The GB is consequences of the poor trans design; the GB has
consequences of its own.

The fact remains, that bad design is compounded, and has consequences
all the way down the chain. Reduce your transaction size first, and that
will have consequences all the way down as well.

The proposed log size remains, but your shop may have reasons why they
are 1,000 times that.

>> Changing the dump tran code into a threshold and improving the quality
>> will eliminate/reduce the chances of dump tran failing.
> This sounds great - however you just cannot go and modify application
> code. Remember, I'm a DBA, not an application designer and modifying
> the application in that way would actually catapult me out of any tech
> support agreement with the application vendor.

Er, dump tran and thresholds are DBA code.

>> When you recreate the db, use the correct log size, that will prevent
>> this altogether; when the script/threshold proc fails, everyone will be
>> hung in LOG SUSPEND until you intervene and dump the log.
>
> Sounds great in theory, but unimplementable in practice, see above.

Well no. If the application was delivered with a mixed data/log, or
with massive transactions, just send it back as "unfit for purpose,
unfit for production use". If you have already paid money for software
written by red dwarves, without testing, well there's a sucker born
every minute.

As a DBA, I have no influence on what the business purchases, but I do
have a responsibility to tell them, hey, this app/db is a substandard
piece of junk; I can't give you a SLA; and if you put it on the server,
every other business unit is going to kill you.

Log suspend (which waits for you to clear it) is ONLY required if the
threshold sproc fails; and it is INFINITELY better than what you have
now (your original post).

>> So what is happening, in identifying your symptom, is that you have
>> have some very large trans in the log, and a very, very large amount of
>> active log used (which will get [a] dumped or [b] undone/redone if
>> rebooted).  Even if you have a rogue trans and you find and kill it,
>> the rollback for that will take substantial time (and effect
>> performance).
>> ...
>> Weak suggestion: Find the rogue spids (that have active large trans)
>> and kill them within ASE.  As per warning above, that will have an
>> effect on others.
> Has not helped. The "dump tran" process did not respond to a kill and
> killing all of the other processes has not helped.
>
>> There is nothing else you can do ...
>> - boot backup server
>> - dump database
> No dump was possible, due to an already running dump.
>
>> - warn your users that there will be a slow-down, schedule a window,
>> and dump tran
> See above.
>> - I would say, do NOT use "with truncate_only", keep the log dumpfiles,
>> properly named ( you may need them later for recovery which will be
>> faster and more acceptable than load database)
>> - schedule downtime and fix your logsegment size

Ok, so tell your users you have been in an unrecoverable state since
Tue 06 Jan 09, and but for the resilience of Sybase the heap the
vendors have misrepresented as a database should have dropped dead by
now.

Just start truncating the log periodically (unless you write a
threshold ... see, it is a DBA function).

> How do you reduce a log segment size? According to Sybase, there is no
> supported way to do it.

Correct. There are unsupported ways of doing it, but the difficulty
depends on a number of factors.

Bret has already addressed a specific case: if you have logseg on more
than one device, you can drop it down to one device, but you cannot
drop it from that last device.

If you have mixed data/log on the same devices, you will have to move
the logseg off the data devices, one at a time. The SAG has a good
write-up on that.

>> - implement a threshold proc with error checking and escalation
> Not applicable, see above.

Applicable, see above. Unless you like typing "dump tran ..." all the time.

Read up on threshold stored procs.
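
A minimal last-chance threshold proc, as a sketch only (the
four-parameter signature is the documented sp_thresholdaction
convention; the dump path is invented):

```sql
-- Sketch of a last-chance threshold procedure.  ASE calls it with
-- these four parameters when free space in a segment falls below
-- the threshold.
create procedure sp_thresholdaction
    @dbname       varchar(30),
    @segment_name varchar(30),
    @space_left   int,
    @status       int
as
begin
    -- Only act on the log segment.
    if @segment_name = "logsegment"
    begin
        dump transaction @dbname
            to "/sybase/dumps/tranlog.dmp"   -- invented path
        if @@error != 0
            print "LOG DUMP FAILED for %1! - intervene now", @dbname
    end
end
```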

>> Lesson for everyone: A large logseg is very dangerous; do not keep
>> adding logseg space to allow large trans to complete (they keep getting
>> larger); fix the large tran by breaking it into manageable and sociable
>> batches (eg rowcount 500).
> Well, this is a very good theory, but is completely useless in some
> practical scenarios, especially when you are not allowed to modify
> application code (which in turn is very common in business
> environments).

It is not theory by definition, it is good practice. Your shop has a
substandard app/database. You cannot go around changing the standard
to fit the rubbish passing for apps; you cannot change the law to suit
the very criminals we need protection from.

Common business practices is going broke, and is responsible for the
collapse of the economic system. The sooner we are rid of them the
better.

> Actually, I ended up with scheduling a downtime for a Saturday, then
> suicided the log (as instructed by the application vendor who pretends
> to provide Sybase support as well),

Hilarious because it was unnecessary. All you had to do was reboot,
with no recovery (otherwise it would take 25 hours), then load data
heap.

But in licensing/responsibility terms, OEM licence, that is what you
get (vendor is responsible for providing "support").

> which caused a massive data
> corruption (large number of recurring 605 errors, most of them in
> indices, not in tables itself, however they recurred even after dbcc
> checkalloc reported the database to be clean) and thus caused us to
> revert to a "last known clean" backup.
>
> So lesson for everyone: do NOT suicide the log, unless you can afford
> reverting to a backup. If you decide to suicide the log nevertheless,
> immediately bcp out the whole database after this, recreate it
> (including destroying the "devices"), and bcp in the data.

Lesson for everyone: do not buy substandard apps; do not accept an app
without testing.
Do not believe vendors ... check it out carefully yourself.
--
Cheers
Derek


Senior Sybase DBA / Information Architect

Copyright © 2008 Software Gems Pty Ltd
Quality Standards = Zero Maintenance + Zero Surprises
Performance Standards = Predictability + Scaleability

Derek Asirvadem

unread,
Jan 21, 2009, 9:33:32 AM1/21/09
to
> There is actually nothing you can do wrong on an unchanged "dump tran
> <dbname> to '<filename>'" statement :)

Nonsense. The COMMAND can fail for a number of reasons (not enough
file space, for one). Besides, you TOLD us it failed. If the script
does not check @@error, it has no idea whether the command worked or
not. Is it logging its executions? What if it did not get invoked at all?

My threshold stored proc escalates based on the error condition, on
where else there is space, on the db options, etc.
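A minimal sketch of the kind of threshold procedure being described, with @@error checking and a truncate_only fallback. The dump device name `logdump` and the escalation choice are illustrative assumptions, not code from this thread; ASE calls the procedure automatically when the log segment crosses its last-chance threshold.

```sql
-- Sketch only: logdump is a placeholder dump device; adapt the
-- escalation policy to your own db options and space layout.
create procedure sp_thresholdaction
    @dbname      varchar(30),
    @segmentname varchar(30),
    @space_left  int,
    @status      int
as
begin
    dump transaction @dbname to logdump

    if @@error != 0
    begin
        -- escalate: record the failure, then free the log anyway
        print "dump tran to device failed for %1!", @dbname
        dump transaction @dbname with truncate_only
    end
end
```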

> The script is subject to design review anyway, the crisis has been
> solved now (after reverting to a clean backup).

Demand that they give you the minimum acceptable for production use: a
threshold stored proc with error checking, PLUS a separate backup script
that calls it (so that you do not have the same code in two places).
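The "separate backup script that calls it" could look something like this shell sketch. The server name, login handling, and the shared procedure name `sp_dump_tran_checked` are assumptions for illustration; the point is a single shared code path plus an exit-status check. It is written as a dry run that prints the command rather than executing it.

```shell
#!/bin/sh
# Nightly wrapper sketch: one shared stored proc does the real work,
# so the dump logic lives in exactly one place.
SRV="${DSQUERY:-SYBASE}"
DB="proddb"
LOGFILE="/var/log/dump_tran_${DB}.log"

SQL="exec sp_dump_tran_checked '$DB'"   # hypothetical shared proc

# Dry run: print the command instead of executing it against a server
echo "would run: echo \"$SQL\" | isql -S $SRV -U sa -o $LOGFILE"

# A real run would then check isql's exit status and alert on failure:
# if [ $? -ne 0 ]; then mailx -s "dump tran failed on $DB" dba < $LOGFILE; fi
```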

--
Regards
Derek Asirvadem
Director / Senior Sybase DBA / Information Architect
Sybase BSA [OEM/VAR] Partner

Copyright © 2008 Software Gems Pty Ltd

goo...@barski.org

unread,
Jan 21, 2009, 9:50:39 AM1/21/09
to
On Jan 21, 3:26 pm, Derek Asirvadem <derek.asirva...@gmail.com> wrote:
> [...]

Have you ever thought of the fact that the DBA exists for the
business, not vice versa? So if the business makes a decision (in this
case in the past, long before the current team was employed), maybe
even a suboptimal one, then the DBA has to follow it. Unless the DBA
owns the company, that is...

Bret Halford [Sybase]

unread,
Jan 21, 2009, 7:10:54 PM1/21/09
to

It depends on the company culture, of course.

In general though, if the DBA identifies problems with the decisions
made in the past, or improvements that can be made due to changes in
the technology, it is usually reasonable and desirable for them
to suggest changes, and provide data and explanations to support
better decisions and improve performance and reliability of the
system.

Some DBAs have to work with strict rules on what they can and
can't do, others have a great deal of leeway and are simply trusted
to be experts that will do what they can to improve the system.

Derek Asirvadem

unread,
Jan 22, 2009, 8:28:45 PM1/22/09
to

I am a consultant, and I come up against this all the time:
• if the business has the purchasing power and makes all the hiring
decisions, they should get a bunch of developers without IT degrees
(anyone with a part-time Microshaft course will do these days), and
tell them what to do, how to do it, etc., and live with the consequences
• if I am hired as a walk-in DBA or to mentor a DBA team, which the
business has paid me to do, then I do that job as a qualified,
experienced "DBA", and get the DBA team to lift their game
• that means that when the business gives me/the team insane
instructions, I refuse them on professional grounds (they are free to get
someone else to do substandard work). If I do accept such
instructions, and implement insane or incorrect constructs, then there
will be consequences which need work, clean-up, hand-holding ... which
they will blame on me, which I avoid.
• If you merely follow instructions from the business (which change all
the time) without question, then you are a clerk with DBA capabilities,
not a DBA.
• I operate on a set of principles and standards, in order to avoid
schizophrenia, all standards are integrated with all other standards (I
do not allow a conflicting standard to be introduced). What this means
in the business/consulting/DBA world is, I remind the business that
they (first) purchased an OLTP server, and they committed to (and I
provided) an OLTP app/system; when they (later) want to introduce a
massive-transaction or non-OLTP app/database, I flag that as a
contradiction of _their_ previous OLTP, fast server, commitment, and
get them to resolve _their_ conflict themselves. I do not take it on
as my problem.
• If you are an employee dealing with past bad decisions, then just
clearly identify the SLA you can provide for (a) standard-compliant
OLTP app/dbs (b) sub standard OLTP app/dbs and (c) untested data heaps.
Clearly identify the causes and consequences of the problems. You can
fix the problems to the best of your ability (and keep fixing the same
problems over and over again), but do not accept problems created by
others as your own.

The DBA does not exist for the business; the DBA exists to administer
databases, and, depending on the pay structure, to do so either with
little experience or with quality and much experience. After you
implement fixes to sub-standard app/dbs, which of course would be
substandard themselves, you become trapped and you are part of the
problem. Best to think carefully and draw the line at the right place
(job description, responsibility, ownership).

I would not try to give advice on non-technical subjects in this
technical forum, but I have found that a key to maintaining my sanity
and serenity is to avoid taking on the insanity of others, no matter
how much they pay me or manipulate me into believing their insanity is
mine.

--
Cheers
Derek
Senior Sybase DBA / Information Architect
Copyright © 2008 Software Gems Pty Ltd

--
Tired of databases that are more cost than benefit? Wondering why you
cannot get Sybase performance from Sybase? Find out

ThanksButNo

unread,
Jan 25, 2009, 4:40:54 PM1/25/09
to
On Jan 21, 6:33 am, Derek Asirvadem <derek.asirva...@gmail.com> wrote:
> > There is actually nothing you can do wrong on an unchanged "dump tran
> > <dbname> to '<filename>'" statement :)
>
> Nonsense. The COMMAND can fail for a number of reasons (not enough
> file space).

Oh it's worse than that.

Sometimes somebody goes in with a "great idea" and
changes something, and "forgets" to tell anybody about
it (coz he knows he wasn't supposed to change anything).

(I know, coz I've been that "somebody" myself!)

So, you double-check, and find out maybe the idea really
was "great," except the guy made a typo somewhere that
caused the script to fail the very next day. And you
don't see the consequences of that "improvement" until
the transaction logs fill up and everything comes to a
screeching halt.

OOOOPSIE!

<8-D
