Thx
Richard
Did you mean sp_monitorconfig? The documentation,
http://infocenter.sybase.com/help/topic/com.sybase.help.ase_15.0.sprocs/html/sprocs/sprocs170.htm
says it's metadata. What does the Procedure Cache Management section
of sp_sysmon tell you? Have you looked at 'dbcc memusage' as well?
It has a section that reports on the top 20 sprocs - their sizes
and number of copies in cache.
-am © 2007
Procedure cache mgmt varies - for one case (per sec over a five minute
interval): requests 24.1, reads from disk 1.0, writes 0.0, removals 8.6,
recomp 0.2. I believe this is a hit ratio of 96%. It would be nice if there
was more info.
I looked at 'dbcc memusage' and don't think we can run in production with
the caveats. It also doesn't indicate what measurement is used to calculate
the top procedures (I suspect sum of proc sizes).
It is still not clear when more cache is needed (unless it is a low hit
ratio). I would prefer not to wait for a 701.
Thanks for your comments.
Richard
I have to agree with you there.
> Procedure cache mgmt varies - for one case (per sec over a five minute
> interval): requests 24.1, reads from disk 1.0, writes 0.0, removals 8.6,
> recomp 0.2. I believe this is a hit ratio of 96%. It would be nice if there
> was more info.
The values sound too low to be conclusive but it looks as
though your proc cache isn't busy.
> I looked at 'dbcc memusage' and don't think we can run in production with
> the caveats. It also doesn't indicate what measurement is used to calculate
> the top procedures (I suspect sum of proc sizes).
I suppose you'd either have to try auditing or the MDA tables
to track the number of calls.
> It is still not clear when more cache is needed (unless it is a low hit
> ratio). I would prefer not to wait for a 701.
Also note that ASE 15 chews up a lot more proc cache.
-am © 2007
Thx
R
The optimiser and how it evaluates joins.
> Is there a rule of thumb to calculate the value to use for ASE 15?
Perhaps. Here's a note from Tech Support while chasing
this up -
In the particular case the sproc contains one 10 table join and one 15
table join. Under 12.5.x, the 10 table join would require the optimizer to
look at 5040 different plans and the 15 table join generates 32760 plans.
Under 15.0.x, the optimizer considers a lot more options. For a 10 table
join it considers over 2 million plans and for a 15 table join it tries to
consider over 2 billion plans.
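For what it's worth, the 12.5.x figures line up with simple permutation counts. A small sketch — the P(n, 4) reading (i.e. the pre-15 optimizer permuting tables four at a time) is my own assumption, not something stated in Tech Support's note:

```python
from math import factorial, perm

# One possible reading of the 12.5.x numbers (an assumption on my part):
# the optimizer permutes tables 4 at a time, i.e. P(n, 4) join orders.
print(perm(10, 4))    # 5040  -- matches the quoted 10-table figure
print(perm(15, 4))    # 32760 -- matches the quoted 15-table figure

# Under 15.0.x the search space is closer to full enumeration, n!:
print(factorial(10))  # 3628800 -- "over 2 million plans"
# 15! is ~1.3e12, so the quoted "over 2 billion" for 15 tables
# presumably already reflects pruning/timeouts by the optimizer.
```

Either way, the jump from thousands to millions of candidate plans is what eats the proc cache during optimization.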
I'm not sure if you can derive a rule of thumb from this
since it doesn't say how much proc cache is required but
it is considerable. There's a config param in ASE 15.0.1
to control this, "sproc optimize timeout limit", but I
don't know if it's in 12.5.4 (which also seems to have the
problem).
-am © 2007
Most of what sp_monitorconfig returns (eg. "open indexes") is
metadata, not data/cache for the object. The idea is that this is the
set of higher-level entries that ASE cycles through all the time, so
the config thus affects overall ASE performance. In the case of the
procedure cache, it is not metadata, it is the cache itself. I'm not
sure, but I believe the engineers view the proc cache as metadata.
Certainly the reuse issues are the same (more like metadata than
cache).
The number you give sp_configure is the no. of [2k] pages to allocate
for the "procedure cache".
The proc cache contains the most recently used query plans for sprocs
and triggers, consisting of (the two really should have been named the
other way around):
• proc buffers, 76 bytes overhead (one per copy of every compiled object)
• proc headers, query plans (one or more 2k pages per compiled object
depending on size)
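A back-of-the-envelope sizing sketch from the description above. The 2K page size and 76-byte proc buffer overhead are as stated in the post; the configured page count and average plan size are invented example values:

```python
# Figures per the post: the config value is a count of 2K pages, and
# each cached copy of a compiled object costs a 76-byte proc buffer
# plus one or more 2K pages of proc header / query plan.
PAGE = 2048
BUFFER_OVERHEAD = 76

configured_pages = 10000   # example value passed to sp_configure
cache_bytes = configured_pages * PAGE
print(f"proc cache size: {cache_bytes / 1024**2:.1f} MB")  # 19.5 MB

# If an average plan occupies, say, 3 pages (pure assumption), a rough
# upper bound on concurrently cached plan copies:
avg_plan_pages = 3
max_copies = cache_bytes // (avg_plan_pages * PAGE + BUFFER_OVERHEAD)
print(f"rough plan-copy capacity: ~{max_copies}")  # ~3292
```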
David Owen has provided a decent write-up at
http://omicron.felk.cvut.cz/FAQ/articles/a4278.html
The info predates Named Caches, sp_sysmon, etc, but the explanation is
worthwhile and is unaffected by the newer facilities.
> It appears that even if Num_free is zero, procedures can be removed
> from cache to make room for new entries.
Yes. Num_free = 0 means there are no free buffers/space for a new
entry, therefore we are going to get into re-using the headers we
have, which means one or more plans that are in use will get paged out
and the new plan gets paged in, and [more important] the plan that got
paged out will have to be paged back in [later].
> Procedure cache mgmt varies - for one case (per sec over a five minute
> interval): requests 24.1, reads from disk 1.0, writes 0.0, removals 8.6,
> recomp 0.2. I believe this is a hit ratio of 96%.
I don't think so. You're talking sp_sysmon here, which returns
"removals" not "reuse" but they are the same thing. Five minutes is
not a meaningful sample for this metric. Sure, 23/24 is 96% hits [of
the avail space], but 8 removals out of 24 means 33% cache misses due
to not enough proc cache space.
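Spelling out that arithmetic with the per-second figures quoted above (a rough sketch only — a five-minute sample, as noted, proves little):

```python
# sp_sysmon figures quoted in this thread (per-second averages
# over a five-minute interval).
requests = 24.1    # proc cache requests/sec
disk_reads = 1.0   # plans read (compiled) from disk/sec
removals = 8.6     # plans removed from cache/sec

# "Hit ratio" as usually quoted: requests satisfied without a disk read.
hit_ratio = (requests - disk_reads) / requests

# But the removals tell a different story: roughly one in three
# requests forces an existing plan out for lack of proc cache space.
removal_ratio = removals / requests

print(f"hit ratio:     {hit_ratio:.1%}")      # 95.9%
print(f"removal ratio: {removal_ratio:.1%}")  # 35.7%
```

Both numbers come from the same counters; which one you quote changes the conclusion entirely.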
Look at sp_monitorconfig "procedure cache" again. Reuse of any
kind/number is bad news and will significantly affect ASE overall
performance, so you want to get this down to zero.
You can never have enough cache (data or proc), particularly if you are
throwing dynamic SQL at the server.
> It would be nice if there was more info.
Completely agree.
--
Cheers
Derek
Senior Sybase DBA / Information Architect
Copyright © 2007 Software Gems Pty Ltd
Blissfully attached to Quality & High Performance, from the land of
ThingsThatWork
"Patient, normalise thyself"
It's unlikely you'll be able to get rid of all re-use for proccache. Unlike
other config parameters reported by sp_monitorconfig, like 'number of open
objects' for which a maximum value can be found to accommodate all needs
(simply put, the total #tables in the server), there is no such upper limit
for the proc.cache. On busy systems with many concurrent users, various
copies of a plan may be in the proc cache concurrently. In addition, any
sort operations (update statistics on non-leading columns; create index)
also consume proccache, and often a lot of it, causing other cached plans to
be removed. This is a different way of saying that some turnover for the
proccache is natural. What matters is that the Num_Reuse counter doesn't
keep counting up quickly -- when that happens, you do need more proccache --
although a large create index or update stats may consume your entire proc
cache anyway (think of 701 errors) so more proc cache is not always
possible.
Incidentally, for update stats on non-leading columns, use sampling to
relieve the load on the proccache.
Rob V.
The manual indicates procedure cache ages LRU entries out but doesn't
describe the algorithm. Is there any documentation that describes this
better?
It seems that the active storage per user is much too large.
Thx
Richard Kinread
Basically, the proccache is organised along an LRU-MRU mechanism. There's
nothing you can configure about this other than the size.
Keep in mind that the proc.cache is used for *lots* of other purposes
apart from query plans -- it has always been this way and is not
specific to ASE 15, although ASE 15 has a higher memory consumption for
some of these purposes.
Rob V.
I'm not sure how you found that one but the Sybase FAQ lives on
http://www.isug.com/Sybase_FAQ/ It is somewhat out of date and
Mark Kuzma of Sybase was working on updating it. David Owen was
also posting the FAQ as per the usual FAQ posting rules to
comp.databases.sybase on Usenet. Google has it archived. Looks
like the above site is putting those posted FAQs online. I'm
not sure why since there's no need.
> Look at sp_monitorconfig "procedure cache" again. Reuse of any
> kind/number is bad news and will significantly affect ASE overall
> performance, so you want to get this down to zero.
>
> You can never have enough cache (data or proc), particularly if you are
> throwing dynamic SQL at the server.
What about the statement cache? ;)
-am © 2007
[I think you know ...]
Yes, on all database servers we want the proc cache reuse to get as
close to zero as possible, but that is not reasonable on [database]
servers that are forever compiling new SQL, or when stats are taken
while maintenance is being done.
Well, the statement cache is Sybase Engineering's brilliant answer to
compiling the same old unchanged SQL over and over and over and over
... (maybe we should make a SQLCompileServer) ... On a database
server, it does not do much. By all means, make sure it is big enough
to handle the abuse.
In both those instances, it is not the server that needs configuring,
tweaking, or fixing up of any kind.
Thx
Richard Kinread
See my other post in this thread where I said ASE 12.5.4 exhibits
the same optimisation problems as ASE 15. I suspect this was
backported from ASE 15 to ASE 12.5.4.
-am © 2007
1 I don't have proc cache/stmt cache re-use issues on my servers. But
two current assignments have big problems with it: they have
"unruly" third-party apps (their words), and I am changing the
servers/DBs within the criteria of not affecting the apps.
2 We moved from 12.5.2 to 12.5.4/ESD#3 and experienced (a) much better
stability and (b) mostly unchanged performance but distinctly worse
performance in a few areas. Not enough to benchmark or complain. Proc
cache is slightly higher but then quite a few sp_sysmon stats are
different. Would not call it "dramatic". Cust is happy because it is
faster overall, and that's without the config/index changes.
3 We did not use 12.5.3 (any ESD).