"JW" <j...@nctr.fda.gov> wrote in message
It could be "normal" or it could not be. It depends on what your end
users are doing. Are they running tools which let them perform ad-hoc
queries? If so, then they could be accessing data that never gets
accessed by others, therefore blocks in the buffer cache don't get
reused like they would in an OLTP environment. Then again, if they are
running some software and querying the same tables over and over again,
then your buffer cache might need to be bigger. The big thing here is to
look at how the users are accessing the data and try to determine how to
best meet their requirements.
Remove the "remove_spam." from the email address to email me.
"I can give it to you cheap, quick, and good. Now pick two of the three."
we offer our special tuning package:
- economy: hit ratio 90% minimum
- professional: hit ratio 99% minimum
- export: hit ratio 99.999% minimum
our software package runs round the clock and improves the hit ratio of
your database.
we also offer management software:
your management will be very happy about the savings achieved with it.
"JW" <j...@nctr.fda.gov> wrote in newsgroup message
Are your users complaining about performance?
If not, then your hit ratio is just fine and dandy, whatever it happens to
be.
The more general point here is that, Don Burleson and TUSC notwithstanding,
the buffer cache hit ratio is an absolutely abysmal way of tuning anything.
It can sometimes serve as useful corroboration of other statistics, allowing
you to distinguish between two otherwise equally plausible causes of, for
example, free buffer waits. But as a performance tuning goal in its own
right, it is worthless.
> Mr. "senior-master-expert-principle DBA",
> could you please do the world a favor by vanishing forever.
> Serious Oracle discussion groups such as this one are not the place to:
> 1) Advertise your services
> 2) Especially when the services you offer are based on the absolutely
> ridiculous goal of improving the hit ratio, without looking at
> anything else in the database!
Since there was absolutely no contact information, I suspect our
"senior-master-expert-principle DBA" was havin' a bit-o fun.
(BTW, I just noticed that re-arranged it becomes an expert-PMS DBA. Take
that any way you want <g>)
Concentrate on waits rather than ratios.
Make sure you're not using bind variables (you're a data warehouse, after
all).
See whether you can reduce the number of physical reads by, for example,
denormalising things (think materialised views, index clusters) or by
partitioning things (assuming partitioning is an option).
Assess whether the developers have correctly implemented 'exotic segments'
such as Index Organised Tables or hash clusters or bitmap indexes. If 9i,
see whether your developers have heard of bitmap join indexes.
Depending on your version, you might be able to reduce physical reads by
implementing index and/or table compression.
That sort of thing.
The one thing that's not in the list is 'worry about my hit ratio'.
"JW" <j...@nctr.fda.gov> wrote in message
"Harry Sheng" <harrys...@hotmail.com> wrote in message
I'm not convinced - but then since I consider hit-ratios to be misleading
perhaps I am biased.
This chapter talks about not setting a target, about concentrating on the
miss ratio to reduce physical reads. However at no point does it consider
whether the workload is optimal, nor does it mention - except in passing -
the end-user response time. If halving the miss ratio from 10% to 5% results
in a 0.05% change in response time, what is the point? What happens if
halving the miss rate results in much *increased* response time -
inappropriately choosing indexed access paths will frequently do this.
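That trade-off is easy to put numbers on. Here is a back-of-the-envelope Python sketch (not from the thread; the model and every figure are invented purely for illustration): when most of a query's response time is spent on something other than physical reads, halving the miss ratio from 10% to 5% barely moves the total.

```python
def response_time(non_io_ms, logical_reads, miss_ratio, disk_ms=5.0):
    """Toy model: total time = non-I/O work plus one disk
    penalty per buffer cache miss. All figures are invented."""
    physical_reads = logical_reads * miss_ratio
    return non_io_ms + physical_reads * disk_ms

# A query dominated by CPU/parse/network rather than disk:
before = response_time(non_io_ms=2000.0, logical_reads=100, miss_ratio=0.10)
after  = response_time(non_io_ms=2000.0, logical_reads=100, miss_ratio=0.05)

# Roughly 2050 ms -> 2025 ms: the miss ratio halved, the
# user-visible response time improved by about 1%.
print(f"{before:.0f} ms -> {after:.0f} ms "
      f"({(before - after) / before:.2%} saved)")
```

With different invented numbers the saving can be made arbitrarily small, which is the poster's point: the ratio moved a lot, the end-user experience barely at all.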
It also begs the question: what do you consider to be 'good', and why?
By contrast, a response-time-focussed approach can ask of the end-users, or
the business management, questions like: how long should it take to enter an
order? How many orders should we process per day? What should our batch
window be, in hours? Much more productive.
Audit Commission UK
What's more important? That you get a hit and make your ratio look good;
or that your queries are efficient and don't need to access that block at
all?
A ratio without a workload is nonsense; a meaningless average. And it's
far more important to tune the workload than it is to worry about
particular numbers on a ratio.
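The "meaningless average" point can be demonstrated with the usual formula. A toy Python sketch (not from the thread; the numbers are invented): the classic ratio is one minus physical reads over logical reads, so a wasteful plan that re-visits the same cached blocks ten times over reports a *better* ratio than an efficient plan doing identical disk I/O.

```python
def hit_ratio(db_block_gets, consistent_gets, physical_reads):
    """Classic buffer cache hit ratio:
    1 - physical reads / (db block gets + consistent gets)."""
    logical_reads = db_block_gets + consistent_gets
    return 1.0 - physical_reads / logical_reads

# A sensible plan: 100,000 logical reads, 10,000 of them from disk.
good_plan = hit_ratio(20_000, 80_000, 10_000)     # 90%

# A wasteful plan on the same data: ten times the logical I/O
# (think a bad nested-loops join hammering cached blocks),
# with exactly the same physical reads.
bad_plan = hit_ratio(200_000, 800_000, 10_000)    # 99%

# The "better" ratio belongs to the worse workload.
print(f"good plan: {good_plan:.0%}, bad plan: {bad_plan:.0%}")
```

Which is why tuning the workload down, rather than the ratio up, is the productive direction.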
> Consider chapter
> 7 of Alan Packer's book, which is available (along with the TOC) online at
I read it, and I note that even he says he hasn't a clue what the
"right" number for the buffer cache hit ratio should be, since three
different tuning books mentioned three different values. He then pulls
off quite a clever stroke, which is to concentrate on halving or
quartering the miss ratio... but however neat that is, it's just a
writer's trick that has you concentrating on improving the hit ratio at
the end of the day.
He also hints at, but doesn't dwell on, the fact that the degree of index
access versus full table scans affects the ratio. The skewness of your
data affects the ratio. And, at one point, he comes right out and says it:
"The objective of monitoring the buffer cache hit ratio is not to
achieve some arbitrary number; it is to optimize performance by finding a
balance between memory consumption and disk I/O"
...which is one of the more intelligent statements on the subject I've
seen. Based on that statement, how can you -or anyone- say "10% miss
ratio... I do not consider that to be good"??? You've just elevated some
"arbitrary number" to exalted target status, and lost sight of the
'balance between memory and disk I/O' goal. Which is the perennial
danger of simplistic ratios.
What your author doesn't go on to say, perhaps because his book is a
generalist one, aimed at the Sybase and DB2 markets as well as the
Oracle one, is that there are *better* ways of monitoring this balance
between memory consumption and disk I/O in Oracle than a simplistic
ratio. If the balance is skewed badly and ill-advisedly towards disk
access, you will know all about it in Oracle because you'll suffer from
measurable free buffer waits, for example. So if you tune free buffer
waits out of your database, you can quite comfortably state that disk
I/O is not a problem, without anyone ever having to calculate a buffer cache
hit ratio.
Oracle's wait events interface is a rich source of detailed and specific
tuning information that cannot be fudged in the way a ratio can. And
that's why it's far more sensible to concentrate on eliminating waits in
Oracle than it is to spend time worrying about a largely meaningless ratio.
In the post that I was responding to:
"I get a data buffer hit ratio of more than 90%. The users still yell at me
that the database is not responding ! So it really depends. If your users
do not yell at you, forget about the hit-ratio and have a good sleep."
My take was that he was implying "I get a high hit ratio and therefore the
buffer cache is adequate." I disagree with that position (if that is indeed
what he was implying). Unfortunately, my posted response was very, very different
from my intent.
p.s. I was wondering if your comments were based upon the online chapter or
the whole book?
"Howard J. Rogers" <h...@dizwell.com> wrote in message
> p.s. I was wondering if your comments were based upon the online chapter or
> the whole book?
Merely the online chapter, I'm afraid. Actually, I've never even heard
of the author before. But as I say, he has some rather intelligent
comments to make about the hit ratio (my only grumble is that, for
Oracle, it would be even more intelligent not to mention it at all!),
and I was, overall, impressed, albeit on the very thin basis of just one
chapter.
Put it this way: unlike certain other authors that could be named, he
*doesn't* say that 'if you can increase the buffer cache hit ratio from
95 to 98%, your database will perform 400% faster' (though I suspect
that your author's reference to seeing "98%" advised in one or more
performance tuning books is a direct reference to that lunatic assertion).