
Another Oracle "Myth"?


Geomancer

Nov 20, 2003, 9:53:02 PM

Cary Millsap makes the assertion that a buffer hit ratio of > 99%
OFTEN indicates inefficient SQL:

http://www.hotsos.com/dnloads/1.Millsap2001.02.26-CacheRatio.pdf

According to Mr. Millsap:

"A hit ratio in excess of 99% often indicates the existence of
extremely inefficient SQL that robs your system's LIO capacity."

With 30 gigabyte data buffers becoming more common and RAM caches
approaching 100% for small systems, I wonder whether a 99.9% data
buffer hit ratio is more likely due to heavy caching of frequently
referenced objects than to some mysterious un-tuned SQL.

To me, this does not make any sense, because many well-tuned systems
benefit from additional RAM. The v$db_cache_advice view was
introduced in 9i for this very reason.

Is this another Myth, or am I missing something?

Anurag Varma

Nov 20, 2003, 10:28:30 PM

"Geomancer" <pharfr...@hotmail.com> wrote in message news:cf90fb89.03112...@posting.google.com...

You are taking the statement a little too literally.

I quickly read the document and believe it's an excellently written article.
Cary seems to be trying to drive the point home by introducing a certain
shock value :)

He does not say that a 99.9% hit ratio is always bad (which is how you
seem to be interpreting it).
However, he does seem to be saying that one should NOT rely on a 99.9%
hit ratio to judge that database performance is good. The fact might be
just the opposite.
A statement which I will agree with fully.

Did I say it's an excellent article!

Anurag


Sybrand Bakker

Nov 21, 2003, 12:38:41 AM

On 20 Nov 2003 18:53:02 -0800, pharfr...@hotmail.com (Geomancer)
wrote:

>To me, this does not make any sense, because many well-tuned systems
>benefit from additional RAM. The v$db_cache_advice view was
>introduced in 9i for this very reason.
>
>Is this another Myth, or am I missing something?

You probably subscribe to the Burleson school of 'tune by hit ratios'.
Millsap is advocating that you tune the SQL rather than just
throw memory at the problem.


--
Sybrand Bakker, Senior Oracle DBA

Noel

Nov 21, 2003, 4:41:09 AM

"Anurag Varma" <av...@hotmail.com> wrote in message news:<yDfvb.529$Sm1...@news02.roc.ny>...

> "Geomancer" <pharfr...@hotmail.com> wrote in message news:cf90fb89.03112...@posting.google.com...
>
> He does not say that a 99.9% hit ratio is always bad (which is how you
> seem to be interpreting it).
> However, he does seem to be saying that one should NOT rely on a 99.9%
> hit ratio to judge that database performance is good. The fact might be
> just the opposite.

It's not hard to imagine a bad SQL query slowing database performance.
The hit ratio hides the number of memory/disk reads.
If you loaded all datafiles into memory and the database buffers, the
hit ratio would always be 100%.
In that case, relying on the hit ratio, you would never find what slows
your database, and you'd be saying 'My database is perfect, no disk reads
at all'.

If you multiply that hit ratio by the number of reads, you can find the
bad SQL query.

You should never rely on something that gives its results as a '%' :-)
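
For illustration, the classic ratio calculation looks something like this
(a sketch using the standard v$sysstat statistic names; details vary by
version):

select 1 - (p.value / (db.value + c.value)) "buffer cache hit ratio"
  from v$sysstat p, v$sysstat db, v$sysstat c
 where p.name  = 'physical reads'
   and db.name = 'db block gets'
   and c.name  = 'consistent gets';

Nothing in that single percentage tells you whether the denominator was a
thousand gets or a billion.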

> Did I say it's an excellent article!

Yes, it is.

/Noel

Connor McDonald

Nov 21, 2003, 6:48:51 AM

There are systems out there that genuinely require high frequency access
to a small set of data - thus resulting in a 99.9% cache usage.

(In my opinion) there are a far greater number of systems out there that
needlessly waste CPU by excessively pounding away at objects resulting
in a 99.9% cache usage.

hth
connor

Niall Litchfield

Nov 21, 2003, 7:53:42 AM

"Geomancer" <pharfr...@hotmail.com> wrote in message
news:cf90fb89.03112...@posting.google.com...
> Cary Millsap makes the assertion that a buffer hit ratio of > 99%
> OFTEN indicates inefficient SQL:
>
> http://www.hotsos.com/dnloads/1.Millsap2001.02.26-CacheRatio.pdf
>
> According to Mr. Millsap:
>
> "A hit ratio in excess of 99% often indicates the existence of
> extremely inefficient SQL that robs your system's LIO capacity."
>
> With 30 gigabyte data buffers becoming more common and RAM caches
> approaching 100% for small systems, I wonder whether a 99.9% data
> buffer hit ratio is more likely due to heavy caching of frequently
> referenced objects than to some mysterious un-tuned SQL.

A 30GB SGA common? Surely not. For a system written without bind variables
and with 200 or so concurrent users against a 50GB database we require
under 1GB of SGA. To be honest, I'd be sceptical of any system that had an
SGA of more than 2-4GB, but I'm sure those who deal with the Boeings and
BTs of this world can educate me if I'm wrong.

> To me, this does not make any sense, because many well-tuned systems
> benefit from additional RAM. The v$db_cache_advice view was
> introduced in 9i for this very reason.
>
> Is this another Myth, or am I missing something?

I thought the most effective use of v$db_cache_advice was that it tells
you to *stop* adding RAM once doing so becomes pointless.
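
You can see that directly in the view itself (a sketch against the
documented 9i columns; the filter values are just illustrative):

select size_for_estimate, estd_physical_read_factor, estd_physical_reads
  from v$db_cache_advice
 where name = 'DEFAULT'
   and block_size = 8192
   and advice_status = 'ON'
 order by size_for_estimate;

Once estd_physical_read_factor stops falling as size_for_estimate grows,
more cache buys you nothing.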

--
Niall Litchfield
Oracle DBA
Audit Commission UK


Niall Litchfield

Nov 21, 2003, 7:59:05 AM

"Noel" <tb...@go2.pl> wrote in message
news:ec30e927.03112...@posting.google.com...

> "Anurag Varma" <av...@hotmail.com> wrote in message
news:<yDfvb.529$Sm1...@news02.roc.ny>...
> > "Geomancer" <pharfr...@hotmail.com> wrote in message
news:cf90fb89.03112...@posting.google.com...
> >
> > He does not say that 99.9% hit ratio is always bad (which you seem to be
interpreting).
> > However he does seem to be saying one should NOT rely on 99.9% hit ratio
to make the judgment that
> > the database performance is good. The fact might just be the opposite.
>
> Not hard to imagine bad sql query slowing the database performance.
> Hit ratio hides number of memory/disk reads.
> If you would load all datafiles into memory and database buffers the
> hit ratio would always be 100%.

Not true. Trivially because there always has to be an initial read, less
trivially because dirty blocks get written down to disk and reread. Oracle
isn't an in-memory database.

Geomancer

Nov 21, 2003, 7:59:40 AM

> You probably subscribe to the Burleson school of 'tune by hit ratios'.
> Millsap is advocating that you tune the SQL rather than just
> throw memory at the problem.


Hi Sybrand,

No, not many people tune exclusively by hit ratios anymore.
Incidentally, neither does Burleson (if you read anything he wrote
after 1994)! For example, this Burleson wait-event tuning article is
remarkably similar to Millsap's conclusions:

http://www.dbazine.com/burleson8.html

Anyway, even Millsap agrees that the DBHR is not totally useless, just
useless as a sole metric of performance. If we assume well-tuned SQL
with a recycle pool for large full table scans, then the more RAM, the
less overall physical I/O.
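
For what it's worth, pointing the big scan targets at a recycle pool is
just the standard storage clause (hypothetical table name; the pool is
sized separately, e.g. via db_recycle_cache_size in 9i):

alter table sales_history storage (buffer_pool recycle);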

If the data buffer hit ratio is useless, then why is Oracle using the
v$db_cache_advice view as a component of the 10g self-tuning memory?

Is Oracle on the wrong track with 10g Automatic Memory Management?

************************************

Again, my problem was with this statement:

"A hit ratio in excess of 99% often indicates the existence of
extremely inefficient SQL"

I was simply hoping that someone could explain why a stellar BHR often
indicates poorly optimized SQL. Can someone explain the "OFTEN" part
here?

Niall Litchfield

Nov 21, 2003, 8:14:30 AM

"Geomancer" <pharfr...@hotmail.com> wrote in message
news:cf90fb89.03112...@posting.google.com...

> Again, my problem was with this statement:
>
> "A hit ratio in excess of 99% often indicates the existence of
> extremely inefficient SQL"
>
> I was simply hoping that someone could explain why a stellar BHR often
> indicates poorly optimized SQL. Can someone explain the "OFTEN" part
> here?

Ways to get a high hit ratio.

1. Have everything in memory before hand and read it just enough.
2. Repeatedly read the same data over and over and over and.....

I think the comment (and bear in mind that Cary works almost exclusively
with systems with performance problems) is really saying that when you have
spectacularly high hit ratios it is most likely that point 2 is happening
rather than point 1. You will *usually* get this where

you are doing nested loop joins rather than a merge or hash join.
you are repeatedly issuing the same SQL with different literals, or in a
loop rather than using array fetches.
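
A rough way to spot the second pattern is to look for statements that
differ only in their literals (a sketch; the 40-character prefix is
arbitrary):

select substr(sql_text, 1, 40) stmt_prefix, count(*)
  from v$sqlarea
 group by substr(sql_text, 1, 40)
having count(*) > 20
 order by count(*) desc;

Hundreds of near-identical rows usually mean missing bind variables.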

mcstock

Nov 21, 2003, 1:22:09 PM

most of the nasty performance problems i've seen have to do not with how
efficiently the work is being done (i.e., good hit-ratio) but with how much
unnecessary work is being done

one of the examples in the paper indicates the true issue -- i think it was
something like ~90% efficiency getting 10 blocks vs ~99% efficiency getting
10,000 blocks for the same resultset
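
to put rough numbers on it: with hit ratio = (gets - reads)/gets, 10 gets
with 1 disk read scores 90%, while 10,000 gets with 100 disk reads scores
99% -- so the "better" ratio did 1000x the logical work and 100x the
physical work for the same resultset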

i still find BUFFER_GETS in V$SQLAREA to be a real good indicator of where
problems are hiding -- the higher the value, the more suspicious i am of the
SQL
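
for example, something like this (a sketch -- on a real system i'd also
normalize by executions):

select sql_text, buffer_gets, executions,
       round(buffer_gets / greatest(executions, 1)) gets_per_exec
  from v$sqlarea
 order by buffer_gets desc;

the statements at the top of that list are the first ones i'd pull apart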

-- mcs


"Geomancer" <pharfr...@hotmail.com> wrote in message
news:cf90fb89.03112...@posting.google.com...

ctc...@hotmail.com

Nov 21, 2003, 2:14:04 PM

pharfr...@hotmail.com (Geomancer) wrote:
> Cary Millsap makes the assertion that a buffer hit ratio of > 99%
> OFTEN indicates inefficient SQL:
>
> http://www.hotsos.com/dnloads/1.Millsap2001.02.26-CacheRatio.pdf
>
> According to Mr. Millsap:
>
> "A hit ratio in excess of 99% often indicates the existence of
> extremely inefficient SQL that robs your system's LIO capacity."
>
> With 30 gigabyte data buffers becoming more common

Er, more common than what?

> and RAM caches
> approaching 100% for small systems, I wonder whether a 99.9% data
> buffer hit ratio is more likely due to heavy caching of frequently
> referenced objects than to some mysterious un-tuned SQL.

> To me, this does not make any sense, because many well-tuned systems
> benefit from additional RAM. The v$db_cache_advice view was
> introduced in 9i for this very reason.
>
> Is this another Myth, or am I missing something?

I think you are missing this quote from the same paper:

"However, even this correlation is not reliable, because even
extraordinary high hit ratios can be generated by databases with
legitimately optimized SQL workload."

Morons can turn anything into a myth. That certainly isn't
Millsap's fault.

Xho


ctc...@hotmail.com

Nov 21, 2003, 2:27:10 PM

Do dirty blocks really generally get reread after being written to disk?
If it's written out because someone needs a free cache block, then sure
it needs to be re-read next time it's needed. But if it gets written out
due to a checkpoint or just a bored DBWR, isn't it still available in
memory, just marked clean rather than dirty?

Xho


Daniel Morgan

Nov 21, 2003, 4:22:59 PM

Niall Litchfield wrote:

<snipped>


>
> A 30GB SGA common? Surely not. For a system written without bind variables
> and with 200 or so concurrent users against a 50GB database we require
> under 1GB of SGA. To be honest, I'd be sceptical of any system that had an
> SGA of more than 2-4GB, but I'm sure those who deal with the Boeings and
> BTs of this world can educate me if I'm wrong.
>

<snipped>

Having been at Boeing twice as a consultant I'll step forward and
answer your question.

I too would be skeptical of an SGA larger than 2-4GB, even in that
environment.

There are times and places for larger. But they are as common as the
requirement for tables with more than 100 columns.

Greatly overused ... rarely the best solution.
--
Daniel Morgan
http://www.outreach.washington.edu/ext/certificates/oad/oad_crs.asp
http://www.outreach.washington.edu/ext/certificates/aoa/aoa_crs.asp
damo...@x.washington.edu
(replace 'x' with a 'u' to reply)

Daniel Morgan

Nov 21, 2003, 4:28:07 PM

mcstock wrote:

> most of the nasty performance problems i've seen have to do not with how
> efficiently the work is being done (i.e., good hit-ratio) but with how much
> unnecessary work is being done
>

<snipped>
>

In my experience most tuning problems have nothing to do with the
SGA or hit ratios or anything requiring senior-level skills. Most
tuning problems come down to laziness and ignoring basic skills.

How many people are really running Explain Plan or autotrace?
How many people are really running DBMS_PROFILER?
How many people build indexes and never check to see if they are used?
How many people just assume EXISTS is faster than IN and never verify?
And as Tom Kyte would likely say ... how many people are really using
bind variables?

And I'll bet the answer to all of the above questions is not more than
5% of all developers and DBAs.

mcstock

Nov 21, 2003, 4:43:20 PM

interesting and valid points, but i'm not sure how they relate to my post...

"Daniel Morgan" <damo...@x.washington.edu> wrote in message
news:1069450115.433539@yasure...

Daniel Morgan

Nov 21, 2003, 4:47:43 PM

mcstock wrote:

> interesting and valid points, but i'm not sure how they relate to my post...

Directly ... they don't.

But I thought it important to remind people that before they go climbing
to the top of the tree to find something to eat ... there are plenty of
apples hanging on the lower branches.

Richard Foote

Nov 22, 2003, 8:57:33 AM

"Geomancer" <pharfr...@hotmail.com> wrote in message
news:cf90fb89.03112...@posting.google.com...

Hi Geomancer,

Having just spent 3 days this week with Cary and Gary Goodman, I'm sure he
would get a giggle that his thoughts on hit ratios would be considered a
myth!!

Currently it's thought by many and promoted by some that a high BHR means
the "database" must be well tuned and is a good thing. The point of course
is that it doesn't necessarily mean any such thing, and if what Cary has
done is question any such opinions, then that's great.

I understand it's the word *often* that you have difficulty with. Putting
'often' into percentage terms is difficult, but I would suggest there are
many, many databases out there in Oracle Land (enough to justify the word
'often') that have very high BHRs because of very
poorly tuned SQL and other factors, not because they're optimally tuned.
This is Cary's point.

And it only takes *one* piece of what I technically define as "Crap" code
to inflate the BHR to incredibly high levels while at the same time
killing or impacting *database* performance.

I'm probably more sympathetic to BHRs than many. However, the BHR provides
only one very small piece of the tuning puzzle, one that needs to be put
into perspective. It can be used as an indicator of whether the buffer
cache is set way too low/high and nothing more. And its actual *value* is
of little consequence; there is no such thing as an ideal value x.

Does a 99.9% BHR mean the database/buffer cache/SQL is well tuned?
Possibly.

Does a 99.9% BHR mean the database/buffer cache/SQL is poorly tuned?
Possibly.

So what does a 99.9% BHR actually mean and represent? Without being able
to answer this question in its fullness, the figure is meaningless.

You get the point.

Let me ask you a very simple question. If you could cache your entire
database in memory, the whole damn lot (say 50GB), so that you could
guarantee all objects would always be found in memory and, after the
database has warmed up, a 100% hit ratio is assured, would this be a good
thing? Would you have to worry about tuning? Is there such a thing as
inefficient SQL? Could we relax and spend more time watching David Bowie
DVDs????

Interesting question which might further explain this issue ;)

Cheers

Richard


Michael J. Moore

Nov 22, 2003, 1:02:27 PM

Checking in late on this one, but my reaction to what Mr. Millsap is saying
is basically "Well, duh!"
It is like putting a program into a loop, watching the CPU race, and
thinking that means you have a very efficient program. It seems rather
obvious that the hit ratio, as an isolated metric, is rather meaningless.

Also, there is that old adage: "If it is too good to be true, it probably
is." Well, 99% is too good to be true, so it is very suspicious, and I
think Mr. Millsap is saying that the number one suspect would be a badly
tuned SQL statement.

So, if the hit ratio is fairly low, and you know your SQL is well tuned,
then throwing more RAM at the problem is probably a good idea. Right?

But this raises some questions in my mind that some of the gurus in here may
like to comment on.

"When is a PHYSICAL IO NOT A Physical IO? and does it matter?"
I am thinking about RAM disk cache here, and RAID and any other type of
physical device that may look like a physical IO to Oracle but may not be
requiring an actual disk read. And then there is the problem of other
applications running on the same system that may be determining if Oracle's
request for a physical IO can be serviced by a Logical IO at the OS/device
level.

Goodbye science, hello art.

Mike


"Geomancer" <pharfr...@hotmail.com> wrote in message
news:cf90fb89.03112...@posting.google.com...

Daniel Morgan

Nov 22, 2003, 5:05:30 PM

Michael J. Moore wrote:

> Checking in late on this one, but my reaction to what Mr. Millsap is saying
> is basically "Well, duh!"
> It is like putting a program into a loop, watching the CPU race, and
> thinking that means you have a very efficient program. It seems rather
> obvious that the hit ratio, as an isolated metric, is rather meaningless.
>
> Also, there is that old adage: "If it is too good to be true, it probably
> is." Well, 99% is too good to be true, so it is very suspicious, and I
> think Mr. Millsap is saying that the number one suspect would be a badly
> tuned SQL statement.
>
> So, if the hit ratio is fairly low, and you know your SQL is well tuned,
> then throwing more RAM at the problem is probably a good idea. Right?
>
> But this raises some questions in my mind that some of the gurus in here may
> like to comment on.
>
> "When is a PHYSICAL IO NOT a physical IO? And does it matter?"
> I am thinking about RAM disk cache here, and RAID, and any other type of
> physical device that may look like physical IO to Oracle but may not
> require an actual disk read. And then there is the problem of other
> applications running on the same system, which may determine whether
> Oracle's request for a physical IO can be serviced by a logical IO at the
> OS/device level.
>
> Goodbye science, hello art.
>
> Mike
>
>

<snipped>

Not claiming guru status but I'll give you my opinion:

If it ain't broke, don't fix it. By which I mean that there are enough
real problems in any database application that developers and DBAs
should be focusing on those before worrying about hit ratios and
whether an IO is or is not truly a physical IO.

I've yet to see the database application where such esoteric issues
were the root cause of the problem. Concentrate on the low-hanging
fruit ... indexes, bind variables, explain plan, autotrace, and hold
code reviews ... and you'll likely never have to care.

Which doesn't mean the topic isn't important, but rather that
too often people get hung up on the esoteric and don't deal with
the all too obvious.

Geomancer

Nov 22, 2003, 8:12:09 PM

> could we relax and spend more time watching David Bowie DVDs ????

Um, who is David Bowie?

Mladen Gogala

Nov 23, 2003, 10:09:45 PM

On Thu, 20 Nov 2003 18:53:02 -0800, Geomancer wrote:


> According to Mr. Millsap:
>
> "A hit ratio in excess of 99% often indicates the existence of extremely
> inefficient SQL that robs your system's LIO capacity."
>
> With 30 gigabyte data buffers becoming more common and RAM caches
> approaching 100% for small systems, I wonder whether a 99.9% data
> buffer hit ratio is more likely due to heavy caching of frequently
> referenced objects than to some mysterious un-tuned SQL.
>
> To me, this does not make any sense, because many well-tuned systems
> benefit from additional RAM. The v$db_cache_advice view was introduced
> in 9i for this very reason.
>
> Is this another Myth, or am I missing something?

Yes, you are missing something. Basically, if you cache your whole
database, your performance gain will not be as large as you may expect,
because, with all the overhead of Oracle processing (consistent image,
various locks and latches required to maintain internal structures), LIO
("logical I/O") is a fairly expensive beast, with its duration shooting
close to the millisecond range. Performing several million IO calls from
memory instead of from disk will be several orders of magnitude faster,
but still an order of magnitude slower than performing a few hundred IO
calls from either memory or disk. Cary's approach looks toward reducing
the number of IO calls in the first place.

Second, there is a question of logic. Before I proceed, let me explain my
background. I'm a senior Oracle DBA from Croatia (ex-YU) and I have been a
US resident since March 1997. In the US, I've worked for a software
development company (NYC, 1997-1999) and a large HMO in CT (1999-2003),
and now I'm working for a hedge fund in CT. Before coming to the US, I
worked for a large international telecom. Hopefully, that establishes my
credentials as a senior Oracle DBA.

All calls from users coming to me have always started with "why is my
application slow today". Answering that question with "the cache hit ratio
on the database is bad today" is, essentially, the same thing as saying
"there is something strange with the Force today" or pulling out the
"solar flares" excuse, for those of us who appreciate the BOFH.
Traditionally, the BCHR has not even been a good indicator of problems,
especially not when you encounter things like "global cache locks".

There is only one logical way to continue the conversation with the
complaining user: ask him what exactly is slow and what exactly he or she
is doing. The goal is to uniquely identify the user session, and that is
why one should always know the application system that is being used to
access the database. When and if you identify the session (it is sometimes
impossible to do so, especially with JDBC drivers taking advantage of
connection pooling), you, generally speaking, first proceed by querying
V$SESSION_WAIT, which is probably the single most important V$ table in
the database. That table will give you the event(s) that the database
session is waiting for, and those are the ones that have to be addressed.

From here, it is a question of the DBA's knowledge of programming tools
and practices, database internals, and political skills (needed in order
to get the vile creatures called "developers" to fix their stuff, as well
as to make users lay the blame on them and not on the DBA; that is the
so-called CYA methodology, so frequently needed in modern companies). That
is the essence of Cary Millsap's method (called "Cary's method" even
though it was first documented by Anjo Kolk in his "wait" paper for
Oracle7 and later in the YAPP paper). The BCHR doesn't enter the picture
at any stage; it is usually used in the same way as a hare's paw,
four-leaf clover, horseshoe and other good luck charms. As I don't believe
in magic, I don't use it. Cutting down the
number of IOs is the best and also the hardest strategy one could attempt.
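
To make that concrete: once you have the SID, the first look is usually
something like this (standard V$SESSION_WAIT columns; a sketch only):

select sid, event, p1, p2, p3, wait_time, seconds_in_wait, state
  from v$session_wait
 where sid = :sid;

Whatever event dominates there (db file sequential read, latch free,
enqueue, global cache locks...) is what you chase, not the hit ratio.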


--
None of us is as dumb as all of us.
(http://www.despair.com/meetings.html)

Mladen Gogala

Nov 23, 2003, 10:21:42 PM

On Sat, 22 Nov 2003 13:57:33 +0000, Richard Foote wrote:

> So what does a 99.9% BHR actually mean and represent? Without being able
> to answer this question in its fullness, the figure is meaningless.
>
> You get the point.

Hey there, your name sounds familiar :). The BCHR is important because if
the BCHR is low, cows will produce only male calves and it will do Bad
Things to the milk, as well as cause the wine to turn itself into vinegar.
Clearly, a low BCHR comes directly from Beelzebub himself. People believe
in the powers of a hare's paw, garlic (which actually does bring good
luck, nutrition-wise), an old horseshoe or four-leaved clovers, so why are
you so opposed to the BCHR, which plays the very same role as those holy
items? As for Mr. Millsap, if I were him, I would laugh my a...rteries
off, not just "have a chuckle".

Mladen Gogala

Nov 23, 2003, 10:34:12 PM

On Fri, 21 Nov 2003 06:38:41 +0100, Sybrand Bakker wrote:

> You probably subscribe to the Burleson school of 'tune by hit ratios'.
> Millsap is advocating that you tune the SQL rather than just
> throw memory at the problem.

Hey there, Don Burleson has changed his evil ways :). He has completely
accepted Cary's and Anjo's arguments. There is no "Burleson school" as
opposed to a "Millsap school". I have several of Don Burleson's books
(most notably, the Statspack book) and I find him to be a technically very
strong and perfectly reasonable author. Furthermore, he has a publishing
company (called "Rampant Books") which is publishing some of the most
cutting-edge books in the field. Don Burleson really is a good guy,
despite the fact that I've never met him in person.

Joel Garry

Nov 25, 2003, 8:38:21 PM

Daniel Morgan <damo...@x.washington.edu> wrote in message news:<1069538755.832436@yasure>...

The paper says explicitly, "Certainly, LIO processing is the dominant
workload component in a competently managed system."

I'd say you should be considered a guru, since you've hit right on the
weakness of the paper, and by extension a problem with a lot of the
criticisms we have seen here in cdos - that is, most systems are not
"competently managed," whatever that means. One needs to somehow get
the system close to reasonable before one should worry about LIO's.
Even before worrying about SQL tuning. Most of the criticism I've
seen about the various tuning schemes seem to focus on particular
issues, and even those are misrepresented as they are yanked out of
context, while ignoring the reality of the zeitgeist.

jg
--
@home.com is bogus.
http://www.comics.com/comics/herman/archive/herman-20031120.html

Geomancer

Nov 26, 2003, 10:05:46 AM

> Cutting down the
> number of IOs is the best and also the hardest strategy one could attempt.

Great point Mladen.

So, which is best for, say, a 20 gig Oracle database?

A - 100% caching with db_cache_size = 25g

B - A small db_cache_size with solid-state disk?

Essentially, the question is:

If LIO is also expensive, then would PIO on a super-fast RAM-SAN be a
better choice?

It's only going to be a few years until almost all Oracle databases run
on solid-state storage, so this is an important question!

paolas...@gmail.com

Sep 30, 2016, 6:12:19 AM

Hi Anurag,

I know it's a very old conversation... but I'm looking for this article and I can't find it. It's not on https://www.hotsos.com anymore.
Is there any chance that you have it available?

Thank you so much for your time!

Regards,
Paola.

cjac...@rallydev.com

Sep 30, 2016, 5:25:38 PM

The article is titled "Why You Should Focus on LIOs Instead of PIOs", by Cary Millsap; a Google search will turn up multiple links to it.

Craig Jackson

TheBoss

Oct 12, 2016, 7:05:53 PM

cjac...@rallydev.com wrote in
news:875c6439-3652-4154...@googlegroups.com:
In 2008 the copyright for Cary's papers was transferred from Hotsos to
Method R Corporation. The version of the paper that reflects this (and lots
of other interesting papers!) can be found at:

http://method-r.com/papers

--
Jeroen

