
Cache a table


Kalle

Oct 21, 2003, 1:00:52 AM
Hi all,

When would you cache a table? What are the general rules if the amount
of memory isn't a problem?

tia
Kalle

Howard J. Rogers

Oct 21, 2003, 6:09:46 AM
Kalle wrote:

The general rule is don't bother.

Well, OK... what I mean is that if you are talking about the 'CACHE' clause
of the alter table or create table statement, don't bother. It is a pretty
poor way of achieving better performance, and has long since been
superseded by the multiple buffer pools feature (first introduced in
Oracle 8.0).

And the rules of thumb for those multiple buffer pools are:

Tables smaller than 10% of your default pool are candidates for housing in
the keep pool.

Tables bigger than 200% of your keep pool are candidates for housing in the
recycle pool.

But those are only rules of thumb.
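By way of illustration (the table names here are hypothetical, and this
assumes the pools have already been sized, via BUFFER_POOL_KEEP /
BUFFER_POOL_RECYCLE in the init.ora, or DB_KEEP_CACHE_SIZE /
DB_RECYCLE_CACHE_SIZE on 9i), the assignment itself is just a storage
attribute:

```sql
-- Small, frequently-scanned lookup table -> keep pool
ALTER TABLE small_lookup STORAGE (BUFFER_POOL KEEP);

-- Huge table whose blocks should age out quickly -> recycle pool
ALTER TABLE big_history STORAGE (BUFFER_POOL RECYCLE);

-- Check which pool each segment is assigned to
SELECT segment_name, buffer_pool
  FROM dba_segments
 WHERE segment_name IN ('SMALL_LOOKUP', 'BIG_HISTORY');
```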

Regards
HJR
--
--------------------------------------------
See my brand new website, soon to be full of
new articles: www.dizwell.com.
Nothing much there yet, but give it time!!
--------------------------------------------

Chuck

Oct 21, 2003, 9:14:55 AM
"Howard J. Rogers" <h...@dizwell.com> wrote in
news:3f95066c$0$10411$afc3...@news.optusnet.com.au:

> Kalle wrote:
>
>> Hi all,
>>
>> when you would cache a table. what are the general rules if the
>> amount of memory isn't a problem...
>>
>> tia
>> Kalle
>
> The general rule is don't bother.
>
> Well, OK... what I mean is that if you are talking about the 'CACHE'
> clause of the alter table statement, or create table, statement, don't
> bother. It is a pretty poor way of achieving better performance, and
> has long since been superceded by the multiple buffer pools feature
> (first introduced in Oracle 8.0).
>
> And the rules of thumb for those multiple buffer pools are:
>
> tables smaller than 10% of your default pool are candidates for
> housing in the keep pool
>
> Tables bigger than 200% of your keep pool are candidates for housing
> in the recycle pool.
>
> But those are only rules of thumb.
>
> Regards
> HJR

And I would add that for the keep pool, you should only include tables
that are frequently scanned. No point in wasting memory on a table that
isn't, even though it meets the ROT for size.

I can think of one case where using the pre-8.0 style of caching might
apply. That would be a small table that is only used in a long running
batch job that does frequent scans of that table. It would end up being
cached in memory for the duration of the batch job, but eventually
dropped out of memory when the job finishes.
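For anyone unfamiliar with the old syntax, that pre-8.0 style of caching is
just an attribute flip on the table (table name hypothetical):

```sql
-- Blocks read by full scans of this table go to the MRU end
-- of the LRU list, instead of the usual LRU end:
ALTER TABLE batch_lookup CACHE;

-- ... run the long-running batch job that scans it repeatedly ...

-- Put things back to normal once the job is finished:
ALTER TABLE batch_lookup NOCACHE;
```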

Anurag Varma

Oct 22, 2003, 2:06:28 AM

"Chuck" <chu...@softhome.net> wrote in message news:Xns941B5E141AD77...@130.133.1.4...

If a table is frequently scanned (i.e. very hot), it is going to remain in the
buffer pool no matter what. It's no use putting it in the KEEP pool.

The tables which are medium-high in scans are better candidates
for the KEEP pool.

Anurag


Howard J. Rogers

Oct 22, 2003, 6:48:10 AM
Anurag Varma wrote:

>
> If a table is frequently scanned (i.e. very hot), it is going to remain in
> the buffer pool no matter what. Its no good use putting it in the KEEP
> pool.

Disagree. The issue is not whether it would tend to want to stay in the
buffer cache or not, but whether it is at risk of being dislodged by a
rogue, huge, tablescan. By putting even a frequently-accessed table into
the keep pool, you ensure it can't be dislodged by scans against large
tables (assuming you haven't been daft enough to ask for them to go into
the keep pool as well, of course!!).

Such cache 'partitioning' has got to be a good thing in its own right.
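And if you do partition the cache that way, you can sanity-check how each
pool is behaving with something along these lines (a sketch;
v$buffer_pool_statistics has been around since 8i):

```sql
SELECT name,
       physical_reads,
       db_block_gets + consistent_gets AS logical_reads,
       ROUND(1 - physical_reads /
             NULLIF(db_block_gets + consistent_gets, 0), 3) AS hit_ratio
  FROM v$buffer_pool_statistics;
```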

Anurag Varma

Oct 22, 2003, 9:03:14 AM

"Howard J. Rogers" <h...@dizwell.com> wrote in message news:3f9660e8$0$9554$afc3...@news.optusnet.com.au...

Howard,

In your argument I would ask why that big table is not put in the recycle
pool. If a segment is extremely hot, then random access to large segments
should not displace its cached buffers. After all, a buffer block earns its
place by touch count.

However, warm-hot segments can be in danger of their blocks being displaced
by a large segment read.

Now in a data warehouse, most of the dimension tables, and especially
their indexes, should probably be in the keep pool, while the fact tables
should probably be kept in the recycle pool.

Although, I would agree that if the database shows unpredictable activity of
ad-hoc users making large/medium table scans on a regular basis, then
the advice of putting extremely hot segments in the keep pool makes more sense.

Anurag


Chuck

Oct 22, 2003, 10:45:16 AM
"Howard J. Rogers" <h...@dizwell.com> wrote in
news:3f9660e8$0$9554$afc3...@news.optusnet.com.au:

> Anurag Varma wrote:
>
>>
>> If a table is frequently scanned (i.e. very hot), it is going to
>> remain in the buffer pool no matter what. Its no good use putting it
>> in the KEEP pool.
>
> Disagree. The issue is not whether it would tend to want to stay in
> the buffer cache or not, but whether it is at risk of being dislodged
> by a rogue, huge, tablescan. By putting even a frequently-accessed
> table into the keep pool, you ensure it can't be dislodged by scans
> against large tables (assuming you haven't been daft enough to ask for
> them to go into the keep pool as well, of course!!).
>
> Such cache 'partitioning' has got to be a good thing in its own right.
>
> Regards
> HJR

Or whether it will dislodge other things in the buffer cache that would
otherwise still be there if it were placed in the keep pool to start
with.

Rene Nyffenegger

Oct 22, 2003, 1:01:01 PM

>
> And I would add that for the keep pool, you should only include tables
> that are frequently scanned. No point in wasting memory on a table that
> isn't, even though it meets the ROT for size.


Excuse my ignorance, but what is ROT?

Rene

--
Rene Nyffenegger
http://www.adp-gmbh.ch

Brian Peasland

Oct 22, 2003, 2:30:35 PM
Rule of Thumb?

Brian

--
===================================================================

Brian Peasland
dba@remove_spam.peasland.com

Remove the "remove_spam." from the email address to email me.


"I can give it to you cheap, quick, and good. Now pick two out of
the three"

Rene Nyffenegger

Oct 22, 2003, 4:45:27 PM

> Rule of Thumb?

Yes, that makes sense.
Thanks.



>> > And I would add that for the keep pool, you should only include tables
>> > that are frequently scanned. No point in wasting memory on a table that
>> > isn't, even though it meets the ROT for size.
>>
>> Excuse my ignorance, but what is ROT?
>>
>> Rene
>>
>>
>> --
>> Rene Nyffenegger
>> http://www.adp-gmbh.ch
>


--
Rene Nyffenegger
http://www.adp-gmbh.ch

Geomancer

Oct 22, 2003, 7:47:21 PM
"Howard J. Rogers" <h...@dizwell.com> wrote in message news:<3f9660e8$0$9554$afc3...@news.optusnet.com.au>...


> Disagree. The issue is not whether it would tend to want to stay in the
> buffer cache or not, but whether it is at risk of being dislodged by a
> rogue, huge, tablescan.

Howard, "Dislodged" is another of those Myths you hate so much!

In Oracle9i, full-table scans changed to make FTS rows go directly
into the PGA for the user, completely bypassing the block cache.

Remember, back in Oracle7 there was a special recycle area at the LRU
end of the cache for FTS (which could NEVER page-out MRU blocks).
This was changed in 9i to put FTS blocks directly into the PGA because
Oracle knew that no other task would use them.

On Oracle's OTN site we see the "Oracle Expert" article (your beloved
Mr. Niemiec) state "a full table scan is put at the cold end of the
LRU (Least Recently Used) list."

I verified this with a simple experiment. Start a fresh instance,
read a small table, then do an FTS against a huge table and run a
query against v$bh. You will NOT see the FTS rows in the data buffer!

Howard J. Rogers

Oct 23, 2003, 5:21:29 AM
Geomancer wrote:


I'm not entirely sure what you're getting at, but none of this makes much
sense. They invented the recycle buffer precisely so that full table scans
can find a home in the Buffer Cache which wouldn't 'dislodge' other buffers
of more long-lasting use.

The idea that Oracle refuses to cache blocks read via FTS is just silly: how
could Oracle "know no other task would use them"? We might just as well
abolish the buffer cache entirely and have done with it, if that were
really true.

There *are* direct reads which by-pass the buffer cache, but a full table
scan against a regular table wouldn't be one of them.

If you could be more precise about your sources, or what you are suggesting,
I'll happily discuss it, but generally FTSes go via the buffer cache.

Howard J. Rogers

Oct 23, 2003, 7:54:02 AM
Howard J. Rogers wrote:

> Geomancer wrote:
>
>> "Howard J. Rogers" <h...@dizwell.com> wrote in message
>> news:<3f9660e8$0$9554$afc3...@news.optusnet.com.au>...
>>
>>> Disagree. The issue is not whether it would tend to want to stay in the
>>> buffer cache or not, but whether it is at risk of being dislodged by a
>>> rogue, huge, tablescan.
>>
>> Howard, "Dislodged" is another of those Myths you hate so much!
>>
>> In Oracle9i, full-table scans changed to make FTS rows go directly
>> into the PGA for the user, completely bypassing the block cache.
>>
>> Remember, back in Oracle7 there was a special recycle area at the LRU
>> end of the cache for FTS, (which could NEVER page-out MRU blocks).
>>
>> This was changed in 9i to put FTS blocks directly into the PGA because
>> Oracle knew that no other task would use them.
>>
>> In Oracle's OTN site we see the "Oracle Expert" article (your beloved
>> Mr. Niemeic) state "a full table scan is put at the cold end of the
>> LRU (Least Recently Used) list."
>>
>> I verified this with a simple experiment. Start a fresh instance,
>> read a small table, then do an FTS gainist a huge tables and run a
>> query against v$bh. You will NOTsee the FTS rows in the data buffer!
>>
>> It seems that this "dislodging" may be one of those urban myths!

Having re-read your post several times, I still can't quite work out what
you're on about, so here's a test I've just done on Oracle 9i Release 2,
Red Hat 9:

create tablespace bhtest
datafile '/u01/app/oracle/oradata/lx92/bh01.dbf' size 10m;

SQL> select file#, name from v$datafile;

FILE# NAME
--------------------------------------------------------------------------------
1 /u01/app/oracle/oradata/lx92/system01.dbf
2 /u01/app/oracle/oradata/lx92/undotbs01.dbf
3 /u01/app/oracle/oradata/lx92/example01.dbf
4 /u01/app/oracle/oradata/lx92/indx01.dbf
5 /u01/app/oracle/oradata/lx92/tools01.dbf
6 /u01/app/oracle/oradata/lx92/users01.dbf
7 /u01/app/oracle/oradata/lx92/bh01.dbf

[You might note the new datafile is number 7]

create table t1 tablespace bhtest
as select * from dba_objects; (7000+ rows inserted)

insert into t1 select * from t1 (repeat until you run out of space)
commit;

[That's a big table now, with 49,680 rows, and approx. 10M in size]

startup force;
select * from scott.emp [14 rows... a small table, as you specified]
select * from t1 (wait for the scrolling to stop)

select count(*) from v$bh where file#=7;
COUNT(*)
--------
2304

Since datafile 7's only just been created, and since it only has one table
in it, and since I did a small scan and then a FTS like you asked, what
were you saying about blocks from a FTS not going into the buffer cache??

By the way, the blocks from EMP were no longer in my (non-multi-pool) cache:
they'd been "dislodged" by the FTS.

You might want to repeat your test.

Richard Foote

Oct 23, 2003, 10:24:05 AM
"Howard J. Rogers" <h...@dizwell.com> wrote in message
news:3f97c125$0$24515$afc3...@news.optusnet.com.au...

Hi Howard,

Reading FTS data directly into the PGA and bypassing the buffer cache is
news to me as well (and of course easily proven to be untrue by querying
x$bh).

However, just to perhaps explain things a little further, the algorithm
used to determine how a block "ages" within the buffer cache is a little
more complicated than a simple LRU list, although there is still the concept
of an LRU. Oracle now splits the buffer cache into two sections, a "hot"
and a "cold" section, and Oracle determines which section a block belongs in
by a combination of the number of times the block has been "touched" and
the last time the block was touched (these values can be seen in the TCH and
TIM columns in x$bh).

Now if a block has been touched sufficient times, it earns the right to move
to the "hot" end and head the MRU list, but unless the blocks keep getting
touched they start to move down and age. All these settings can be viewed
and modified (not that I would recommend it) with the _DB_AGING_% and
_DB_PERCENT_% list of parameters.
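A quick way to eyeball those touch counts for a given segment (x$bh is only
visible when connected as SYS; I'm using the BOWIE table from the demo that
follows):

```sql
SELECT tch, COUNT(*) AS buffers
  FROM x$bh
 WHERE obj = (SELECT data_object_id
                FROM dba_objects
               WHERE object_name = 'BOWIE')
 GROUP BY tch
 ORDER BY tch;
```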

The result of all this means that frequently accessed blocks can be
protected from being "dislodged" (not sure if it's the term I would use, but
I know what you mean ;) by large FTS because it's only the cold portion of
the buffer cache that is going to be initially affected.

When a block is read in via an index, the block is kinda loaded into the MRU
bit of the cold section of the buffer cache (roughly the "middle" of the LRU
list, so to speak). However, when a block is loaded via a FTS, this behaviour
differs and the blocks are loaded into the LRU bit of the buffer cache. This
is designed to prevent the "dislodgement" of potentially useful stuff at the
colder end of the LRU. Therefore the effect of a FTS on nocache tables is
minimal, an important point.

Simple demo on 9.2, the Bowie table is approximately 13,000 blocks, small is
117 blocks:

SQL> alter table bowie nocache;

Table altered.

SQL> select object_name, object_id, data_object_id from dba_objects where
object
_name in ('BOWIE', 'SMALL');

OBJECT_NAME OBJECT_ID DATA_OBJECT_ID
--------------- ---------- --------------
BOWIE 31379 31379
SMALL 31457 31457

SQL> select * from bowie; (run with autotrace traceonly)

SQL> select count(*) from x$bh where obj=31379;

COUNT(*)
----------
18

Note that only the last few blocks from the FTS actually remain in memory.
If I repeat the select, I still have the same result from x$bh and the same
number of *physical reads" occur each time.

If I run the same thing with my "small" table which has about 117 blocks,
the same thing happens ....

SQL> alter table small nocache;

Table altered.

SQL> select * from small;

SQL> select count(*) from x$bh where obj=31457;

COUNT(*)
----------
18

Note that again only the last few blocks from the FTS actually remain in
memory. If I repeat the select, I still have the same result from x$bh and
again the same number of physical reads occur each time.

OK, lets change my small table and cache the thing and see if I get a
different result ...

SQL> alter table small cache;

Table altered.

SQL> select * from small;

SQL> select count(*) from x$bh where obj=31457;

COUNT(*)
----------
117

I now see that all 117 blocks (that's all data blocks + the segment header)
are now cached as expected. Repeated reruns of the select now generate *no*
physical I/Os.

But what if I now run a select on my "big" BOWIE table, what effect will
this have on the SMALL cached blocks ?

SQL> select * from bowie;

SQL> select count(*) from x$bh where obj=31379;

COUNT(*)
----------
18

Nothing new here, only the last few blocks again remain from the BOWIE table
with the same physical I/Os generated.

SQL> select count(*) from x$bh where obj=31457;

COUNT(*)
----------
117

and thankfully nothing has changed with the SMALL table as a result. These
blocks still remain cached and have not been "dislodged" as a result of the
FTS on the big BOWIE table (as they sit safely somewhere near the middle,
cold side of the LRU).

Finally, what if we play silly buggers and decide to cache the big BOWIE
table ...

SQL> alter table bowie cache;

Table altered.

SQL> select * from bowie;

SQL> select count(*) from x$bh where obj=31379;

COUNT(*)
----------
1338

We now see that a whole heap of buffers have now been cached, approximately
10%. However, again the physical I/Os remain constant because we are still
not effectively caching the table (the undocumented parameters behind the
scenes kick in to prevent the whole cache from flooding).

But the effect on poor SMALL...

SQL> select count(*) from x$bh where obj=31457;

COUNT(*)
----------
1

only one poor block (the header) has survived the experience :(

So the touch aging LRU algorithm, the various hot/cold portions of the
buffer cache and whether a table is cached or nocached all have an effect on
how the end object is actually cached.

It's only an intro but it's a start to any newbies listening in :)

Cheers

Richard


Jack Wang

Oct 23, 2003, 1:29:18 PM
Howard,

Could you elaborate on 'direct read' a bit more? I know reading LOBs is a
kind of 'direct read'; anything else?

Thanks.
Jack

"Howard J. Rogers" <h...@dizwell.com> wrote in message

news:3f979d65$0$24515$afc3...@news.optusnet.com.au...

Howard J. Rogers

Oct 23, 2003, 4:00:08 PM
Richard Foote wrote:

[large snip]


>> [That's a big table now, with 49,680 rows, and approx. 10M in size]
>>
>> startup force;
>> select * from scott.emp [14 rows... a small table, as you specified]
>> select * from t1 (wait for the scrolling to stop)
>>
>> select count(*) from v$bh where file#=7;
>> COUNT(*)
>> --------
>> 2304
>>

[ditto]


Hi Richard,

I've not snipped any of your post, because it's a model of clarity and
accuracy, and should stick around for a while for the benefit of newbies
who might otherwise think the Buffer Cache is a waste of time!

Given that my buffer cache in the test above has about 6000 buffers, only
2304 of which were my T1 table at the end of the FTS, what you describe is,
of course, quite correct.

If that's what "Geomancer" was trying to say, then fair enough. I think his
talk of the PGA completely threw me, however!

Regards
HJR

Geomancer

Oct 23, 2003, 8:57:01 PM
VERY impressive examples.

I got the PGA thing at OracleWorld from someone in the Oracle
real-world performance group, but I must have misunderstood it.

Let me check my notes and get back to you.

Jonathan Lewis

Oct 25, 2003, 12:47:08 PM

It is possible that the person was talking about a feature for
'serial reads direct', which can be enabled through a hidden parameter.
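If it's the parameter I think Jonathan means, it's _serial_direct_read -
undocumented, so strictly for playing with on a test instance (T1 here is
Howard's test table from earlier in the thread):

```sql
-- Undocumented parameter: test systems only.
ALTER SESSION SET "_serial_direct_read" = TRUE;

-- Serial tablescans in this session can now bypass the buffer cache.
SELECT COUNT(*) FROM t1;

ALTER SESSION SET "_serial_direct_read" = FALSE;
```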


--
Regards

Jonathan Lewis
http://www.jlcomp.demon.co.uk

The educated person is not the person
who can answer the questions, but the
person who can question the answers -- T. Schick Jr


One-day tutorials:
http://www.jlcomp.demon.co.uk/tutorial.html
____Belgium__November (EOUG event - "Troubleshooting")
____UK_______December (UKOUG conference - "CBO")


Three-day seminar:
see http://www.jlcomp.demon.co.uk/seminar.html
____UK___November


The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/faq/ind_faq.html


"Geomancer" <pharfr...@hotmail.com> wrote in message
news:cf90fb89.03102...@posting.google.com...

Jonathan Lewis

Oct 25, 2003, 12:55:34 PM

Note in-line

--
Regards

Jonathan Lewis
http://www.jlcomp.demon.co.uk



"Anurag Varma" <av...@hotmail.com> wrote in message
news:E7plb.14636$4s3....@news01.roc.ny...


>
> If a table is frequently scanned (i.e. very hot), it is going to
> remain in the buffer pool no matter what. Its no good use putting
> it in the KEEP pool.
>

A little known detail about tablescans in 8.1 (and probably 9.2 but I
don't recall confirming it) is that the touch count on blocks subject
to tablescans is NOT increased - so even if a "small" table is loaded
into the middle of the LRU chain and repeatedly scanned, it will
always end up falling off the end of the LRU chain as other blocks
are read into the buffer. It never gets promoted to the hot end unless
it is also subject to indexed accesses.
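A rough way to confirm that on a test instance (again via x$bh, connected as
SYS; SMALL_T is a hypothetical small table):

```sql
-- Full-scan the small table a few times...
SELECT /*+ FULL(small_t) */ COUNT(*) FROM small_t;
SELECT /*+ FULL(small_t) */ COUNT(*) FROM small_t;
SELECT /*+ FULL(small_t) */ COUNT(*) FROM small_t;

-- ...then look at the touch counts. If the observation holds,
-- TCH stays low for the data blocks; only the segment header
-- (read via a single-block read) climbs.
SELECT tch, COUNT(*) AS buffers
  FROM x$bh
 WHERE obj = (SELECT data_object_id
                FROM dba_objects
               WHERE object_name = 'SMALL_T')
 GROUP BY tch
 ORDER BY tch;
```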


Richard Foote

Oct 26, 2003, 7:44:25 AM
"Jonathan Lewis" <jona...@jlcomp.demon.co.uk> wrote in message
news:bne9sm$kog$2$8302...@news.demon.co.uk...

Hi Jonathan,

That's a good point. Only the segment header which is read via a single
block read is subject to a touch count increment during a tablescan.

And yes, it's still correct in 9.2.

Cheers

Richard


Anurag Varma

Oct 26, 2003, 4:14:17 PM

"Richard Foote" <richar...@bigpond.com> wrote in message
news:JkPmb.166170$bo1.1...@news-server.bigpond.net.au...

Jonathan,

Yes, I agree that for full table scans the touch counts are not
increased; they increase only under indexed access.
A simple test on 9i shows that it's true.

Anurag


Geomancer

Oct 26, 2003, 10:09:54 PM
> A little known detail about tablescans in 8.1 (and probably 9.2 but I
> don't recall confirming it) is that the touch count on blocks subject
> to tablescans is NOT increased - so even if a "small" table is loaded
> into the middle of the LRU chain and repeatedly scanned, it will
> always end up falling off the end of the LRU chain as other blocks
> are read into the buffer. It never gets promoted to the hot end unless
> it is also subject to indexed accesses.


If this is correct (Jonathan is seldom wrong!), then it would imply
that rows from small table full table scans would be re-read many
times as they age-out of the LRU end of the buffer.

This might cause unnecessary disk I/O.

Would this be justification for caching all small tables that
experience full table scans in the KEEP pool?

Jonathan Lewis

Oct 27, 2003, 2:50:00 AM

Certainly an argument in favour. But only if the
tablescans are very frequent - then you have to
ensure that the KEEP cache is large enough
to hold all relevant objects.

However, if the database is running on buffered
file system, then the cost of re-reading from file
system might not be significant compared to
other 'real' reads.

On the other hand, if you are doing extremely
regular tablescans, you might ask why, and
whether the correct solution is to use indexed
access anyway.


--
Regards

Jonathan Lewis
http://www.jlcomp.demon.co.uk


"Geomancer" <pharfr...@hotmail.com> wrote in message
news:cf90fb89.03102...@posting.google.com...

Geomancer

Oct 27, 2003, 7:42:59 AM
> > If this is correct (Jonathan is seldom wrong!), then it would imply
> > that rows from small table full table scans would be re-read many
> > times as they age-out of the LRU end of the buffer.

> Certainly an argument in favour. But only if the
> tablescans are very frequent - then you have to
> ensure that the KEEP cache is large enough
> to hold all relevant objects.

Yes, it's important to adjust the KEEP pool to allow 100% caching.

> However, if the database is running on buffered
> file system, then the cost of re-reading from file
> system might not be significant compared to
> other 'real' reads.

Agreed, but in a large system my 4 gig disk cache pales next to the 10 gig
worth of small tables.

In my experience with the CBO, reading a table with less than 30
blocks via full table scan (db_file_multiblock_read_count=16) is often
fastest because of the high values for clustering factor on the index.

Is there a way (i.e. large_table_threshold) to tell Oracle to
automatically re-touch these rows?

0 new messages