
Stale statistics


Nag

Jun 30, 2009, 8:29:51 AM
Hi,
We have an OLTP system on 10gR2 with RAC (3 nodes) and a lot of very
active tables, i.e. heavy deletes, inserts and updates. We are only
gathering statistics with a 1% sample. After the system has been up
for a couple of hours, all the statistics become stale. All our
queries use bind variables and the CURSOR_SHARING parameter is set to
EXACT.

Because of this, we only hard parse the first time a statement is
issued, and the plan is set then. After that, even though the
statistics are stale, it still uses the same plan.

What kind of impact will stale statistics have on the system?


Any input will be appreciated.


Thanks,
Naga

Mark D Powell

Jun 30, 2009, 9:04:16 AM

On 10g Oracle provides a dbms_scheduler job that automatically gathers
statistics. Did you disable it or modify it in order to do the 1%
sample size? The default sample size is AUTO, which means the
optimizer picks one based on what it finds.
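
On 10g the job is named GATHER_STATS_JOB; a quick way to check whether
it is still enabled:

    SELECT job_name, enabled, state
      FROM dba_scheduler_jobs
     WHERE job_name = 'GATHER_STATS_JOB';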

By default on 10g histograms are collected. I do not think a 1%
sample will result in very good histograms.
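
If you gather manually, a minimal sketch of a gather that keeps the
defaults (AUTO sample size, AUTO histograms) rather than a fixed 1% --
the schema and table names below are just placeholders:

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'APP',       -- placeholder schema
        tabname          => 'ORDERS',    -- placeholder table
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        method_opt       => 'FOR ALL COLUMNS SIZE AUTO');
    END;
    /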

How are you determining that the statistics are stale after only a
couple of hours?

HTH -- Mark D Powell --

Nag

Jun 30, 2009, 10:15:54 AM
> HTH -- Mark D Powell --

We check user_tab_statistics/dba_tab_statistics for rows where the
column STALE_STATS = 'YES'.
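
I.e. roughly a query like:

    SELECT owner, table_name, stale_stats, last_analyzed
      FROM dba_tab_statistics
     WHERE stale_stats = 'YES'
     ORDER BY owner, table_name;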

joel garry

Jun 30, 2009, 1:35:18 PM

In a nutshell, whether it makes a difference depends on whether the
existing statistics result in the correct plan being chosen for all
the accesses of the data.

In a practical sense, this means you can either let the system gather
statistics when it thinks things are stale, you can gather statistics
when you think they should be gathered, or you can figure out the best
statistics and lock them down. The latter is preferred, because you
don't have so many emergencies operationally. It may be expensive to
implement, though.
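
Locking itself is just a DBMS_STATS call; a minimal sketch, with
placeholder owner/table names:

    BEGIN
      DBMS_STATS.LOCK_TABLE_STATS(ownname => 'APP', tabname => 'ORDERS');
      -- later, if you do need to regather:
      -- DBMS_STATS.UNLOCK_TABLE_STATS(ownname => 'APP', tabname => 'ORDERS');
    END;
    /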

If your site does things that are normal from the point of view of the
assumptions behind the staleness detection, it may be reasonable to
just use the default gathering (with the caveat about histograms being
collected), and just deal with problems as they arise. This worked ok
for me for a while, until a vendor started making metadata changes
while I wasn't in, and the subsequent gathering blew those plans all
to hell. So I reverted those to the old stats and locked them down -
score one for EM, as far as I'm concerned.

Since you've stated your tables are very volatile and we can guess
there must be some legitimate reason you chose the 1% sampling, it
sounds like it would be worth it for you to lock those statistics,
document all plans accessing those tables, and research any
performance problems and respond accordingly. This would be
considered best development practice, anyways.

There's been a lot of debate on the web about when to gather stats. I
believe people have written blogs or white papers about it but don't
have time to look just now.

jg
--
@home.com is bogus.
If at first you don't succeed, you're not Chuck Norris.

Mladen Gogala

Jun 30, 2009, 2:26:00 PM
On Tue, 30 Jun 2009 06:04:16 -0700, Mark D Powell wrote:

> By default on 10g histograms are collected. I do not think a 1% sample
> will result in very good histograms.

That depends. If the table is huge, a 1% sample may be sufficient. One
can even use a larger sample together with block sampling, which will
speed up the collection process. Unfortunately, in 10g one cannot set
BLOCK_SAMPLE as a parameter. One cannot set it as a table preference
in 11g, either.
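
For what it's worth, BLOCK_SAMPLE can still be requested on an
individual gather call; a sketch, with placeholder names:

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'APP',      -- placeholder schema
        tabname          => 'ORDERS',   -- placeholder table
        estimate_percent => 5,
        block_sample     => TRUE);      -- sample whole blocks rather than rows
    END;
    /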

A trick that can be used to combat stale statistics is to set
OPTIMIZER_DYNAMIC_SAMPLING to 6 or higher. If you take a look at the 10053
traces, level 1, you will see that the decisions reached after setting
dynamic sampling to a high value are similar to the decisions
reached after analyzing the table. Of course, dynamic sampling also
introduces an element of instability to the system, but I do find it
useful.
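
For example, at the session level, or per statement via a hint (the
table and column below are just illustrative):

    ALTER SESSION SET optimizer_dynamic_sampling = 6;

    SELECT /*+ dynamic_sampling(t 6) */ COUNT(*)
      FROM orders t
     WHERE status = 'OPEN';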


--
http://mgogala.freehostia.com

John Hurley

Jul 1, 2009, 8:58:42 AM
On Jun 30, 8:29 am, Nag <naga.cha...@gmail.com> wrote:

snip

Don't forget about the changes version 10 introduced when new
statistics are gathered.

By default now... unless you invoke dbms_stats with NO_INVALIDATE set
to FALSE ( a backward weird logic setting if any ever existed ...
shouldn't it be INVALIDATE = TRUE ) ... parsed already active SQL
statements do not get invalidated right away ... they stay around even
though new statistics are in place.
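
In other words, to get the old immediate-invalidation behavior you
have to ask for it explicitly; a sketch, with placeholder names:

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname       => 'APP',       -- placeholder schema
        tabname       => 'ORDERS',    -- placeholder table
        no_invalidate => FALSE);      -- invalidate dependent cursors right away
    END;
    /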

This is a huge change in the behavior of 10 and above but many people
seem to still be unaware of it.

If you run dbms_stats without using the NO_INVALIDATE parameter
( ie ... like 99.9 percent of the world ) then you start running into
the effects introduced by another ( new with 10 ) undocumented oracle
parameter _optimizer_invalidation_period ...

So even if SQL statements logically should be reparsed ... you
wait ... a variable length of time, depending on how much a specific
SQL statement keeps getting issued ( or not ) ... before the new
statistics are reflected in the execution plan.

Worth looking at is this http://forums.oracle.com/forums/thread.jspa?threadID=592771

There's a doc id worth looking at in Metalink but geez I am getting
login errors right now.


John Hurley

Jul 1, 2009, 9:25:15 AM
On Jul 1, 8:58 am, John Hurley <johnbhur...@sbcglobal.net> wrote:

snip

> Don't forget about the changes version 10 introduced when new
> statistics are gathered.
>
> By default now... unless you invoke dbms_stats with NO_INVALIDATE set
> to FALSE ( a backward weird logic setting if any ever existed ...
> shouldn't it be INVALIDATE = TRUE ) ... parsed already active SQL
> statements do not get invalidated right away ... they stay around even
> though new statistics are in place.
>
> This is a huge change in the behavior of 10 and above but many people
> seem to still be unaware of it.
>
> If you run dbms_stats without using the NO_INVALIDATE parameter
> ( ie ... like 99.9 percent of the world ) then you start running into
> the effects introduced by another ( new with 10 ) undocumented oracle
> parameter _optimizer_invalidation_period ...
>
> So even if SQL statements logically should be re parsed ... you
> wait ... a variable length of time depending on how much a specific
> SQL statement keeps getting issued ( or not ) ... before the new
> statistics are reflected in the execution plan.
>

> Worth looking at is this http://forums.oracle.com/forums/thread.jspa?threadID=592771


>
> There's a doc id worth looking at in Metalink but geez I am getting
> login errors right now.

Doc id is 557661.1

Mladen Gogala

Jul 1, 2009, 2:57:50 PM
On Wed, 01 Jul 2009 05:58:42 -0700, John Hurley wrote:

> This is a huge change in the behavior of 10 and above but many people
> seem to still be unaware of it.

There is an even greater change in Oracle 11g. The only reason which
motivates me to recommend upgrading to 11g is the plan stability.

--
http://mgogala.freehostia.com

John Hurley

Jul 1, 2009, 3:10:10 PM
On Jul 1, 2:57 pm, Mladen Gogala <mla...@bogus.email.com> wrote:

snip

> > This is a huge change in the behavior of 10 and above but many people
> > seem to still be unaware of it.
>
> There is an even greater change in Oracle 11g.

What are you referring to exactly please?

> The only reason which motivates me to recommend upgrading to 11g is the plan stability.

We have been pretty happy with 11.1.0.7 and planning to check out and
probably move to 11.2 pretty darn quickly.

