
Help interpreting dba_hist_sqlstat


Chuck

Nov 16, 2007, 1:51:03 PM
Oracle version 10.2.0.2

I need a little help with the dba_hist_sqlstat view in determining why
executions_total - executions_delta doesn't always equal the previous
snapshot's executions_total. Below I've queried the view and shown, for
a single SQL statement, three successive snapshots in which that SQL
appears. Between rows 2 and 3 of the output everything balances out,
but between rows 1 and 2 it does not. Is this because the SQL was
flushed from the library cache between the hourly snapshots (notice the
gap in snap_ids)? If so, why doesn't executions_total get reset to 0,
since the manual defines the column as "Cumulative number of executions
that took place on this object *since it was brought into the library
cache*"?

Thanks.

SQL> SELECT
       to_char(b.begin_interval_time,'dd-MON-yy hh24:mi') begin_interval_time,
       --to_char(b.end_interval_time,'dd-MON-yy hh24:mi') end_interval_time,
       a.snap_id,
       a.executions_total as total,
       a.executions_delta as delta,
       a.executions_total - a.executions_delta diff
     FROM dba_hist_sqlstat a,
          dba_hist_snapshot b
     WHERE a.snap_id BETWEEN 5483 AND 5491
       AND a.sql_id = 'c8afnp575a327'
       AND a.snap_id = b.snap_id
     ORDER BY a.snap_id;

BEGIN_INTERVAL_ SNAP_ID TOTAL DELTA DIFF
--------------- ---------- -------- -------- --------
12-NOV-07 01:00 5483 481,227 6,957 474,270
12-NOV-07 08:00 5490 517,201 6,458 510,743
12-NOV-07 09:00 5491 536,625 19,424 517,201

hpuxrac

Nov 18, 2007, 8:55:37 AM

You say "three successive" snapshorts but that's not what the interval
and snap_id's are showing.

Looks like you are missing the interval's from 2:00 to 7:00 ...

Chuck

Nov 19, 2007, 2:37:03 PM

No, there were no snapshots that included sql_id c8afnp575a327 from
02:00 through 07:00.
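
(A rough way to see it is to list every snapshot in the range and check
whether each one captured the statement, e.g.:)

-- For each snapshot in the range, count the dba_hist_sqlstat rows
-- captured for this sql_id (0 = not captured in that interval)
SELECT b.snap_id,
       to_char(b.begin_interval_time,'dd-MON-yy hh24:mi') begin_time,
       (SELECT COUNT(*)
        FROM   dba_hist_sqlstat a
        WHERE  a.snap_id = b.snap_id
        AND    a.sql_id  = 'c8afnp575a327') captured
FROM   dba_hist_snapshot b
WHERE  b.snap_id BETWEEN 5483 AND 5491
ORDER  BY b.snap_id;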

Jonathan Lewis

Nov 19, 2007, 3:57:04 PM

"Chuck" <skilove...@bluebottle.com> wrote in message
news:zBl0j.689$oL5.48@trnddc05...


AWR collection is engineered to avoid working too hard
if possible - which includes limits on the amount of data
dumped to the tables. For the 'typical' level, only the top
30 statements are flushed at each snapshot.

Looking at your results, I think this means that there were
some statements (more than 30) which were busier than
the one you are interested in (sql_id = 'c8afnp575a327') from
1am to 7am, so the statistics were increasing in memory,
but not fast enough to get into the dump - then from 7:00 am
to 8:00 this statement started working harder (or the others
dropped back) so that it came back into the top 30.
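
If you want to check or raise that limit, a rough sketch - the setting
shows up as topnsql in dba_hist_wr_control, and there is a topnsql
parameter on dbms_workload_repository.modify_snapshot_settings in 10.2
(raising it makes each snapshot a little more expensive, of course):

-- Current AWR settings, including the top-N SQL limit
SELECT snap_interval, retention, topnsql
FROM   dba_hist_wr_control;

-- Capture more statements per snapshot (e.g. the top 100),
-- leaving the interval and retention unchanged
BEGIN
  dbms_workload_repository.modify_snapshot_settings(topnsql => 100);
END;
/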


--
Regards

Jonathan Lewis
http://jonathanlewis.wordpress.com

Author: Cost Based Oracle: Fundamentals
http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html

The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/faq/ind_faq.html


Chuck

Nov 20, 2007, 12:22:44 PM
Jonathan Lewis wrote:

>
> Looking at your results, I think this means that there were
> some statements (more than 30) which were busier than
> the one you are interested in (sql_id = 'c8afnp575a327') from
> 1am to 7am, so the statistics were increasing in memory,
> but not fast enough to get into the dump - then from 7:00 am
> to 8:00 this statement started working harder (or the others
> dropped back) so that it came back into the top 30.

Thanks Jonathan.

One more question. If the sql_id I'm interested in were *completely*
flushed from the shared pool (not just dropped from the top 30), does
Oracle still maintain the statistic counters for it somewhere? Or will
they be reset to 0?

What I'm trying to accomplish is seeing how much elapsed time that
sql_id consumed over the past week. If the counters ever get reset to
0, I want to know that and account for it in my query.
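
(The sort of query I have in mind is a sketch like this - summing the
delta columns over the last seven days, with the caveat that any
interval where the statement wasn't captured in a snapshot contributes
nothing:)

-- Total elapsed time (seconds) and executions captured by AWR for
-- this sql_id over the last 7 days; elapsed_time columns are in
-- microseconds, hence the divide by 1,000,000
SELECT a.sql_id,
       SUM(a.elapsed_time_delta)/1000000 elapsed_seconds,
       SUM(a.executions_delta)           executions
FROM   dba_hist_sqlstat  a,
       dba_hist_snapshot b
WHERE  a.snap_id         = b.snap_id
AND    a.dbid            = b.dbid
AND    a.instance_number = b.instance_number
AND    a.sql_id          = 'c8afnp575a327'
AND    b.begin_interval_time >= SYSDATE - 7
GROUP  BY a.sql_id;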

Jonathan Lewis

Nov 22, 2007, 4:15:44 AM

"Chuck" <skilove...@bluebottle.com> wrote in message
news:EJE0j.12$Xg.9@trnddc06...

If a cursor is completely flushed from the library cache,
it's gone, and the statistics are gone with it - there isn't
even a place to show a zero.

I think there may be odd cases where a cursor can lose
its sub-heaps for a while (kicked out of memory) and show
zero counts - then - some time later, a re-execution causes
a new optimisation and some of the statistics appear from
somewhere. I've never tracked down how this can happen,
but in these cases some part of the cursor (including
the text) was still in memory all the time.
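
A rough way to spot that kind of reload in the history is to compare
each snapshot's cumulative figure with the previous capture for the
same cursor - just a sketch: a drop in executions_total between
consecutive captures suggests the counters were restarted in between:

-- Compare each captured executions_total with the previous capture
-- for the same plan and instance; prev_total > executions_total
-- indicates the cursor was reloaded between the two snapshots
SELECT snap_id,
       plan_hash_value,
       executions_total,
       LAG(executions_total) OVER
         (PARTITION BY plan_hash_value, instance_number
          ORDER BY snap_id) prev_total
FROM   dba_hist_sqlstat
WHERE  sql_id = 'c8afnp575a327'
ORDER  BY snap_id;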

Yong Huang

Nov 24, 2007, 2:33:51 PM
On Nov 22, 5:15 pm, "Jonathan Lewis" <jonat...@jlcomp.demon.co.uk>
wrote:

>
> I think there may be odd cases where a cursor can lose
> its sub-heaps for a while (kicked out of memory) and show
> zero counts - then - some time later, a re-execution causes
> a new optimisation and some of the statistics appear from
> somewhere. I've never tracked down how this can happen,
> but in these cases some part of the cursor (including
> the text) was still in memory all the time.
>
> --
> Regards
>
> Jonathan Lewis
> http://jonathanlewis.wordpress.com
>
> Author: Cost Based Oracle: Fundamentals
> http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html
>
> The Co-operative Oracle Users' FAQ
> http://www.jlcomp.demon.co.uk/faq/ind_faq.html

Are you talking about the case where you do a DDL on the table
referenced in the SQL, thereby invalidating the cursor body? I think
that happens in 9i only. In 10gR2 (not sure about R1), DDL will remove
the entire cursor, not just heap 6.

The DDL in this case includes analyze (special because it doesn't
update last_ddl_time), and even dbms_stats.gather_table_stats unless
no_invalidate is true. (In 10g, gather_table_stats doesn't clean out
the cursor stats regardless of the no_invalidate setting.)
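
To illustrate the no_invalidate part, a sketch against a hypothetical
table T (SQL*Plus syntax):

-- Invalidate dependent cursors immediately
exec dbms_stats.gather_table_stats(user, 'T', no_invalidate => FALSE)

-- Leave existing cursors valid
exec dbms_stats.gather_table_stats(user, 'T', no_invalidate => TRUE)

-- 10gR2 default: dbms_stats.auto_invalidate lets Oracle spread the
-- invalidations out over a period of time
exec dbms_stats.gather_table_stats(user, 'T', -
  no_invalidate => dbms_stats.auto_invalidate)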

Yong Huang

Yong Huang

Nov 24, 2007, 2:59:54 PM

I should add that, for the last part, my 10gR2 database has the
default no_invalidate setting, which is auto_invalidate. My simple
test happened not to cause Oracle to invalidate the cursor.
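
(If you want to check the database-wide default, dbms_stats.get_param
should work in 10gR2:)

-- Report the current default for the NO_INVALIDATE preference
SELECT dbms_stats.get_param('NO_INVALIDATE') FROM dual;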

Yong Huang
