
Query with LONG RAW


bob123

Mar 2, 2010, 3:46:51 PM
Hi,

How can I speed up the query below (third-party software)?
It takes 30 minutes to execute.
Oracle 9.2.0.6
LONG RAW on PFMQ_MESSAGEDATASTORAGE

Thanks in advance

********************************************************************************

SELECT mds.messagepartdata, ms.status, mi.ID, mi.messageguid, mi.channel,
ROWNUM AS messagecount
FROM pfmq_messagedata md,
pfmq_messagedatastorage mds,
pfmq_messageinfo mi,
pfmq_messagestatus ms
WHERE ( mi.queuename = 'CL312911032'
AND mi.ID = ms.ID
AND mi.ID = md.ID
AND mi.ID = mds.ID
AND md.ID = mds.ID
AND md.messageparttype = mds.messageparttype
AND md.messageparttype = 1
AND (ms.statusrevisionnumber = (SELECT MAX (statusrevisionnumber)
FROM pfmq_messagestatus ms2
WHERE ms2.ID = ms.ID)
)
)
AND ((ms.status = 64) AND (mi.direction = 1) AND mi.messagetype = 0)
ORDER BY mi.sequenceordinalnumber, mi.senttime

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch     4321     14.56     580.31     231750     746064          0      64806
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total     4323     14.56     580.31     231750     746064          0      64806

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 29

Rows     Row Source Operation
-------  ---------------------------------------------------
  64806  SORT ORDER BY (cr=681258 r=184767 w=0 time=403515790 us)
  64806   COUNT (cr=681258 r=184767 w=0 time=1729762996 us)
  64806    NESTED LOOPS (cr=681258 r=184767 w=0 time=1729717540 us)
  64806     NESTED LOOPS (cr=486600 r=92648 w=0 time=901417748 us)
  64806      NESTED LOOPS (cr=356748 r=46572 w=0 time=268980743 us)
  64820       TABLE ACCESS BY INDEX ROWID PFMQ_MESSAGEINFO (cr=31514 r=23422 w=0 time=44437657 us)
 120784        INDEX RANGE SCAN AK1_PFMQ_SEQUENCENUMBER (cr=3117 r=3062 w=0 time=10896605 us)(object id 6511)
  64806       TABLE ACCESS BY INDEX ROWID PFMQ_MESSAGESTATUS (cr=325234 r=23150 w=0 time=224278563 us)
  64820        INDEX RANGE SCAN XPKPF_MESSAGESTATUS (cr=260414 r=15792 w=0 time=208616639 us)(object id 6515)
  64820         SORT AGGREGATE (cr=129644 r=116 w=0 time=1973822 us)
  64820          FIRST ROW (cr=129644 r=116 w=0 time=1810738 us)
  64820           INDEX RANGE SCAN (MIN/MAX) XPKPF_MESSAGESTATUS (cr=129644 r=116 w=0 time=1756030 us)(object id 6515)
  64806      INDEX UNIQUE SCAN XPKPF_MESSAGEDATA (cr=129852 r=46076 w=0 time=632244506 us)(object id 6505)
  64806     TABLE ACCESS BY INDEX ROWID PFMQ_MESSAGEDATASTORAGE (cr=194658 r=92119 w=0 time=828055493 us)
  64806      INDEX UNIQUE SCAN XPKPF_MESSAGEDATASTORAGE (cr=129852 r=46036 w=0 time=613528422 us)(object id 6507)

********************************************************************************


Sybrand Bakker

Mar 2, 2010, 5:16:19 PM
On Tue, 2 Mar 2010 21:46:51 +0100, "bob123" <bob...@gmail.com> wrote:

>How can I speed up the query below (third party software)
>it takes 30 min to execute.
>Oracle 9.2.0.6
>LONG RAW on PFMQ_MESSAGEDATASTORAGE

Apparently the table has heavy row chaining, which means one row is
stored in more than one block. That always results in a severe
performance hit.
There is nothing wrong with the query; the design of that table is
wrong, as is running a desupported release of Oracle.

--
Sybrand Bakker
Senior Oracle DBA

Charles Hooper

Mar 2, 2010, 8:32:23 PM
On Mar 2, 3:46 pm, "bob123" <bob...@gmail.com> wrote:
> Hi,
>
> How can I speed up the query below (third party software)
> it takes 30 min to execute.
> Oracle 9.2.0.6
> LONG RAW on PFMQ_MESSAGEDATASTORAGE
>
> Thanks in advance
>

I might be looking in the wrong place, but I do not see evidence of
row chaining.

It appears that the execution plan is using mostly high-precision
indexes, with the exception of the AK1_PFMQ_SEQUENCENUMBER index,
where 46% of the rows identified by the index are discarded. One
question is whether driving those indexes with nested loop joins is
the most appropriate method, or whether a couple of hash joins thrown
into the execution plan would be more efficient. The execution plan is
starting with the most logical table, given the predicates in the
WHERE clause.

Let's look at a couple of items:
There were 14.56 seconds of CPU time, where the database server's CPUs
were working to determine the rows to be returned to the client.
Considering that the TKPROF output shows that the elapsed time was
580.31 seconds, that *probably* means that 565.75 seconds were spent
doing something other than actively burning CPU cycles - such as
waiting for the completion of a disk read. There were 231,750 blocks
read from disk, and considering the heavy use of indexes and nested
loop joins, those blocks were likely read one at a time from disk. If
that is the case, the average time to read a block from disk is
0.0024412 seconds (2.4412 ms), which is a very fast access time for
physical reads from disk. 64,806 rows were retrieved by the client in
4,321 fetch calls - indicating that the client is array fetching just
15 rows at a time. 580.31 seconds of elapsed time is about 9.7
minutes - not the roughly 30 minutes reported.

* See if it is possible to configure the third party application to
array fetch 100 rows, rather than 15 rows at a time.
* Verify that the server is not CPU starved - that could explain the
missing 20.3 minutes.
* Verify that there is not a large network latency problem.
* Verify that there are no significant client-side delays.
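
If the third-party application cannot be reconfigured, the effect of a
larger array size can at least be measured from a SQL*Plus session (a
sketch only - the ARRAYSIZE setting affects that session, not the
application's own fetch size):

```sql
-- SQL*Plus sketch: compare fetch counts and elapsed time for the same
-- statement with the default array size (15) and a larger one (100).
SET ARRAYSIZE 100
SET AUTOTRACE TRACEONLY STATISTICS
SET TIMING ON
-- ... run the problem query here and compare against ARRAYSIZE 15 ...
```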

Suggestions:
* Generate the 10046 trace at level 8, rather than level 1. That will
cause the wait events to be written to the trace file. If the trace
file contains a large number of long duration waits on SQL*Net type
wait events, check the network with a packet sniffer (Wireshark, for
example) and check the client-side activity to make certain that it is
not the client application that is the source of the slow
performance. If you see large unexplained gaps in the tim= values in
the trace file without a corresponding long wait event in between,
investigate whether the server's CPUs are over-burdened.
* Check the statistics on the tables and indexes to make certain that
those statistics are reasonably accurate and up to date.
* Review the current optimizer related initialization parameters to
make certain that silly parameters are not specified.
* Most of all, determine where the missing 20.3 minutes have gone.
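
As a sketch of the first suggestion, a level 8 trace can be enabled in
a test session like this (standard 9i event syntax; the tracefile
identifier is only a convenience for locating the file in
user_dump_dest):

```sql
-- Tag the trace file so it is easy to find in user_dump_dest:
ALTER SESSION SET tracefile_identifier = 'queue_query_test';

-- Level 8 = level 1 plus wait events written to the trace file:
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';

-- ... execute the slow query here, then switch tracing off:
ALTER SESSION SET EVENTS '10046 trace name context off';
```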

You will find articles on my blog that will help you with the above
suggestions.

Charles Hooper
Co-author of "Expert Oracle Practices: Oracle Database Administration
from the Oak Table"
http://hoopercharles.wordpress.com/
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.

Randolf Geist

Mar 3, 2010, 5:44:47 AM
On Mar 2, 9:46 pm, "bob123" <bob...@gmail.com> wrote:
> How can I speed up the query below (third party software)
> it takes 30 min to execute.
> Oracle 9.2.0.6
> LONG RAW on PFMQ_MESSAGEDATASTORAGE

I think Charles is right - there is no sign of row chaining: each
table row is causing only a single consistent get, which also means
you don't have an issue with applying undo due to concurrent
modifications.

Your TKPROF output however shows several oddities:

1. The total elapsed time is not consistent with what is reported in
the "Row Source Operation" section; in particular, the "Row Source
Operations" below the top SORT ORDER BY operation show a greater
elapsed time that corresponds to almost 29 minutes (1729 seconds).

2. So it looks like the 29 minutes are actually spent executing the
statement rather than transporting a (potentially inefficiently
fetched) large result to the client.

3. Another oddity is that most of your time is spent on physical reads
of the index blocks - usually it is the other way around when random
table access by ROWID is involved since the random block access to
table rows usually takes up most of the time whereas the index blocks
are cached very well

4. It is also obvious from the timings that your physical block reads
on the index segments are slower than on the table segments - the
average time for a single index block read seems to be approx. 13 ms,
whereas a table block is read on average in between 3 and 5 ms. Are
your indexes located in a different tablespace, one that is on slower
disks or busier than the tablespaces holding the table blocks?

5. How large are your indexes / how many rows are covered by the
indexes, and how much of your buffer cache do those indexes use? In
particular, the XPKPF_MESSAGEDATA and XPKPF_MESSAGEDATASTORAGE indexes
each require 46,000 physical reads for 130,000 consistent gets /
approx. 64,000 accesses (which corresponds to an index height of 2 and
therefore 2 consistent gets per access). An index of height 2 with
more than 46,000 physical block reads either means that your buffer
cache is far too small, so that the same blocks age out immediately
and need to be physically read over and over again, or your index is
simply so large that many of the keys accessed reside in different
blocks.

You potentially have an issue with (some of) your indexes - if you
have a chance for an offline period (or do have a mirror copy of the
database) you could run an ANALYZE INDEX ... VALIDATE STRUCTURE and
report the results from the INDEX_STATS view. Note that this requires
an exclusive lock on the index segment and therefore can not be done
online.
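
A sketch of that check, using one of the index names from the trace
(again: VALIDATE STRUCTURE takes an exclusive lock on the index, so
run it only during an offline period or on a copy of the database):

```sql
ANALYZE INDEX xpkpf_messagedatastorage VALIDATE STRUCTURE;

-- INDEX_STATS holds a single row describing the index just analyzed:
SELECT height, blocks, lf_rows, del_lf_rows,
       used_space, btree_space,
       ROUND(100 * used_space / btree_space) AS pct_used
  FROM index_stats;
```

A low pct_used or a large del_lf_rows count would point to a sparsely
populated index that forces the extra physical reads seen in the
trace.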

Some more observations:

a) The query contains a superfluous join condition, "AND md.ID =
mds.ID", which is not required and might lead to an underestimate of
the cardinality. This in turn could favor the nested loop / index
access path chosen by the optimizer.

b) The aggregate subquery (SELECT MAX()...) doesn't contribute
significantly to the execution time and therefore at present seems to
be negligible


Given these facts and the large number of physical block reads that
you need to perform on the index segments, the statement might perform
better if you performed full table scans of the tables using
multi-block I/O requests. Of course this depends on how large the
table segments are and on the full table scan performance (size of the
multi-block I/O). Since the table blocks seem to get served more
quickly than the index blocks, this might turn out to complete faster
even if the segments to be read are significantly larger than the
number of blocks read at present.

However the picture might change if you perform more full table scans
- and this might also impact the execution of other concurrent
activities.
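
For testing purposes, that plan could be forced with hints - a
hypothetical sketch of the posted query (with the redundant
"md.ID = mds.ID" predicate dropped, as per observation a); whether it
is actually faster can only be established by running it:

```sql
SELECT /*+ FULL(mi) FULL(ms) FULL(md) FULL(mds)
           USE_HASH(ms) USE_HASH(md) USE_HASH(mds) */
       mds.messagepartdata, ms.status, mi.ID, mi.messageguid,
       mi.channel, ROWNUM AS messagecount
  FROM pfmq_messagedata md,
       pfmq_messagedatastorage mds,
       pfmq_messageinfo mi,
       pfmq_messagestatus ms
 WHERE mi.queuename = 'CL312911032'
   AND mi.ID = ms.ID
   AND mi.ID = md.ID
   AND mi.ID = mds.ID
   AND md.messageparttype = mds.messageparttype
   AND md.messageparttype = 1
   AND ms.statusrevisionnumber = (SELECT MAX (statusrevisionnumber)
                                    FROM pfmq_messagestatus ms2
                                   WHERE ms2.ID = ms.ID)
   AND ms.status = 64
   AND mi.direction = 1
   AND mi.messagetype = 0
 ORDER BY mi.sequenceordinalnumber, mi.senttime
```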

Regards,
Randolf

Oracle related stuff blog:
http://oracle-randolf.blogspot.com/

Co-author of the "OakTable Expert Oracle Practices" book:
http://www.apress.com/book/view/1430226684
http://www.amazon.com/Expert-Oracle-Practices-Database-Administration/dp/1430226684

Randolf Geist

Mar 3, 2010, 6:09:17 AM
On Mar 2, 9:46 pm, "bob123" <bob...@gmail.com> wrote:
> How can I speed up the query below (third party software)
> it takes 30 min to execute.
> Oracle 9.2.0.6
> LONG RAW on PFMQ_MESSAGEDATASTORAGE

I forgot to mention that Jonathan Lewis was kind enough to publish a
ready-to-use script that calculates an approximate index size, which
might help if you can't run the ANALYZE INDEX ... VALIDATE STRUCTURE
on your system:

http://jonathanlewis.wordpress.com/2010/02/28/index-efficiency-2/

Please read the comments and notes on how to use the script and how to
interpret its results.

Also - as mentioned in the blog post's comments - in 10g you could use
EXPLAIN PLAN on the CREATE INDEX statement to get an index size
estimate from Oracle itself.
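
A sketch of that 10g technique (the index name and column list here
are invented for illustration; the optimizer's estimated index size
shows up in the plan output):

```sql
EXPLAIN PLAN FOR
  CREATE INDEX test_ix ON pfmq_messagestatus (id, statusrevisionnumber);

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```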
