
SELECT statement efficiency question


Dereck L. Dietz
Apr 7, 2007, 11:52:08 PM
I have the following two SELECT statements in some code I've been looking
at. I don't currently have access to a computer with Oracle on it, so I
can't determine this myself.

Both selects do the same thing. The function in the second version simply
checks the same condition that the first version has in its WHERE clause
and returns a 'Y' or 'N'.

My question is this: is either of the two selects more efficient (or less
efficient) than the other?


SELECT a.mbr_dob,
a.mbr_first_nm,
a.mbr_gender,
b.mbr_key,
a.mbr_last_nm,
b.mbr_sys_key,
b.mbr_updated_dt,
a.mbr_x_fst_nm,
a.mbr_x_lst_nm,
a.person_id,
a.z_person_id
FROM mbr_person a,
mbr_system b
WHERE a.person_id = b.person_id
AND a.deactivate = 0
AND b.deactivate = 0
AND (
a.mbr_last_nm NOT IN ( 'DATA','CORRECTION' )
AND a.mbr_first_nm NOT IN ( 'DATA','CORRECTION' )
);


SELECT a.mbr_dob,
a.mbr_first_nm,
a.mbr_gender,
b.mbr_key,
a.mbr_last_nm,
b.mbr_sys_key,
b.mbr_updated_dt,
a.mbr_x_fst_nm,
a.mbr_x_lst_nm,
a.person_id,
a.z_person_id
FROM mbr_person a,
mbr_system b
WHERE a.person_id = b.person_id
AND a.deactivate = 0
AND b.deactivate = 0
AND not_data_correction( a.mbr_last_nm, a.mbr_first_nm ) = 'Y';


Mladen Gogala
Apr 7, 2007, 11:54:32 PM
On Sat, 07 Apr 2007 22:52:08 -0500, Dereck L. Dietz wrote:

> I have the following two select statements in some code I've been
> looking at. I don't have
> access to a computer with Oracle on it currently so I can't determine
> this myself.

Interesting situation.

>
> Both selects do the same thing. The function in the second version
> simply checks the
> condition what is in the first and send back a 'Y' or 'N' whereas the
> first has the
> check in the where clause itself.
>
> My question is this: is any one of the two selects more efficient (or
> less efficient)
> than the other?
>
>
> SELECT a.mbr_dob,
> a.mbr_first_nm,

.....
The question here is which is more efficient: a "not in" expression that
clearly cannot use an index, or the same thing wrapped up in a function,
with the possible use of a function-based bitmap index. This question
deserves a closer look. First, if the table is a target for frequent DML
statements, bitmap indexes are out of the question. In that case, the
second method is much less efficient because it cannot use an index and it
has to execute PL/SQL code, something that the first method doesn't have
to do. If it's a DW database and you can create bitmap indexes at will,
then the second method will be faster. You can, however, create a hash
cluster based on that function and speed up the retrieval.
What a closer look at the query reveals is a missing object. I don't
know many people named "DATA" (even the Star Trek TNG character is a
machine which doesn't run Windows) or "CORRECTION", and columns like first
and last name are usually reserved for humans. Your data structure is
trying to make distinctly inhuman entities look human. What you're
probably looking for is another column.
The real answer for how to speed up this query is to index the remaining
columns and make sure that the optimizer uses the available indexes. Then
it will filter out the non-conforming rows in a snap, if the result
set is small enough.
--
http://www.mladen-gogala.com
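The per-row cost of the wrapped predicate is easy to see outside Oracle too. Below is a minimal sketch using Python's sqlite3 as a stand-in (not Oracle, and the table contents are invented): the inline NOT IN form is a plain expression the engine evaluates directly, while the function form must call out to user code once per candidate row, yet both return the same rows.

```python
import sqlite3

# sqlite3 stands in for Oracle here; schema and data are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mbr_person (person_id INTEGER, mbr_first_nm TEXT, mbr_last_nm TEXT)")
rows = [(1, "ALICE", "SMITH"), (2, "DATA", "CORRECTION"), (3, "BOB", "DATA")]
conn.executemany("INSERT INTO mbr_person VALUES (?, ?, ?)", rows)

def not_data_correction(last_nm, first_nm):
    # Mirrors the PL/SQL function described in the thread: 'Y' when
    # neither name column holds one of the sentinel values.
    bad = ("DATA", "CORRECTION")
    return "Y" if last_nm not in bad and first_nm not in bad else "N"

# Register the function so SQL can call it - one callback per row examined.
conn.create_function("not_data_correction", 2, not_data_correction)

inline = conn.execute(
    "SELECT person_id FROM mbr_person "
    "WHERE mbr_last_nm NOT IN ('DATA','CORRECTION') "
    "AND mbr_first_nm NOT IN ('DATA','CORRECTION')").fetchall()
wrapped = conn.execute(
    "SELECT person_id FROM mbr_person "
    "WHERE not_data_correction(mbr_last_nm, mbr_first_nm) = 'Y'").fetchall()

print(inline, wrapped)  # both return only person_id 1
```

The results are identical; the difference is purely in how much work each row costs, which is the PL/SQL context-switch point raised later in the thread.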

Dereck L. Dietz
Apr 8, 2007, 1:30:14 AM

"Mladen Gogala" <mgogala.S...@verizon.net> wrote in message
news:pan.2007.04...@verizon.net...

> On Sat, 07 Apr 2007 22:52:08 -0500, Dereck L. Dietz wrote:
>
> What the closer look at the query reveals is a missing object. I don't
> know many people named "DATA" (even the Startrek NG character is a machine
> which doesn't run Windows) or "CORRECTION" and the columns like the first
> and last name are usually reserved for humans. Your data structure is
> trying to make distinctly inhuman entities look human. What you're
> probably looking for is another column.

This is a data warehouse load. The input source has some rows with "DATA"
and "CORRECTION" in the name fields, which indicate that a data correction
is needed and that those rows are NOT to be loaded into the tables.

Thanks.


Mladen Gogala
Apr 8, 2007, 1:02:53 AM
On Sun, 08 Apr 2007 00:30:14 -0500, Dereck L. Dietz wrote:

> This is a data warehouse load. The input source has some rows with
> "DATA" and "CORRECTION" in the name fields which indicate there needs to
> be a data correction and also that those rows are NOT to be loaded into
> the tables.

Maybe a flag column defined like this:

should_load char(1) default 'Y' check (should_load in ('Y','N'))

would serve you better than the naming convention that
you currently use?
--
http://www.mladen-gogala.com
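The flag-column idea sketches out like this (again sqlite3 as a stand-in for Oracle, with an invented staging table): the load filters on one explicit column, and the CHECK constraint rejects anything but 'Y' or 'N' instead of relying on magic name values.

```python
import sqlite3

# Illustrative staging table; names are made up for the sketch.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE staging (
        person_id   INTEGER,
        mbr_last_nm TEXT,
        should_load CHAR(1) DEFAULT 'Y'
                    CHECK (should_load IN ('Y','N'))
    )""")
conn.execute("INSERT INTO staging (person_id, mbr_last_nm) VALUES (1, 'SMITH')")
conn.execute("INSERT INTO staging VALUES (2, 'JONES', 'N')")

# The load query now tests a single flag, not the name columns.
loadable = conn.execute(
    "SELECT person_id FROM staging WHERE should_load = 'Y'").fetchall()
print(loadable)  # only person_id 1 qualifies for the load

# The CHECK constraint refuses any value other than 'Y' or 'N'.
try:
    conn.execute("INSERT INTO staging VALUES (3, 'X', 'Q')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```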

hpuxrac
Apr 8, 2007, 8:38:28 AM

In general, anything that you can do strictly in SQL ( without using
plsql ) is probably going to be much more efficient and faster. This
should be tested and benchmarked in specific cases, of course.

It's always good when you submit a question like this to include
details about which version of Oracle you are using.

While the optimizer has evolved and may, version by version, make better
choices when evaluating a NOT IN phrase, especially when given
such a small set of values to check for, alternate coding
should be considered or at least evaluated ( and a.mbr_first_nm <>
'DATA' AND a.mbr_first_nm <> 'CORRECTION', etc. ), but IMHO something
just seems very strange about the design in the first place.

Charles Hooper
Apr 8, 2007, 9:37:37 AM
On Apr 7, 11:52 pm, "Dereck L. Dietz" <diet...@ameritech.net> wrote:

Good advice given so far. I would caution against creating too many
indexes, as this will likely negatively impact performance of other
parts of the system while offering minimal assistance to this SQL
statement. Histograms could be important on the columns
MBR_PERSON.DEACTIVATE, MBR_SYSTEM.DEACTIVATE, MBR_PERSON.MBR_LAST_NM,
and MBR_PERSON.MBR_FIRST_NM. The histograms will give Oracle's cost
based optimizer a better picture of the contents of the columns,
rather than assuming an even spread of the data values between the min
and max for the column.

Keep in mind that Oracle may transform your SQL statement into another
equivalent form, and may use constraints and transitive closure to
generate additional predicates (think entries in the WHERE clause) for
the SQL statement. You may want to perform timing tests with alternate SQL
syntax:


SELECT a.mbr_dob,
a.mbr_first_nm,
a.mbr_gender,
b.mbr_key,
a.mbr_last_nm,
b.mbr_sys_key,
b.mbr_updated_dt,
a.mbr_x_fst_nm,
a.mbr_x_lst_nm,
a.person_id,
a.z_person_id
FROM mbr_person a,
mbr_system b
WHERE a.person_id = b.person_id
AND a.deactivate = 0
AND b.deactivate = 0

AND A.DEACTIVATE = B.DEACTIVATE
AND A.MBR_LAST_NM NOT IN ('DATA','CORRECTION')
AND A.MBR_FIRST_NM NOT IN ('DATA','CORRECTION');

Indexes on the MBR_PERSON.DEACTIVATE, MBR_SYSTEM.DEACTIVATE columns
may be helpful if most of the rows have something other than 0 in
those columns. Indexes on the MBR_PERSON.MBR_LAST_NM and
MBR_PERSON.MBR_FIRST_NM columns will likely be of limited use. An
index on MBR_PERSON.PERSON_ID, MBR_SYSTEM.PERSON_ID will likely be
very helpful.

My guess is that the above SQL statement will perform a full tablescan
on both tables. The full tablescans may be the most efficient way to
retrieve the rows; however, that will depend greatly on the
composition of the data in the two tables. Oracle may choose to
perform a hash join between the two tables, so a large HASH_AREA_SIZE
may help.

As has been mentioned in this thread, avoid using PL/SQL for something
that can be done efficiently in pure SQL. Context switches (and, I
suppose, on-the-fly compilation of the PL/SQL code if not using native
code compilation) are expensive on some operating system platforms,
and may greatly decrease performance.

The best way to test the performance of the various methods requires
access to the Oracle database, unless of course you want to dig
through 10046 traces at level 8 or 12.

Charles Hooper
PC Support Specialist
K&M Machine-Fabricating, Inc.
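The transitive-closure point above is easy to sanity-check: given a.deactivate = 0 and b.deactivate = 0, the added predicate a.deactivate = b.deactivate is logically redundant and cannot change the result set. A tiny demonstration (sqlite3 stand-in, toy data):

```python
import sqlite3

# Toy versions of the two tables; only the join/filter columns matter here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mbr_person (person_id INTEGER, deactivate INTEGER)")
conn.execute("CREATE TABLE mbr_system (person_id INTEGER, deactivate INTEGER)")
conn.executemany("INSERT INTO mbr_person VALUES (?, ?)", [(1, 0), (2, 1)])
conn.executemany("INSERT INTO mbr_system VALUES (?, ?)", [(1, 0), (2, 0)])

base = ("SELECT a.person_id FROM mbr_person a, mbr_system b "
        "WHERE a.person_id = b.person_id "
        "AND a.deactivate = 0 AND b.deactivate = 0")
original = conn.execute(base).fetchall()
# Same query plus the transitively implied predicate.
augmented = conn.execute(base + " AND a.deactivate = b.deactivate").fetchall()

print(original, augmented)  # identical row sets
```

In Oracle the value of writing such a predicate explicitly is not correctness but giving the cost-based optimizer an extra join condition to consider.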

Mladen Gogala
Apr 8, 2007, 1:17:55 PM
On Sun, 08 Apr 2007 06:37:37 -0700, Charles Hooper wrote:

> Good advice given so far. I would caution against creating too many
> indexes, as this will likely negatively impact performance of other
> parts of the system

Charles, this is a line I frequently find in many books, CBTs and
manuals, and yet I have never seen an insert or delete slowed down to
unacceptable levels because of too many indexes. The only method to
diagnose that this is indeed happening would be to observe a significant
increase in average I/O time on the underlying data file.
Again, I've never even seen this happening. I believe that this thing
with too many indexes is dangerous only in extreme situations, and it
is very hard to diagnose because the process that writes the
index blocks is DBWR, so the users never wait for the blocks to be
written. Users may wait for checkpoints or log file sync, but not for the
index writes.

--
http://www.mladen-gogala.com

Maxim Demenko
Apr 8, 2007, 2:15:42 PM
Mladen Gogala wrote:

I think the increased I/O activity is negligible compared to the increased
latch contention in this case.
There was an interesting blog entry about this:
http://esemrick.blogspot.com/2006/03/unused-indexes-and-scalability.html
One can simply run this test to see the difference in 'cache buffers
chains' latches:

create table d as select * from dba_objects where 1=2;
create table d1 as select * from dba_objects where 1=2;
spool d.sql
select 'CREATE INDEX D_'||COLUMN_ID||' ON D('||COLUMN_NAME||');' from
dba_tab_columns where table_name = 'DBA_OBJECTS';
spool off
@d
exec runstats_pkg.rs_start
insert into d1 select * from dba_objects;
exec runstats_pkg.rs_middle
insert into d select * from dba_objects;
exec runstats_pkg.rs_stop

To be thorough, one should run a 10046 trace for the indexed case and
compare the time spent on I/O-related wait events with the time spent on
latch-related wait events...

Some years ago I had to do a big data load into an OLTP Siebel system over
a weekend - some of the very important tables (like s_customer) there often
have about 40-50 indexes per table (they are simply ill-designed, to be
generic). The only way to get it done was to drop all indexes
not required for the load and recreate them afterwards. I suppose many ERP
systems don't differ much from Siebel in this regard.

Best regards

Maxim

Charles Hooper
Apr 8, 2007, 8:17:07 PM
On Apr 8, 1:17 pm, Mladen Gogala <mgogala.SPAM_ME....@verizon.net>
wrote:

I have also seen the statement written in books, blogs, and articles
stating that having too many indexes on tables causes performance
problems - essentially that a table update that would have required one
I/O is turned into one I/O plus three or four I/Os per index (or maybe
it was two to three I/Os per index). I never tested this, since it
seemed to make sense; after all, the I/O system can only handle so many
logical I/Os and so many physical I/Os per second. I devised a quick
test earlier today using DBA_OBJECTS, but aborted the test after 15+
minutes, and instead decided to use a physical table as the source
of data for testing.

First, I created a very simple table that includes the first 19
columns of my source table:
CREATE TABLE T3 AS
SELECT
ID,
DESCRIPTION,
STOCK_UM,
PLANNING_LEADTIME,
ORDER_POLICY,
ORDER_POINT,
SAFETY_STOCK_QTY,
FIXED_ORDER_QTY,
DAYS_OF_SUPPLY,
MINIMUM_ORDER_QTY,
MAXIMUM_ORDER_QTY,
ENGINEERING_MSTR,
PRODUCT_CODE,
COMMODITY_CODE,
MFG_NAME,
MFG_PART_ID,
FABRICATED,
PURCHASED,
STOCKED
FROM
PART
WHERE
0=1;

This is what the test table looks like:
DESC T3

Name Null? Type
----------------------------------------- -------- ------------
ID NOT NULL VARCHAR2(30)
DESCRIPTION VARCHAR2(40)
STOCK_UM NOT NULL VARCHAR2(15)
PLANNING_LEADTIME NOT NULL NUMBER
ORDER_POLICY NOT NULL CHAR(1)
ORDER_POINT NUMBER(14,4)
SAFETY_STOCK_QTY NUMBER(14,4)
FIXED_ORDER_QTY NUMBER(14,4)
DAYS_OF_SUPPLY NUMBER
MINIMUM_ORDER_QTY NUMBER(14,4)
MAXIMUM_ORDER_QTY NUMBER(14,4)
ENGINEERING_MSTR VARCHAR2(3)
PRODUCT_CODE VARCHAR2(15)
COMMODITY_CODE VARCHAR2(15)
MFG_NAME VARCHAR2(30)
MFG_PART_ID VARCHAR2(30)
FABRICATED NOT NULL CHAR(1)
PURCHASED NOT NULL CHAR(1)
STOCKED NOT NULL CHAR(1)

Now, in the database with no other activity, I bring the source
table's blocks into the KEEP pool:
SELECT
ID,
DESCRIPTION,
STOCK_UM,
PLANNING_LEADTIME,
ORDER_POLICY,
ORDER_POINT,
SAFETY_STOCK_QTY,
FIXED_ORDER_QTY,
DAYS_OF_SUPPLY,
MINIMUM_ORDER_QTY,
MAXIMUM_ORDER_QTY,
ENGINEERING_MSTR,
PRODUCT_CODE,
COMMODITY_CODE,
MFG_NAME,
MFG_PART_ID,
FABRICATED,
PURCHASED,
STOCKED
FROM
PART
WHERE
ROWNUM<=30000;

Now, I set up for the first test:
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'TEST_NO_INDEXES';

And just perform a mass insert from the source table:
INSERT INTO
T3
SELECT
ID,
DESCRIPTION,
STOCK_UM,
PLANNING_LEADTIME,
ORDER_POLICY,
ORDER_POINT,
SAFETY_STOCK_QTY,
FIXED_ORDER_QTY,
DAYS_OF_SUPPLY,
MINIMUM_ORDER_QTY,
MAXIMUM_ORDER_QTY,
ENGINEERING_MSTR,
PRODUCT_CODE,
COMMODITY_CODE,
MFG_NAME,
MFG_PART_ID,
FABRICATED,
PURCHASED,
STOCKED
FROM
PART
WHERE
ROWNUM<=30000;

Now, I switch to another trace file and reset the test table back to
the starting point:
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'TEST_CLEAN_UP';
TRUNCATE TABLE T3;

Let's create a simple B*Tree index on each column of our test table:
CREATE INDEX T3_1 ON T3(ID);
CREATE INDEX T3_2 ON T3(DESCRIPTION);
CREATE INDEX T3_3 ON T3(STOCK_UM);
CREATE INDEX T3_4 ON T3(PLANNING_LEADTIME);
CREATE INDEX T3_5 ON T3(ORDER_POLICY);
CREATE INDEX T3_6 ON T3(ORDER_POINT);
CREATE INDEX T3_7 ON T3(SAFETY_STOCK_QTY);
CREATE INDEX T3_8 ON T3(FIXED_ORDER_QTY);
CREATE INDEX T3_9 ON T3(DAYS_OF_SUPPLY);
CREATE INDEX T3_10 ON T3(MINIMUM_ORDER_QTY);
CREATE INDEX T3_11 ON T3(MAXIMUM_ORDER_QTY);
CREATE INDEX T3_12 ON T3(ENGINEERING_MSTR);
CREATE INDEX T3_13 ON T3(PRODUCT_CODE);
CREATE INDEX T3_14 ON T3(COMMODITY_CODE);
CREATE INDEX T3_15 ON T3(MFG_NAME);
CREATE INDEX T3_16 ON T3(MFG_PART_ID);
CREATE INDEX T3_17 ON T3(FABRICATED);
CREATE INDEX T3_18 ON T3(PURCHASED);
CREATE INDEX T3_19 ON T3(STOCKED);

Let's try the test again now that the 19 indexes are on the columns:
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'TEST_WITH_INDEXES';

INSERT INTO
T3
SELECT
ID,
DESCRIPTION,
STOCK_UM,
PLANNING_LEADTIME,
ORDER_POLICY,
ORDER_POINT,
SAFETY_STOCK_QTY,
FIXED_ORDER_QTY,
DAYS_OF_SUPPLY,
MINIMUM_ORDER_QTY,
MAXIMUM_ORDER_QTY,
ENGINEERING_MSTR,
PRODUCT_CODE,
COMMODITY_CODE,
MFG_NAME,
MFG_PART_ID,
FABRICATED,
PURCHASED,
STOCKED
FROM
PART
WHERE
ROWNUM<=30000;

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'DONE';

From the TEST_NO_INDEXES trace file, the wait events that appeared:
Wait Time Wait Event Wait Event Raw Details
0.036 db file sequential read WAIT #3: nam='db file sequential read'
ela= 36131 file#=4 block#=3 blocks=1 obj#=12373 tim=3652347137
0.036 db file sequential read WAIT #3: nam='db file sequential read'
ela= 36131 file#=4 block#=3 blocks=1 obj#=12373 tim=3652347137
0 db file sequential read WAIT #3: nam='db file sequential read' ela=
234 file#=4 block#=4 blocks=1 obj#=12373 tim=3652347440
0 db file sequential read WAIT #3: nam='db file sequential read' ela=
204 file#=4 block#=5 blocks=1 obj#=12373 tim=3652347707
0 db file sequential read WAIT #3: nam='db file sequential read' ela=
203 file#=4 block#=6 blocks=1 obj#=12373 tim=3652347971
0 db file sequential read WAIT #3: nam='db file sequential read' ela=
171 file#=4 block#=7 blocks=1 obj#=12373 tim=3652348181
0 db file sequential read WAIT #3: nam='db file sequential read' ela=
195 file#=4 block#=8 blocks=1 obj#=12373 tim=3652348434
----------------------
It looks like there were a total of seven single-block reads that the
session waited on, for a total of 0.072 seconds
----

From the TEST_WITH_INDEXES trace file, the wait events that appeared:
Wait Time Wait Event Wait Event Raw Details
0.079 db file sequential read WAIT #4: nam='db file sequential read'
ela= 79485 file#=4 block#=3295081 blocks=1 obj#=15076 tim=3735217458
0 db file sequential read WAIT #4: nam='db file sequential read' ela=
205 file#=4 block#=3295082 blocks=1 obj#=15076 tim=3735218917
0.013 db file scattered read WAIT #4: nam='db file scattered read'
ela= 12548 file#=4 block#=3295340 blocks=5 obj#=15077 tim=3735231979
0.009 db file scattered read WAIT #4: nam='db file scattered read'
ela= 8933 file#=4 block#=3295700 blocks=5 obj#=15078 tim=3735241489
0.012 db file scattered read WAIT #4: nam='db file scattered read'
ela= 11927 file#=4 block#=3295828 blocks=5 obj#=15079 tim=3735253891
0.003 db file scattered read WAIT #4: nam='db file scattered read'
ela= 2692 file#=4 block#=3296028 blocks=5 obj#=15080 tim=3735257040
0.011 db file scattered read WAIT #4: nam='db file scattered read'
ela= 11249 file#=4 block#=3296092 blocks=5 obj#=15081 tim=3735268791
0.013 db file scattered read WAIT #4: nam='db file scattered read'
ela= 12550 file#=4 block#=3296164 blocks=5 obj#=15083 tim=3735281801
0.003 db file scattered read WAIT #4: nam='db file scattered read'
ela= 3421 file#=4 block#=3296188 blocks=5 obj#=15086 tim=3735285609
0.002 db file scattered read WAIT #4: nam='db file scattered read'
ela= 2223 file#=4 block#=3296204 blocks=5 obj#=15087 tim=3735288147
0.002 db file scattered read WAIT #4: nam='db file scattered read'
ela= 2243 file#=4 block#=3296220 blocks=5 obj#=15088 tim=3735290688
0.004 db file scattered read WAIT #4: nam='db file scattered read'
ela= 3825 file#=4 block#=3296236 blocks=5 obj#=15089 tim=3735294815
0.013 db file scattered read WAIT #4: nam='db file scattered read'
ela= 12564 file#=4 block#=3296308 blocks=5 obj#=15090 tim=3735307829
0.016 db file scattered read WAIT #4: nam='db file scattered read'
ela= 16333 file#=4 block#=3296404 blocks=5 obj#=15091 tim=3735324649
0.001 db file scattered read WAIT #4: nam='db file scattered read'
ela= 1026 file#=4 block#=3296412 blocks=5 obj#=15092 tim=3735325925
0.001 db file scattered read WAIT #4: nam='db file scattered read'
ela= 982 file#=4 block#=3296420 blocks=5 obj#=15093 tim=3735327191
0.011 db file scattered read WAIT #4: nam='db file scattered read'
ela= 11291 file#=4 block#=3296484 blocks=5 obj#=15094 tim=3735338933
0.011 db file scattered read WAIT #4: nam='db file scattered read'
ela= 11297 file#=4 block#=3296548 blocks=5 obj#=15095 tim=3735350677
0 db file sequential read WAIT #4: nam='db file sequential read' ela=
458 file#=4 block#=3295698 blocks=1 obj#=15078 tim=3735356766
0 db file sequential read WAIT #4: nam='db file sequential read' ela=
252 file#=4 block#=3295697 blocks=1 obj#=15078 tim=3735357279
0.033 db file sequential read WAIT #4: nam='db file sequential read'
ela= 33085 file#=4 block#=3296306 blocks=1 obj#=15090 tim=3735440346
0 db file sequential read WAIT #4: nam='db file sequential read' ela=
196 file#=4 block#=3296305 blocks=1 obj#=15090 tim=3735440775
0.01 db file sequential read WAIT #4: nam='db file sequential read'
ela= 10447 file#=4 block#=3295338 blocks=1 obj#=15077 tim=3735453346
0 db file sequential read WAIT #4: nam='db file sequential read' ela=
215 file#=4 block#=3295337 blocks=1 obj#=15077 tim=3735453758
0.01 db file sequential read WAIT #4: nam='db file sequential read'
ela= 10359 file#=4 block#=3295826 blocks=1 obj#=15079 tim=3735475243
0.01 db file sequential read WAIT #4: nam='db file sequential read'
ela= 10359 file#=4 block#=3295826 blocks=1 obj#=15079 tim=3735475243
0 db file sequential read WAIT #4: nam='db file sequential read' ela=
253 file#=4 block#=3295825 blocks=1 obj#=15079 tim=3735475696
0.006 db file sequential read WAIT #4: nam='db file sequential read'
ela= 5589 file#=4 block#=3296234 blocks=1 obj#=15089 tim=3735482850
0 db file sequential read WAIT #4: nam='db file sequential read' ela=
218 file#=4 block#=3296233 blocks=1 obj#=15089 tim=3735483299
0.002 db file sequential read WAIT #4: nam='db file sequential read'
ela= 1916 file#=4 block#=3296026 blocks=1 obj#=15080 tim=3735489517
0.011 db file sequential read WAIT #4: nam='db file sequential read'
ela= 10727 file#=4 block#=3296025 blocks=1 obj#=15080 tim=3735500455
0.011 db file sequential read WAIT #4: nam='db file sequential read'
ela= 10945 file#=4 block#=3296090 blocks=1 obj#=15081 tim=3735512137
0 db file sequential read WAIT #4: nam='db file sequential read' ela=
330 file#=4 block#=3296089 blocks=1 obj#=15081 tim=3735512661
0.012 db file sequential read WAIT #4: nam='db file sequential read'
ela= 11793 file#=4 block#=3296418 blocks=1 obj#=15093 tim=3735526329
0 db file sequential read WAIT #4: nam='db file sequential read' ela=
283 file#=4 block#=3296417 blocks=1 obj#=15093 tim=3735526806
0 db file sequential read WAIT #4: nam='db file sequential read' ela=
266 file#=4 block#=3296482 blocks=1 obj#=15094 tim=3735527871
0 db file sequential read WAIT #4: nam='db file sequential read' ela=
271 file#=4 block#=3296481 blocks=1 obj#=15094 tim=3735528379
0 db file sequential read WAIT #4: nam='db file sequential read' ela=
376 file#=4 block#=3296546 blocks=1 obj#=15095 tim=3735529787
0 db file sequential read WAIT #4: nam='db file sequential read' ela=
306 file#=4 block#=3296545 blocks=1 obj#=15095 tim=3735530308
0.023 db file scattered read WAIT #4: nam='db file scattered read'
ela= 22865 file#=4 block#=3296172 blocks=5 obj#=15084 tim=3735560714
0.077 db file scattered read WAIT #4: nam='db file scattered read'
ela= 77052 file#=4 block#=3296156 blocks=5 obj#=15082 tim=3735658131
0.077 db file scattered read WAIT #4: nam='db file scattered read'
ela= 77052 file#=4 block#=3296156 blocks=5 obj#=15082 tim=3735658131
0.123 db file sequential read WAIT #4: nam='db file sequential read'
ela= 122864 file#=4 block#=3296186 blocks=1 obj#=15086 tim=3735917834
0.123 db file sequential read WAIT #4: nam='db file sequential read'
ela= 122864 file#=4 block#=3296186 blocks=1 obj#=15086 tim=3735917834
0.121 db file sequential read WAIT #4: nam='db file sequential read'
ela= 120743 file#=4 block#=3296185 blocks=1 obj#=15086 tim=3736038702
0.135 db file sequential read WAIT #4: nam='db file sequential read'
ela= 134798 file#=4 block#=3296218 blocks=1 obj#=15088 tim=3736222728
0.135 db file sequential read WAIT #4: nam='db file sequential read'
ela= 134798 file#=4 block#=3296218 blocks=1 obj#=15088 tim=3736222728
0.013 db file sequential read WAIT #4: nam='db file sequential read'
ela= 12658 file#=4 block#=3296217 blocks=1 obj#=15088 tim=3736235504
0.27 log buffer space WAIT #4: nam='log buffer space' ela= 270380 p1=0
p2=0 p3=0 obj#=15088 tim=3736880230
0.27 log buffer space WAIT #4: nam='log buffer space' ela= 270380 p1=0
p2=0 p3=0 obj#=15088 tim=3736880230
0.402 log buffer space WAIT #4: nam='log buffer space' ela= 402235
p1=0 p2=0 p3=0 obj#=15088 tim=3737410490
0.402 log buffer space WAIT #4: nam='log buffer space' ela= 402235
p1=0 p2=0 p3=0 obj#=15088 tim=3737410490
0.17 db file sequential read WAIT #4: nam='db file sequential read'
ela= 170460 file#=4 block#=3296202 blocks=1 obj#=15087 tim=3737612923
0.17 db file sequential read WAIT #4: nam='db file sequential read'
ela= 170460 file#=4 block#=3296202 blocks=1 obj#=15087 tim=3737612923
0 db file sequential read WAIT #4: nam='db file sequential read' ela=
346 file#=4 block#=3296201 blocks=1 obj#=15087 tim=3737613373
0.041 log buffer space WAIT #4: nam='log buffer space' ela= 40699 p1=0
p2=0 p3=0 obj#=15087 tim=3737756910
0.041 log buffer space WAIT #4: nam='log buffer space' ela= 40699 p1=0
p2=0 p3=0 obj#=15087 tim=3737756910
0.153 db file sequential read WAIT #4: nam='db file sequential read'
ela= 153445 file#=4 block#=3296154 blocks=1 obj#=15082 tim=3738030523
0.153 db file sequential read WAIT #4: nam='db file sequential read'
ela= 153445 file#=4 block#=3296154 blocks=1 obj#=15082 tim=3738030523
0 db file sequential read WAIT #4: nam='db file sequential read' ela=
316 file#=4 block#=3296153 blocks=1 obj#=15082 tim=3738030946
0.15 log buffer space WAIT #4: nam='log buffer space' ela= 150208 p1=0
p2=0 p3=0 obj#=15082 tim=3738192472
0.018 log buffer space WAIT #4: nam='log buffer space' ela= 18194 p1=0
p2=0 p3=0 obj#=15082 tim=3738587208
0.018 log buffer space WAIT #4: nam='log buffer space' ela= 18194 p1=0
p2=0 p3=0 obj#=15082 tim=3738587208
0.405 db file sequential read WAIT #4: nam='db file sequential read'
ela= 404755 file#=4 block#=3296402 blocks=1 obj#=15091 tim=3739010542
0.405 db file sequential read WAIT #4: nam='db file sequential read'
ela= 404755 file#=4 block#=3296402 blocks=1 obj#=15091 tim=3739010542
0 db file sequential read WAIT #4: nam='db file sequential read' ela=
295 file#=4 block#=3296401 blocks=1 obj#=15091 tim=3739010977
0.178 db file sequential read WAIT #4: nam='db file sequential read'
ela= 178125 file#=4 block#=3296410 blocks=1 obj#=15092 tim=3739204175
0.178 db file sequential read WAIT #4: nam='db file sequential read'
ela= 178125 file#=4 block#=3296410 blocks=1 obj#=15092 tim=3739204175
0 db file sequential read WAIT #4: nam='db file sequential read' ela=
284 file#=4 block#=3296409 blocks=1 obj#=15092 tim=3739204575
0 db file sequential read WAIT #4: nam='db file sequential read' ela=
466 file#=4 block#=3296162 blocks=1 obj#=15083 tim=3739212628
0 db file sequential read WAIT #4: nam='db file sequential read' ela=
466 file#=4 block#=3296162 blocks=1 obj#=15083 tim=3739212628
0 db file sequential read WAIT #4: nam='db file sequential read' ela=
206 file#=4 block#=3296161 blocks=1 obj#=15083 tim=3739212885
0.298 log buffer space WAIT #4: nam='log buffer space' ela= 297733
p1=0 p2=0 p3=0 obj#=15083 tim=3739742943
0.298 log buffer space WAIT #4: nam='log buffer space' ela= 297733
p1=0 p2=0 p3=0 obj#=15083 tim=3739742943
0.933 log buffer space WAIT #4: nam='log buffer space' ela= 932660
p1=0 p2=0 p3=0 obj#=15083 tim=3740714395
0.587 log buffer space WAIT #4: nam='log buffer space' ela= 586746
p1=0 p2=0 p3=0 obj#=15083 tim=3741542848
0.587 log buffer space WAIT #4: nam='log buffer space' ela= 586746
p1=0 p2=0 p3=0 obj#=15083 tim=3741542848
0.818 log buffer space WAIT #4: nam='log buffer space' ela= 817764
p1=0 p2=0 p3=0 obj#=15083 tim=3742397903
0.818 log buffer space WAIT #4: nam='log buffer space' ela= 817764
p1=0 p2=0 p3=0 obj#=15083 tim=3742397903
1 log buffer space WAIT #4: nam='log buffer space' ela= 1000039 p1=0
p2=0 p3=0 obj#=15083 tim=3743638629
0.031 log buffer space WAIT #4: nam='log buffer space' ela= 31071 p1=0
p2=0 p3=0 obj#=15083 tim=3743839418
0.845 log buffer space WAIT #4: nam='log buffer space' ela= 844796
p1=0 p2=0 p3=0 obj#=15083 tim=3744726107
0.845 log buffer space WAIT #4: nam='log buffer space' ela= 844796
p1=0 p2=0 p3=0 obj#=15083 tim=3744726107
----------------------------------------
Totals from the above:
2.653 db file sequential read
0.302 db file scattered read
8.671 log buffer space

0.072 seconds is definitely less than 11.626 seconds. This looks like
a severe penalty from having the indexes in place - 161.5 times longer
to insert the same data. I did not test update performance, although
I would anticipate that an individual update of a single row on all columns
in this table by a "screen painter" that automatically updates all
columns, regardless of whether or not the columns changed, would take
roughly 57 times longer than if no indexes existed on the table:
1:(1+19*3). This increased time may be small compared to the improved
query performance for end users.

The above was an isolated test case, and may very well exhibit
different behavior in other environments.

Mladen, thanks for prompting the test. I was curious about this also.
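For readers without an Oracle instance handy, the shape of this test reproduces in miniature with sqlite3 (a stand-in, not Oracle, and the 19 generic integer columns are invented): bulk-insert the same rows into a bare table and into a table carrying one index per column, and compare the wall-clock times.

```python
import sqlite3
import time

# Two identical 19-column tables; "indexed" gets one index per column,
# echoing the T3_1..T3_19 indexes in the test above.
conn = sqlite3.connect(":memory:")
cols = [f"c{i}" for i in range(19)]
col_defs = ", ".join(f"{c} INTEGER" for c in cols)
conn.execute(f"CREATE TABLE bare ({col_defs})")
conn.execute(f"CREATE TABLE indexed ({col_defs})")
for i, c in enumerate(cols):
    conn.execute(f"CREATE INDEX ix_{i} ON indexed({c})")

rows = [tuple(range(n, n + 19)) for n in range(30000)]
placeholders = ", ".join("?" * 19)

t0 = time.perf_counter()
conn.executemany(f"INSERT INTO bare VALUES ({placeholders})", rows)
t_bare = time.perf_counter() - t0

t0 = time.perf_counter()
conn.executemany(f"INSERT INTO indexed VALUES ({placeholders})", rows)
t_indexed = time.perf_counter() - t0

print(f"bare: {t_bare:.3f}s  indexed: {t_indexed:.3f}s")
```

The absolute numbers mean little (no redo log, no DBWR, single process), but the direction matches the trace files above: maintaining 19 B-tree structures per inserted row dominates the cost.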

DA Morgan
Apr 8, 2007, 8:55:59 PM
Charles Hooper wrote:

> I have also seen the statement written in books, blogs, and articles
> stating that having too many indexes on tables causes performance
> problems, essentially that a table update that would have required one
> I/O is turned into one I/O plus three or four I/Os per index (or maybe
> it was two to three I/Os per index).

And if you want to see it really get ugly ... add a bunch of indexes
and rollback the transaction.
--
Daniel A. Morgan
University of Washington
damo...@x.washington.edu
(replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org

Mladen Gogala
Apr 9, 2007, 12:36:13 AM
On Sun, 08 Apr 2007 17:17:07 -0700, Charles Hooper wrote:

> Mladen, thanks for prompting the test. I was curious about this also.
>
> Charles Hooper
> PC Support Specialist
> K&M Machine-Fabricating, Inc.


Charles, I did my own brief test, much less refined than yours, but
nevertheless confirming your results. Here is the test; I believe that
it speaks for itself:

SQL> create table no_ind_tst(c1 number(10),c2 varchar2(5));

Table created.

SQL> create table ind_txt(c1 number(10) primary key,c2 varchar2(5));

Table created.

SQL> set timing on
SQL> declare
2 ind number:=0;
3 begin
4 for i in 1..1000000
5 loop
6 insert into no_ind_tst values(i,'TEST');
7 end loop;
8 commit;
9 end;
10 /

PL/SQL procedure successfully completed.

Elapsed: 00:01:06.07
SQL> declare
2 ind number:=0;
3 begin
4 for i in 1..1000000
5 loop
6 insert into ind_txt values(i,'TEST');
7 end loop;
8 commit;
9 end;
10 /

PL/SQL procedure successfully completed.

Elapsed: 00:01:46.50
SQL>

SQL> select 106.5/66.07 from dual;

106.5/66.07
-----------
1.61192674

Elapsed: 00:00:00.01
SQL>


That means that the run time of the insert into the table with the
index is 61% longer than without the index. I believe that it can make
a huge difference when loading huge data sets. Typically, something like
inserting a million records into the database would not be done during
the peak usage time. For an on-line update from an interactive
application, the difference practically doesn't exist. For a nightly batch
job, the difference may be huge.

--
http://www.mladen-gogala.com

Mladen Gogala
Apr 9, 2007, 12:38:27 AM
On Mon, 09 Apr 2007 04:36:13 +0000, Mladen Gogala wrote:

> For a nightly batch
> job, the difference may be huge.

BTW, cleanup also showed a huge difference:
SQL> drop table ind_txt purge;

Table dropped.

Elapsed: 00:00:03.48
SQL> drop table no_ind_tst purge;

Table dropped.

Elapsed: 00:00:00.09
SQL>


--
http://www.mladen-gogala.com

Steve Adams
Apr 9, 2007, 1:13:17 AM
Mladen Gogala @ 9/04/2007 3:17 AM:
Hi All,

I was just cruising by to see what might be being said about oracle-l
when I noticed the post above. I see that Charles has already shown that
there is indeed a very significant impact on I/O, because the foreground
processes doing the inserts and deletes need to read extra index blocks
into the buffer cache prior to each change. This may not involve real
physical I/O, but it does require latching if nothing else.

What has not yet been said is that in OLTP environments over-indexing
also limits concurrency in redo generation and in the application of
changes to the data blocks. Because the redo records have more change
vectors, and more buffer locks and thus more latches need to be taken
for change application, the redo latches and buffer locks are held for
longer than they would otherwise be. The increased latching and locking
and the increased latch and lock retention in both the log buffer and
the buffer cache reduce scalability.

@ Regards,
@ Steve Adams
@ http://www.ixora.com.au/ - For DBAs
@ http://www.christianity.net.au/ - For all

Mladen Gogala
Apr 9, 2007, 8:03:54 AM
On Mon, 09 Apr 2007 15:13:17 +1000, Steve Adams wrote:

> I was just cruising by to see what might be being said about oracle-l

LOOL!

--
http://www.mladen-gogala.com

Charles Hooper
Apr 9, 2007, 8:53:25 AM
On Apr 9, 12:36 am, Mladen Gogala <mgogala.SPAM_ME....@verizon.net>
wrote:

> On Sun, 08 Apr 2007 17:17:07 -0700, Charles Hooper wrote:
> > Mladen, thanks for prompting the test. I was curious about this also.
>
> > Charles Hooper
> > PC Support Specialist
> > K&M Machine-Fabricating, Inc.
>
> Charles, I did my own brief test, much less refined than yours, but
> nevertheless confirming your results. Here is the test, I believe that
> it speaks for itself:
>
> SQL> create table no_ind_tst(c1 number(10),c2 varchar2(5));
>
> Table created.
>
> SQL> create table ind_txt(c1 number(10) primary key,c2 varchar2(5));
>
> Table created.
(SNIP)

> That means that the run time of the insert into the table with the
> index is 61% longer than without the index. I believe that it can make
> a huge difference when loading huge data sets. Typically, something like
> inserting a million records into the database would not be done during
> the peak usage time. For an on-line update from an interactive
> application, the difference practically doesn't exist. For a nightly batch
> job, the difference may be huge.
>
> --http://www.mladen-gogala.com

Nice short test you provided - easy to reproduce.

I ran your test here - mainly because I was curious whether the times
of the two tests were being distorted by 1,000,000 hard parses, but
that was not the case - the 1,000,000 inserts produced closer to 2
hard parses. Elapsed: 00:00:46.83 for the first test and
Elapsed: 00:01:49.67 for the second. I then considered the
possibility that the index created for the primary key was being
adversely affected by the primary key's unique requirement, so I
dropped the two tables, removed the primary key specification, and
created a non-unique index on the same column. My timings for those
runs were Elapsed: 00:00:48.55 for the first test run and
Elapsed: 00:01:48.26 for the second. The primary key requirement
had a slight impact, but not as much as I expected.

Thanks again for providing the test case.
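Since the insert loop itself was snipped above, here is a minimal sketch of the kind of comparison being timed - the loop body, row count, and index name are reconstructions, not the original code:

```sql
-- Sketch of the timing comparison: same PL/SQL loop, with and without an index.
SET TIMING ON

CREATE TABLE no_ind_tst (c1 NUMBER(10), c2 VARCHAR2(5));
CREATE TABLE ind_tst    (c1 NUMBER(10), c2 VARCHAR2(5));
CREATE INDEX ind_tst_i1 ON ind_tst (c1);  -- non-unique, as in the second run above

BEGIN
  FOR i IN 1 .. 1000000 LOOP
    INSERT INTO no_ind_tst VALUES (i, 'ABCDE');  -- static SQL is bound automatically
  END LOOP;
  COMMIT;
END;
/

BEGIN
  FOR i IN 1 .. 1000000 LOOP
    INSERT INTO ind_tst VALUES (i, 'ABCDE');     -- same loop; index maintained per row
  END LOOP;
  COMMIT;
END;
/

-- Confirm the hard-parse count stays negligible:
SELECT name, value FROM v$sysstat WHERE name = 'parse count (hard)';
```

PL/SQL static SQL uses bind variables under the covers, which is why a million inserts only hard-parse a couple of times.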

hpuxrac

unread,
Apr 9, 2007, 11:08:53 AM4/9/07
to
On Apr 9, 12:36 am, Mladen Gogala <mgogala.SPAM_ME....@verizon.net>
wrote:

Does this mean that you have disproven and are now disavowing your
statements made earlier in this thread?

Mladen Gogala

unread,
Apr 9, 2007, 11:36:12 AM4/9/07
to
On Mon, 09 Apr 2007 08:08:53 -0700, hpuxrac wrote:

> Does this mean that you have disproven and are now disavowing your
> statements made earlier in this thread?

No, it means that you don't know how to read and are, therefore,
functionally illiterate.

--
http://www.mladen-gogala.com

hpuxrac

unread,
Apr 9, 2007, 12:40:57 PM4/9/07
to
On Apr 9, 11:36 am, Mladen Gogala <mgogala.SPAM-ME....@verizon.net>
wrote:

You wrote this earlier in this thread talking about defining
additional indexes ...

... Charles, this is the line I frequently find in many books, CBT and
manuals, and yet I have never seen insert or delete slowed down to
unacceptable levels because of too many indexes. The only method to
diagnose that this is indeed happening would be to observe a
significant increase in average I/O time on the underlying data file.
Again, I've never even seen this happening. I believe that this thing
with too many indexes is dangerous only in the extreme situations, and
it is very hard to diagnose because the process that waits for writing
the index blocks is DBWR, so the users never wait for the blocks to be
written. Users may wait for checkpoints or log file sync but not for
the index

Mladen, what you wrote just doesn't make sense. Any time you add
indexes to any table you affect, to some degree, the scalability of an
application. Tom Kyte's test harness is an easy way to look at the
impact of adding indexes. A 10046 trace is another way.

Charles questioned what you wrote and put in a simple test. You
appeared to confirm the results noted by Charles yourself in a later
post.


Mladen Gogala

unread,
Apr 9, 2007, 2:10:05 PM4/9/07
to
On Mon, 09 Apr 2007 09:40:57 -0700, hpuxrac wrote:

> Mladen, what you wrote just doesn't make sense. Any time you add indexes
> to any table you affect, to some degree, the scalability of an
> application. Tom Kyte's test harness is an easy way to look at the
> impact of adding indexes. A 10046 trace is another way.

The 10046 trace is not a good way, as it doesn't show writes to indexes.
The only writes done by the user process are direct writes. In other
words, you don't see much of the impact in a 10046 trace. The only way
to measure the overall impact of an index is timing, which is precisely
what my test did. That is what "set timing on" was all about. Look at
Charles's trace: you don't see anything like single-block writes,
indicative of index maintenance. The only thing you see is an increased
wait for log file sync, but you cannot quantify that.

Furthermore, my statement said that I've never seen an unacceptable
impact. Mostly, for OLTP transactions, indexes will extend the
transaction duration from 300 milliseconds to 500 milliseconds, i.e.
half a second. Most users will not complain about an extra 1/5 of a
second. The only case when this impact becomes significant is when we
start working with large batch transactions, usually scheduled not to
run during peak usage. If a load runs from 1-3 AM instead of 1-2 AM,
that is a big difference performance-wise, but I will not give it a
second thought because it doesn't have any business impact.
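For completeness, the two measurements being contrasted here - wall-clock timing and a 10046 wait-event trace - can both be enabled from the same SQL*Plus session; the tracefile identifier below is just an illustrative name:

```sql
-- Wall-clock timing, as used in the test:
SET TIMING ON

-- 10046 level 8 trace (SQL statements plus wait events) for this session:
ALTER SESSION SET tracefile_identifier = 'ind_test';  -- illustrative name
ALTER SESSION SET events '10046 trace name context forever, level 8';

-- ... run the inserts here ...

ALTER SESSION SET events '10046 trace name context off';
```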

--
http://www.mladen-gogala.com

Jonathan Lewis

unread,
Apr 9, 2007, 2:42:09 PM4/9/07
to
"Mladen Gogala" <mgogala.S...@verizon.net> wrote in message
news:pan.2007.04...@verizon.net...
> Mostly, for the OLTP transactions, indexes will extend
> the
> transaction duration from 300 milisec to 500 milisec or 1/2 of a second.
> Most users will not complain about 1/5 of a second.

It's not always that simple.

Inserts into tables tend to be focused over a very small
number of blocks, which are therefore cached quite well.

Inserts into indexes are (except for your sequential key
example) more likely to be scattered randomly across
all leaf blocks. Unless you have a large enough memory
to buffer most of the leaf blocks of most of the indexes,
then inserts into (cached) table blocks cause increasing
numbers of physical reads of index leaf blocks as the
number of indexes increases. An insert into a table with
10 indexes could easily result in 9 or 10 physical read
requests - which shouldn't affect the performance for a
relatively low insert rate; but if you push the concurrency
up you will find a lot of interference between inserts as
the discs fail to respond to a high demand for random I/O.

It is still true that you should be cautious about adding
indexes because of the maintenance overheads; but the
balance of power has changed over the years, and can
be quite hard to determine in general, due to the opposing
effects of increased memory, and decreased spindle counts.
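One cheap way to reproduce the scattered-versus-sequential effect without inventing random keys is a REVERSE key index on the same sequence-generated column; a sketch (the table and index names here are made up):

```sql
-- Sequential keys: inserts hit the current right-hand leaf block, which caches well.
CREATE TABLE t_seq  (id NUMBER, pad VARCHAR2(5));
CREATE INDEX t_seq_i  ON t_seq  (id);

-- Reversed key bytes: the same sequential keys now scatter across all
-- leaf blocks, modelling the random-arrival pattern described above.
CREATE TABLE t_rand (id NUMBER, pad VARCHAR2(5));
CREATE INDEX t_rand_i ON t_rand (id) REVERSE;
```

With a buffer cache much smaller than the index, the reversed case tends to show the extra 'db file sequential read' waits on leaf blocks.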


--
Regards

Jonathan Lewis
http://jonathanlewis.wordpress.com

Author: Cost Based Oracle: Fundamentals
http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html

The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/faq/ind_faq.html

Mladen Gogala

unread,
Apr 9, 2007, 4:10:35 PM4/9/07
to
On Mon, 09 Apr 2007 19:42:09 +0100, Jonathan Lewis wrote:

>
> Inserts into indexes are (except for your sequential key example) more
> likely to be scattered randomly across all leaf blocks.

Jonathan, that depends on the application. Especially in the OPS world,
I remember going to great lengths to ensure that there were enough
FREELIST_GROUPS, precisely so that people were not inserting into the
same blocks. The ITL lists were also extremely important. Having too
many people insert rows into the same block was a constant fear.

> Unless you have
> a large enough memory to buffer most of the leaf blocks of most of the
> indexes,

You will probably not be surprised, but in these days of gigantic
memories that is very frequently the case, especially if the DBA is
using STATSPACK as a tuning tool. You tend to see cache hit ratios of
99.99% in spreport.txt whenever that is the case. You'd probably be
surprised to learn how many DBAs do not have a clue about event-based
tuning and response-time tuning and are still using the BCHR as an
important metric for measuring system performance.


> then inserts into (cached) table blocks cause increasing
> numbers of physical reads of index leaf blocks as the number of indexes
> increases. An insert into a table with 10 indexes could easily result in
> 9 or 10 physical read requests - which shouldn't affect the performance
> for a relatively low insert rate; but if you push the concurrency up you
> will find a lot of interference between inserts as the discs fail to
> respond to a high demand for random I/O.
>
> It is still true that you should be cautious about adding indexes
> because of the maintenance overheads; but the balance of power has
> changed over the years , and can be quite hard to determine in general,
> due to the opposing effects of increased memory, and decreased spindle
> counts.
>

Oh, no doubt about that, but the negative impact of indexes was an
important boogeyman in the tales of the Oracle crypt. In reality,
there are many other factors that will hit an application before
over-indexing does. C'ya on May 16th.

--
http://www.mladen-gogala.com

joel garry

unread,
Apr 9, 2007, 4:43:42 PM4/9/07
to
On Apr 9, 11:10 am, Mladen Gogala <mgogala.SPAM-ME....@verizon.net>
wrote:

I've seen business impact because the batch needs to fit into a
window. Interestingly enough, I was only looking at that because a
report running off that same table took too long in its online
version. On investigation, it turned out the real problem was that an
in-memory table was being used contrary to its documentation, a
consequence of that database-independent thing. But fixing that made
me a hero and led to thousands of hours of other work. They hadn't
even realized there was a fix for the business impact, and wouldn't
have known to ask about it, or who to ask (which is my major complaint
about Method-R - sometimes people just think [or are told] "that's
just the way it is" or otherwise make invalid decisions about ordering
business-impact importance). Subsequent hardware improvements made it
all moot, anyway. So you are kind of wrong and kind of right, since on
a saturated system something that adds a small effect (such as the
scattered index inserts Jonathan posted about) can rapidly deteriorate
performance.

Did make the idiots who couldn't read doco mad at me, though.

jg
--
@home.com is bogus.
I want my flying car! http://news.com.com/2300-11389_3-6173182-2.html

Charles Hooper

unread,
Apr 9, 2007, 4:52:45 PM4/9/07
to
On Apr 9, 2:10 pm, Mladen Gogala <mgogala.SPAM-ME....@verizon.net>
wrote:

This is an interesting discussion (I find that it is helping my
understanding). I performed an additional translation of the wait
events in the TEST_WITH_INDEXES trace file to identify the objects
indicated in the wait events. Manually navigating the 10046 trace
file, you might see the following recursive statements at dep=1:
select file# from file$ where ts#=:1
update tsq$ set blocks=:3,maxblocks= ...
update seg$ set type#=:4,blocks=:5,extents= ...

True, there were no writes indicated in the wait events. We can still
see where these recursive statements apparently affect execution
performance - I can tie each of the above three statements to a
particular group of wait events listed below.
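For anyone repeating the translation by hand: the p1/p2 parameters of a 'db file sequential read' wait are the file# and block#, which can be mapped to a segment with a query like this sketch (slow on big databases, since DBA_EXTENTS is expensive to scan):

```sql
-- Map a wait's file# (:f) and block# (:b) to its owning segment.
SELECT owner, segment_name, segment_type
FROM   dba_extents
WHERE  file_id = :f
AND    :b BETWEEN block_id AND block_id + blocks - 1;
```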

Translation of the TEST_WITH_INDEXES trace file wait events (with time
delta from the start of the trace file):
Time   Wait   Wait Event               Object             Block    Blocks Read
0.088  0.079  db file sequential read  TABLE OWNER.T3     3295081  1
0.089  0.000  db file sequential read  TABLE OWNER.T3     3295082  1
0.102  0.013  db file scattered read   INDEX OWNER.T3_1   3295340  5
0.112  0.009  db file scattered read   INDEX OWNER.T3_2   3295700  5
0.124  0.012  db file scattered read   INDEX OWNER.T3_3   3295828  5
0.127  0.003  db file scattered read   INDEX OWNER.T3_4   3296028  5
0.139  0.011  db file scattered read   INDEX OWNER.T3_5   3296092  5
0.152  0.013  db file scattered read   INDEX OWNER.T3_7   3296164  5
0.156  0.003  db file scattered read   INDEX OWNER.T3_10  3296188  5
0.158  0.002  db file scattered read   INDEX OWNER.T3_11  3296204  5
0.161  0.002  db file scattered read   INDEX OWNER.T3_12  3296220  5
0.165  0.004  db file scattered read   INDEX OWNER.T3_13  3296236  5
0.178  0.013  db file scattered read   INDEX OWNER.T3_14  3296308  5
0.195  0.016  db file scattered read   INDEX OWNER.T3_15  3296404  5
0.196  0.001  db file scattered read   INDEX OWNER.T3_16  3296412  5
0.197  0.001  db file scattered read   INDEX OWNER.T3_17  3296420  5
0.209  0.011  db file scattered read   INDEX OWNER.T3_18  3296484  5
0.221  0.011  db file scattered read   INDEX OWNER.T3_19  3296548  5
0.227  0.000  db file sequential read  INDEX OWNER.T3_2   3295698  1
0.228  0.000  db file sequential read  INDEX OWNER.T3_2   3295697  1
0.311  0.033  db file sequential read  INDEX OWNER.T3_14  3296306  1
0.311  0.000  db file sequential read  INDEX OWNER.T3_14  3296305  1
0.324  0.010  db file sequential read  INDEX OWNER.T3_1   3295338  1
0.324  0.000  db file sequential read  INDEX OWNER.T3_1   3295337  1
0.345  0.010  db file sequential read  INDEX OWNER.T3_3   3295826  1
0.346  0.000  db file sequential read  INDEX OWNER.T3_3   3295825  1
0.353  0.006  db file sequential read  INDEX OWNER.T3_13  3296234  1
0.354  0.000  db file sequential read  INDEX OWNER.T3_13  3296233  1
0.360  0.002  db file sequential read  INDEX OWNER.T3_4   3296026  1
0.371  0.011  db file sequential read  INDEX OWNER.T3_4   3296025  1
0.382  0.011  db file sequential read  INDEX OWNER.T3_5   3296090  1
0.383  0.000  db file sequential read  INDEX OWNER.T3_5   3296089  1
0.397  0.012  db file sequential read  INDEX OWNER.T3_17  3296418  1
0.397  0.000  db file sequential read  INDEX OWNER.T3_17  3296417  1
0.398  0.000  db file sequential read  INDEX OWNER.T3_18  3296482  1
0.399  0.000  db file sequential read  INDEX OWNER.T3_18  3296481  1
0.400  0.000  db file sequential read  INDEX OWNER.T3_19  3296546  1
0.401  0.000  db file sequential read  INDEX OWNER.T3_19  3296545  1
0.431  0.023  db file scattered read   INDEX OWNER.T3_8   3296172  5
0.528  0.077  db file scattered read   INDEX OWNER.T3_6   3296156  5
0.788  0.123  db file sequential read  INDEX OWNER.T3_10  3296186  1
0.909  0.121  db file sequential read  INDEX OWNER.T3_10  3296185  1
1.093  0.135  db file sequential read  INDEX OWNER.T3_12  3296218  1
1.106  0.013  db file sequential read  INDEX OWNER.T3_12  3296217  1
1.750  0.270  log buffer space         INDEX OWNER.T3_12  0        0
2.281  0.402  log buffer space         INDEX OWNER.T3_12  0        0
2.483  0.170  db file sequential read  INDEX OWNER.T3_11  3296202  1
2.484  0.000  db file sequential read  INDEX OWNER.T3_11  3296201  1
2.627  0.041  log buffer space         INDEX OWNER.T3_11  0        0
2.901  0.153  db file sequential read  INDEX OWNER.T3_6   3296154  1
2.901  0.000  db file sequential read  INDEX OWNER.T3_6   3296153  1
3.063  0.150  log buffer space         INDEX OWNER.T3_6   0        0
3.457  0.018  log buffer space         INDEX OWNER.T3_6   0        0
3.881  0.405  db file sequential read  INDEX OWNER.T3_15  3296402  1
3.881  0.000  db file sequential read  INDEX OWNER.T3_15  3296401  1
4.074  0.178  db file sequential read  INDEX OWNER.T3_16  3296410  1
4.075  0.000  db file sequential read  INDEX OWNER.T3_16  3296409  1
4.083  0.000  db file sequential read  INDEX OWNER.T3_7   3296162  1
4.083  0.000  db file sequential read  INDEX OWNER.T3_7   3296161  1
4.613  0.298  log buffer space         INDEX OWNER.T3_7   0        0
5.585  0.933  log buffer space         INDEX OWNER.T3_7   0        0
6.413  0.587  log buffer space         INDEX OWNER.T3_7   0        0
7.268  0.818  log buffer space         INDEX OWNER.T3_7   0        0
8.509  1.000  log buffer space         INDEX OWNER.T3_7   0        0
8.710  0.031  log buffer space         INDEX OWNER.T3_7   0        0
9.596  0.845  log buffer space         INDEX OWNER.T3_7   0        0
