
ASSM bug: slow INSERT after uncommitted DELETE


ca111026

Aug 7, 2009, 6:35:46 AM
In my previous post I mentioned an ASSM bug (present in 10.2.0.4) that
makes single-row INSERTs extremely slow when the table contains a large
uncommitted DELETE. Here is the test case:

Tests were run on AIX 5.3 (64-bit) and Red Hat Linux (32-bit),
Oracle 10.2.0.4

1. Create a tablespace; it uses the default 8K block size

create tablespace assm
extent management local uniform size 1m
segment space management auto
datafile
'/abc/db01/oracle/ABC1P/oradata/assm_01.dbf' size 1000m;


2. Create table


create table test_assm
(
n1 number,
v1 varchar2(50),
v2 varchar2(50),
v3 varchar2(50),
v4 varchar2(50),
v5 varchar2(50),
v6 varchar2(50),
v7 varchar2(50),
v8 varchar2(50),
v9 varchar2(50),
v10 varchar2(50)
)
tablespace assm;


3. Populate table with 1,000,000 rows, COMMIT at the end


begin
for i in 1..1000000 loop
insert into test_assm values
(i,
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567');


end loop;
end;
/


COMMIT;


4. Insert an additional 1000 rows into the table using ***SINGLE-ROW***
inserts.
I used the following script to generate the INSERT statements:


select
'insert into test_assm values(' || n1 ||
',''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'');'
from
test_assm
where
rownum < 1001;
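
One way to capture the generated statements as a runnable script is to
spool the select's output from SQL*Plus (a sketch - the file name and
linesize are just examples):

set heading off feedback off pagesize 0 linesize 600 trimspool on
spool c:\insertstatements.sql
-- run the select shown above
spool off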


It took 1 second to insert 1000 rows through single-row inserts.


5. Delete all rows from the table, don't commit
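
(That is, in the session that loaded the table:)

delete from test_assm;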


6. Re-execute the script that inserts 1000 rows, from a different session.
Runtime > 20 min.


There were no indexes on the table.

An insert into a table containing an uncommitted DELETE should not be
significantly slower than an insert into a table without the DELETE.

When the test was repeated using a table in a non-ASSM tablespace
(still locally managed) there was no performance degradation
when the table contained an uncommitted DELETE.
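
For reference, the non-ASSM comparison tablespace differs only in the
last clause - a sketch (the tablespace name and datafile path here are
made up):

create tablespace non_assm
extent management local uniform size 1m
segment space management manual
datafile
'/abc/db01/oracle/ABC1P/oradata/non_assm_01.dbf' size 1000m;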


Cristian Cudizio

Aug 7, 2009, 8:40:09 AM

I've run this test.
Each insert in the second session really is very slow. I traced the
session, and Oracle scans (it seems) all the table blocks with "db file
sequential read" waits, while the first session doesn't.
Also, after a commit in the first session, inserts in the second
session are fast.
I'm just curious to know why Oracle performs those *consistent* gets on
an insert.

Regards,
Cristian

Jonathan Lewis

Aug 7, 2009, 8:50:35 AM

"Cristian Cudizio" <cristian...@yahoo.it> wrote in message
news:c8ba2866-cda9-45c8...@q14g2000vbi.googlegroups.com...


The problem is when the bitmap blocks are updated to show free space.
It's a difficult one to handle. In this case, Oracle has updated the bitmap
blocks as the rows were deleted, so all the bitmaps show that
the target blocks may have free space.

The second session comes in and reads the first hinted bitmap block,
which points it to a data block, which turns out to have an uncommitted
delete and doesn't have enough free space.

So the session works through every single bitmap block in turn, and each
bitmap block points to 64 or 128 table blocks which all "might" have space
but none of them do because the delete hasn't committed. Ultimately
you examine every single block in the table before you get past all the
blocks affected by the delete.
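
One rough way to watch this happen from a third session while the slow
insert is running (a sketch - you supply the inserting session's SID):

select event, total_waits, time_waited
from v$session_event
where sid = &inserting_sid
and event = 'db file sequential read';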


In an earlier version of a similar bug, Oracle DIDN'T mark the bitmap
blocks on the delete, and then failed to clean up, so ended up with lots
of free space in tables that couldn't be reached because the bitmaps
said the blocks were full.


Look on this as a demonstration that ASSM is supposed to help with
OLTP activities only - and DSS/DW bulk activity will cause problems.

(The trouble is, highly concurrent OLTP activity can still run into
problems with ASSM.)

--
Regards

Jonathan Lewis
http://jonathanlewis.wordpress.com

Author: Cost Based Oracle: Fundamentals
http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html

The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/faq/ind_faq.html


Cristian Cudizio

Aug 7, 2009, 9:51:50 AM
On 7 Aug, 14:50, "Jonathan Lewis" <jonat...@jlcomp.demon.co.uk> wrote:
> "Cristian Cudizio" <cristian.cudi...@yahoo.it> wrote in message
>
....
> Jonathan Lewis
> http://jonathanlewis.wordpress.com
>
> Author: Cost Based Oracle: Fundamentals
> http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html
>
> The Co-operative Oracle Users' FAQ
> http://www.jlcomp.demon.co.uk/faq/ind_faq.html

Thanks Jonathan,
so the first session (the one which made the deletes) is quicker because
it can use the first block directly?

Regards,
Cristian

ca111026

Aug 7, 2009, 5:47:32 PM
The example given above is not artificial. I work in an environment
where we have more than a hundred production databases, and Apps
Support is mostly outsourced/offshored. As databases grow the data
needs to be purged; some databases have partitioned tables but many
don't. So a DELETE needs to be run, often using a condition on a
column without an index, like LAST_UPDATED_DATE. So Apps Support runs
this delete, they make a mistake and delete too much, inserts are
affected, CPU usage/disk I/O goes through the roof, people start
blaming the SAN, SRDF gets switched to adaptive mode, etc. It is not
an easy problem to diagnose; normally when going through Statspack
reports people just skip past single-row insert statements as they
rarely experience problems.

Which brings me back to the original question - why use ASSM in the
first place? It does not look like it offers
any benefits (maybe ALTER TABLE SHRINK, but how often do you use it?).
Could someone point to a simple test
(however artificial) that shows an advantage of ASSM?

John Hurley

Aug 7, 2009, 7:26:05 PM
On Aug 7, 6:35 am, ca111026 <ca111...@gmail.com> wrote:

snip

Thanks for the complete writeup.

I tried your test case. My system is an 11.1.0.7 Enterprise Edition
64 bit system on OEL 5.2 64 bit. It has 32 gig of RAM and a very nice
IO subsystem. Using ASM for storage.

My times to complete steps 4 and 6 ( yes from different sessions )
were basically the same ... not different times as you reported. My
testing was done using Toad going into the database through the network,
not locally on the database server using sqlplus. My times were more
like 3 secs than 1 ( as your first step 4 was ).

I can repeat again eventually using sqlplus locally if you want to
know if that changes anything.

I do have one system back at 10.2.0.4 that I can eventually also test
on.

ca111026

Aug 8, 2009, 12:42:16 AM
Hi John,

Thanks for testing on 11.1.0.7. It is nice to know that they finally
fixed the bug.

Charles Hooper

Aug 8, 2009, 11:04:11 AM

My test in your related thread indicates that the problem may still be
present in 11.1.0.7:
http://groups.google.com/group/comp.databases.oracle.server/browse_thread/thread/289b13afe6cf8443

In my test, I saw results which were consistent with what you
experienced, which resulted from multiple waits on 'db file sequential
read' with sub 1ms access times for each insert in the second
session. However, due to the number of 'db file sequential read'
waits, a fairly consistent 69,802 per row inserted, each insert
required roughly 20 seconds.

John's post made me a bit curious, so I started performing a couple of
tests.

CREATE SMALLFILE TABLESPACE "LOCAL_UNIFORM1M" DATAFILE
'C:\ORACLE\ORADATA\OR11\locun1MOR1101.dbf' SIZE 2G REUSE AUTOEXTEND ON
NEXT 10M MAXSIZE UNLIMITED LOGGING EXTENT MANAGEMENT LOCAL UNIFORM
SIZE 1M SEGMENT SPACE MANAGEMENT MANUAL;

CREATE SMALLFILE TABLESPACE "LOCAL_ASSM" LOGGING DATAFILE
'C:\Oracle\OraData\OR11\locassmOR1101.dbf' SIZE 2G REUSE AUTOEXTEND ON
NEXT 10M MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE
MANAGEMENT AUTO;

SELECT TABLESPACE_NAME,
BLOCK_SIZE,ALLOCATION_TYPE,SEGMENT_SPACE_MANAGEMENT,EXTENT_MANAGEMENT
FROM DBA_TABLESPACES;

TABLESPACE_NAME BLOCK_SIZE ALLOCATIO SEGMEN EXTENT_MAN
--------------- ---------- --------- ------ ----------
LOCAL_UNIFORM1M 8192 UNIFORM MANUAL LOCAL
LOCAL_ASSM 8192 SYSTEM AUTO LOCAL

We now have a new locally managed tablespace with 1MB extents not
using ASSM, and another new tablespace using ASSM with autoallocated
extents (my original test used an old ASSM autoallocate tablespace
containing other data).

(Session 1)


create table test_assm
(
n1 number,
v1 varchar2(50),
v2 varchar2(50),
v3 varchar2(50),
v4 varchar2(50),
v5 varchar2(50),
v6 varchar2(50),
v7 varchar2(50),
v8 varchar2(50),
v9 varchar2(50),
v10 varchar2(50)
)

tablespace LOCAL_UNIFORM1M;

begin
for i in 1..1000000 loop
insert into test_assm values
(i,
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567');

end loop;
end;
/

COMMIT;

(build insertstatements.sql using the select statement provided by the
OP, which will include statements like the following:
insert into test_assm values
(15,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567');
insert into test_assm values
(16,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567');
insert into test_assm values
(17,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567');
insert into test_assm values
(18,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567');
insert into test_assm values
(19,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567');
...
)

@c:\insertstatements.sql

DELETE FROM test_assm;


(Session 2)
SET TIMING ON
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'FIND_ME_TEST_LOCAL1UM';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
@c:\insertstatements.sql
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';
EXIT

(All 1000 of the insert statements complete in less than a second.)
Reconnect session 2

Using the LOCAL_UNIFORM1M tablespace, the insert completed in less
than a second.

Repeated the test with the KEEP pool at 1MB, which allowed the default
buffer pool to grow:
(Session 1)
DROP TABLE TEST_ASSM PURGE;

create table test_assm
(
n1 number,
v1 varchar2(50),
v2 varchar2(50),
v3 varchar2(50),
v4 varchar2(50),
v5 varchar2(50),
v6 varchar2(50),
v7 varchar2(50),
v8 varchar2(50),
v9 varchar2(50),
v10 varchar2(50)
)

tablespace LOCAL_ASSM;

begin
for i in 1..1000000 loop
insert into test_assm values
(i,
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567');

end loop;
end;
/

COMMIT;

@c:\insertstatements.sql

DELETE FROM test_assm;


(Session 2)
SET TIMING ON
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'FIND_ME_TEST_LOCALAM';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
@c:\insertstatements.sql
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';
EXIT

(Each insert statement reported an elapsed time of 0.14 to 0.15
seconds.)
Reconnect session 2


__DB_CACHE_SIZE floated to 7,449,083,904

This test run was significantly faster than the test run performed
yesterday.

Repeating the test again with a smaller __DB_CACHE_SIZE:
(Session 1)
DROP TABLE TEST_ASSM PURGE;

ALTER SYSTEM SET DB_KEEP_CACHE_SIZE=6G;

create table test_assm
(
n1 number,
v1 varchar2(50),
v2 varchar2(50),
v3 varchar2(50),
v4 varchar2(50),
v5 varchar2(50),
v6 varchar2(50),
v7 varchar2(50),
v8 varchar2(50),
v9 varchar2(50),
v10 varchar2(50)
)

tablespace LOCAL_ASSM;

begin
for i in 1..1000000 loop
insert into test_assm values
(i,
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567');

end loop;
end;
/

COMMIT;

@c:\insertstatements.sql

DELETE FROM test_assm;


(Session 2)
SET TIMING ON
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'FIND_ME_TEST_LOCALAM2';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
@c:\insertstatements.sql
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';
EXIT

(Each insert statement after the first reported an elapsed time of
0.14 to 0.15 seconds.)

__DB_CACHE_SIZE floated to 1,073,741,824

The execution time was about the same as with the larger
__DB_CACHE_SIZE. Apparently only the first insert experienced a large
number of 'db file sequential read' waits, totalling about 28 seconds
based on the timing reported in SQL*Plus.

What if we flood the KEEP and DEFAULT buffer pools:
(Session 3 connected as SYS)
SET LINESIZE 150
SET PAGESIZE 10000
SPOOL C:\TABLES.SQL
SELECT
'SELECT * FROM '||OWNER||'.'||TABLE_NAME||' ORDER BY 1;' T
FROM
DBA_TABLES;

SPOOL OFF
SET AUTOTRACE TRACEONLY STATISTICS;
@C:\TABLES.SQL
SET AUTOTRACE OFF


(Session 1)
DROP TABLE TEST_ASSM PURGE;

create table test_assm
(
n1 number,
v1 varchar2(50),
v2 varchar2(50),
v3 varchar2(50),
v4 varchar2(50),
v5 varchar2(50),
v6 varchar2(50),
v7 varchar2(50),
v8 varchar2(50),
v9 varchar2(50),
v10 varchar2(50)
)

tablespace LOCAL_ASSM;

begin
for i in 1..1000000 loop
insert into test_assm values
(i,
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567');

end loop;
end;
/

COMMIT;

@c:\insertstatements.sql

DELETE FROM test_assm;


(Session 2)
SET TIMING ON
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'FIND_ME_TEST_LOCALAM3';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
@c:\insertstatements.sql
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';
EXIT

(Each insert statement after the first reported an elapsed time of
0.17 to 0.19 seconds.)

OK, that increased the time slightly, but not as much as seen yesterday.

Maybe it has to do with the process ID - luck of the draw regarding
which blocks session 2 attempts to insert into, due to the way ASSM
attempts to reduce block contention?  I repeated the test again using
the same old ASSM tablespace which I used yesterday - insert times for
the second session were roughly 0.15 seconds each after the first
insert completed.  Of course, I bounced the database since the test
run yesterday, so maybe that has an impact?

The first couple EXEC and STAT lines from the first trace today with
the 6GB KEEP pool in effect:
EXEC
#1:c=3541222,e=26125284,p=54865,cr=69793,cu=83979,mis=0,r=1,dep=0,og=1,plh=0,tim=314231018338
STAT #1 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL
(cr=69793 pr=54865 pw=0 time=0 us)'

EXEC
#2:c=171601,e=187295,p=0,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=314231205633
STAT #2 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL
(cr=69793 pr=0 pw=0 time=0 us)'

EXEC
#1:c=156001,e=155942,p=0,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=314231361575
STAT #1 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL
(cr=69793 pr=0 pw=0 time=0 us)'

EXEC
#2:c=171602,e=156113,p=0,cr=69793,cu=83959,mis=0,r=1,dep=0,og=1,plh=0,tim=314231517688
STAT #2 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL
(cr=69793 pr=0 pw=0 time=0 us)'
------------------

The first couple from yesterday's trace file also with the 6GB KEEP
pool in effect:
EXEC
#3:c=3759624,e=20904025,p=69802,cr=69793,cu=83979,mis=0,r=1,dep=0,og=1,plh=0,tim=220862828163
STAT #3 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL
(cr=69793 pr=69802 pw=0 time=0 us)'

EXEC
#2:c=3978025,e=15984033,p=69802,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=220878812196
STAT #2 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL
(cr=69793 pr=69802 pw=0 time=0 us)'

EXEC
#1:c=3666024,e=13540824,p=69802,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=220892353020
STAT #1 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL
(cr=69793 pr=69802 pw=0 time=0 us)'

EXEC
#3:c=3744024,e=13634412,p=69802,cr=69793,cu=83959,mis=0,r=1,dep=0,og=1,plh=0,tim=220905987432
STAT #3 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL
(cr=69793 pr=69802 pw=0 time=0 us)'

EXEC
#2:c=3650423,e=13447212,p=69803,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=220919434644
STAT #2 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL
(cr=69793 pr=69803 pw=0 time=0 us)'

Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.

John Hurley

Aug 8, 2009, 11:38:10 AM
On Aug 8, 11:04 am, Charles Hooper <hooperc2...@yahoo.com> wrote:

snip

> > Thanks for testing on 11.1.0.7. It is nice to know that they finally
> > fixed the bug.
>
> My test in your related thread indicates that the problem may still be
> present in 11.1.0.7:
> http://groups.google.com/group/comp.databases.oracle.server/browse_th...


>
> In my test, I saw results which were consistent with what you
> experienced, which resulted from multiple waits on 'db file sequential
> read' with sub 1ms access times for each insert in the second
> session.  However, due to the number of 'db file sequential read'
> waits, a fairly consistent 69,802 per row inserted, each insert
> required roughly 20 seconds.
>
> John's post made be a bit curious, so I started performing a couple
> tests.

Charles I got a little lost trying to follow your reply ... sorry.

Wonder if I either screwed something up ( I don't think so ... was
careful to check tablespace was ASSM and that inserts were in
different sessions ) or if there is something fixed in 11.1.0.7 in
linux 64 bit that still has problems on other platforms and/or if the
fact that my setup is using ASM is also throwing a wrinkle into
things?

Noons seemed to report that he did not see the same problem either but
I think that was before the complete test case was posted.

Jonathan Lewis

Aug 8, 2009, 12:26:46 PM

"John Hurley" <johnb...@sbcglobal.net> wrote in message
news:27de14dd-1d33-461a...@v2g2000vbb.googlegroups.com...


John,

Given Charles's comments about his timing on 11.1.0.7,
how did you do the 1,000 single row inserts ? Was it
1,000 separate statements, or a pl/sql loop with 1,000
inserts and commits inside the loop ?

If the latter (or some Toad equivalent of the latter) that would
explain why you don't see the problem.


--
Regards

Jonathan Lewis

John Hurley

Aug 8, 2009, 1:38:29 PM
On Aug 8, 12:26 pm, "Jonathan Lewis" <jonat...@jlcomp.demon.co.uk>
wrote:

snip

> Given Charles comments about his timing on 11.1.0.7,
> how did you do the 1,000 single row inserts ?  Was it
> 1,000 separate statements, or a pl/sql loop with 1,000
> inserts and commits inside the loop /
>
> If the latter (or some Toad equivalent of the latter) that would
> explain why you don't see the problem.

Ooops ... also known as John screwed up the test case.

John Hurley

Aug 8, 2009, 1:42:20 PM
On Aug 7, 6:35 am, ca111026 <ca111...@gmail.com> wrote:

snip

Never mind ... Charles was correct it can be reproduced on
11.1.0.7 ... sorry about that.

I wish I could say that beer was responsible ... but it was a long
week at work. Somehow I managed to run just the select statement to
create the insert statements and not the insert statements themselves.

Step 6 ran in about 5 minutes on my system versus 1 sec for step 4.

Charles Hooper

Aug 8, 2009, 1:55:24 PM
On Aug 8, 11:38 am, John Hurley <johnbhur...@sbcglobal.net> wrote:
> Charles I got a little lost trying to follow your reply ... sorry.
>
> Wonder if I either screwed something up ( I don't think so ... was
> careful to check tablespace was ASSM and that inserts were in
> different sessions ) or if there is something fixed in 11.1.0.7 in
> linux 64 bit that still has problems on other platforms and/or if the
> fact that my setup is using ASM is also throwing a wrinkle into
> things?
>
> Noons seemed to report that he did not see the same problem either but
> I think that was before the complete test case was posted.

I was not implying that you made a mistake in your test run, I just
saw a couple of potential problems with the original test case
description which might cause someone (including myself) to conduct
the test a little differently than the OP.  That might include:
* Batching the inserts in the second session, rather than using 1,000
individual SQL statements for the insert.
* Creating the tablespace with the wrong specifications (that happened
to me when I initially tried to set up the 1MB non-ASSM tablespace - I
actually created an ASSM tablespace with 1MB uniform extents, which
resulted in roughly the same slow timing as the ASSM autoallocate
tablespace).
* A 10046 trace at level 8 was not set up in the original test case,
which would make it hard to determine where the time was spent.

This is partial output from the 10046 trace files captured yesterday
and today, which targeted a pre-existing ASSM tablespace with roughly
the same size default buffer cache in effect (note that the database
was bounced between runs, and that may be a source of the time
difference).
Yesterday:
PARSING IN CURSOR #3 len=532 dep=0 uid=56 oct=2 lid=56
tim=220841924138 hv=471712922 ad='2778b31b8' sqlid='dyqznk8f1vj4u'


insert into test_assm values
(15,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')

END OF STMT
PARSE
#3:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220841924138
WAIT #3: nam='db file sequential read' ela= 17613 file#=7
block#=1900672 blocks=1 obj#=67153 tim=220841943750
WAIT #3: nam='db file sequential read' ela= 458 file#=7 block#=1900680
blocks=1 obj#=67153 tim=220841944275
WAIT #3: nam='db file sequential read' ela= 617 file#=7 block#=1900681
blocks=1 obj#=67153 tim=220841944980
WAIT #3: nam='db file sequential read' ela= 73 file#=7 block#=1900682
blocks=1 obj#=67153 tim=220841945113
...
PARSING IN CURSOR #2 len=532 dep=0 uid=56 oct=2 lid=56
tim=220862828163 hv=1479200138 ad='2778aa958' sqlid='6su1s3tc2pmca'


insert into test_assm values
(16,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')

END OF STMT
PARSE
#2:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220862828163
WAIT #2: nam='db file sequential read' ela= 4517 file#=7
block#=1900672 blocks=1 obj#=67153 tim=220862850571
WAIT #2: nam='db file sequential read' ela= 484 file#=7 block#=1900680
blocks=1 obj#=67153 tim=220862851087
WAIT #2: nam='db file sequential read' ela= 548 file#=7 block#=1900681
blocks=1 obj#=67153 tim=220862851684
WAIT #2: nam='db file sequential read' ela= 33 file#=7 block#=1900682
blocks=1 obj#=67153 tim=220862851760
...
PARSING IN CURSOR #1 len=532 dep=0 uid=56 oct=2 lid=56
tim=220878812196 hv=3933466223 ad='2737d43f8' sqlid='g6dd19gp77vmg'


insert into test_assm values
(17,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')

END OF STMT
PARSE
#1:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220878812196
WAIT #1: nam='db file sequential read' ela= 8169 file#=7
block#=1900672 blocks=1 obj#=67153 tim=220878847836
WAIT #1: nam='db file sequential read' ela= 470 file#=7 block#=1900680
blocks=1 obj#=67153 tim=220878848364
WAIT #1: nam='db file sequential read' ela= 510 file#=7 block#=1900681
blocks=1 obj#=67153 tim=220878848923
WAIT #1: nam='db file sequential read' ela= 37 file#=7 block#=1900682
blocks=1 obj#=67153 tim=220878849003
...
PARSING IN CURSOR #3 len=532 dep=0 uid=56 oct=2 lid=56
tim=220892353020 hv=1578030285 ad='273749f28' sqlid='a7tcr1tg0xp6d'


insert into test_assm values
(18,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')

END OF STMT
PARSE
#3:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220892353020
WAIT #3: nam='db file sequential read' ela= 6309 file#=7
block#=1900672 blocks=1 obj#=67153 tim=220892365476
WAIT #3: nam='db file sequential read' ela= 507 file#=7 block#=1900680
blocks=1 obj#=67153 tim=220892366027
WAIT #3: nam='db file sequential read' ela= 476 file#=7 block#=1900681
blocks=1 obj#=67153 tim=220892366551
WAIT #3: nam='db file sequential read' ela= 37 file#=7 block#=1900682
blocks=1 obj#=67153 tim=220892366630
...
PARSING IN CURSOR #2 len=532 dep=0 uid=56 oct=2 lid=56
tim=220905987432 hv=2708362693 ad='2737a1a18' sqlid='7mbckzyhqwpf5'


insert into test_assm values
(19,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')

END OF STMT
PARSE
#2:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220905987432
WAIT #2: nam='db file sequential read' ela= 7892 file#=7
block#=1900672 blocks=1 obj#=67153 tim=220905999307
WAIT #2: nam='db file sequential read' ela= 513 file#=7 block#=1900680
blocks=1 obj#=67153 tim=220905999847
WAIT #2: nam='db file sequential read' ela= 518 file#=7 block#=1900681
blocks=1 obj#=67153 tim=220906000413
WAIT #2: nam='db file sequential read' ela= 37 file#=7 block#=1900682
blocks=1 obj#=67153 tim=220906000493
...
PARSING IN CURSOR #1 len=532 dep=0 uid=56 oct=2 lid=56
tim=220919434644 hv=3773067906 ad='2778aa2c8' sqlid='4gsb2p3hf8wn2'
insert into test_assm values
(20,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE
#1:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220919434644
WAIT #1: nam='db file sequential read' ela= 4513 file#=7
block#=1900672 blocks=1 obj#=67153 tim=220919467246
WAIT #1: nam='db file sequential read' ela= 483 file#=7 block#=1900680
blocks=1 obj#=67153 tim=220919467788
WAIT #1: nam='db file sequential read' ela= 474 file#=7 block#=1900681
blocks=1 obj#=67153 tim=220919468320
WAIT #1: nam='db file sequential read' ela= 45 file#=7 block#=1900682
blocks=1 obj#=67153 tim=220919468416

----------------
----------------

Today, same tablespace, roughly the same default buffer size:
PARSING IN CURSOR #2 len=532 dep=0 uid=56 oct=2 lid=56
tim=314564780904 hv=471712922 ad='2772d7f28' sqlid='dyqznk8f1vj4u'


insert into test_assm values
(15,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')

END OF STMT
PARSE
#2:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=314564780904
WAIT #2: nam='db file sequential read' ela= 12462 file#=7
block#=1901256 blocks=1 obj#=67277 tim=314564795862
WAIT #2: nam='db file sequential read' ela= 531 file#=7 block#=1901257
blocks=1 obj#=67277 tim=314564796472
WAIT #2: nam='db file sequential read' ela= 72 file#=7 block#=1901258
blocks=1 obj#=67277 tim=314564796577
WAIT #2: nam='db file sequential read' ela= 331 file#=7 block#=1901259
blocks=1 obj#=67277 tim=314564796941
...
PARSING IN CURSOR #1 len=532 dep=0 uid=56 oct=2 lid=56
tim=314585990954 hv=1479200138 ad='2772d7a28' sqlid='6su1s3tc2pmca'


insert into test_assm values
(16,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')

END OF STMT
PARSE
#1:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=314585990954
EXEC
#1:c=156001,e=156030,p=0,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=314586146984


STAT #1 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL
(cr=69793 pr=0 pw=0 time=0 us)'

WAIT #1: nam='SQL*Net message to client' ela= 5 driver id=1413697536
#bytes=1 p3=0 obj#=67277 tim=314586149201
WAIT #1: nam='SQL*Net message from client' ela= 1674 driver
id=1413697536 #bytes=1 p3=0 obj#=67277 tim=314586150901
CLOSE #1:c=0,e=0,dep=0,type=0,tim=314586146984

PARSING IN CURSOR #2 len=532 dep=0 uid=56 oct=2 lid=56
tim=314586146984 hv=3933466223 ad='2734efe38' sqlid='g6dd19gp77vmg'


insert into test_assm values
(17,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')

END OF STMT
PARSE
#2:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=314586146984
EXEC
#2:c=140401,e=124827,p=0,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=314586271811


STAT #2 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL
(cr=69793 pr=0 pw=0 time=0 us)'

WAIT #2: nam='SQL*Net message to client' ela= 5 driver id=1413697536
#bytes=1 p3=0 obj#=67277 tim=314586294069
WAIT #2: nam='SQL*Net message from client' ela= 1347 driver
id=1413697536 #bytes=1 p3=0 obj#=67277 tim=314586295442
CLOSE #2:c=0,e=0,dep=0,type=0,tim=314586271811
...
(A bit later in the trace)
PARSING IN CURSOR #2 len=533 dep=0 uid=56 oct=2 lid=56
tim=314624211019 hv=1207345385 ad='2734b6ff8' sqlid='15mbakd3zd879'
insert into test_assm values
(239,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE
#2:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=314624211019
WAIT #2: nam='db file sequential read' ela= 519 file#=7 block#=1972923
blocks=1 obj#=67277 tim=314624366489
EXEC
#2:c=140401,e=124781,p=1,cr=69794,cu=83960,mis=0,r=1,dep=0,og=1,plh=0,tim=314624335800


STAT #2 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL

(cr=69794 pr=1 pw=0 time=0 us)'
WAIT #2: nam='SQL*Net message to client' ela= 2 driver id=1413697536
#bytes=1 p3=0 obj#=67277 tim=314624366655
WAIT #2: nam='SQL*Net message from client' ela= 814 driver
id=1413697536 #bytes=1 p3=0 obj#=67277 tim=314624367493
CLOSE #2:c=0,e=0,dep=0,type=0,tim=314624367010

In yesterday's run, Oracle kept performing single block reads on
exactly the same blocks for each insert statement (additional blocks
were added one at a time on later inserts). Today this only happened
for the first insert statement, with occasional single block reads
after that point.

Jonathan had commented somewhere that ASSM is like freelists(16) (or
maybe it was freelist groups (16)). The blocks selected for insert
are dependent on the v$process.pid for the session (I have seen a
couple good descriptions of how this works, but cannot locate those
descriptions right now). See the follow up to comment 10 here:
http://jonathanlewis.wordpress.com/2009/01/14/books/

I suspect that this might have something to do with the problem I
experienced yesterday, but not today (which free blocks are available
to the session).
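
For what it's worth, you can check the pid for your own session with
something like this (one common idiom):

select p.pid
from v$process p, v$session s
where p.addr = s.paddr
and s.sid = (select sid from v$mystat where rownum = 1);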

In this link, there are a couple posts which describe this, or a
similar problem:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2929412562998
(April 24, 2003 by Jan van Mourik)
(December 21, 2004 by Steve)

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6020271977738
(December 4, 2005 by Jonathan Lewis)

On December 24, 2008 a thread was started on Oracle's OTN forums
titled "Performance degradation of repeated delete/inserts" which
seems to describe a similar problem. It is too bad the thread
disappeared from Oracle's OTN forums. In that thread Mr. Burleson
told me "YOU CANNOT PROVE ANYTHING ABOUT ORACLE PERFORMANCE. EVER. NO
EQUATIONS, NO PROOFS, NO WAY, NO HOW...". I think that it was
comments like that which caused the thread to be pulled.

John Hurley

Aug 9, 2009, 2:09:59 PM
On Aug 8, 1:55 pm, Charles Hooper <hooperc2...@yahoo.com> wrote:

snip

> > Charles I got a little lost trying to follow your reply ... sorry.


>
> > Wonder if I either screwed something up ( I don't think so ... was
> > careful to check tablespace was ASSM and that inserts were in
> > different sessions ) or if there is something fixed in 11.1.0.7 in
> > linux 64 bit that still has problems on other platforms and/or if the
> > fact that my setup is using ASM is also throwing a wrinkle into
> > things?
>
> > Noons seemed to report that he did not see the same problem either but
> > I think that was before the complete test case was posted.
>
> I was not implying that you made a mistake in your test run, I just
> saw a couple potential problems with the original test case
> description which might cause someone (including myself) to conduct
> the test a little differently than the OP.  That might include:
> * Batching the inserts in the second session, rather than using 1,000
> individual SQL statements for the insert.
> * Creating the tablespace with the wrong specifications (that happened
> to me when I initially tried to set up the tablespace with the 1MB non-
> ASSM tablespace - I actually created an ASSM tablespace with 1MB
> uniform extents which resulted in roughly the same slow timing as the
> ASSM autoallocate tablespace).
> * A 10046 trace at level 8 was not set up in the original test case,
> which would make it hard to determine where the time was spent.

Well, as it turns out I had messed up the testing, and after doing it a
little/lot more carefully my run showed the same problem as noted by
the OP.

It was a pretty well written test case; I just missed the part in step
4 where you had to run the select first to generate the insert
statements.

ca111026

Aug 9, 2009, 8:11:42 PM
I tested using Oracle 11.1.0.7.0 (64-bit) on AIX 5.3 - the problem is
still there, it has not been fixed.

Now, these bitmaps, shouldn't they be updated in a transactionally
consistent way like everything else?
Let's say all rows have been deleted from the table, but the DELETE
hasn't been committed. Another session comes along and attempts to run
an insert. It checks the bitmap for free space in the blocks that have
already been allocated to the table.
If the bitmap is transactionally consistent then it will show that all
blocks are full, so instead of trying to insert into existing blocks
Oracle will allocate another extent. Am I missing something here?
Is there a reason why the bitmaps cannot be part of the transaction?

Is it a bug or a "feature"?


Jonathan Lewis

Aug 10, 2009, 2:07:36 AM

|"ca111026" <ca11...@gmail.com> wrote in message
news:23f576d0-f399-4cbb...@l35g2000pra.googlegroups.com...

In my demonstration case (with 100,000 rows of about the length
you specified in your test case) I had about 440 bitmap blocks
pointing to 7,000 blocks that were full of deleted but not committed data.

According to your suggestion, what do you want Oracle to do on
the commit ? At present it marks the transaction table slot and
calls the log writer to write - optionally it does a "commit cleanout"
on some of the blocks it has changed.

Do you want Oracle to visit all the bitmap blocks on commit and
update them to show that they are now empty ? Even if some,
or all, of them have been written to disc already and need to be
re-read ? Think about the impact that has on the way the commit
time for a transaction could vary.

Consider the opposite situation - you insert 1,000,000 rows that
fill thousands of blocks referenced by hundreds of bitmap blocks,
but don't commit... Should the bitmap blocks say the table blocks
are still empty - which means a single row insert would have to do
exactly the same trip through all the bitmap blocks you're now
seeing and doing the same access to the undo segment headers
to check that the insert is not yet committed.

This is the problem with bitmap blocks - when do you change
their state?

That's why they're not a good idea for DSS and DW activity,
and why you should only use them with real OLTP activity -
they work best for commits after small changes of data.

ca111026

Aug 10, 2009, 3:02:33 AM
Thanks for the explanation.

So how do freelists work? Why don't we have this issue with non-ASSM
tablespaces?

Also, what are the advantages of ASSM? I can see the advantages of
Locally Managed Tablespaces:
- Less load on SYSTEM
- DROP or TRUNCATE of tables/indexes with thousands of extents is much
faster
- If a locally managed tablespace is created with a uniform extent size
then there is no fragmentation - all free space is in chunks of the
"right" size.

However, what are the advantages of ASSM?

Helma

Aug 10, 2009, 11:19:29 AM

You may have a look at the book by Tom Kyte: "Effective Oracle by
Design". He explains with Statspack reports the impact of several
Oracle features. Pages 232-234 are called "The Case for ASSM".

joel garry

Aug 10, 2009, 1:21:39 PM
On Aug 8, 10:55 am, Charles Hooper <hooperc2...@yahoo.com> wrote:

> On December 24, 2008 a thread was started on Oracle's OTN forums
> titled "Performance degradation of repeated delete/inserts" which

If you google that with the quotes, it seems it is still there;
apparently you need moderator permissions to actually view it.

Just a reminder in case anyone doesn't know: if you see any
informative forums.oracle.com posts or Metalink notes, keep them
locally - they can disappear at any time. This goes double for your
SRs.

jg
--
@home.com is bogus.
Redundant data in 911 system. Lots of redundant data.
http://www3.signonsandiego.com/stories/2009/aug/08/1m8fixit02046-wrecked-car-airlifted-torrey-pines-c/?uniontrib

Noons

Aug 10, 2009, 11:37:35 PM
On Aug 8, 7:47 am, ca111026 <ca111...@gmail.com> wrote:
> The example given above is not artificial. I work in an environment
> where we have more than a hundred production databases, and Apps
> Support is mostly outsourced/offshored. As databases grow the data
> needs to be purged; some databases have partitioned tables but many
> don't. So a DELETE needs to be run, often using a condition on a
> column without an index, like LAST_UPDATED_DATE. So Apps Support runs
> this delete, they make a mistake and delete too much, inserts are
> affected, CPU usage/disk I/O goes through the roof, people start
> blaming the SAN, SRDF gets switched to adaptive mode, etc. It is not
> an easy problem to diagnose; normally when going through Statspack
> reports people just skip past single-row insert statements as they
> rarely experience problems.

Yes, but realistically speaking, wouldn't it be easier to just commit
at the end of a big batch of deletes? We know that fixes the problem
with the bitmap slowness as described by Jonathan. In fact, if you
commit the first session while the second session is doing the second
batch of inserts, the second session speeds up to normal times.

Perhaps even more significantly, if you delete only a portion of the
table instead of the lot, then the second session's insert is not
affected. Normally, I would expect anyone wanting to clear a table to
truncate it, as opposed to deleting all rows: truncate is far more
efficient.

We do a lot of delete-insert in the ETL tables of our DW and don't
have this problem at all.
Mostly that's because a big delete is always followed by a commit, rather
than letting other concurrent processing go through in that table. Or
we do a truncate instead of a total table delete.
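
For the "purge by date" case mentioned earlier in the thread, a chunked
delete keeps each transaction small - a sketch (some_table is a
placeholder, the column name comes from the earlier post, and the batch
size is arbitrary):

declare
  n pls_integer;
begin
  loop
    delete from some_table
    where last_updated_date < sysdate - 60
    and rownum <= 10000;
    n := sql%rowcount;
    commit;
    exit when n = 0;
  end loop;
end;
/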

In general, I wouldn't consider designing in the requirement for big
deletes in long transactions, when a table is also under multi-user
access: that will sooner or later cause a performance problem, assm or
no assm. If one tries a similar concurrent exercise with heavily
indexed tables, things will go haywire anyway whatever storage model
we use.

>
> Which brings me back to the original question - Why use ASSM in the
> first place? It does not look like it offers
> any benefits (may be ALTER TABLE SHRINK but how often you use it?).
> Could someone point to a simple test
> (however artificial) that shows advantage of ASSM?


To me the biggest advantage in using assm is not having to constantly
worry about buffer busy waits: they virtually disappear with this type
of storage.
Provided it doesn't affect other normal access, I'm happy with the
trade-offs.
In other words: I'd rather ask developers not to do big uncommitted
deletes - a rather dubious practice to start with - than have to
figure out at what time and where I should be changing per-table
allocation parameters to resolve busy waits and other similar
contention.

Noons

Aug 10, 2009, 11:43:43 PM
On Aug 10, 4:07 pm, "Jonathan Lewis" <jonat...@jlcomp.demon.co.uk>
wrote:

>


> This is the problem with bitmap blocks - when do you change
> their state.
>
> That's why they're not a good idea for DSS and DW activity,
> and why you should  only use them with real OLTP activity -
> they work best for commits after small changes of data.
>

Actually, I think the problem is more related to the poor practice of
using delete to clear an entire table. A truncate should be used,
rather than a delete. And definitely a commit should be in there
somewhere, rather than leaving entire-table deletes uncommitted.
We have all been there and seen what large uncommitted deletes do,
particularly if the table is indexed. assm or no assm.
As well, if you delete only a portion of the table rather than the
lot, the problem doesn't show up as easily: the second session's
insert proceeds at normal speed.

As such, isn't it a bit premature to blanket-classify assm as not
suitable for DW/DSS because of a special case, when it is perfectly
suitable if normally accepted coding practices are followed?

ca111026

Aug 11, 2009, 3:10:26 AM
Sorry, how are buffer busy waits related to ASSM? I thought buffer
busy waits ('read by other session' in later versions) are an event that
happens when multiple sessions are reading the same block from memory.
This is why it is called 'buffer' (from 'buffer cache'). If a table is
"narrow", i.e. rows are short, then there will be many rows per block,
ASSM or no ASSM.

Helma

Aug 11, 2009, 5:06:05 AM
On Aug 11, 9:10 am, ca111026 <ca111...@gmail.com> wrote:
> Sorry, how buffer busy waits are related to ASSM?

Again:

You may have a look at the book by Tom Kyte: "Effective Oracle by
Design". He explains with Statspack reports the impact of several
Oracle features. Pages 232-234 are called "The Case for ASSM".

And yes, his example is specifically about buffer busy waits.

Randolf Geist

Aug 11, 2009, 5:42:00 AM
On Aug 11, 5:43 am, Noons <wizofo...@gmail.com> wrote:
> Actually, I think the problem is more related to the poor practice of
> using delete to clear an entire table.  A truncate should be used,
> rather than a delete.  And definitely a commit should be in there
> somewhere, rather than leaving entire-table deletes uncommitted.

I wouldn't agree with that entirely. In general a commit should follow
the logical transaction, and there are certainly cases where you
want to "reload" a table as part of a larger transaction, and only if
everything else succeeds perform a final commit. You don't want other
sessions to see an intermediate state of the on-going transaction; that
might lead to other undesirable side-effects.

Also think of materialized view refresh groups that by default use the
atomic refresh option. In case of a complete refresh these will also
delete the whole table and re-insert the data in a single transaction
without any intermediate commits.

However, I agree that doing so (delete all rows + re-insert within a
single transaction) with very large tables and/or a large number of
indexes will potentially pose problems, regardless of ASSM being used
or not.

One approach that allows you to efficiently "replace" table contents
in an almost "atomic" way is to use partitioning with the exchange
table feature. There you can re-load the "new" version of the table
into an "exchange" table, making use of all the work-reducing features
available (unusable indexes, truncate, optionally nologging, direct-
path inserts, even parallel DML / parallel index rebuild for huge data
volumes), and only at the end of the operation "swap" the new table
contents into the partition, making them available almost instantly to
the remaining processing. This could also be used for multiple tables,
although it's not going to be that "atomic" anymore then. See the
sketch below.

Of course this approach requires an Enterprise Edition license plus the
partitioning option, I think.
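
A minimal sketch of that final swap (all names here are hypothetical):

-- stage the new contents, using whatever work-reducing options apply
create table sales_stage as
select * from sales_new_version;

-- swap the staged contents into the target partition
alter table sales_part
exchange partition p_current
with table sales_stage
including indexes
without validation;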

Regards,
Randolf

Oracle related stuff blog:
http://oracle-randolf.blogspot.com/

SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/

Mladen Gogala

Aug 12, 2009, 11:23:53 AM
On Mon, 10 Aug 2009 08:19:29 -0700, Helma wrote:

> You may have a look at the book by Tom Kyte : " Effective oracle By
> Design". He explains with statspack reports the impact of several oracle
> features. Page 232 - 234 are called : The Case for ASSM.

The case for ASSM is based on discussing the freelist manipulation.

--
http://mgogala.freehostia.com

Mladen Gogala

Aug 12, 2009, 11:31:46 AM
On Fri, 07 Aug 2009 13:50:35 +0100, Jonathan Lewis wrote:

> "Cristian Cudizio" <cristian...@yahoo.it> wrote in message

> news:c8ba2866-cda9-45c8-bca8-9b6...@q14g2000vbi.googlegroups.com...

Jonathan, the situation that you are describing here reads almost exactly
like something called a "dirty read" by the MS SQL Server folks. In other
words, you are saying that the ASSM bitmaps are not protected by the
transaction mechanism, unlike the free lists in the "normal" tablespaces,
and we have "phantom reads": blocks are declared free before they actually
are free. I think (I am trying to develop some tests to test that
hypothesis) that "normal" free list blocks are protected by the transaction
mechanism. If that is the case, it would be a huge vote against ASSM.


--
http://mgogala.freehostia.com

joel garry

Aug 12, 2009, 12:02:49 PM
On Aug 12, 8:31 am, Mladen Gogala <mla...@bogus.email.invalid> wrote:
> On Fri, 07 Aug 2009 13:50:35 +0100, Jonathan Lewis wrote:
> > "Cristian Cudizio" <cristian.cudi...@yahoo.it> wrote in message
> > news:c8ba2866-cda9-45c8-bca8-9b646f450...@q14g2000vbi.googlegroups.com...
> --
> http://mgogala.freehostia.com

I don't think so. If it were like a dirty read, the performance problem
wouldn't show up and the bitmaps would get corrupted (if I'm understanding
correctly what you mean, which I'm probably not). The tests will be
interesting. Maybe a better way to put it is that the type of transaction
mechanism used favors small transactions, and the previous bug handled
large transactions even more ham-handedly than now. Perhaps it was
just one of those bugs where the scope of the feature wasn't really
thought through, as Oracle waited to see how people would use it.

jg
--
@home.com is bogus.

"Beverly Hills, that's where I want to be." - Weezer
http://www3.signonsandiego.com/stories/2009/aug/11/jury-acquits-beverly-hills-real-estate-agent-stars/?uniontrib

Mladen Gogala

Aug 12, 2009, 6:16:11 PM
On Wed, 12 Aug 2009 09:02:49 -0700, joel garry wrote:

> I don't think so. If it were like dirty read, the performance problem
> wouldn't show up and bitmaps would get corrupted (if I'm understanding
> correctly what you mean, which I'm probably not). Tests will be
> interesting. Maybe a better way to put it is the type of transaction
> mechanism used favors small transactions, and the previous bug handled
> large transactions even more ham-handedly than now. Perhaps it was just
> one of those bugs where the scope of the feature wasn't really thought
> through as Oracle waited to see how people would use it.

The phrase "dirty read" means that the effects of uncommitted
transaction, a delete operation in this case, are visible to other
transactions. Basically, "delete" puts the block back on the "free"
list, before the transaction has committed. An insert transaction tries
to find free blocks to insert rows, reads the bitmap blocks and finds out
that bitmap blocks are lying and ends up scanning the entire table.
Effects of the "delete" are visible before the transaction has committed.
What happens if the delete transaction is rolled back? Will the blocks be
re-declared as full? Now that is an extremely interesting scenario that
I intend to test in an ASS managed tablespace and an old school
tablespace.
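
A sketch of that test, reusing the test case from this thread:

-- session 1
delete from test_assm;
rollback;

-- session 2, after the rollback in session 1
set timing on
@c:\insertstatements.sql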

--
http://mgogala.freehostia.com

Mladen Gogala

Aug 12, 2009, 6:17:31 PM
On Mon, 10 Aug 2009 20:37:35 -0700, Noons wrote:

> Perhaps even more significantly, if you delete only a portion of the
> table instead of the lot, then the second session's insert is not
> affected. Normally, I would expect anyone wanting to clear a table to
> truncate it, as opposed to deleting all rows: truncate is far more
> efficient.

Unfortunately, in the situations where you want to delete everything
older than 60 days, truncate is not an option.

--
http://mgogala.freehostia.com

Jonathan Lewis

Aug 13, 2009, 5:31:04 AM

"Mladen Gogala" <mla...@bogus.email.invalid> wrote in message
news:h5vevb$nnq$1...@solani.org...


Mladen,

There are problems with "metadata" of this type whether you
are using ASSM or freelist management.

Here's an equivalent freelists "anomaly":

User A inserts 1,000,000 rows into the table - Oracle adds
a load of extents, moves the highwater mark upwards, and
takes loads of blocks OFF the freelist before the commit.

If this weren't the case then everyone would have to scan a
huge freelist to discover that read consistency said the blocks
were empty when in fact they were full. The fact that they don't
do this is something you are labelling a dirty read.

It doesn't matter what approach you take - at some point an insert,
delete, commit, or rollback, will leave you in a position where the
behaviour of metadata looks like a dirty read of data.

Jonathan Lewis

Aug 13, 2009, 5:33:49 AM


"Mladen Gogala" <mla...@bogus.email.invalid> wrote in message

news:h5vf1r$nnq$2...@solani.org...


It is if you partition by day and use only local indexes ;)

Really, this discussion is only picking up the point that
we need to know that each feature has some side effects
that may need special consideration.
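
A sketch of that approach (hypothetical names; each day gets its own
range partition, and the purge becomes a dictionary operation instead
of a big delete):

create table audit_trail
(
created_on date,
payload varchar2(50)
)
partition by range (created_on)
(
partition p20090812 values less than (to_date('2009-08-13','YYYY-MM-DD')),
partition p20090813 values less than (to_date('2009-08-14','YYYY-MM-DD'))
);

-- purging a day is then just:
alter table audit_trail drop partition p20090812;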

Jonathan Lewis

Aug 13, 2009, 5:37:31 AM


"ca111026" <ca11...@gmail.com> wrote in message

news:d4a0b7a5-aa4e-41bc...@g1g2000pra.googlegroups.com...

There are several causes of "buffer busy waits". They can also appear
when two people want to modify the same block at the same time - and
if you have a table with freelists = 1, then there is only one insertion
point available at a time, so concurrent inserts can see buffer busy
waits.

There's a lot that could be said about multiple freelists, multiple
freelist groups, ASSM, and RAC at this point and the various
pros and cons of different technologies - but essentially ASSM
was introduced to hide a common problem instead of instructing
people about an existing solution.
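
For reference, that existing solution is the freelists storage clause on
a table in a freelist-managed (non-ASSM) tablespace - a sketch, with
illustrative values and a hypothetical tablespace name:

create table t_mssm
(
n1 number,
v1 varchar2(50)
)
storage (freelists 4 freelist groups 2)
tablespace non_assm;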

joel garry

Aug 13, 2009, 12:22:43 PM
> --
> http://mgogala.freehostia.com

Thanks to you and Jonathan for clarifying this. I'm thinking the key
to unraveling this semantic knot is that the "metadata" is block
description data, not user data, and needs its own transactional
analysis outside of the user data transactions. I think it would be a
mistake to judge it as a point for other RDBMSs that encourage dirty
reading by app programmers, or against Oracle for using dirty reading
for housekeeping - as always, if you tour the sausage factory, prepare
for some eye-opening experiences. The important points are that we
understand what is happening when a trade-off is involved, and that it
works bug-free - the latter must be judged at a very high standard,
much higher than app code.

jg
--
@home.com is bogus.

http://www3.signonsandiego.com/stories/2009/aug/13/standards-sought-electronic-books/?uniontrib

Noons

Aug 16, 2009, 7:00:43 AM
Randolf Geist wrote, on my timestamp of 11/08/2009 7:42 PM:

>
> I wouldn't agree to that entirely. In general a commit should be used
> along the logical transaction, and there are certainly cases where you
> want to "reload" a table as part of a larger transaction, and only if
> everything else succeeds perform a final commit. You don't want to see
> other sessions an intermediate state of the on-going transaction, that
> might lead to other undesirable side-effects.


Yes, of course. But if you abstract the special cases you mentioned, you will
notice that the vast majority of design advice is against such practices. Of
course there are *always* exceptions. Those however do not make up a general
case.

To use a general rule of "ASSM is not for DW" is IMHO a bit too general,
particularly when the cases supporting it are very specific.


> Also think of materialized view refresh groups that by default use the
> atomic refresh option. In case of a complete refresh these also will
> delete the whole table and re-insert the data in a single transaction
> without any intermediate commits.

Again, a special case.


>
> However I agree that doing so (delete all rows + re-insert within a
> single transaction) with very large tables and/or a large number of
> indexes will potentially pose problems, regardless of ASSM being used
> or not.
>

Exactly.


> One approach that allows to efficiently "replace" table contents
> almost in an "atomic" way is to use partitioning with the exchange
> table feature.

Yes, but as you noted, partitioning is not only a special case, but also a
paid-for option: it doesn't come with Standard Edition, nor by default with
Enterprise Edition. Not everyone can use it. And partitioning is not always
practical to use.

Noons

Aug 16, 2009, 7:03:18 AM
Jonathan Lewis wrote, on my timestamp of 13/08/2009 7:33 PM:

>> Unfortunately, in the situations where you want to delete everything
>> older than 60 days, truncate is not an option.
>>
>>
>>
>> --
>> http://mgogala.freehostia.com
>
>
> It is if you partition by day and use only local indexes ;)
>
> Really, this discussion is only picking up the point that
> we need to know that each feature has some side effects
> that may need special consideration.
>

Exactly!
