For example:
Bug 7575925 – ASSM blocks may be marked with wrong “freeness” (private
bug)
Affected version: 10.2.0.4
Fixed in: 10.2.0.5, 11.2 (Future Release)
Bug 5728380 – DML may spin searching for space in ASSM (private bug)
Affected versions: 9.2.0.8, 10.2.0.3
We had several instances (in different apps) where Apps Support would
run a large uncommitted DELETE, and the performance of single-row
INSERTs would deteriorate dramatically: from 10 buffer gets per INSERT
to 100,000.
We use ASSM extensively.
Most of the bugs in 10.2.0.3 have patches and fixes, which we have
installed: I reckon that's a better use of our time than the silly CPUs.
In my tests the second insert takes much, much longer: an hour instead
of several seconds.
I'll test that for you tomorrow our time.
Do you have an index on the table as well?
How long does that same test take without ASSM?
snip
Write this up as a test case, supply the SQL, and I will run it ...
Our company's DBs all use ASSM. It's fine! We don't need to rebuild
indexes again.
Something is definitely different in your system.
Here are my results:
SQL> select count(1) from zot;
COUNT(1)
--------------
1339456
Elapsed: 00:00:00.24
SQL> desc zot;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OWNER                                     NOT NULL VARCHAR2(30)
 TABLE_NAME                                NOT NULL VARCHAR2(30)
 COLUMN_NAME                               NOT NULL VARCHAR2(30)
 DATA_TYPE                                          VARCHAR2(106)
 DATA_TYPE_MOD                                      VARCHAR2(3)
 DATA_TYPE_OWNER                                    VARCHAR2(30)
 DATA_LENGTH                               NOT NULL NUMBER
 DATA_PRECISION                                     NUMBER
 DATA_SCALE                                         NUMBER
 NULLABLE                                           VARCHAR2(1)
 COLUMN_ID                                          NUMBER
 DEFAULT_LENGTH                                     NUMBER
 NUM_DISTINCT                                       NUMBER
 LOW_VALUE                                          RAW(32)
 HIGH_VALUE                                         RAW(32)
 DENSITY                                            NUMBER
 NUM_NULLS                                          NUMBER
 NUM_BUCKETS                                        NUMBER
 LAST_ANALYZED                                      DATE
 SAMPLE_SIZE                                        NUMBER
 CHARACTER_SET_NAME                                 VARCHAR2(44)
 CHAR_COL_DECL_LENGTH                               NUMBER
 GLOBAL_STATS                                       VARCHAR2(3)
 USER_STATS                                         VARCHAR2(3)
 AVG_COL_LEN                                        NUMBER
 CHAR_LENGTH                                        NUMBER
 CHAR_USED                                          VARCHAR2(1)
 V80_FMT_IMAGE                                      VARCHAR2(3)
 DATA_UPGRADED                                      VARCHAR2(3)
 HISTOGRAM                                          VARCHAR2(15)
(came from dba_tab_columns. zot1 is a subset)
SQL> insert into zot select * from zot1;
13214 rows created.
Elapsed: 00:00:00.21
SQL> delete from zot;
1352670 rows deleted.
Elapsed: 00:01:45.35
SQL> insert into zot select * from zot1;
13214 rows created.
Elapsed: 00:00:00.08
SQL> commit;
Commit complete.
Elapsed: 00:00:00.06
SQL> select count(1) from zot;
COUNT(1)
--------------
13214
Elapsed: 00:00:00.14
In simple terms: bugger all difference between first insert and second
insert.
If anything, second was faster.
It's not clear from your output if you've followed the OP's
description to perform the final insert from a different session,
after the deletion of all rows from the table in the first session.
This might make a significant difference.
However I agree with previous contributors that providing a
reproducible test case in the form of a script, including the setup/DDL
to create/initialize the objects, will be the most efficient approach,
along with his results and the exact version/platform this was tested
on.
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/
Let me show you something:
-------------------------------------------------------------------------------------------------
9iSQL> create table t as select * from fb6 where rownum < 100000;
Table created.
Elapsed: 00:00:00.03
9iSQL> insert into t select * from t;
99999 rows created.
Elapsed: 00:00:01.00
Execution Plan
----------------------------------------------------------
0 INSERT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'T'
Statistics
----------------------------------------------------------
182 recursive calls
11122 db block gets
3136 consistent gets
1104 physical reads
9007392 redo size
793 bytes sent via SQL*Net to client
802 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
99999 rows processed
9iSQL> delete t;
199998 rows deleted.
Elapsed: 00:00:05.03
Execution Plan
----------------------------------------------------------
0 DELETE STATEMENT Optimizer=CHOOSE
1 0 DELETE OF 'T'
2 1 TABLE ACCESS (FULL) OF 'T'
Statistics
----------------------------------------------------------
305 recursive calls
219083 db block gets
2422 consistent gets
1104 physical reads
63639616 redo size
793 bytes sent via SQL*Net to client
781 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
199998 rows processed
------------------------------------------------------------------------------------------
When you delete rows before committing, the redo size is very large. So
the problem isn't ASSM; it's redo size.
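For what it's worth, the redo figures in the autotrace statistics above work out to roughly the following per-row costs (a quick back-of-the-envelope calculation; the byte counts are taken straight from the output quoted above):

```python
# Back-of-the-envelope check of the redo figures quoted above:
# how much redo does each inserted/deleted row generate?
insert_redo_bytes = 9_007_392    # "redo size" from the INSERT statistics
insert_rows = 99_999
delete_redo_bytes = 63_639_616   # "redo size" from the DELETE statistics
delete_rows = 199_998

redo_per_insert = insert_redo_bytes / insert_rows   # ~90 bytes per inserted row
redo_per_delete = delete_redo_bytes / delete_rows   # ~318 bytes per deleted row

print(round(redo_per_insert), round(redo_per_delete))  # → 90 318
```

So a delete does write roughly three times as much redo per row as the insert did, which is consistent with the elapsed times shown, but it doesn't by itself explain a second insert slowing down by orders of magnitude.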
> It's not clear from your output if you've followed the OP's
> description to perform the final insert from a different session,
> after the deletion of all rows from the table in the first session.
>
> This might make a significant difference.
Second insert was from a different session and it makes no difference
whatsoever, at least in my system.
Running 10.2.0.3 on AIX 5.3, with the following patches applied:
5386204, 6455161, 5556081, 5557962
Like Kevin said, I suspect this has more to do with undo/locking than
with ASSM itself.
Like you said: we really need a scripted session to be able to
reproduce the problem.
There is still a difference from the OP's description: the 10,000-row
inserts were supposed to be single-row inserts.
We also need to know the exact environment, e.g. the block size used
in the tablespace, if it is a default or non-default block size in his
database, indexes on the table being modified, block size of
tablespace where the indexes are stored etc... There were some
oddities in the past with ASSM and larger block sizes like 16KB, in
particular with index maintenance, rows growing due to updates etc.
So we need not only a test case in the form of a script for the DML
performed, but also at least an exact description of the environment,
or better yet, a test case that covers these details as well
(tablespace creation etc.).
Tests were run on AIX 5.3 (64-bit) and Red Hat Linux (32-bit),
Oracle 10.2.0.4
1. Create the tablespace (it uses the default 8K block size):
create tablespace assm
extent management local uniform size 1m
segment space management auto
datafile
'/abc/db01/oracle/ABC1P/oradata/assm_01.dbf' size 1000m;
2. Create table
create table test_assm
(
n1 number,
v1 varchar2(50),
v2 varchar2(50),
v3 varchar2(50),
v4 varchar2(50),
v5 varchar2(50),
v6 varchar2(50),
v7 varchar2(50),
v8 varchar2(50),
v9 varchar2(50),
v10 varchar2(50)
)
tablespace assm;
3. Populate table with 1,000,000 rows, COMMIT at the end
begin
for i in 1..1000000 loop
insert into test_assm values
(i,
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567');
end loop;
end;
/
COMMIT;
4. Insert an additional 1,000 rows into the table using ***SINGLE-ROW***
inserts.
I used the following script to generate the INSERT statements:
select
'insert into test_assm values(' || n1 ||
',''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'');'
from
test_assm
where
rownum < 1001;
It took 1 second to insert 1000 rows through single-row inserts.
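The same 1,000 statements could also be generated outside the database. Here is a minimal sketch in Python; it assumes sequential n1 values 1..1000 (the rownum-based SELECT above picks up whichever rows come first), and the output file name is just an example:

```python
# Generate 1,000 single-row INSERT statements equivalent to the
# SQL-generating-SQL above, as a standalone script.
FILLER = "123456789*" * 4 + "1234567"   # the 47-character column value

def insert_stmt(n1: int) -> str:
    # Ten identical VARCHAR2 values, matching the test table's columns v1..v10.
    values = ", ".join("'%s'" % FILLER for _ in range(10))
    return "insert into test_assm values(%d, %s);" % (n1, values)

stmts = [insert_stmt(i) for i in range(1, 1001)]

# Write a script runnable from SQL*Plus, e.g. @insertstatements.sql
with open("insertstatements.sql", "w") as f:
    f.write("\n".join(stmts) + "\n")

print(len(stmts))  # 1000
```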
5. Delete all rows from the table, don't commit
6. Re-execute the script that inserts 1,000 rows, from a different
session. Runtime > 20 min.
There were no indexes on the table.
An insert into a table containing an uncommitted DELETE should not be
significantly slower than an insert into a table without the DELETE.
> Our company's DBs all use ASSM. It's fine! We don't need to rebuild
> indexes again.
I can't think of any good reason why you would have needed to
rebuild indexes before switching to ASSM, and not needed to
afterwards. Any ideas ?
--
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
Author: Cost Based Oracle: Fundamentals
http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html
The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/faq/ind_faq.html
Interesting test case. I executed the test on 64 bit Windows with a
fairly slow disk system (little front end caching from the disk
subsystem) running Oracle 11.1.0.7, 8KB block size, with the
__DB_CACHE_SIZE currently floating at 0.9375GB due to a much larger
DB_KEEP_CACHE_SIZE value. What do I see?
SET TIMING ON
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'FIND_ME_TEST_ASSM';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
@c:\insertstatements.sql
1 row created.
Elapsed: 00:00:20.92
1 row created.
Elapsed: 00:00:15.98
1 row created.
Elapsed: 00:00:13.52
1 row created.
...
Elapsed: 00:00:12.41
1 row created.
Elapsed: 00:00:11.84
1 row created.
Elapsed: 00:00:12.32
1 row created.
...
Interesting... becoming faster as more blocks are cached.
So, what is in the trace file?
PARSING IN CURSOR #3 len=532 dep=0 uid=56 oct=2 lid=56 tim=220841924138 hv=471712922 ad='2778b31b8' sqlid='dyqznk8f1vj4u'
insert into test_assm values
(15,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #3:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220841924138
WAIT #3: nam='db file sequential read' ela= 17613 file#=7 block#=1900672 blocks=1 obj#=67153 tim=220841943750
WAIT #3: nam='db file sequential read' ela= 458 file#=7 block#=1900680 blocks=1 obj#=67153 tim=220841944275
WAIT #3: nam='db file sequential read' ela= 617 file#=7 block#=1900681 blocks=1 obj#=67153 tim=220841944980
WAIT #3: nam='db file sequential read' ela= 73 file#=7 block#=1900682 blocks=1 obj#=67153 tim=220841945113
WAIT #3: nam='db file sequential read' ela= 387 file#=7 block#=1900683 blocks=1 obj#=67153 tim=220841945532
WAIT #3: nam='db file sequential read' ela= 72 file#=7 block#=1900684 blocks=1 obj#=67153 tim=220841945656
WAIT #3: nam='db file sequential read' ela= 14610 file#=7 block#=1900685 blocks=1 obj#=67153 tim=220841960301
...
WAIT #3: nam='db file sequential read' ela= 28 file#=7 block#=1972309 blocks=1 obj#=67153 tim=220862843585
WAIT #3: nam='db file sequential read' ela= 29 file#=7 block#=1972325 blocks=1 obj#=67153 tim=220862843638
WAIT #3: nam='db file sequential read' ela= 69 file#=7 block#=1972341 blocks=1 obj#=67153 tim=220862843732
WAIT #3: nam='db file sequential read' ela= 41 file#=7 block#=1972102 blocks=1 obj#=67153 tim=220862843817
EXEC #3:c=3759624,e=20904025,p=69802,cr=69793,cu=83979,mis=0,r=1,dep=0,og=1,plh=0,tim=220862828163
STAT #3 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=69802 pw=0 time=0 us)'
Looks like a lot of single block reads, some in the range of 0.017613
seconds, others in the range of 0.000028 seconds.
A summary of the first insert looks like this:
First Reference: Cursor 3 Ver 1 Parse at 0.000000
|PARSEs  1|CPU S 0.000000|CLOCK S  0.000000|ROWs 0|PHY RD BLKs     0|CON RD BLKs (Mem)     0|CUR RD BLKs (Mem)     0|
|EXECs   1|CPU S 3.759624|CLOCK S 20.904025|ROWs 1|PHY RD BLKs 69802|CON RD BLKs (Mem) 69793|CUR RD BLKs (Mem) 83979|
|FETCHs  0|CPU S 0.000000|CLOCK S  0.000000|ROWs 0|PHY RD BLKs     0|CON RD BLKs (Mem)     0|CUR RD BLKs (Mem)     0|
CPU S 100.00%  CLOCK S 100.00%
*  18.032425 seconds of time related to data file I/O
*   0.001392 seconds of time related to client/network events
Wait Event Summary:
db file sequential read       18.032425  On DB Server       Min Wait: 0.000022  Avg Wait: 0.000258  Max Wait: 0.071639
SQL*Net message to client      0.000003  On Client/Network  Min Wait: 0.000003  Avg Wait: 0.000003  Max Wait: 0.000003
SQL*Net message from client    0.001389  On Client/Network  Min Wait: 0.001389  Avg Wait: 0.001389  Max Wait: 0.001389
69,802 physical block reads, 69,793 consistent gets, 83,979 current
mode gets, and 18.03 seconds spent performing single block reads. This
seems to behave similarly to the bug that Jonathan Lewis found with
16KB block sizes last year, where column values in existing rows were
changed from NULL to a value. In that case, the current mode gets
were the tipoff that there was a problem.
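Summaries like the Wait Event Summary above can be produced from the raw trace file with a short script. This is only a sketch: it assumes the `WAIT #n: nam='...' ela= N ...` line format shown in the excerpt (with `ela` in microseconds, as in 10g and later); a real trace profiler has to handle many more record types.

```python
import re
from collections import defaultdict

# Matches lines such as:
#   WAIT #3: nam='db file sequential read' ela= 17613 file#=7 ...
WAIT_RE = re.compile(r"WAIT #\d+: nam='([^']+)' ela=\s*(\d+)")

def summarize_waits(lines):
    """Return {event: (count, total_s, min_s, max_s)} from 10046 trace lines."""
    by_event = defaultdict(list)
    for line in lines:
        m = WAIT_RE.search(line)
        if m:
            # ela is reported in microseconds in 10g and later traces.
            by_event[m.group(1)].append(int(m.group(2)) / 1_000_000.0)
    return {ev: (len(w), sum(w), min(w), max(w)) for ev, w in by_event.items()}

# Two WAIT lines copied from the trace excerpt above.
sample = [
    "WAIT #3: nam='db file sequential read' ela= 17613 file#=7 block#=1900672 blocks=1 obj#=67153 tim=220841943750",
    "WAIT #3: nam='db file sequential read' ela= 458 file#=7 block#=1900680 blocks=1 obj#=67153 tim=220841944275",
]
for event, (count, total, lo, hi) in summarize_waits(sample).items():
    print("%-30s n=%d total=%.6f min=%.6f max=%.6f" % (event, count, total, lo, hi))
```

Run against a full trace file (one line per list element) it reproduces the count/min/avg/max style of breakdown shown above.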
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.
I repeated the test with an ASSM tablespace with 1MB uniform extents.
The first insert performed 71,250 physical block reads, 71,206
consistent gets, 85,473 current mode gets, 18.85 seconds performing
single block reads with an elapsed time of 21.53 and for some reason 0
CPU seconds (the next insert reported 3.59 CPU seconds).
I also repeated the test with a locally managed tablespace with 1MB
uniform extents without ASSM: "SIZE 2G REUSE AUTOEXTEND ON NEXT 10M
MAXSIZE UNLIMITED LOGGING EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
SEGMENT SPACE MANAGEMENT MANUAL". The results in the final test were
a little disappointing. The *totals* from the script execution for
all of the inserts:
Total for Trace File:
|PARSEs 1003|CPU S 0.234002|CLOCK S 0.312034|ROWs    0|PHY RD BLKs 0|CON RD BLKs (Mem)    0|CUR RD BLKs (Mem)    0|
|EXECs  1003|CPU S 0.031200|CLOCK S 0.062434|ROWs 1002|PHY RD BLKs 0|CON RD BLKs (Mem) 1051|CUR RD BLKs (Mem) 1343|
|FETCHs    2|CPU S 0.000000|CLOCK S 0.000000|ROWs    1|PHY RD BLKs 0|CON RD BLKs (Mem)    3|CUR RD BLKs (Mem)    0|
Wait Event Summary:
SQL*Net message to client      0.001472  On Client/Network  Min Wait: 0.000001  Avg Wait: 0.000001  Max Wait: 0.000076
SQL*Net message from client    0.683966  On Client/Network  Min Wait: 0.000402  Avg Wait: 0.000684  Max Wait: 0.001799
The totals for all of the inserts performed 0 physical block reads,
1,051 consistent gets, 1,343 current mode gets, 0 seconds performing
single block reads with an elapsed time of 0.374468 seconds (0.312034
of that was for parsing) and 0.265202 CPU seconds (0.234002 for
parsing).
Looks like a problem.
snip
I tried your test case. My system is an 11.1.0.7 Enterprise Edition
64 bit system on OEL 5.2 64 bit. It has 32 gig of RAM and a very nice
IO subsystem. Using ASM for storage.
My times to complete steps 4 and 6 ( yes from different sessions )
were the same basically ... not different times as you reported.
There is a difference between ASM (Automatic Storage Management) and
ASSM (Automatic Segment Space Management). Are you also on ASSM?
snip
> > I tried your test case. My system is an 11.1.0.7 Enterprise Edition
> > 64 bit system on OEL 5.2 64 bit. It has 32 gig of RAM and a very nice
> > IO subsystem. Using ASM for storage.
>
> > My times to complete steps 4 and 6 ( yes from different sessions )
> > were the same basically ... not different times as you reported.
>
> There is a difference between ASM (Automatic Storage Management) and
> ASSM (Automatic Segment Space Management). Are you also on ASSM?
If you look at the test script that was supplied, it included a CREATE
TABLESPACE statement. That was the ASSM part ... already baked in as
part of the test script.
I added the details about the system I ran the test script(s) for
completeness. It uses ASM and EMC based storage.