
Tested 11g OLTP compression and found rather serious issue


tamk...@gmail.com

Jul 31, 2009, 4:50:11 AM
Hi,

I ran the below script in a series of 11g OLTP compression tests. Note
that the compression ratio is fine right after the inserts, and after
we update the OBJECT_ID column, the compression ratio remains the same.
But after we update a (mostly) NULL column (SUBOBJECT_NAME), the
compressed segment *explodes* to a size even bigger than the
uncompressed one! The latter update also takes much longer to
complete (no surprise).

Seems like a pitfall. Imagine a 100 GB compressed table in production
on which we make a seemingly innocent update...

Any ideas? (No comments on the commits inside a loop, thank you; as
mentioned below, they are there to simulate OLTP.)

- Kenneth Koenraadt

***************

SQL> -- Create an ASSM tablespace with small extent size for testing
SQL> create tablespace t1 uniform size 128k;

Tablespace created.

SQL>
SQL> -- Create 2 logically identical tables, one non-compressed and one OLTP compressed

SQL> create table c1 tablespace t1 as select * from dba_objects where 0 = 1;

Table created.

SQL> create table c2 compress for all operations tablespace t1 as select * from dba_objects where 0 = 1;

Table created.

SQL>
SQL>
SQL> -- Simulate OLTP
SQL> declare
  cursor c is
    select * from dba_objects;
  i binary_integer;
  j binary_integer := 0;  -- must be initialized, or mod(j,10) stays NULL and the commits never fire
begin
  for i in 1..10 loop
    for c_rec in c loop
      j := j + 1;
      insert into c1 values c_rec;
      insert into c2 values c_rec;
      if mod(j,10) = 0 then
        commit;
      end if;
    end loop;
  end loop;
end;
/

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL>
SQL> -- Calculate compress ratio
SQL> select trunc(b.bytes/a.bytes,2) as compress_ratio
       from dba_segments a, dba_segments b
      where a.segment_name = 'C1'
        and b.segment_name = 'C2';

COMPRESS_RATIO
--------------
.48

SQL>
SQL> -- now update every row in compressed table, non-null column
SQL>
SQL> update c2 set object_id = object_id;

139820 rows updated.

SQL> commit;

Commit complete.

SQL>
SQL> -- Calculate compress ratio again
SQL> select trunc(b.bytes/a.bytes,2) as compress_ratio
       from dba_segments a, dba_segments b
      where a.segment_name = 'C1'
        and b.segment_name = 'C2';

COMPRESS_RATIO
--------------
.48

SQL>
SQL> -- now update every row in compressed table, null column
SQL> update c2 set subobject_name = 'a';

139820 rows updated.

SQL> commit;

Commit complete.

SQL>
SQL> -- Calculate compress ratio again
SQL> select trunc(b.bytes/a.bytes,2) as compress_ratio
       from dba_segments a, dba_segments b
      where a.segment_name = 'C1'
        and b.segment_name = 'C2';

COMPRESS_RATIO
--------------
1.19

SQL>
*********************

tamk...@gmail.com

Jul 31, 2009, 5:02:55 AM
Oh, I forgot this important info, sorry:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for 32-bit Windows: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production

- Kenneth Koenraadt

Mark D Powell

Jul 31, 2009, 9:46:37 AM
On Jul 31, 5:02 am, "Tamkat...@gmail.com" <tamkat...@gmail.com> wrote:
> Oh, I forgot this important info, regret :
>
> SQL> select * from v$version;
>
> BANNER
> --------------------------------------------------------------------------------

> Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
> PL/SQL Release 11.1.0.7.0 - Production
> CORE    11.1.0.7.0      Production
> TNS for 32-bit Windows: Version 11.1.0.7.0 - Production
> NLSRTL Version 11.1.0.7.0 - Production
>
> - Kenneth Koenraadt

Ken, I do not know how table compression works on 11g, but when first
introduced, if you updated a row in a compressed table, Oracle stored
the updated row back in uncompressed format. Compression was
suggested for tables that had little to no update activity. Your
results could be partially due to migrated rows, if no enhancement to
recompress the row was added in 11g.
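
One way to test that hypothesis (just a sketch, reusing the C2 table
from your original post; ANALYZE is used because DBMS_STATS does not
populate CHAIN_CNT):

-- Check for migrated/chained rows in the compressed table
analyze table c2 compute statistics;

select table_name, num_rows, chain_cnt, blocks
  from user_tables
 where table_name = 'C2';

A large CHAIN_CNT after the SUBOBJECT_NAME update would point at row
migration rather than failed recompression.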

Interested in any updates you come up with.

HTH -- Mark D Powell --


tamk...@gmail.com

Jul 31, 2009, 12:35:45 PM

Hi Mark,

I tried running the test case again with an "ordinarily" compressed
table:

create table c2 compress tablespace t1 as
select * from dba_objects where 0 = 1;


The result was quite interesting:

Above, with an OLTP compressed table, the compress ratios were 0.48,
0.48 and 1.19 respectively.

With ordinary compression, they were 0.91, 0.91 and 0.91!

So while initially less efficient, ordinary compression seems much
more robust to updates than OLTP compression. A little surprising.

The whitepaper on OLTP compression claims that it works at the block
level, entirely differently from ordinary compression.

On the other hand, it also claims that OLTP compression works
"between" transactions and compresses a block in batches when certain
thresholds are reached.

So if the test above were extended with further updates on table C2,
might the segment actually shrink again?

Then again, if the compression ratio fluctuates between 50 and 120%
as above, how much is OLTP compression worth? You would need to
reserve not 50% of the original storage, but 120%.

More tests are required, I guess.
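
A finer-grained view than dba_segments.bytes might help here; a
sketch using DBMS_SPACE.SPACE_USAGE (valid for ASSM segments, which
tablespace T1 is; the table name is from the test above):

-- Block-level space usage for the compressed test table C2
set serveroutput on
declare
  l_unf  number;  l_unf_b  number;
  l_fs1  number;  l_fs1_b  number;  -- blocks with 0-25% free space
  l_fs2  number;  l_fs2_b  number;  -- 25-50% free
  l_fs3  number;  l_fs3_b  number;  -- 50-75% free
  l_fs4  number;  l_fs4_b  number;  -- 75-100% free
  l_full number;  l_full_b number;
begin
  dbms_space.space_usage(user, 'C2', 'TABLE',
    l_unf, l_unf_b, l_fs1, l_fs1_b, l_fs2, l_fs2_b,
    l_fs3, l_fs3_b, l_fs4, l_fs4_b, l_full, l_full_b);
  dbms_output.put_line('full: '||l_full||'  fs1: '||l_fs1||
    '  fs2: '||l_fs2||'  fs3: '||l_fs3||'  fs4: '||l_fs4);
end;
/

Watching how the full vs. nearly-empty block counts change across the
two updates would show whether the growth comes from new blocks or
from blocks left half empty.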

- Kenneth Koenraadt


joel garry

Jul 31, 2009, 12:59:10 PM

Of course, I have no idea how this really works, but see metalink
Note: 466362.1

"As an example on what would happen when inserting into block that is
part of a compressed table, the cycle would be like :

- The block is empty and available for inserts.
- When you start inserting into this block, data is stored in an
uncompressedformat (like for uncompressed tables).
- However, as soon as you reach the PCTFREE of that block, the data is
automatically compressed, potentially reducing the
space it originally occupied.
- This allows for new uncompressed inserts to take place in the same
block, until PCTFREE is reached again. At that point
compression is triggered again to reduce space occupation in the
block."

So I'm wondering whether there is a bug, or whether Kenneth has some
combination of PCTFREE and compressible data that doesn't trigger the
compression with this particular pattern of updating - since each row
is adding only one byte, the additional length is only the number of
rows in the block, and the wasted space below PCTFREE on the original
load is less than that (assuming a row that would have gone over
PCTFREE on the original insert wound up in the next block). I'm also
wondering what the first update of object_id=object_id actually does.

Again, this is all speculation, but it suggests experiments that could
be verified with tracing and block dumps, especially my assumptions,
which may mean the last line of the quote from metalink is misleading.
If that line is not misleading, it would imply the update works
completely differently from the insert, which jams as much as possible
into the block under PCTFREE, and any large proportion of updates
would necessarily explode in size with massive chaining - which would
seem to say the docs lie about the applicability of this feature to
OLTP: http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/schema.htm#CNCPT1133
Even though it does say this should only be used on data that isn't
changing much, what a weasel line.

That's probably unfair; I think what the docs should make clear is
that the feature is meant for the parts of OLTP systems that have
settled down and aren't being updated any more, which in the past
would have been archived out. Now you'd probably do something like
move the data into compressed partitions or tablespaces, depending on
options. This could turn a simple app upgrade that merely adds a
column into a huge undertaking.
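
Something like this, say (table, partition and index names are
hypothetical):

-- Compress settled data after the fact
alter table orders move partition p_2008 compress for all operations;

-- a MOVE leaves local index partitions UNUSABLE, so rebuild them
alter index orders_ix rebuild partition p_2008;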

jg
--
@home.com is bogus.
So, how long do you need data?
http://www3.signonsandiego.com/stories/2009/jul/31/1m31plane001745-wwii-planes-number-doesnt-match-re/?metro&zIndex=141519

Mark D Powell

Jul 31, 2009, 4:03:36 PM
> feature to OLTP: http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/schema...

> Even though it does say this should only be used on data that isn't
> changing much, what a weasel line.
>
> That's probably unfair, I think what the docs should be making clear
> is the feature is to be used for the parts of OLTP systems that have
> settled down and aren't being updated any more, which in the past
> would be archived out.  Now you'd probably do something like move the
> data into compressed partitions or tablespaces, depending on options.
> This could make a simple app upgrade that simply adds a column or
> something into a huge undertaking.
>
> jg
> --
> @home.com is bogus.
> So, how long do you need data? http://www3.signonsandiego.com/stories/2009/jul/31/1m31plane001745-ww...

Interesting comments JG. It would seem from Ken's work so far that
OLTP compression might be suitable only for static tables and perhaps
audit history tables where the DML activity consists only of inserts
of new audit/history data.

John Hurley

Jul 31, 2009, 7:18:40 PM
On Jul 31, 4:50 am, "tamkat...@gmail.com" <tamkat...@gmail.com> wrote:

snip

You might want to volley this over to asktom ( http://asktom.oracle.com
), especially since you have a test case, and get his read on it.

tamk...@gmail.com

Aug 8, 2009, 2:09:50 PM

Hi again,

Further findings on OLTP compression:

1) Reducing a column's width (e.g. update mytable set col1 =
substr(col1,1,trunc(length(col1)/2))) and then expanding it again
also seems to make the segment explode (a sketch follows below).

2) Take a compressed table with approx. 130,000 rows and 2 columns
holding exactly the same content, then issue an update that sets one
of the columns equal to the other. No logical change, though a
physical one. The segment explodes again... the case is below.
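
For (1), the statement pair looks roughly like this (column name
taken from the earlier dba_objects-based test; the expansion
expression is only illustrative, not a true inverse):

-- Shrink every value to half its length...
update c2 set object_name = substr(object_name, 1, trunc(length(object_name)/2));
commit;
-- ...then grow the values back to roughly the original width
update c2 set object_name = rpad(object_name, 2*length(object_name), 'x');
commit;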

For now, I would be reluctant to use OLTP compression on even rarely
updated tables. Any comments/findings?

******************************

SQL> -- Create an ASSM tablespace with small extent size for testing

SQL> drop tablespace t1 including contents and datafiles;

Tablespace dropped.

SQL> create tablespace t1 uniform size 128k;

Tablespace created.

SQL>
SQL>
SQL> create table c1 (col1 varchar2(30), col2 number(10), col3 varchar2(30), col4 number(10)) tablespace t1;

Table created.

SQL>
SQL> create table c2 (col1 varchar2(30), col2 number(10), col3 varchar2(30), col4 number(10)) compress for all operations;

Table created.

SQL>
SQL>
SQL>
SQL>
SQL> -- Simulate OLTP
SQL> declare
  cursor c is
    select object_id, object_name
      from dba_objects;
  i binary_integer;
  j binary_integer := 0;  -- initialized so the mod(j,10) commit logic fires
begin
  for i in 1..10 loop
    for c_rec in c loop
      j := j + 1;
      insert into c1 values (c_rec.object_name, c_rec.object_id, c_rec.object_name, c_rec.object_id);
      insert into c2 values (c_rec.object_name, c_rec.object_id, c_rec.object_name, c_rec.object_id);
      if mod(j,10) = 0 then
        commit;
      end if;
    end loop;
  end loop;
end;
/

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> -- Calculate compress ratio
SQL> select trunc(b.bytes/a.bytes,2) as compress_ratio
       from dba_segments a, dba_segments b
      where a.segment_name = 'C1'
        and b.segment_name = 'C2';

COMPRESS_RATIO
--------------
.71

SQL>
SQL> commit;

Commit complete.

SQL>
SQL>
SQL> update c1 set col3 = col1;

131440 rows updated.

SQL>
SQL> update c2 set col3 = col1;

131440 rows updated.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL>


SQL> select trunc(b.bytes/a.bytes,2) as compress_ratio
       from dba_segments a, dba_segments b
      where a.segment_name = 'C1'
        and b.segment_name = 'C2';

COMPRESS_RATIO
--------------
1.2

Jonathan Lewis

Aug 10, 2009, 2:10:44 AM

"Tamk...@gmail.com" <tamk...@gmail.com> wrote in message
news:c3c48bcf-640d-4351...@j9g2000vbp.googlegroups.com...

>Hi again,
>
>Further findings on OLTP compression :
>

But this is just like your previous test.
You are testing an OLTP feature with a DSS type of update.

General point with Oracle - understand what a feature is for,
and then use it when it is supposed to be used, and think
carefully before using it where it is not supposed to be used.

Congratulations, by the way, on thinking up a couple of
good test demonstrations of the types of scenario where
online compression is not appropriate.

--
Regards

Jonathan Lewis
http://jonathanlewis.wordpress.com

Author: Cost Based Oracle: Fundamentals
http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html

The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/faq/ind_faq.html


tamk...@gmail.com

Aug 11, 2009, 6:53:39 AM
On 10 Aug., 08:10, "Jonathan Lewis" <jonat...@jlcomp.demon.co.uk>
wrote:
> "Tamkat...@gmail.com" <tamkat...@gmail.com> wrote in message

>
> news:c3c48bcf-640d-4351...@j9g2000vbp.googlegroups.com...
>
> >Hi again,
>
> >Further findings on OLTP compression :
>
> But this is just like your previous test.
> You are testing an OLTP feature with a DSS type of update.
>
> General point with Oracle - understand what a feature is for,
> and then use it when it is supposed to be used, and think
> carefully before using it where it is not supposed to be used.
>
> Congratulations, by the way, on thinking up a couple of
> good test demonstrations of the types of scenario where
> online compression is not appropriate.
>
> --
> Regards
>
> Jonathan Lewis
> http://jonathanlewis.wordpress.com
>
> Author: Cost Based Oracle: Fundamentals
> http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html

>
> The Co-operative Oracle Users' FAQ
> http://www.jlcomp.demon.co.uk/faq/ind_faq.html

Hi Jonathan,


Thanks for your comments.

True, this is not strictly OLTP.

Just a comment on evaluating a feature before use: I guess the
challenge here (and in general) is that the premises change so rapidly.

My experience is that almost every system encounters a mixed workload
sooner or later. Rapidly changing business requirements (whether
driven by a short-term wish to save expenses, to develop the
business, or whatever) make it hard for the DBA to predict the future
usage of the data. A set of seemingly strict OLTP tables may be
pulled into some DSS/WH application, changing the volatility of those
tables suddenly and/or steadily over time.

The DBA can thus suggest a set of proper/ideal solutions for that
problem (at least from the DBA perspective), but in this world his/her
solution will be mercilessly cost/benefit evaluated and may or may not
be chosen.

The good news is that the DBA's work won't be boring, nor obsolete, in
the future... :-)

- Kenneth Koenraadt
