What Oracle version are you speaking about?
I did some very basic tests on 10.2.0.4 and couldn't see anything
similar to compression. I assumed the compression rate should be
(regardless of which compression method is used) highest for the most
redundant data - in my test case the dumpfile size was the same for
CLOBs filled either with dbms_random or with blanks. What size does
sum(dbms_lob.getlength(lob_column)) return?
Does it roughly correlate with 45 GB? If not, I would assume a massive
space overallocation (for whatever reason) during import. Other points
to consider could of course be the character sets of both the source
and target database and the original size of the LOB segments in the
source. If you are on 11g, however, and the export was done with the
data compression option, I would not wonder about the compression rate,
as it depends heavily on your data pattern.
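A rough sketch of the kind of test I mean (table and data are purely
illustrative; the point is to compare dumpfile sizes for redundant vs.
random LOB content):

create table lob_test ( id number, txt clob );

-- highly redundant data: 1000 CLOBs full of blanks
insert into lob_test
  select level, rpad(' ', 4000)
  from dual connect by level <= 1000;

-- random data: 1000 CLOBs of random characters
insert into lob_test
  select level + 1000, dbms_random.string('x', 4000)
  from dual connect by level <= 1000;
commit;

-- actual stored LOB bytes, to compare against the dumpfile sizes
select sum(dbms_lob.getlength(txt))/1048576 mb from lob_test;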
Best regards
Maxim
On Sun, 12 Oct 2008 12:09:13 +0200, Maxim Demenko wrote:
> What Oracle version are you speaking about?
10.2.0.4
> I did some very basic tests on 10.2.0.4 and couldn't see anything
> similar to compression. I assumed the compression rate should be
> (regardless of which compression method is used) highest for the most
> redundant data - in my test case the dumpfile size was the same for
> CLOBs filled either with dbms_random or with blanks. What size does
> sum(dbms_lob.getlength(lob_column)) return? Does it roughly correlate
> with 45 GB?
No, it doesn't:

SQL> select sum(dbms_lob.getlength(segment_text))/1048576 MB
  2  from news_segments partition(data0544);

        MB
----------
8060.59686

It correlates more closely with about 8 GB.
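For comparison, the allocated side of the picture can be pulled with
something along these lines (just a sketch - dba_lobs joined to
dba_segments, owner filter omitted for brevity):

select l.column_name, sum(s.bytes)/1073741824 gb_allocated
  from dba_lobs l
  join dba_segments s
    on s.owner = l.owner
   and s.segment_name = l.segment_name
 where l.table_name = 'NEWS_SEGMENTS'
 group by l.column_name;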
> If not, I would assume a massive space overallocation (for whatever
> reason) during import.
Yes, that's probably it. I will attempt a normal export, without the
"dp" version, and see what happens. That's probably the dirty little
secret of Data Pump: it wastes humongous amounts of space. I am not
surprised that Oracle Corp. decided to venture into the storage
business.
> Other points to consider could of course be the character sets of both
> the source and target database and the original size of the LOB
> segments in the source.
Both character sets are the same.
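For the record, the character sets can be compared on both sides with
something like this:

select parameter, value
  from nls_database_parameters
 where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');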
I just tried a normal export, with the same result. It seems that
Oracle grossly overallocates space for LOB columns in 10g. In other
words, LOB columns are enormous space wasters. In my case, 45+ GB of
space was used to store just 8.5 GB of data. Yeah, I understand now why
Oracle Corp. went into the storage business. LOB columns are their
greatest booster.
I think what you observe should have a rational explanation - though it
is difficult to make sensible assumptions without knowing the details.
Anyway, note 386341.1 may give you some pointers if you would like to
investigate further. A shrink should probably work as a workaround if
your immediate goal is space deallocation; however, one should be
cautious - IIRC, there were plenty of bugs with shrink and LOB
corruption in previous releases/patchsets.
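Roughly, the shrink would look like the following (a sketch only - on a
partitioned table like yours it may need to be done per LOB partition,
and shrink operations generally require row movement to be enabled):

alter table news_segments enable row movement;

-- shrink the LOB segment(s) belonging to the CLOB column
alter table news_segments modify lob (segment_text) (shrink space);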
Best regards
Maxim
When doing this, did you store your LOB in a LOB segment or in-line in
the table? Did you use UTL_COMPRESS? If so, which compression quality?
With ASM?
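(Whether the column is stored in-row, and its chunk/pctversion
settings, can be checked with something like this - table name taken
from the query you posted earlier:)

select column_name, in_row, chunk, pctversion
  from dba_lobs
 where table_name = 'NEWS_SEGMENTS';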
There is no question but that the explosion in database size points
to storage vendors doing rather well. But from where I sit it seems
Oracle, the database group under Andy Mendelson, is doing a lot to
minimize storage requirements.
But Oracle Corp. is not, in any way of which I am aware, in the
"storage business." Where did you get this idea?
--
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damo...@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
> I think what you observe should have a rational explanation - though
> it is difficult to make sensible assumptions without knowing the
> details. Anyway, note 386341.1 may give you some pointers if you would
> like to investigate further. A shrink should probably work as a
> workaround if your immediate goal is space deallocation; however, one
> should be cautious - IIRC, there were plenty of bugs with shrink and
> LOB corruption in previous releases/patchsets.
>
> Best regards
>
> Maxim
I opened a TAR. This is completely unacceptable. I'll keep you informed.
--
Mladen Gogala
http://mgogala.freehostia.com
>
> There is no question but that the explosion in database size points
> to storage vendors doing rather well. But from where I sit it seems
> Oracle, the database group under Andy Mendelson, is doing a lot to
> minimize storage requirements.
>
> But Oracle Corp. is not, in any way of which I am aware, in the
> "storage business." Where did you get this idea?
> --
> Daniel A. Morgan
> Oracle Ace Director & Instructor
> University of Washington
> damor...@x.washington.edu (replace x with u to respond)
> Puget Sound Oracle Users Group
> www.psoug.org
Maybe from http://www.oracle.com/technology/products/bi/db/exadata/pdf/exadata-faq.pdf
(Oddly enough, I don't seem to be able to cut and paste the relevant
statements into this window or notepad/wordpad - I guess because of the
new version of Adobe.)
jg
--
@home.com is bogus.
Haka
Chunka
Lilly
- local school board election sign.