I have created all the tablespaces in the target database as locally
managed with uniform extents. I thought that Oracle would then ignore
the storage clause on create table/index statements etc., and, as
there is no data, just allocate a single extent for each table.
However, when I run the import, some of the tables have large extent
sizes defined on the source database, and Oracle seems to be creating
the first extent at the size given in the imported DDL.
I don't have enough space on the target database for this, so I need
to override the initial extent sizes somehow. Is there any sane way
to do this (apart from somehow pre-creating all the objects - there
are over 600 tables in the schema!)?
Thanks,
Stephen.
Reading the documentation is always worthwhile:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/exp_imp.htm#CEGFIAGE
Notice that compress defaults to y, and you're painfully aware of the
consequences. You need to set this parameter to n on the command line
or in your parfile:
exp .../... file=.... compress=n .....
This will prevent exp from 'resizing' the initial extent.
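For instance, a parameter file keeps the setting from being forgotten.
Contents of exp.par (the file names and owner here are illustrative):

file=source_schema.dmp
log=source_schema_exp.log
owner=<schema owner here>
compress=n
rows=y

Then run:

exp username/password parfile=exp.par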
David Fitzjarrell
I know what compress=y/n does - but if my original create table
statement on the big database said
"create table x .... storage (initial 80M) .... "
then won't export generate DDL with the same initial extent size
when compress is n?
It will, which is why I believe that compress is not set properly for
your export.
David Fitzjarrell
You might want to run:
set long 50000
select dbms_metadata.get_ddl('TABLE','<table name here>')
from dual;
to see what Oracle believes the storage parameters to be. You might
be surprised at what you find.
David Fitzjarrell
OK, I ran the sql you suggested and the end of the output is:
...
STORAGE(INITIAL 67108864 NEXT 4194304 MINEXTENTS 1 MAXEXTENTS
2147483645
So my storage clause is coming up as ~67MB. If that size comes up in
the export file, when I import it into a locally managed tablespace
with uniform extents of 64K, will that table, when empty, consume 67MB
or 64K on Oracle 10g?
I tested this again - compress is set to n for sure. The table that
reported a 67MB initial extent size via dbms_metadata consumes 67MB
when I import it. It seems Oracle just ignores the uniform extent size
if you specify a bigger one in your DDL.
Is there any way to override this, or am I going to have to manually
(or via a script) adjust the storage parameters of every table before
exporting?
Look at the exp doc again. If compress=y, it makes a big extent. If
compress=n it uses the storage parameters of each table. Either
precreate the tables with the proper initial, or adjust the tables
before export.
Note that the imp command INDEXFILE allows you to get the create table
statements that you can modify to precreate the tables.
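For instance (the dump file and user names here are illustrative):

imp scott/tiger file=prod.dmp indexfile=tables.sql rows=n full=y

The CREATE TABLE statements come out commented with REM in tables.sql;
uncomment them, edit the storage clauses down, run the script, and
then import into the precreated tables with ignore=y.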
Uniform extents are uniform; Oracle doesn't ignore them, but it may
use a whole lot of them because the INITIAL in the DDL is large.
jg
--
@home.com is bogus.
Rich Niemiec's 10gR2 book: http://www.amazon.com/Oracle-Database-Performance-Tuning-Techniques/dp/0072263059
"Customers viewing this page may be interested in these Sponsored
Links
Advanced Finger Cymbals
www.ansuya.com Zill Drills with Ansuya Movement Demonstration &
Technique"
It doesn't "ignore" the request for 67MB. It seeks to honour it,
although it won't do it in the precise way the storage clause is
demanding it.
That is, if you've got INITIAL 67M, your table will be created with 67
1MB extents (or 134 512KB extents etc). You will thus get your 67MB,
but not quite in the way you were expecting it.
Locally managed tablespaces have always tried to honour the intention
of the INITIAL clause; it's the NEXT they completely ignore!
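You can check this for yourself after the import (the owner and table
name here are illustrative):

select count(*) as extents, sum(bytes)/1024/1024 as mb
from dba_extents
where owner = 'SCOTT'
and segment_name = 'BIG_TABLE';

In a 64K uniform tablespace you'd expect something like 1024 extents
adding up to the 67108864 bytes the storage clause asked for.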
If your export was done with compress=Y (one of the worst defaults
Oracle ever invented), then - for large tables - your import will be
demanding large amounts of free space up-front. It won't be demanding
a single 67MB piece of free space, but it will still be demanding 67MB
of free space somewhere. If you haven't got it, you'll run into
trouble.
Do what David told you to do a long time ago: compress=N is the only
sensible option for export anyway, and you might as well do it. Then
your dump file will be asking for INITIAL 1000M NEXT 512K (for
example), and it will be given one 1MB initial extent and as many 1MB
next extents as it needs to import the entire data set.
If you really and truly want to muck about with manipulating storage
clauses (not a sensible option, but if you insist...) then do a Google
Search for DataBee. Allows you to override storage clauses in a dump
file without having to modify them in the source database.
Yeah, I did do this - the storage parameters on the tables are massive
(initial 67MB in the case I looked at, and there are over 600 tables
in the schema!). I suspect what has happened is that once upon a time
this database was created from an export with compress=Y, and all the
tables have had their storage parameters changed because of this.
> Note that the imp command INDEXFILE allows you to get the create table
> statements that you can modify to precreate the tables.
That might just work - at least I can write a quick Perl script to fix
the statements.
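Something as blunt as this might do it, assuming none of the STORAGE
clauses contain nested parentheses (file names illustrative):

perl -0777 -pe 's/STORAGE\s*\([^)]*\)//gi' tables.sql > tables_fixed.sql

The -0777 slurps the whole file, so the regex still matches when a
STORAGE clause is split across lines.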
That's where my understanding went missing, as I thought Oracle just
ignored the entire storage clause on the table, but it makes sense
that it doesn't.
> If you really and truly want to muck about with manipulating storage
> clauses (not a sensible option, but if you insist...) then do a Google
> Search for DataBee. Allows you to override storage clauses in a dump
> file without having to modify them in the source database.
It may not be sensible for production, but I am attempting to recreate
a DEV copy of a terabyte+ database that contains only the lookup
table data, all objects and stored procedures, so I need to get the
storage parameters down to a manageable size (over 600 tables in the
schema at 50MB each = too big!). I will check out DataBee and see what
it can do for me!
Thanks,
Stephen.
Databee is here: http://www.databee.com/
Their free DDL extraction wizard is probably what you want... but that
requires a rows=n export dump file. It's very good, though, and I've
used it with a 10g dump file without incident.
Someone else has already mentioned the INDEXFILE=<file> ROWS=N version
of import: that will get you a text file you can hack to pieces to
act as a table and index creation script.
You could also use dbms_metadata.get_ddl to achieve the same sort of
thing, but it's going to be on a table-by-table basis. Something like

select 'select dbms_metadata.get_ddl(''TABLE'','''||table_name||''') from dual;'
from user_tables;

would be in order, though, to generate the script that will generate
the script that will create the objects you want in one hit - and
with a bit of spooling and editing, the storage clauses can be
sorted.
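Better still, dbms_metadata can be told to leave the storage clauses
out of the generated DDL altogether, which saves the editing (this is
a session-level setting):

begin
  dbms_metadata.set_transform_param(
    dbms_metadata.session_transform, 'STORAGE', false);
end;
/

Any get_ddl calls you make in the same session after that will omit
the STORAGE clause.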
The basic principle in either case is to pre-create your objects using
new storage parameters and then run import with ignore=y just to
populate the already-created objects.
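In other words (the dump file and user names are illustrative):

imp scott/tiger file=prod.dmp fromuser=scott touser=scott ignore=y

ignore=y stops imp from treating 'object already exists' as an error,
so the rows simply go into the tables you precreated.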
Not sure if you mentioned an Oracle version, but if it's 10g or higher
then you would probably do better with the Data Pump version of export/
import, which has a TRANSFORM parameter that lets you override
storage clauses (amongst other things) as you are doing the import.
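For example (the directory object and file name here are
illustrative):

impdp scott/tiger directory=dpump_dir dumpfile=prod.dmp transform=storage:n

transform=storage:n drops just the STORAGE clause from each CREATE
statement; transform=segment_attributes:n goes further and drops the
physical attributes, tablespace and logging clauses as well.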
I found a tool called DDL Wizard, made by the same people who make
DataBee I believe, and it allowed me to get all the objects created
without storage clauses - quite a useful tool.
I am importing into Oracle 10g, but exporting from 9i, so I am stuck
with imp/exp for the time being.
Thanks for all the help on this thread - it's got me what I needed,
finally!
Cheers,
Stephen.
Apparently this table was exported with compress=y, dropped and
imported from that dmp file. At this point having compress=n only
keeps Oracle from making that even larger.
> If that size comes up in
> the export file, when I import it into locally managed tablespace with
> uniform extents of 64K will that table, when empty consume 67M or 64K
> on Oracle 10g?
67 M, or thereabouts, as it will consume as many 64K extents as
necessary to achieve that size. Howard has explained this well.
David Fitzjarrell
Since you are using LMTs, Oracle uses your INITIAL extent as a guide
for allocating space when creating the table. Basically, Oracle will
allocate enough uniform extents to cover at least the requested
INITIAL size. If you do not want this behavior, then precreate your
tables as needed and then import with IGNORE=Y.
HTH,
Brian
--
===================================================================
Brian Peasland
d...@nospam.peasland.net
http://www.peasland.net
Remove the "nospam." from the email address to email me.
"I can give it to you cheap, quick, and good.
Now pick two out of the three" - Unknown