
Parallelize CONSTRAINT VALIDATION during impdp


nor...@maillard.im

Aug 4, 2013, 6:00:55 PM
Hello,

Is it normal that with IMPDP PARALLEL=8, the CONSTRAINT VALIDATION step
is not taking advantage of the PARALLEL parameter?

Given that this is the longest step in our Data Pump job, it is a
pity we can't parallelize it.

This is with Oracle Enterprise 11gR2 (11.2.0.2.0) and Oracle 11g client
tools.

Thank you
--
XMA

Mladen Gogala

Aug 4, 2013, 10:32:47 PM
On Mon, 05 Aug 2013 00:00:55 +0200, noreply wrote:

> Is it normal that with IMPDP PARALLEL=8, the CONSTRAINT VALIDATION step
> is not taking advantage of the PARALLEL parameter?

Yes. The constraint validation step, to the best of my knowledge, doesn't use
parallelism. Never has. What kind of constraint is the problem? You can
create the constraints separately from the import using a SQL script and then
use the "NOVALIDATE" clause for the big tables. It's a bit more DBA
time, but the import will complete sooner.
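A minimal sketch of what that looks like, assuming a hypothetical big table and foreign key (the table and constraint names here are made up for illustration, not taken from any real import):

```sql
-- Illustrative names only: BIG_TABLE, FK_BIG_DEPT and DEPT are assumptions.
ALTER TABLE big_table
  ADD CONSTRAINT fk_big_dept FOREIGN KEY (deptno)
  REFERENCES dept (deptno)
  ENABLE NOVALIDATE;  -- enforced for new DML; existing rows are not scanned
```

With ENABLE NOVALIDATE the constraint is enforced for all new transactions, but the expensive full-table check of the existing rows is skipped, which is exactly the step that was eating the import time.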



--
Mladen Gogala
The Oracle Whisperer
http://mgogala.byethost5.com

nor...@maillard.im

Aug 5, 2013, 4:56:10 PM
Mladen Gogala <gogala...@gmail.com> writes:

> On Mon, 05 Aug 2013 00:00:55 +0200, noreply wrote:
>
>> Is it normal that with IMPDP PARALLEL=8, the CONSTRAINT VALIDATION step
>> is not taking advantage of the PARALLEL parameter?
>
> Yes. The constraint validation step, to the best of my knowledge, doesn't use
> parallelism. Never has. What kind of constraint is the problem? You can
> create the constraints separately from the import using a SQL script and then
> use the "NOVALIDATE" clause for the big tables. It's a bit more DBA
> time, but the import will complete sooner.

I do not have the exact numbers, but what is sure is that the data
loading is actually the quickest step and takes less than 1h30. The total
IMPDP time is more than 7 hours!

Thank you for your suggestion.
--
XMA

Mladen Gogala

Aug 7, 2013, 12:07:48 PM
On Mon, 05 Aug 2013 22:56:10 +0200, noreply wrote:

> I do not have the exact numbers, but what is sure is that the data
> loading is actually the quickest step and takes less than 1h30. The total
> IMPDP time is more than 7 hours!
>
> Thank you for your suggestion.

You can export constraints only like this:


expdp userid=system schemas=SCOTT include=CONSTRAINT content=metadata_only dumpfile=constraints.dmp directory=TMP

Export: Release 11.2.0.3.0 - Production on Wed Aug 7 11:54:23 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
;;;
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": userid=system/******** schemas=SCOTT include=CONSTRAINT content=metadata_only dumpfile=constraints.dmp directory=TMP
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/tmp/constraints.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 11:54:37

Once you have the constraints exported, you can do something like this:

[oracle@oradb tmp]$ impdp userid=system schemas=SCOTT dumpfile=constraints.dmp directory=TMP sqlfile=scott.sql


Import: Release 11.2.0.3.0 - Production on Wed Aug 7 11:59:07 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
;;;
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_SQL_FILE_SCHEMA_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_SQL_FILE_SCHEMA_01": userid=system/******** schemas=SCOTT dumpfile=constraints.dmp directory=TMP sqlfile=scott.sql
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

The file itself looks like this:

-- CONNECT SYSTEM
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
ALTER TABLE "SCOTT"."DEPT" ADD CONSTRAINT "PK_DEPT" PRIMARY KEY ("DEPTNO")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ENABLE;
ALTER TABLE "SCOTT"."EMP" ADD CONSTRAINT "PK_EMP" PRIMARY KEY ("EMPNO")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ENABLE;
-- new object type path: SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
ALTER TABLE "SCOTT"."EMP" ADD CONSTRAINT "FK_DEPTNO" FOREIGN KEY ("DEPTNO")
REFERENCES "SCOTT"."DEPT" ("DEPTNO") ENABLE;

When you have that, you can use the universal hammer to get rid of the
validation step:

perl -i.bak -pe 's/ENABLE;/ENABLE NOVALIDATE;/;' scott.sql

and happily run the file thereafter. That's how it's done.

nor...@maillard.im

Aug 7, 2013, 4:10:01 PM
Mladen Gogala <gogala...@gmail.com> writes:

> On Mon, 05 Aug 2013 22:56:10 +0200, noreply wrote:
>
>> I do not have the exact numbers, but what is sure is that the data
>> loading is actually the quickest step and takes less than 1h30. The total
>> IMPDP time is more than 7 hours!
>>
>> Thank you for your suggestion.
>
> You can export constraints only like this:
>
>
> expdp userid=system schemas=SCOTT include=CONSTRAINT content=metadata_only dumpfile=constraints.dmp directory=TMP

[...snip...]

> and happily run the file thereafter. That's how it's done.

All of this supposes I am doing a 2-step impdp job, right?

First the schema objects without constraints and then only the constraints,
using the novalidate "universal hammer". Is that right?

Thank you.
--
XMA

Mladen Gogala

Aug 7, 2013, 7:52:46 PM
On Wed, 07 Aug 2013 22:10:01 +0200, noreply wrote:


> All of this supposes I am doing a 2-step impdp job, right?
>
> First the schema objects without constraints and then only the constraints,
> using the novalidate "universal hammer". Is that right?

Yup. That's it.
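Put together, the two-step job might look roughly like this, following the schema, directory and file names from the earlier examples (full.dmp and the EXCLUDE clause are my assumptions about the first step, not something tested here):

```shell
# Step 1 (sketch): import data and objects but skip constraint creation,
# so the parallel data load is not followed by serial validation.
impdp userid=system schemas=SCOTT directory=TMP dumpfile=full.dmp \
      parallel=8 exclude=CONSTRAINT

# Step 2 (sketch): turn the exported constraint metadata into DDL,
# flip ENABLE to ENABLE NOVALIDATE, and run the resulting script.
impdp userid=system schemas=SCOTT directory=TMP dumpfile=constraints.dmp \
      sqlfile=scott.sql
perl -i.bak -pe 's/ENABLE;/ENABLE NOVALIDATE;/;' /tmp/scott.sql
sqlplus system @/tmp/scott.sql
```

The sqlfile lands in the directory object TMP, which maps to /tmp in the log above; adjust the paths and the EXCLUDE filter to your own job before trusting this sketch.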

Mladen Gogala

Aug 7, 2013, 8:07:11 PM
On Wed, 07 Aug 2013 23:52:46 +0000, Mladen Gogala wrote:

> On Wed, 07 Aug 2013 22:10:01 +0200, noreply wrote:
>
>
>> All of this supposes I am doing a 2-step impdp job, right?
>>
>> First the schema objects without constraints and then only the constraints,
>> using the novalidate "universal hammer". Is that right?
>
> Yup. That's it.

BTW, the phrase "universal hammer" refers to Perl, not to the "NOVALIDATE"
clause. Perl is an incredibly mighty tool which, in this case, can
alleviate the need for using a screen editor on a potentially very large
text file. There are other tools that can do the same thing, like "sed",
but my personal preference is Perl. When the only tool you have is a
hammer, everything looks like a nail. In this case, the hammer is ultra
powerful.
Of course, if something is my personal preference, that must be right for
everybody. I am a DBA, after all.

nor...@maillard.im

Aug 8, 2013, 4:50:28 PM
Mladen Gogala <gogala...@gmail.com> writes:

> Of course, if something is my personal preference, that must be right for
> everybody. I am a DBA, after all.

:) That's right. I do prefer using sed for this but YMMV.
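For what it's worth, the sed version of the same substitution would be along these lines, checked here on a throwaway sample (the path and sample line are illustrative; with GNU sed, -i.bak keeps a backup the same way the Perl one-liner does):

```shell
# Throwaway sample resembling one line of the generated sqlfile (illustrative path)
printf '%s\n' 'ALTER TABLE "SCOTT"."DEPT" ADD CONSTRAINT "PK_DEPT" PRIMARY KEY ("DEPTNO") ENABLE;' > /tmp/sample_sed.sql
# sed equivalent of the Perl one-liner; -i.bak keeps a backup copy
sed -i.bak 's/ENABLE;/ENABLE NOVALIDATE;/' /tmp/sample_sed.sql
cat /tmp/sample_sed.sql
```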

--
XMA