Export Table Dump In Oracle 11g Command


Dunstan Jomphe

Aug 5, 2024, 3:18:48 AM8/5/24
to ounorsenli
Oracle Data Pump is a newer, faster and more flexible alternative to the "exp" and "imp" utilities used in previous Oracle versions. In addition to basic import and export functionality, Data Pump provides a PL/SQL API and support for external tables.

This article was originally written against Oracle 10g, but the information is still relevant up to and including the latest versions of Oracle. New features are broken out into separate articles, but the help section at the bottom is up to date with the latest versions.


For the examples to work, we must first unlock the SCOTT account and create a directory object it can access. The directory object is only a pointer to a physical directory; creating it does not actually create the physical directory on the file system of the database server.
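
A minimal sketch of that setup, run as a privileged user (the directory name TEST_DIR and the path are placeholders, not fixed values):

CONN / AS SYSDBA
ALTER USER scott IDENTIFIED BY tiger ACCOUNT UNLOCK;

CREATE OR REPLACE DIRECTORY test_dir AS '/u01/app/oracle/dpump';
GRANT READ, WRITE ON DIRECTORY test_dir TO scott;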


Data Pump is a server-based technology, so it typically deals with directory objects pointing to physical directories on the database server. It does not write to the local file system on your client PC.
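
For example, a basic table-mode export and import using that directory object might look like this (connect strings, dump file names and the TEST_DIR directory are illustrative assumptions):

expdp scott/tiger@db11g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log

impdp scott/tiger@db11g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log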


The INCLUDE and EXCLUDE parameters can be used to limit the export/import to specific objects. When the INCLUDE parameter is used, only those objects specified by it will be included in the export/import. When the EXCLUDE parameter is used, all objects except those specified by it will be included in the export/import. The two parameters are mutually exclusive, so use the parameter that requires the least entries to give you the result you require. The basic syntax for both parameters is the same.
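
The basic syntax, followed by a couple of illustrative filters written as they would appear in a parameter file (command-line quoting is covered below; the table names are assumptions, and remember the two parameters cannot be combined in the same job):

INCLUDE=object_type[:name_clause] [, ...]
EXCLUDE=object_type[:name_clause] [, ...]

INCLUDE=TABLE:"IN ('EMP', 'DEPT')"
EXCLUDE=TABLE:"LIKE 'BONUS%'"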


The way you handle quotes on the command line will vary depending on what you are trying to achieve. Here are some examples that work for single tables and multiple tables directly from the command line.
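
As a sketch, in a bash shell the double quotes around the name clause need to survive shell processing, which can be done by wrapping the value in shell double quotes and escaping the inner ones. Exact escaping varies by shell and platform, so treat these as assumptions and prefer a parameter file when in doubt:

expdp scott/tiger@db11g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log include=TABLE:"\"= 'EMP'\""

expdp scott/tiger@db11g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log include=TABLE:"\"IN ('EMP', 'DEPT')\""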


In the case of exports, the NETWORK_LINK parameter identifies the database link pointing to the source server. The objects are exported from the source server in the normal manner, but written to a directory object on the local server, rather than one on the source server. Both the local and remote users require the EXP_FULL_DATABASE role granted to them.
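
A sketch of a network export, assuming a database link named REMOTE_SCOTT has already been created on the local database pointing at the source (link name, connect strings and credentials are illustrative):

CREATE DATABASE LINK remote_scott CONNECT TO scott IDENTIFIED BY tiger USING 'SOURCE_DB';

expdp scott/tiger@local_db tables=EMP network_link=REMOTE_SCOTT directory=TEST_DIR dumpfile=EMP.dmp logfile=expdpEMP.log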


For imports, the NETWORK_LINK parameter also identifies the database link pointing to the source server. The difference here is the objects are imported directly from the source into the local server without being written to a dump file. Although there is no need for a DUMPFILE parameter, a directory object is still required for the logs associated with the operation. Both the local and remote users require the IMP_FULL_DATABASE role granted to them.
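
An import over the same link might look like this; note there is no DUMPFILE parameter, but the directory object is still needed for the log file (names are illustrative):

impdp scott/tiger@local_db tables=EMP network_link=REMOTE_SCOTT directory=TEST_DIR logfile=impdpEMP.log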


The exp utility used the CONSISTENT=Y parameter to indicate the export should be consistent to a point in time. By default, expdp exports are only consistent on a per-table basis. If you want all tables in the export to be consistent to the same point in time, you need to use the FLASHBACK_SCN or FLASHBACK_TIME parameter.


Not surprisingly, you can make exports consistent to an earlier point in time by specifying an earlier time or SCN, provided you have enough UNDO space to keep a read consistent view of the data during the export operation.
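
For example, to make the whole export consistent as of the start time, or as of a specific SCN (both commands are sketches and the SCN value is a placeholder; FLASHBACK_TIME=SYSTIMESTAMP is accepted in recent releases, while older releases needed a quoted TO_TIMESTAMP expression):

expdp scott/tiger@db11g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log flashback_time=systimestamp

expdp scott/tiger@db11g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log flashback_scn=1234567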


Data Pump performance can be improved by using the PARALLEL parameter. This should be used in conjunction with the "%U" wildcard in the DUMPFILE parameter to allow multiple dump files to be created or read. The same wildcard can be used during the import to allow you to reference multiple files.
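
For example, with four worker processes writing to a set of dump files (the degree of parallelism is just an illustration):

expdp scott/tiger@db11g schemas=SCOTT directory=TEST_DIR parallel=4 dumpfile=SCOTT_%U.dmp logfile=expdpSCOTT.log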


Oracle has incorporated support for Data Pump technology into external tables. The ORACLE_DATAPUMP access driver can be used to unload data to Data Pump export files and subsequently reload it. The unload of data occurs when the external table is created using the "AS" clause.
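
A sketch of unloading the EMP table into a dump file via an external table (the table and file names are assumptions):

CREATE TABLE emp_xt
  ORGANIZATION EXTERNAL
  (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY test_dir
    LOCATION ('emp_xt.dmp')
  )
  AS SELECT * FROM emp;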


The syntax to create the external table pointing to an existing file is similar, but without the "AS" clause. In this case we will do it in the same schema, but this could be a different schema in the same instance, or an entirely different instance.
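
A matching sketch that points a new external table at the file created above (the column list mirrors the standard SCOTT.EMP definition):

CREATE TABLE emp_xt_2
  (
    empno    NUMBER(4),
    ename    VARCHAR2(10),
    job      VARCHAR2(9),
    mgr      NUMBER(4),
    hiredate DATE,
    sal      NUMBER(7,2),
    comm     NUMBER(7,2),
    deptno   NUMBER(2)
  )
  ORGANIZATION EXTERNAL
  (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY test_dir
    LOCATION ('emp_xt.dmp')
  );

SELECT COUNT(*) FROM emp_xt_2;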


The database user performing the export and import operations will need the appropriate level of privilege to complete the actions. For example, if the user can't create a table in the schema, it will not be able to import a table into that schema.


Some operations, including those at the database level, will need the DATAPUMP_EXP_FULL_DATABASE and/or DATAPUMP_IMP_FULL_DATABASE roles. These are very powerful roles, so don't grant them without careful consideration.
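
If a particular user does need them, the grants themselves are straightforward (the grantee name below is hypothetical):

GRANT DATAPUMP_EXP_FULL_DATABASE TO dba_batch_user;
GRANT DATAPUMP_IMP_FULL_DATABASE TO dba_batch_user;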


All Data Pump actions are performed by multiple jobs (DBMS_SCHEDULER jobs, not DBMS_JOB jobs). These jobs are controlled by a master control process, which uses Advanced Queuing. At runtime an advanced queue table, named after the job name, is created and used by the master control process. The table is dropped on completion of the Data Pump job. The job and the advanced queue can be named using the JOB_NAME parameter. Cancelling the client process does not stop the associated Data Pump job. Issuing "CTRL+C" on the client during a job stops the client output and puts you into interactive command mode. Typing "status" at this prompt allows you to monitor the current job.
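
For example, a job given an explicit name can be re-attached to later, and the interactive prompt accepts commands such as STATUS, CONTINUE_CLIENT, STOP_JOB and KILL_JOB (the job name below is just an example):

expdp scott/tiger@db11g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log job_name=SCOTT_EXPORT

expdp scott/tiger@db11g attach=SCOTT_EXPORT

Export> status
Export> continue_client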


Transferring data from a higher database version to a lower version is possible by using the VERSION parameter on the export. For example, if I am exporting from a 19c database and I want to import into an 18c database, I would do the following.
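
A sketch of that export, run against the 19c source (the connect string and file names are assumptions):

expdp scott/tiger@db19c schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log version=18.0.0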


The second thing to consider is the time zone file version. It isn't possible to transfer data between databases if they don't have the same time zone file version. Later versions of the database seem more sensitive to this issue, and the import fails with a time zone file mismatch error.


Looking for the best way to replicate data from Oracle? Hevo is the only real-time ELT No-code Data Pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. With integration with 150+ Data Sources (40+ free sources), we help you not only export data from sources and load data to the destinations, but also transform and enrich your data and make it analysis-ready.


A directory object is like a pointer to a physical directory, and it is used by expdp for reference. This step should be done by a privileged user only, for example the SYS user. This step also includes granting some privileges on the object.


Tablespace refers to the storage area where the database stores data logically. So, when you say exporting a tablespace, you mean exporting all the tables in that storage area along with all of their dependent objects.
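
A sketch of a tablespace-mode export, assuming a tablespace named USERS and the TEST_DIR directory object from earlier:

expdp system@db11g tablespaces=USERS directory=TEST_DIR dumpfile=users_ts.dmp logfile=expdp_users_ts.log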


Because user hr is exporting tables from his own schema, it is unnecessary to mention the schema name for the tables. The NOLOGFILE=YES argument indicates that an export log file for the operation is not generated.
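
A sketch of that kind of export, with the table names and directory object as assumptions:

expdp hr tables=employees,jobs directory=DPUMP_DIR1 dumpfile=hr_tables.dmp nologfile=YES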


Oracle highly recommends using the Data Pump export/import utilities rather than the original export/import tools. The major differences are that Data Pump is server-based rather than client-based, supports parallel execution, can be stopped and restarted, offers fine-grained filtering through the INCLUDE and EXCLUDE parameters, and can move data directly between databases over a network link.


These dump files are written to disk and contain table data, database metadata, etc. But using the expdp utility to export data from the Oracle database can be a complex and time-consuming process. To avoid all these challenges, you can directly opt for a fully automated No-code Data Pipeline, Hevo. Hevo will not only migrate your data from the Oracle database to your desired location, but will also make sure that your data is safe and consistent.


You can import a dump file set only by using the Oracle Data Pump Import utility. You can import the dump file set on the same system, or import it to another system, and load the dump file set there.


The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format. During an import operation, the Oracle Data Pump Import utility uses these files to locate each database object in the dump file set.
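
For example, to load a dump file set that has been copied to a directory object on the target system, possibly mapping the data into a different schema (the connect string and REMAP_SCHEMA target are hypothetical):

impdp system@target_db directory=TEST_DIR dumpfile=SCOTT.dmp logfile=impdpSCOTT.log remap_schema=SCOTT:SCOTT_COPY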


Oracle Data Pump Export enables you to specify that you want a job to move a subset of the data and metadata, as determined by the export mode. This subset selection is done by using data filters and metadata filters, which are specified through Oracle Data Pump Export parameters.
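
As an illustration, a data filter (QUERY) and a metadata filter (EXCLUDE) in a parameter file might look like this (the table, column and filter values are assumptions):

SCHEMAS=HR
DIRECTORY=TEST_DIR
DUMPFILE=hr_subset.dmp
QUERY=employees:"WHERE department_id = 50"
EXCLUDE=STATISTICS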


Several system schemas cannot be exported, because they are not user schemas; they contain Oracle-managed data and metadata. Examples of schemas that are not exported include SYS, ORDSYS, and MDSYS. Secondary objects are also not exported, because the CREATE INDEX run at import time will recreate them.


The characteristics of the Oracle Data Pump export operation are determined by the Export parameters that you specify. You can specify these parameters either on the command line, or in a parameter file.


Parameter File Interface: Enables you to specify command-line parameters in a parameter file. The only exception is the PARFILE parameter, because parameter files cannot be nested. If you are using parameters whose values require quotation marks, then Oracle recommends that you use parameter files.
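
A minimal sketch of a parameter file and the call that uses it (the file name and contents are illustrative):

# scott_exp.par
SCHEMAS=SCOTT
DIRECTORY=TEST_DIR
DUMPFILE=SCOTT.dmp
LOGFILE=expdpSCOTT.log
INCLUDE=TABLE:"IN ('EMP', 'DEPT')"

expdp scott/tiger@db11g parfile=scott_exp.par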
