SQLcl : Data Pump

In this article we demonstrate the Data Pump functionality in SQLcl. From SQLcl version 21.4 onward we can issue some Data Pump commands, which act as a wrapper over the DBMS_DATAPUMP package. Prior to version 22.1 the functionality was limited to schema exports and imports. From version 22.1 onward table exports are also possible.

Setup

The examples in this article require the following tables.

-- drop table emp purge;
-- drop table dept purge;

create table dept (
  deptno number(2) constraint pk_dept primary key,
  dname varchar2(14),
  loc varchar2(13)
) ;

create table emp (
  empno number(4) constraint pk_emp primary key,
  ename varchar2(10),
  job varchar2(9),
  mgr number(4),
  hiredate date,
  sal number(7,2),
  comm number(7,2),
  deptno number(2) constraint fk_deptno references dept
);

insert into dept values (10,'ACCOUNTING','NEW YORK');
insert into dept values (20,'RESEARCH','DALLAS');
insert into dept values (30,'SALES','CHICAGO');
insert into dept values (40,'OPERATIONS','BOSTON');

insert into emp values (7369,'SMITH','CLERK',7902,to_date('17-12-1980','dd-mm-yyyy'),800,null,20);
insert into emp values (7499,'ALLEN','SALESMAN',7698,to_date('20-2-1981','dd-mm-yyyy'),1600,300,30);
insert into emp values (7521,'WARD','SALESMAN',7698,to_date('22-2-1981','dd-mm-yyyy'),1250,500,30);
insert into emp values (7566,'JONES','MANAGER',7839,to_date('2-4-1981','dd-mm-yyyy'),2975,null,20);
insert into emp values (7654,'MARTIN','SALESMAN',7698,to_date('28-9-1981','dd-mm-yyyy'),1250,1400,30);
insert into emp values (7698,'BLAKE','MANAGER',7839,to_date('1-5-1981','dd-mm-yyyy'),2850,null,30);
insert into emp values (7782,'CLARK','MANAGER',7839,to_date('9-6-1981','dd-mm-yyyy'),2450,null,10);
insert into emp values (7788,'SCOTT','ANALYST',7566,to_date('13-JUL-87','dd-mm-rr')-85,3000,null,20);
insert into emp values (7839,'KING','PRESIDENT',null,to_date('17-11-1981','dd-mm-yyyy'),5000,null,10);
insert into emp values (7844,'TURNER','SALESMAN',7698,to_date('8-9-1981','dd-mm-yyyy'),1500,0,30);
insert into emp values (7876,'ADAMS','CLERK',7788,to_date('13-JUL-87', 'dd-mm-rr')-51,1100,null,20);
insert into emp values (7900,'JAMES','CLERK',7698,to_date('3-12-1981','dd-mm-yyyy'),950,null,30);
insert into emp values (7902,'FORD','ANALYST',7566,to_date('3-12-1981','dd-mm-yyyy'),3000,null,20);
insert into emp values (7934,'MILLER','CLERK',7782,to_date('23-1-1982','dd-mm-yyyy'),1300,null,10);
commit;

We also need a directory object to work with. The following code connects as a privileged user, creates a directory object and grants access to our test user. We then connect as the test user.

-- Connect to a privileged user.
conn sys/SysPassword1@//localhost:1521/pdb1 as sysdba

-- Create a directory object.
create or replace directory tmp_dir as '/tmp/';
grant read, write on directory tmp_dir to testuser1;

-- Connect to the test user.
conn testuser1/testuser1@//localhost:1521/pdb1
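
We can optionally confirm the directory object is visible to the test user. This check isn't part of the original setup, but it's a quick sanity test before attempting an export.

column directory_name format a30
column directory_path format a30

select directory_name,
       directory_path
from   all_directories
where  directory_name = 'TMP_DIR';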

Export

The USER_OBJECTS view shows us the contents of the test schema. We have two tables, with their respective primary key indexes.

column object_name format a30
column object_type format a30

select object_name,
       object_type
from   user_objects
order by 1, 2;

OBJECT_NAME                    OBJECT_TYPE
------------------------------ ------------------------------
DEPT                           TABLE
EMP                            TABLE
PK_DEPT                        INDEX
PK_EMP                         INDEX

4 rows selected.

SQL>

We use the DATAPUMP EXPORT command, passing the current user in the schemas argument. Exporting the current schema is the default action if the schemas argument is omitted. We also supply the directory, dumpfile and logfile names.

datapump export -
     -schemas testuser1 -
     -directory tmp_dir -
     -dumpfile testuser1.dmp -
     -logfile testuser1.log

** Datapump Command Start ** at 2022.03.18-13.37.18
Initiating DATA PUMP
DATABASE TIME ZONE: VERSION:35 CON_ID:0
Log Location: TMP_DIR:testuser1.log
Starting "TESTUSER1"."ESQL_642":
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
. . exported "TESTUSER1"."DEPT"                          6.031 KB       4 rows
. . exported "TESTUSER1"."EMP"                           8.781 KB      14 rows
Master table "TESTUSER1"."ESQL_642" successfully loaded/unloaded
******************************************************************************
Dump file set for TESTUSER1.ESQL_642 is:
  /tmp/testuser1.dmp
Job "TESTUSER1"."ESQL_642" successfully completed at Fri Mar 18 13:38:16 2022 elapsed 0 00:00:55
DataPump Operation Status 'COMPLETED'
Jobname = ESQL_642
** Datapump Command End ** at 2022.03.18-13.38.19
SQL>

Once the export is complete we see the two tables have been exported, and a master table for the export operation has been created.

We can perform the same operation using the short forms of the command and its arguments.

dp export -
     -s testuser1 -
     -d tmp_dir -
     -f testuser1.dmp -
     -lf testuser1.log

** Datapump Command Start ** at 2022.03.18-13.39.17
Initiating DATA PUMP
DATABASE TIME ZONE: VERSION:35 CON_ID:0
Log Location: TMP_DIR:testuser1.log
Starting "TESTUSER1"."ESQL_682":
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
. . exported "TESTUSER1"."ESQL_642"                      332.9 KB    1598 rows
. . exported "TESTUSER1"."DEPT"                          6.031 KB       4 rows
. . exported "TESTUSER1"."EMP"                           8.781 KB      14 rows
Master table "TESTUSER1"."ESQL_682" successfully loaded/unloaded
******************************************************************************
Dump file set for TESTUSER1.ESQL_682 is:
  /tmp/testuser1.dmp
Job "TESTUSER1"."ESQL_682" successfully completed at Fri Mar 18 13:40:13 2022 elapsed 0 00:00:53
DataPump Operation Status 'COMPLETED'
Jobname = ESQL_682
** Datapump Command End ** at 2022.03.18-13.40.16
SQL>

The result is similar, but this time we've also exported the master table from the first export operation. A new master table has been created for the second export operation.
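
The master tables are regular tables, so if we don't want them included in subsequent exports we can drop them once their jobs are complete. For example, the following optional cleanup, which is not part of the original test, removes the master table from the first export operation.

-- Optional cleanup. Stop the first master table being picked up
-- by later schema-level exports.
drop table esql_642 purge;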

From version 22.1 onward we can include and exclude objects, and use name lists to target specific objects. The full list of arguments is available in the help text below.

In the following example we use the -namelist argument to perform a table-level export.

datapump export -
     -schemas testuser1 -
     -namelist TABLE=emp -
     -directory tmp_dir -
     -dumpfile emp.dmp -
     -logfile emp.log

** Datapump Command Start ** at 2022.04.10-14.26.26
Initiating DATA PUMP
DATABASE TIME ZONE: VERSION:35 CON_ID:0
Log Location: TMP_DIR:emp.log
Starting "TESTUSER1"."ESQL_84":
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
. . exported "TESTUSER1"."EMP"                           8.781 KB      14 rows
Master table "TESTUSER1"."ESQL_84" successfully loaded/unloaded
******************************************************************************
Dump file set for TESTUSER1.ESQL_84 is:
  /tmp/emp.dmp
Job "TESTUSER1"."ESQL_84" successfully completed at Sun Apr 10 13:27:09 2022 elapsed 0 00:00:40
DataPump Operation Status 'COMPLETED'
Jobname = ESQL_84
** Datapump Command End ** at 2022.04.10-14.27.12
SQL>
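
The -includelist and -excludelist arguments filter on object types, rather than object names. As a hedged sketch based on the -excludelist example in the help text below, something like the following should export the schema while excluding grants, indexes and triggers. The dump and log file names are made up for illustration.

datapump export -
     -schemas testuser1 -
     -excludelist GRANT,INDEX,TRIGGER -
     -directory tmp_dir -
     -dumpfile testuser1_filtered.dmp -
     -logfile testuser1_filtered.log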

Import

We drop the EMP and DEPT tables so we can import them.

drop table emp purge;
drop table dept purge;

This time we use the DATAPUMP IMPORT command, with the same arguments as before. Once the operation completes we can see the two tables have been imported. The ORA-39151 message is raised because the master table from the first export operation still exists in the schema, so it is skipped.

datapump import -
     -schemas testuser1 -
     -directory tmp_dir -
     -dumpfile testuser1.dmp -
     -logfile testuser1.log

** Datapump Command Start ** at 2022.03.18-13.44.26
Initiating DATA PUMP
DATABASE TIME ZONE: VERSION:35 CON_ID:0
Log Location: TMP_DIR:testuser1.log
Master table "TESTUSER1"."ISQL_722" successfully loaded/unloaded
Starting "TESTUSER1"."ISQL_722":
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39151: Table "TESTUSER1"."ESQL_642" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TESTUSER1"."DEPT"                          6.031 KB       4 rows
. . imported "TESTUSER1"."EMP"                           8.781 KB      14 rows
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "TESTUSER1"."ISQL_722" completed with 1 error(s) at Fri Mar 18 13:44:55 2022 elapsed 0 00:00:27
DataPump Operation Status 'COMPLETED'
Jobname = ISQL_722
** Datapump Command End ** at 2022.03.18-13.44.58
SQL>
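
The ORA-39151 message above is the result of the default table exists action of SKIP. Based on the -tableexists argument in the help text below, a sketch like the following, which wasn't part of the original test run, should replace pre-existing tables rather than skipping them.

datapump import -
     -schemas testuser1 -
     -directory tmp_dir -
     -dumpfile testuser1.dmp -
     -logfile testuser1.log -
     -tableexists REPLACE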

We check for the master tables using the ESQL and ISQL prefixes. We see the two master tables for the export operations, and one for the import operation.

column table_name format a30

select table_name
from   user_tables
where  table_name like 'ESQL%'
or     table_name like 'ISQL%'
order by 1;

TABLE_NAME
------------------------------
ESQL_642
ESQL_682
ISQL_722

3 rows selected.

SQL>

As with exports, from version 22.1 onward we can include and exclude objects, and use name lists to target specific objects. The full list of arguments is available in the help text below.

In the following example we use the -namelist argument to perform a table-level import. We drop the table before importing it.

drop table emp purge;

datapump import -
     -schemas testuser1 -
     -namelist TABLE=emp -
     -directory tmp_dir -
     -dumpfile emp.dmp -
     -logfile emp.log

** Datapump Command Start ** at 2022.04.10-14.39.37
Initiating DATA PUMP
DATABASE TIME ZONE: VERSION:35 CON_ID:0
Log Location: TMP_DIR:emp.log
Master table "TESTUSER1"."ISQL_122" successfully loaded/unloaded
Starting "TESTUSER1"."ISQL_122":
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TESTUSER1"."EMP"                           8.781 KB      14 rows
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "TESTUSER1"."ISQL_122" successfully completed at Sun Apr 10 13:40:10 2022 elapsed 0 00:00:29
DataPump Operation Status 'COMPLETED'
Jobname = ISQL_122
** Datapump Command End ** at 2022.04.10-14.40.13
SQL>
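
The help text below also lists remapping arguments for imports. As an untested sketch based on the -remapschemas argument, something like the following should import the objects into a different schema, assuming a TESTUSER2 user exists and we have sufficient privileges to import into it.

datapump import -
     -schemas testuser1 -
     -remapschemas testuser1=testuser2 -
     -directory tmp_dir -
     -dumpfile testuser1.dmp -
     -logfile testuser1.log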

NoExec

In this example we repeat the original export command, but include the -noexec argument. This displays the PL/SQL code that would have been run for the command, without executing it. It's mostly calls to the DBMS_DATAPUMP package.

datapump export -
     -schemas testuser1 -
     -directory tmp_dir -
     -dumpfile testuser1.dmp -
     -logfile testuser1.log -
     -noexec

** Datapump Command Start ** at 2022.03.18-13.48.09
DECLARE
    errorlog varchar2(8000):='';
    getStatusOutput boolean:=false;
    backtrace varchar2(8000):='';
BEGIN
DECLARE
    h1 number;
    s varchar2(1000):=NULL;
    errorvarchar varchar2(100):= 'ERROR';
    tryGetStatus number := 0;
    success_with_info EXCEPTION;
    PRAGMA EXCEPTION_INIT(success_with_info, -31627);
    jobname varchar2(500):='ESQL_724';
    schema varchar2(8000):='';
    job_status VARCHAR2(30):='Not set';
    cloud_service varchar2(1000);
    success_with_info_warning_text varchar2(1000):=':DPEXEC_SUCCESS_WITH_INFO:';
    new_line varchar2(10):='
';
    get_status varchar2(1000):='dbms_datapump.get_status(';
    wait_for_job_status varchar2(1000):=':DPEXEC_DATAPUMP_OPERATION_STATUS:';
begin
    success_with_info_warning_text:=success_with_info_warning_text||new_line;
    BEGIN
    h1 := dbms_datapump.open (operation => :OPERATION, job_mode => :JOB_MODE, job_name => jobname, version =>  :VERSION);
    EXCEPTION when success_with_info THEN errorlog:=errorlog||success_with_info_warning_text;
    END;
    tryGetStatus := 1;
    BEGIN
    dbms_datapump.add_file(handle => h1, filename => :FILENAME_DMP, directory => :DIRECTORY_DMP,  filesize => :FILESIZE,  filetype => 1, reusefile => 1);
    EXCEPTION when success_with_info THEN errorlog:=errorlog||success_with_info_warning_text; END;
    BEGIN
    dbms_datapump.add_file(handle => h1, filename => :FILENAME_LOG, directory => :DIRECTORY_LOG, filetype => 3);
    EXCEPTION when success_with_info THEN errorlog:=errorlog||success_with_info_warning_text; END;
    BEGIN
    dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN ('''||:SCHEMA1||''')');
    EXCEPTION when success_with_info THEN errorlog:=errorlog||success_with_info_warning_text;
    END;
    BEGIN
    dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 1);
    EXCEPTION when success_with_info THEN errorlog:=errorlog||success_with_info_warning_text;
    END;
    BEGIN
    dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
    EXCEPTION when success_with_info THEN errorlog:=errorlog||success_with_info_warning_text;
    END;
    BEGIN
    dbms_datapump.wait_for_job (handle    => h1, job_state => job_status);
    EXCEPTION when success_with_info THEN errorlog:=errorlog||success_with_info_warning_text; END;
    errorlog:=errorlog||(wait_for_job_status||job_status||''''||new_line);
    h1:=null;
    BEGIN
    IF (h1 IS NOT NULL)
    THEN
        dbms_datapump.detach(handle => h1);
    END IF;
    EXCEPTION when success_with_info THEN errorlog:=errorlog||success_with_info_warning_text; END;
    errorvarchar := 'NO_ERROR';
    :jobnameout:=jobname;
EXCEPTION
When Others then
    backtrace:=backtrace||(SQLERRM)||new_line;
    backtrace:=backtrace|| dbms_utility.format_error_backtrace || new_line;
    IF ((S IS NOT NULL) AND (S!='COMPLETED')) THEN
         errorlog:=errorlog||'WAIT_FOR_JOB JOB_STATE STATE='||s|| new_line;
    END IF;
    DECLARE
        ind NUMBER;
        percent_done NUMBER;
        job_state VARCHAR2(30);
        le ku$_LogEntry;
        js ku$_JobStatus;
        jd ku$_JobDesc;
        sts ku$_Status;
    BEGIN
        if ((errorvarchar = 'ERROR')AND(tryGetStatus=1)) then
            dbms_datapump.get_status(h1,
            dbms_datapump.ku$_status_job_error +
            dbms_datapump.ku$_status_job_status +
            dbms_datapump.ku$_status_wip,-1,job_state,sts);
            js := sts.job_status;
            if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
            then
                le := sts.wip;
            else
                if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
                then
                    le := sts.error;
                else
                    le := null;
                end if;
            end if;
            if le is not null
            then
                ind := le.FIRST;
                if ind is not null
                then
                    errorlog:=errorlog||(get_status||h1||'...)')||new_line;
                    getStatusOutput:=true;
                end if;
                while ind is not null loop
                    errorlog:=errorlog||(le(ind).LogText)||new_line;
                    ind := le.NEXT(ind);
                end loop;
            end if;
        END IF;
    EXCEPTION
    when others then null;
    END;
    BEGIN
        IF ((errorvarchar = 'ERROR')AND(tryGetStatus=1)AND(h1 IS NOT NULL)) THEN
            DBMS_DATAPUMP.DETACH(h1);
        END IF;
    EXCEPTION
    WHEN OTHERS THEN
        NULL;
    END;
END;
IF (getStatusOutput=false)
THEN
    errorlog:=errorlog||backtrace;
END IF;
:errorlogout:=errorlog;
END;
GEN DIRECTORY_DMP: TMP_DIR

GEN DIRECTORY_LOG: TMP_DIR

GEN FILENAME_DMP: testuser1.dmp

GEN FILENAME_LOG: testuser1.log

GEN FILESIZE: 500MB

GEN JOB_MODE: SCHEMA

GEN OPERATION: EXPORT

GEN SCHEMA1: TESTUSER1

GEN VERSION: COMPATIBLE

Execution is disabled due to -noexec option
** Datapump Command End ** at 2022.03.18-13.48.09
SQL>
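
Notice the generated code calls DBMS_DATAPUMP.SET_PARAMETER with the KEEP_MASTER parameter set to 1, which explains why the master tables remain in our schema after each operation.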

Help

The DATAPUMP HELP command displays the full usage details. As mentioned previously, SQLcl supports only a subset of the functionality available in the DBMS_DATAPUMP package.

SQL> datapump help

DP | DATAPUMP
----------------
Usage: dp help [examples|syntax] |
         export [<optional argument>,...] |
         import [<optional argument>,...]

<optional argument> :
 File Arguments
 --------------
  -credential,-c             <credential>  Default: as specified on CLOUDSTORAGE command
                             CREDENTIAL for dump file access in ORACLE OBJECT STORE.
  -directory,-d              <directory> :  Default: DATA_PUMP_DIR
                             Default database DIRECTORY for reading and writing dump and log files.
  -dumpdirectory,-dd         <directory> :  Default: -directory if specified or DATA_PUMP_DIR
                             Database DIRECTORY for dump file.
  -dumpfile,-f               <filename>[,...] :  Default <jobname>.DMP
                             Dump file name(s) when using database DIRECTORY.
                             Multiple files can be specified whether parallelism is enabled or not.
                             The number of files specified should be at least as large as the degree of parallelism.
  -dumpuri,-u                [<uri>[,...] | <qualified name>[,...]]  Default: object name is <jobname>.DMP
                             <uri> :  complete uri for the ORACLE OBJECT STORE file if a default is not set on CLOUDSTORAGE command
                             <qualified name> : name of the object, optionally qualified by the namespace and the bucket.
                             The qualified name concatenated to the uri specified on CLOUDSTORAGE command must fully identify the object uri.
                             Specify the option without a value when the full url is specified on CLOUDSTORAGE command.
                             CREDENTIAL must be set for direct read/write access to ORACLE OBJECT STORE from autonomous database.
                             For -copycloud between database DIRECTORY and ORACLE OBJECT STORE, OCI PROFILE must be set.
                             Multiple uris can be specified whether parallelism is enabled or not.
                             The number of files specified should be at least as large as the degree of parallelism.
  -logdirectory,-ld          <directory>  Default: -directory if specified or DATA_PUMP_DIR
                             Database DIRECTORY for log file.
  -logfile,-lf               <filename>  Default: <jobname>.LOG
                             Log file name in the database DIRECTORY.


 Command Arguments
 -----------------
  -noexec,-ne                [TRUE|FALSE]  Default: FALSE
                             TRUE : Validate and generate the PL/SQL, but do not execute it.
  -verbose,-ve               [TRUE|FALSE]  Default: FALSE
                             TRUE : Show additional diagnostic output.
 Common Arguments
 ----------------
  -copycloud,-cc             [TRUE|FALSE]  Default: FALSE
                             [TRUE] : Copy the dumpfile between database directory and ORACLE OBJECT STORE
                               On EXPORT, copy the dumpfile from the database directory to ORACLE OBJECT STORE after the datapump job completes.
                               On IMPORT, copy the dumpfile from ORACLE OBJECT STORE to the database directory before the datapump job starts
                               OCI PROFILE must be set using OCI command or CLOUDSTORAGE command.
                               Cloud storage uri must be set using the CLOUDSTORAGE command, SET DATAPUMP command or DATAPUMP command.
  -encryptionpassword,-enp   <password>  Default: None
                             If password is not specified, a prompt for one will be given.
                             On export, encrypts the dump files using the password.
                             On import, the same password that was used for export must be provided.
  -excludeexpr,-ex           <expr>  Default: None
                             Specify an expression identifying an object type or set of object types to exclude from the job.
                             Example: -excludeexpr "IN ('GRANT','INDEX','TRIGGER')"
  -excludelist,-el           <object type>[,...]   Default: None
                             Specify a comma separated value list of object types to exclude from the job.
                             Example: -excludelist GRANT,INDEX,TRIGGER
  -includeexpr,-ix           <expr>  Default: None
                             Specify an expression identifying an object type or set of object types to include in the job.
                             Only matching object types and their dependents will be included in the job.
                             Use -excludelist or -excludeexpr to exclude dependent objects.
                             Example: -includeexpr "IN ('TABLE','VIEW')"
  -includelist,-il           <object type>[,...]  Default: None
                             Specify a comma separated value list of object types to include in the job.
                             Only matching object types and their dependents will be included in the job.
                             Use -excludelist or -excludeexpr to exclude dependent objects.
                             Example: -includelist TABLE,VIEW
  -includemetadata,-im       [TRUE|FALSE]  Default: TRUE
                             TRUE : Include metadata in the job. Default: TRUE
  -includerows,-ir           [TRUE|FALSE]  Default: TRUE
                             TRUE: Include data in the job.  Default: TRUE
  -jobname,-j                <jobname>  Name for the datapump job. (default: ESQL_<n> | ISQL_<n>) where <n> is a datapump generated number
                             Job Name is appended with a datapump generated number, unless it ends with a number.
                             Job Name is used when submitting the datapump job and as default name for dump and log filenames/objectnames.
  -nameexpr,-nx              {<object type>=<expr>}[;...]  Default: None
                             For specified object type, provide an expression identifying a set of object names to include in the job.
                             Example: -nameexpr TABLE="IN ('EMPLOYEES', 'DEPARTMENTS')";PROCEDURE="IN ('ADD_JOB_HISTORY','SECURE_DML')"
  -namelist,-nl              {<object type>=<name>[,...]}[;...]  Default: None
                             For specified object type, provide a comma separated value list of objects to include in the job.
                             Example: -namelist TABLE=employees,departments;PROCEDURE=add_job_history,secure_dml
  -parallel,-p               <degree>  Default: 1
                             Adjusts the degree of parallelism within a job allowing multiple processes simultaneously.
                             Specify the same number of files as the degree or some processes may remain idle.
  -schemas,-s                <schema>[,...] Default: On export, schema for the current connection.
                             Default: On import, full import.
                             The schema or comma separated value list of schemas to process.
                             Example: schema1,schema2.
  -version,-v                {<version>|COMPATIBLE|LATEST}  Default: COMPATIBLE
                             <version> : A specific database version, for example, '11.0.0'.
                               When exporting from Oracle Database 11g release 2 (11.2.0.3) or later into an Oracle Database 12c Release 1 (12.1):
                                 Specify a value of 12 to allow all existing database features, components, and options to be exported.
                                 This applies to either a multitenant container database (CDB) or a non-CDB.
                             COMPATIBLE :  Uses metadata version from database compatibility level and the compatibility release level for feature.
                             LATEST : The version of the metadata corresponds to the database version.
  -wait,-w                   [TRUE|FALSE]  Default: TRUE
                             TRUE : Wait for the datapump job to finish and show summary results.
                             FALSE : Submit the datapump job without waiting and showing results.

 Export Only Arguments
 ---------------------
  -compression,-cm           {ALL|DATA_ONLY|METADATA_ONLY|NONE}  Default: METADATA_ONLY
                             Indicates if compression should be done on user data and metadata.
                             ALL : Compress user data and metadata.
                             DATA_ONLY : Compress only user data.
                             METADATA_ONLY : Compress only metadata.
                             NONE : Do not compress user data or metadata.
  -encryption,-en            {ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE}
                             Default: NONE, or ALL if any other encryption option is specified.
                             Specifying any encryption option will turn encryption on if you do not specify -encryption NONE.
  -encryptionalgorithm,-ena  {AES128 | AES192 | AES256} Default: AES128
                             Identifies the cryptographic algorithm to be used.
  -encryptionmode,-enm       {DUAL | PASSWORD | TRANSPARENT}
                             Default(if encryption is on): TRANSPARENT, or PASSWORD if -encryptionpassword is specified.
                             Identifies the types of security used for encryption and decryption.
                               PASSWORD encrypts the dumpfiles using the provided password.
                               TRANSPARENT allows encryption if the Oracle Encryption Wallet is available.
                               DUAL allows import using the Oracle Encryption Wallet or the password.
                               When using DUAL, -encryptionpassword must be specified.
  -estimate,-e               {BLOCKS|STATISTICS}
                             Specifies the estimate method for the size of the tables should be performed before starting the job.
                             BLOCKS : Estimate is calculated using the count of blocks allocated to the user tables.
                             STATISTICS : Estimate is calculated using the statistics for each table.
                               If no statistics are available for a table, BLOCKS is used.
  -filesize,-fs              {<n>{B|KB|MB|GB|TB}}  Default: 500MB
                             Limit for the size of files.
  -flashbackscn,-fb          [TRUE|FALSE]  Default: FALSE
                             TRUE : Use consistent database content based on scn at start time of execution.
  -reusefile,-r              [TRUE|FALSE]  Default: TRUE
                             TRUE : Replace existing dump file(s) with new file.

 Import Only Arguments
 ---------------------
  -columnencryption,-ce      [TRUE|FALSE]  Default: TRUE if supported by database
                             TRUE : Include column encryption clause on table metadata.
                             FALSE : Omit column encryption clause.
  -objectid,-oid             [TRUE|FALSE]  Default: TRUE
                             TRUE : Assign the exported OID.
                             FALSE : Assign a new OID.
  -remapschemas,-rs          {<oldschema>=<newschema>[,...]}
                             <oldschema> objects in the job will be moved to <newschema>.
                             Example: oldschema1=newschema1,oldschema2=newschema2.
  -remaptablespaces,-rt      {<oldtablespace>=<newtablespace>[,...]}
                             <oldtablespace> storage segment in the job will be relocated to <newtablespace>.
                             Example: oldtablespace1=newtablespace1,oldtablespace2=newtablespace2
  -segmentattributes,-sa     [TRUE|FALSE]  Default TRUE
                             TRUE : Include segment attributes clauses (physical attributes, storage attributes, tablespace, logging).
  -skipunusableindexes,-sui  [TRUE|FALSE]  Default: TRUE
                             TRUE : Tables or partitions with an index in the UNUSABLE state are loaded, as if the unusable index did not exist.
                             FALSE : Tables or partitions with an index in the UNUSABLE state are not loaded.
  -storage,-st               [TRUE|FALSE]  Default TRUE
                             TRUE : Include storage clauses.
  -tableexists,-te           {APPEND|REPLACE|SKIP|TRUNCATE} Default: SKIP when -includemetadata=true, otherwise APPEND
                             Action to be taken if table exists on import.
                             APPEND : New rows are added to the existing rows in the table.
                             REPLACE : Before creating the new table, the old table is dropped.
                             SKIP : The preexisting table is left unchanged.
                             TRUNCATE : Rows are removed from a preexisting table before inserting rows from the IMPORT.
  -tablecompression,-tc      [TRUE|FALSE]  Default: TRUE
                             TRUE : The table compression clause will be included if supported.
                             FALSE : The table will have the default compression for the tablespace.

SQL>

Hope this helps. Regards Tim...
