How to Export an Oracle Schema with Data Pump
Oracle Data Pump (expdp) is the standard tool for logical backups and migrations. It runs server-side, is significantly faster than the legacy exp utility, and supports parallelism, compression, and fine-grained filtering. This guide covers the most common export scenarios.
Prerequisites
Data Pump requires a directory object — an Oracle object pointing to a filesystem path the database server can write to. You cannot point expdp directly at a path without one.
```sql
-- Check existing directory objects
SELECT directory_name, directory_path
FROM dba_directories
ORDER BY directory_name;
```
```sql
-- Create a directory object (as DBA, run as SYSDBA or with CREATE ANY DIRECTORY)
CREATE OR REPLACE DIRECTORY dp_dir AS '/u01/datapump/exports';
```
```sql
-- Grant access to the exporting user
GRANT READ, WRITE ON DIRECTORY dp_dir TO app_schema;
GRANT READ, WRITE ON DIRECTORY dp_dir TO system;
```
```sql
-- Verify the OS path exists and is writable by the oracle OS user
-- Run from the OS: ls -ld /u01/datapump/exports
```

The user running expdp needs the DATAPUMP_EXP_FULL_DATABASE role (which includes the older EXP_FULL_DATABASE role) for full database exports or for exporting schemas other than its own; a user can always export its own schema with just a grant on the directory object.
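Before launching a large job, it can save a failed run to confirm the grants are actually in place. A quick check against the standard data dictionary views (grantee and directory names below match the examples above):

```sql
-- Confirm an export role has been granted
SELECT grantee, granted_role
FROM   dba_role_privs
WHERE  granted_role IN ('EXP_FULL_DATABASE', 'DATAPUMP_EXP_FULL_DATABASE');

-- Directory grants appear in dba_tab_privs, keyed by the directory name
SELECT grantee, privilege
FROM   dba_tab_privs
WHERE  table_name = 'DP_DIR';
```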
Step 1: Basic Schema Export
Run expdp from the OS command line as the oracle user (or any user with EXP_FULL_DATABASE).
```shell
# Export a single schema
expdp system/manager@ORCL \
  SCHEMAS=app_schema \
  DIRECTORY=dp_dir \
  DUMPFILE=app_schema_%U.dmp \
  LOGFILE=app_schema_exp.log \
  PARALLEL=4
```
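The same job can be driven from a parameter file, which avoids shell-escaping issues and keeps the password off the command line (expdp prompts for it when only a username is given). A sketch; the filename is illustrative:

```text
# app_schema_exp.par -- run with: expdp system@ORCL PARFILE=app_schema_exp.par
SCHEMAS=app_schema
DIRECTORY=dp_dir
DUMPFILE=app_schema_%U.dmp
LOGFILE=app_schema_exp.log
PARALLEL=4
```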
```shell
# The %U in DUMPFILE creates numbered files: app_schema_01.dmp, app_schema_02.dmp...
# PARALLEL=4 writes to 4 files simultaneously — faster for large schemas
```

Step 2: Export Multiple Schemas
```shell
# Export multiple schemas in one job
expdp system/manager@ORCL \
  SCHEMAS=app_schema,ref_schema,audit_schema \
  DIRECTORY=dp_dir \
  DUMPFILE=multi_schema_%U.dmp \
  LOGFILE=multi_schema_exp.log \
  PARALLEL=4 \
  COMPRESSION=ALL
```

Step 3: Full Database Export
```shell
# Full database export (requires EXP_FULL_DATABASE role)
expdp system/manager@ORCL \
  FULL=Y \
  DIRECTORY=dp_dir \
  DUMPFILE=full_db_%U.dmp \
  LOGFILE=full_db_exp.log \
  PARALLEL=8 \
  COMPRESSION=ALL \
  METRICS=YES
```

Step 4: Table-Level Export
```shell
# Export specific tables only
expdp system/manager@ORCL \
  TABLES=app_schema.orders,app_schema.customers,app_schema.products \
  DIRECTORY=dp_dir \
  DUMPFILE=tables_exp_%U.dmp \
  LOGFILE=tables_exp.log \
  PARALLEL=2
```

Step 5: Export with Compression
COMPRESSION=ALL typically reduces dump file size by 60-80%, but compressing table data requires the Advanced Compression Option license.
```shell
# Compress data and metadata
expdp system/manager@ORCL \
  SCHEMAS=app_schema \
  DIRECTORY=dp_dir \
  DUMPFILE=app_schema_comp_%U.dmp \
  LOGFILE=app_schema_comp_exp.log \
  COMPRESSION=ALL \
  COMPRESSION_ALGORITHM=MEDIUM \
  PARALLEL=4
```

Without the Advanced Compression license, use COMPRESSION=METADATA_ONLY (metadata is always free to compress).
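A common no-license workaround is to export with COMPRESSION=METADATA_ONLY and compress the finished dump pieces with OS tools. A sketch — the scratch directory and dummy file stand in for real dump files, and gzipped dumps must be decompressed again before impdp can read them:

```shell
# Compress finished dump pieces with gzip (paths below are illustrative)
DUMP_DIR=$(mktemp -d)                                 # stand-in for /u01/datapump/exports
printf 'dummy dump contents\n' > "$DUMP_DIR/app_schema_01.dmp"
gzip -9 "$DUMP_DIR"/*.dmp                             # each piece becomes <name>.dmp.gz
ls "$DUMP_DIR"
```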
Step 6: Export with Filtering and Exclusions
```shell
# Exclude specific object types (e.g., statistics, grants)
expdp system/manager@ORCL \
  SCHEMAS=app_schema \
  DIRECTORY=dp_dir \
  DUMPFILE=app_schema_nostat_%U.dmp \
  LOGFILE=app_schema_nostat_exp.log \
  EXCLUDE=STATISTICS \
  EXCLUDE=GRANT \
  PARALLEL=4
```
```shell
# Export only rows matching a condition (table-level filter)
expdp system/manager@ORCL \
  TABLES=app_schema.orders \
  DIRECTORY=dp_dir \
  DUMPFILE=orders_2024_%U.dmp \
  LOGFILE=orders_2024_exp.log \
  QUERY=app_schema.orders:'"WHERE order_date >= DATE '"'"'2024-01-01'"'"'"'
```
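The QUERY clause is much easier to maintain in a parameter file, where no shell escaping is needed. A sketch of the equivalent parfile (filename illustrative):

```text
# orders_2024.par -- run with: expdp system@ORCL PARFILE=orders_2024.par
TABLES=app_schema.orders
DIRECTORY=dp_dir
DUMPFILE=orders_2024_%U.dmp
LOGFILE=orders_2024_exp.log
QUERY=app_schema.orders:"WHERE order_date >= DATE '2024-01-01'"
```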
```shell
# Export only specific object types (e.g., just procedures and packages)
expdp system/manager@ORCL \
  SCHEMAS=app_schema \
  DIRECTORY=dp_dir \
  DUMPFILE=app_schema_code.dmp \
  LOGFILE=app_schema_code_exp.log \
  INCLUDE=PROCEDURE,PACKAGE,FUNCTION,TRIGGER,VIEW
```

Advanced Examples
Consistent Export (Flashback SCN)
For a consistent snapshot across a long-running export, use FLASHBACK_SCN or FLASHBACK_TIME.
```sql
-- Get current SCN before starting
SELECT current_scn FROM v$database;
-- Output: 12345678
```

```shell
# Export consistent to a specific SCN
expdp system/manager@ORCL \
  SCHEMAS=app_schema \
  DIRECTORY=dp_dir \
  DUMPFILE=app_schema_consistent.dmp \
  LOGFILE=app_schema_consistent_exp.log \
  FLASHBACK_SCN=12345678
```
```shell
# Or use FLASHBACK_TIME for time-based consistency
expdp system/manager@ORCL \
  SCHEMAS=app_schema \
  DIRECTORY=dp_dir \
  DUMPFILE=app_schema_time.dmp \
  FLASHBACK_TIME='"TO_TIMESTAMP('"'"'2024-03-15 02:00:00'"'"', '"'"'YYYY-MM-DD HH24:MI:SS'"'"')"'
```

Estimate Export Size Before Running
```shell
# Estimate size without actually exporting
expdp system/manager@ORCL \
  SCHEMAS=app_schema \
  DIRECTORY=dp_dir \
  ESTIMATE_ONLY=YES \
  PARALLEL=4
```

Monitor a Running Export Job
Section titled “Monitor a Running Export Job”-- From another session, monitor the export jobSELECT job_name, state, degree, bytes_processed/1024/1024/1024 AS gb_processed, total_items, processed_itemsFROM dba_datapump_jobs jJOIN v$session_longops lo ON j.job_name = lo.messageWHERE state = 'EXECUTING';
```shell
# Attach to a running job to check/control it
expdp system/manager@ORCL ATTACH=SYS_EXPORT_SCHEMA_01
# Then at the Export> prompt: STATUS or KILL_JOB
```

Common Mistakes and Pitfalls
- Missing directory object — expdp will fail immediately if the DIRECTORY object does not exist or the user has no READ/WRITE grant on it.
- Insufficient disk space — uncompressed Data Pump dumps are often larger than the logical data size. Estimate with ESTIMATE_ONLY=YES first, and ensure roughly 2x the estimated space is available.
- Exporting from the wrong container — in a CDB, connect to the correct PDB: expdp system@pdb_prod, not expdp system@cdb.
- Using exp instead of expdp — the legacy exp utility is deprecated. It cannot correctly export newer features such as changed XMLType storage, modern LOB options, or other Oracle 11g+ functionality. Always use expdp.
- Not testing the dump file — a dump file that was interrupted or partially written may appear complete. Always run impdp with SQLFILE=test.sql to verify the dump can be read without loading any data.
- Exporting without PARALLEL — single-threaded exports of multi-gigabyte schemas are slow. Use PARALLEL up to about half the available CPU cores (note that PARALLEL greater than 1 requires Enterprise Edition).
Verification Queries
```sql
-- Check recent Data Pump jobs. Completed jobs disappear from this view;
-- a lingering NOT RUNNING row usually means an orphaned master table.
SELECT job_name, operation, job_mode, state, degree, attached_sessions
FROM   dba_datapump_jobs
ORDER  BY job_name DESC
FETCH FIRST 10 ROWS ONLY;
```
```shell
# Check the log file for errors (run from the OS)
grep -i error /u01/datapump/exports/app_schema_exp.log
grep -i ORA- /u01/datapump/exports/app_schema_exp.log
```
```shell
# Verify dump files exist and have a reasonable size (run from the OS)
ls -lh /u01/datapump/exports/app_schema_*.dmp
```
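The grep checks above can be wrapped into a small reusable shell function. This sketch runs against a fabricated stand-in log so it is self-contained; real paths would come from your DIRECTORY object, and "successfully completed" is the message expdp writes at the end of a clean run:

```shell
# Return success only if the log reports clean completion and contains no ORA- errors
check_dp_log() {
  grep -q "successfully completed" "$1" && ! grep -q "ORA-" "$1"
}

# Demo against a stand-in log file (contents fabricated for illustration)
LOG=$(mktemp)
printf 'Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed\n' > "$LOG"
if check_dp_log "$LOG"; then echo "export OK"; else echo "export FAILED"; fi
# prints: export OK
```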
```shell
# Quick dump file validation (metadata check only, no data loaded)
# Use the same %U template as the export so impdp finds every piece of the dump set
impdp system/manager@ORCL \
  DIRECTORY=dp_dir \
  DUMPFILE=app_schema_%U.dmp \
  SQLFILE=dp_dir:app_schema_ddl.sql
```

Related Topics
- How to Import an Oracle Schema with Data Pump - Import the dump you just created
- Data Pump Documentation - Comprehensive Data Pump reference
- How to Create a Backup with RMAN - Physical backup alternative
- Oracle Errors: ORA-29283 - Directory access errors