How to Export & Import Large Oracle Dump Files Without Network Issues – Step-by-Step Guide

Scenario:

In this guide, we will explore how to use Oracle Data Pump Export (expdp) to split a large schema dump into multiple smaller files. Keeping each piece small helps avoid network transfer issues when moving the dump to the target server and ensures smooth import operations on the target database.

Step 1: Setting Up a Backup Location

Before exporting data, we need to create a physical backup location where the dump files will be stored.

On Linux server:

mkdir -p /data/backup
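The directory must also be writable by the Oracle software owner. A minimal sketch, assuming the common oracle:oinstall user and group on the database server (adjust to your environment):

# give the Oracle OS user ownership of and write access to the backup location
chown oracle:oinstall /data/backup
chmod 750 /data/backup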

On Oracle Database:

We create a logical directory pointing to the physical backup location:

Command:

CREATE DIRECTORY TEST_DIR2 AS '/data/backup';

TEST_DIR2 -->  Logical directory pointing to the physical backup location.

To verify the directory:

SELECT * FROM dba_directories WHERE directory_name = 'TEST_DIR2';
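If the export will be run by a non-privileged database user (for example U1 itself rather than a DBA account), that user also needs access to the logical directory. A minimal sketch, assuming U1 is the connecting user:

-- allow U1 to read and write dump and log files through TEST_DIR2
GRANT READ, WRITE ON DIRECTORY TEST_DIR2 TO U1;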

Step 2: Preparing for Export

We insert additional records into the U1.T1 table until it holds over 2 million (20 lakh) rows, giving us a suitably large dataset for this test.
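One way to generate a dataset of this size is a simple row-generator insert. This is only an illustration; it assumes a hypothetical two-column layout for U1.T1 (id, name), so adjust it to the real table definition:

-- generate 2,000,000 sample rows (hypothetical columns id and name)
INSERT INTO U1.T1 (id, name)
SELECT LEVEL, 'ROW_' || LEVEL
FROM dual
CONNECT BY LEVEL <= 2000000;
COMMIT;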

Step 3: Exporting Large Schema Data into Multiple Files

Since the table contains a massive amount of data, we will export it into multiple files using Data Pump Export (expdp) with the following script:

expdp schemas=U1 dumpfile=schemalimit_%U.dmp directory=test_dir2 logfile=schema_log.log filesize=2M parallel=3

Explanation of Parameters:

  • schemas=U1 → Backs up the entire schema U1.
  • dumpfile=schemalimit_%U.dmp → Generates multiple dump files (e.g., schemalimit_01.dmp, schemalimit_02.dmp, etc.).
  • filesize=2M → Limits each file to a maximum size of 2MB.
  • parallel=3 → Uses 3 parallel threads to speed up the export process.

Once the export is complete, multiple dump files will be generated in the backup directory.

To verify, check the size of each file. Most files should be around 2MB.
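From the operating system, the pieces and their sizes can be listed directly, assuming the backup location created in Step 1:

# list the generated dump pieces and their sizes
ls -lh /data/backup/schemalimit_*.dmp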

Explanation of Key Terms:

  • Dump File (.dmp) → A binary file containing schema objects, data, and metadata for backup.
  • Log File (.log) → Records the details of the export/import process, including any errors or warnings.

The export completes successfully, producing multiple dump file pieces in /data/backup.

Step 4: Dropping the Schema (If Required)

Before importing, you may need to drop the existing schema to prevent conflicts.

CASCADE → Drops the schema user together with every object the schema owns (tables, indexes, views, and so on).

Note: Because CASCADE removes all objects in the schema, use it only when explicitly required, and be especially careful in production environments.
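A minimal sketch of the drop, assuming the schema to be replaced is U1 (as in this guide) and you are connected as a DBA user:

-- drops the U1 account and every object it owns; irreversible unless you have the export above
DROP USER U1 CASCADE;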

Step 5: Importing Multiple Dump Files

Now, we restore the schema using multiple dump files with the following command:

impdp schemas=U1 dumpfile=schemalimit_%U.dmp directory=test_dir2 logfile=schema_log.log parallel=3

Note: reusing the export's log file name (schema_log.log) will overwrite that log in the same directory; give the import log a different name if you want to keep both.

After completion, verify that all objects from the U1 schema have been successfully restored.
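One simple check, assuming you are connected as a DBA user, is to count the restored objects by type and compare the result with the source database:

-- count U1's objects by type after the import
SELECT object_type, COUNT(*) AS object_count
FROM dba_objects
WHERE owner = 'U1'
GROUP BY object_type
ORDER BY object_type;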

Summary of Steps

✔ Created a backup directory (/data/backup) on the Linux server.
✔ Created a logical directory (TEST_DIR2) in the Oracle Database.
✔ Exported large data dumps into multiple files using expdp.
✔ Dropped the existing schema (U1) before importing (if needed).
✔ Restored U1 schema using impdp and verified the objects.

Final Thoughts

By following these steps, you can efficiently export and restore large data dumps by splitting them into smaller pieces with Oracle Data Pump. This approach is particularly useful for moving large exports between servers without network issues and for ensuring smooth imports into different environments.

💡 Have questions? Need help? Drop a comment below!

📢 Want more Oracle database tutorials? Follow this blog for the latest updates!

