
Version 2.5

Full support for 32- and 64-bit clients.

Covers all features of Oracle 10.1 to 11.2

Usability improvements and bugfixes


In version 10g, Oracle Corporation introduced an enhancement to its export and import utilities: Oracle Data Pump. Just like the old exp and imp, the new utilities, called expdp and impdp, are launched from the command line, but their behavior differs:

Data Pump vs Import/Export

The old utilities execute the operation on the machine they were started on, i.e. on the client. Export queries data from the database server, transfers it over the network to the client, and stores it in files. Similarly, import reads data from files on the local computer and sends it over the network to the database.

Data Pump works differently. The operation is still initiated from the client computer, but expdp/impdp merely send commands to the Oracle server for execution. They call the PL/SQL package DBMS_DATAPUMP to initiate and monitor the operation, while the actual processing occurs on the server.

AlderProgs' AlderPump works similarly: it calls DBMS_DATAPUMP to initiate the desired operation and then waits for feedback. The package is described in the Oracle documentation and is therefore open to developers.
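Because the package is public, any PL/SQL client can drive a job the same way. Here is a minimal sketch of a schema-mode export through DBMS_DATAPUMP; the DATA_PUMP_DIR directory, the file names, and the SCOTT schema are placeholders, not anything AlderPump prescribes:

    DECLARE
      h         NUMBER;          -- handle of the server-side Data Pump job
      job_state VARCHAR2(30);
    BEGIN
      -- Create an export job running in SCHEMA mode on the server
      h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');

      -- Dump and log files are written by server processes into a directory object
      DBMS_DATAPUMP.ADD_FILE(h, 'scott.dmp', 'DATA_PUMP_DIR');
      DBMS_DATAPUMP.ADD_FILE(h, 'scott.log', 'DATA_PUMP_DIR',
                             filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

      -- Limit the job to the SCOTT schema
      DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''SCOTT'')');

      -- Start the job; the client merely waits for the server to finish
      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
      DBMS_OUTPUT.PUT_LINE('Job completed with state ' || job_state);
    END;
    /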

How the Oracle Data Pump works


Server-side execution speeds up exports and imports because network latencies are eliminated, but it also imposes several limitations. Obviously, it is no longer possible to create export dumps on the client: they are created on the database server machine. Similarly, import expects the files to be accessible by Oracle server processes. Space must be allocated for large dump files, although that space can reside on a network share. Another potential pitfall is the extra load placed on the server machine: not only is the database engine involved, the file system is also busy reading and writing files. While Unix systems usually tolerate this kind of load, Windows boxes may literally freeze while performing intensive file system I/O.
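Server-side file access is arranged through Oracle directory objects: the DBA points a directory object at a path on the database server (which may be a mounted network share) and grants access to it. A minimal sketch, with a placeholder path and user name:

    -- Run as a privileged user on the database
    CREATE OR REPLACE DIRECTORY dp_dump_dir AS '/u01/app/oracle/dpdump';
    GRANT READ, WRITE ON DIRECTORY dp_dump_dir TO scott;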

Still, leaving aside low-level methods such as transportable tablespaces and RAID mirror splitting, Data Pump is today the fastest way to get data in and out of Oracle databases. In fact, Data Pump does support transportable tablespaces: it is one of the modes exposed by the Oracle command line utilities and by AlderPump.
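As a sketch, a transportable-tablespace export can be opened through the same API; only the OPEN call and the tablespace filter differ from the earlier example (the USERS tablespace is a placeholder):

    -- In place of the OPEN and schema filter calls of the previous sketch:
    h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'TRANSPORTABLE');
    DBMS_DATAPUMP.METADATA_FILTER(h, 'TABLESPACE_EXPR', 'IN (''USERS'')');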

Speed is an important factor, but Data Pump offers other advantages as well. Jobs can now be paused and restarted, and their parameters (such as the degree of parallelism) can be changed dynamically; because of server-side execution, users can attach to and detach from running jobs without affecting them. Data Pump also provides fine-grained control over the objects being exported and imported. For example, one can restrict an export to process only certain types of objects; alternatively, specified object types can be ignored by putting them on an exclusion list, as sketched below.
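A rough sketch of such filters at the DBMS_DATAPUMP level, reusing the handle h from the export example above; the object paths and name pattern are illustrative, and a given job would normally use either the include or the exclude form:

    -- Keep only tables and their indexes ...
    DBMS_DATAPUMP.METADATA_FILTER(h, 'INCLUDE_PATH_EXPR', 'IN (''TABLE'', ''INDEX'')');
    -- ... or, alternatively, drop indexes and statistics from the job
    DBMS_DATAPUMP.METADATA_FILTER(h, 'EXCLUDE_PATH_EXPR', 'IN (''INDEX'', ''STATISTICS'')');
    -- Name filters can be combined with either form: here, only tables starting with EMP
    DBMS_DATAPUMP.METADATA_FILTER(h, 'NAME_EXPR', 'LIKE ''EMP%''', 'TABLE');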

Data Pump features

Besides speed and manageability, Data Pump offers greater flexibility in transforming data and metadata, making it a rudimentary ETL tool.
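As a rough illustration of such transformations, here is a sketch of an import job that remaps a schema and a tablespace and strips storage attributes; the schema, tablespace, file, and directory names are placeholders:

    DECLARE
      h         NUMBER;
      job_state VARCHAR2(30);
    BEGIN
      h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA');
      DBMS_DATAPUMP.ADD_FILE(h, 'scott.dmp', 'DATA_PUMP_DIR');

      -- Load SCOTT's objects into SCOTT_COPY and move segments to another tablespace
      DBMS_DATAPUMP.METADATA_REMAP(h, 'REMAP_SCHEMA', 'SCOTT', 'SCOTT_COPY');
      DBMS_DATAPUMP.METADATA_REMAP(h, 'REMAP_TABLESPACE', 'USERS', 'APP_DATA');

      -- Suppress tablespace, storage, and physical attributes in the generated DDL
      DBMS_DATAPUMP.METADATA_TRANSFORM(h, 'SEGMENT_ATTRIBUTES', 0);

      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
    END;
    /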

AlderPump features

AlderPump comes in two flavors: Lite and Professional. The Lite edition is free, but its functionality is limited. The two major limitations of the Lite edition are:

For job creation in Lite mode, four wizards are currently available to create export and import jobs in schema and table modes. Professional mode, available for a 100-day trial period or after installing a license key, unlocks the full power of AlderPump. Here is a comparison chart between AlderPump and the Oracle command line utilities:

Feature AlderPump expdp/impdp
Basic job properties
Export and import tables
Export and import schemas
Export and import objects in tablespace
Full export and import
Transportable tablespace support
Export and import over database link
Customize job name
Restrict compatibility to specific version
Schema editions support
Data and metadata compression
Generate SQL file on import
Specifying data files
Overwrite existing datafiles on export
Specify %U mask in the name to produce multiple dump files
Option to overwrite existing dump file on export
Limit dump file size
... for each file individually
Verify dump file existence interactively
Display list of available directories with permissions and server paths
Interactively create new directories
Object and Data Filters
Specify objects by name
Specify objects by type
Filter included and excluded objects
... with more than one filter
... with both include and exclude filters
Restrict object filters to the ones allowed for current operation and mode
Process tables from multiple schemas in TABLE mode
Use wildcard expression to filter objects
... use more than one wildcard expression
Browse database schemas and tablespaces to select objects
Display hints on filters and provide usage examples
Explain in plain English what is going to be filtered
Prevent table data from being processed
... for each table individually
Process specific partitions
Sample only given percentage of data
Use SQL subquery to restrict processed data
Remapping and Transformations
Remap table columns with PL/SQL function
Remap schema name
... based on object type¹
Remap tablespace name
... based on object type¹
Remap table names
Remap datafile name
Adjust objects size on import
Suppress tablespace, storage, and physical object attributes
Generate new object IDs for object types
Job parameters
Perform dry run to estimate result data volume
Consistent export as of user-specified timestamp or SCN
... computed at the time when job actually started
Exclude objects metadata
Exclude users and grants
Skip unusable indexes
When imported table exists, truncate, replace, append, or skip it
... choose the mode based on other parameters
Skip constraint errors
Export XMLType columns as CLOB
Suppress APPEND hint on import
De-partition tables or convert partitions to individual tables
Check transportable tablespaces for external dependencies
Encryption
Specify encryption password
Choose between encrypting data, metadata, encrypted columns, or everything
Select between password, transparent, and dual encryption modes
Pick encryption algorithm
Distributed computing
Specify degree of parallelism
Distribute work between RAC nodes
Select service name used by job workers
... display list of available services
... see which instances are running the selected service
Tracing and logging
Enable tracing of specific Data Pump components
Write object counts and timing to the output log
While running
Stop, resume, or cancel jobs
Adjust degree of parallelism while job is running
Add custom entries to log file
... and broadcast them to all connected users
Add data files while job is running
Retrieve log file for viewing
Monitor worker processes
See job parameters and description
Manage files on the database server (setup required)
Browse files in database server directories
Retrieve and display selected information about dumpfile contents
Download files to client computer
Upload files to database server
View files on the database server
Rename files on server
Delete files in database server directories
Quick simplified tasks
Export or import tables in schema
Export or import schemas
Other
Generate corresponding expdp/impdp command (expdp/impdp: N/A)
Check job parameters for consistency before job is started (expdp/impdp: N/A)
Work independently of target database version
Manage files on the database side without accessing the machine's file system (expdp/impdp: N/A)
Recognize legacy exp/imp parameters