
Importing only some tables.

Download job definition

Dump files may contain a large number of objects, and letting users choose which of them to process is a great convenience. Data Pump's approach is to let users attach logical conditions that objects must (or must not) satisfy. These logical conditions are called metadata filters; in AlderPump they are found on the Object filters page. Each filter has a rule name, a value, and an optional object type. Rule names are predefined, for example "list of names" or "schema names expression". The value can be either a list or an expression. The type specifies which types of objects the filter applies to.

Multiple filters are combined. On export, filters apply to objects contained in the database; on import, they are run against objects located in the dump file. The PL/SQL Packages Reference describes filters in greater detail.
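Under the hood, each filter maps onto a call to DBMS_DATAPUMP.METADATA_FILTER. Here is a rough, illustrative sketch; the job handle h and the filter values are made up for the example:

    -- h is a job handle previously returned by DBMS_DATAPUMP.OPEN.
    -- Each call names a rule, supplies a value (a list or an expression),
    -- and may restrict the filter to a particular object type.
    DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''SCOTT'')');
    DBMS_DATAPUMP.METADATA_FILTER(h, 'NAME_EXPR', 'LIKE ''EMP%''', 'TABLE');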

This design could have made Data Pump easy and straightforward; unfortunately, reality leaked in and filters were thrown together without much thought. For example, filters can be restricted to objects of a certain type, yet there is a separate set of filters for including or excluding types. What, then, should be the type of such a type filter? (They shouldn't have one.) Or, the tablespace filter applies to tables and indexes but not to materialized views. On top of this, there are desupported filters used by older versions of Oracle, and there are undocumented ones used internally by expdp and impdp.

AlderPump supports all filters that are or were documented; support for undocumented filters may be added in the future. In addition, AlderPump watches the operation and mode and ensures that the "Types" dropdown list contains only applicable values. It displays a brief description of each filter with a small example and generates narrator's text. Finally, the Objects Picker can be used to select objects interactively; its values are then merged into the current selection and new filters are added when needed. But despite all these protective layers, it still takes virtually no effort to mess things up.


Our dump file contains two tables from schema SCOTT: EMP and DEPT. In this exercise we will import only EMP, leaving DEPT intact. Before starting, please either ensure that EMP does not exist (by dropping it) or specify TABLE_EXISTS_ACTION as described in the Importing into existing tables chapter. The process of setting TABLE_EXISTS_ACTION is not shown in this tutorial.
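If you prefer the API view, TABLE_EXISTS_ACTION is an ordinary job parameter set with DBMS_DATAPUMP.SET_PARAMETER. A minimal sketch, assuming a job handle h (TRUNCATE is just one of the possible values):

    -- Tell the import job what to do when a target table already exists;
    -- valid values are SKIP, APPEND, TRUNCATE and REPLACE.
    DBMS_DATAPUMP.SET_PARAMETER(h, 'TABLE_EXISTS_ACTION', 'TRUNCATE');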

Prerequisites:

  1. SCOTT schema
  2. scott_tables.dmp created in the export step and located in DATA_PUMP_DIR

Demonstrated actions:

  1. Job Creation Wizard: Setting operation and mode, specifying input dump file, adding object filters, creating the job.
  2. Main window: Job creation, job launching, progress log, job completion.

Tips:

This step is common across our import tutorials: we IMPORT in TABLE mode.

Step 1/4: Setting operation and mode

Selecting operation IMPORT and mode TABLES on the Operations And Mode tab
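At the DBMS_DATAPUMP level this step corresponds to opening the job. Note that the package calls table mode 'TABLE' even though the GUI label reads TABLES. A sketch, not AlderPump's actual code:

    DECLARE
      h NUMBER;  -- job handle used by all subsequent calls
    BEGIN
      -- Create a table-mode import job
      h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'TABLE');
    END;
    /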

Step 2/4: Specifying input file

Like before, we are using the file created with the table mode export. We type its name, "scott_tables.dmp", and select the default directory DATA_PUMP_DIR. We then add the dump file to the list of input files:

Specifying scott_tables.dmp as input file
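In DBMS_DATAPUMP terms this step is a single ADD_FILE call. A sketch, assuming the job handle h from step 1:

    -- Attach the dump file; 'DATA_PUMP_DIR' is a directory object, not an OS path
    DBMS_DATAPUMP.ADD_FILE(handle    => h,
                           filename  => 'scott_tables.dmp',
                           directory => 'DATA_PUMP_DIR');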

Step 3/4: Adding object filters

The dump file contains tables EMP and DEPT, but we only want to process EMP. Just add a NAME_LIST filter with value EMP and that should do it. Pretty simple, eh? Not quite. This is where Data Pump shows its dark nature: having just the NAME filter is not enough; a SCHEMA filter must also be supplied for the operation to succeed. Worse yet, they must be added in the correct order: SCHEMA, then NAME.

Adding SCHEMA_LIST and NAME_LIST object filters
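At the API level the two filters boil down to two METADATA_FILTER calls made in exactly that order. A sketch, assuming the job handle h; the doubled quotes are needed because the list values are themselves quoted names:

    -- SCHEMA filter first, then NAME filter
    DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_LIST', '''SCOTT''');
    DBMS_DATAPUMP.METADATA_FILTER(h, 'NAME_LIST',   '''EMP''');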

Step 4/4: Running import

Should we make a mistake by omitting a filter or specifying them in the wrong order, we would not know about it until the job is executed. Fortunately, we did well on the previous step, and the job completes successfully.

Creating and launching job. The job successfully completes.
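For reference, the whole tutorial corresponds roughly to the following self-contained anonymous block. It is a sketch only, assuming scott_tables.dmp sits in DATA_PUMP_DIR, EMP has been dropped, and the executing user has sufficient privileges:

    DECLARE
      h     NUMBER;
      state VARCHAR2(30);
    BEGIN
      -- Step 1: table-mode import job
      h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'TABLE');
      -- Step 2: input dump file located in DATA_PUMP_DIR
      DBMS_DATAPUMP.ADD_FILE(handle    => h,
                             filename  => 'scott_tables.dmp',
                             directory => 'DATA_PUMP_DIR');
      -- Step 3: object filters, SCHEMA before NAME
      DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_LIST', '''SCOTT''');
      DBMS_DATAPUMP.METADATA_FILTER(h, 'NAME_LIST',   '''EMP''');
      -- Step 4: launch the job and wait for it to finish
      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.WAIT_FOR_JOB(h, state);
      DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || state);
    END;
    /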