
Version 2.5

Full support for 32- and 64-bit clients.

Covers all features of Oracle 10.1 to 11.2

Usability improvements and bugfixes

Importing tables into a different schema


Importing into the same schema is fine, but can we store the data in another user's schema? Absolutely. Let's see how this is achieved in AlderPump.

When processing metadata (i.e. the description of objects rather than the data they contain), Data Pump uses the XML format. This allows it to portably store every little detail about every object. Even better, modifying object definitions has become easy thanks to XSL transformations. The Data Pump developers did not make every attribute of every object modifiable by us, the users, but they exposed most of the essentials. In Data Pump parlance, the process of changing an object's attribute is called remapping.

Among all available remappings, in this chapter we will use only the "Schema" one. As its name implies, it changes the schema name to the one we specify, which effectively achieves our goal of creating the imported tables in another user's schema. Please note that remapping occurs only when an object is created; Data Pump does not change attributes of existing objects.
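Under the hood, this kind of remap is driven by Oracle's DBMS_DATAPUMP PL/SQL API. As a rough sketch (not AlderPump's actual internals), a table-mode import with a schema remap could be set up like this:

  DECLARE
    h NUMBER;
  BEGIN
    -- Open a table-mode import job
    h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'TABLE');
    -- Read the dump file from the DATA_PUMP_DIR directory object
    DBMS_DATAPUMP.ADD_FILE(h, 'scott_tables.dmp', 'DATA_PUMP_DIR');
    -- Create objects under SCOTT2 instead of SCOTT
    DBMS_DATAPUMP.METADATA_REMAP(h, 'REMAP_SCHEMA', 'SCOTT', 'SCOTT2');
    DBMS_DATAPUMP.START_JOB(h);
    DBMS_DATAPUMP.DETACH(h);
  END;
  /

AlderPump's Job Creation Wizard, used below, builds the equivalent job through its GUI.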

We need a schema to import into, so launch a SQL*Plus session and create user SCOTT2 with the privileges necessary to create objects (the GRANT ... IDENTIFIED BY syntax creates the user if it does not yet exist):

  GRANT connect,resource,unlimited tablespace TO scott2 IDENTIFIED BY tiger;
  ALTER USER scott2 DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp;

Prerequisites:

  1. New empty schema SCOTT2 (created in this tutorial)
  2. scott_tables.dmp, created in the export step and located in DATA_PUMP_DIR

Demonstrated actions:

  1. Job Creation Wizard: setting operation and mode, adding input dump file, adding schema remap rule, creating job.
  2. Main window: Job creation, job launching, progress log, job completion.

Cleanup:

  DROP USER scott2 CASCADE;

Tips:

Step 1/4: Setting operation and mode

This step is common across our import tutorials: we choose the IMPORT operation and the TABLES mode.

Selecting operation IMPORT and mode TABLES on Operations And Mode tab

Step 2/4: Specifying input file

Like before, we are using the file created by the table-mode export. We type its name, "scott_tables.dmp", select the default directory DATA_PUMP_DIR, and then add the dump file to the set:

Specifying scott_tables.dmp as input file

Step 3/4: Adding schema remap

Schemas are metadata, so naturally they are remapped on the Metadata Remaps page. From the dropdown list we select the SCHEMA transform and type the schema names into the From and To boxes. Note that the names are case-sensitive, so we use uppercase letters. Finally, we add the transform to the list.

Adding SCOTT -> SCOTT2 schema remap
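For reference, the same input file and remap can be expressed with Oracle's impdp command-line utility (the system credentials here are placeholders):

  impdp system/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=scott_tables.dmp REMAP_SCHEMA=SCOTT:SCOTT2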

Step 4/4: Running import

By now you should be familiar with the job-running process; just note how the schema name becomes SCOTT2 because of the remap we specified.

Creating and launching the job. The job completes successfully.
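Once the job completes, we can confirm from SQL*Plus that the imported tables now live in SCOTT2 (the table names listed will be whatever the dump file contained):

  SELECT table_name FROM all_tables WHERE owner = 'SCOTT2';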