- Restore entire database using Data Pump.
- Restore table(s)
- Restore tablespace(s)
- Restore schema(s)
- Restore using Transportable tablespaces (TTS)
- Restore from multiple small dump files
- Restore in parallel mode
There is a single solution to all of the above problem statements: impdp, the Data Pump Import utility. It is one of several restore tools provided by Oracle and is regularly improved with each release.
The Data Pump utility can be accessed through several interfaces: the DBMS_DATAPUMP and DBMS_METADATA PL/SQL packages, and the expdp and impdp command-line clients. Using impdp, a database can be restored at several levels of granularity.
impdp can be used in various ways to achieve different restore objectives. However, to import any table outside your own schema, the user running impdp needs the DATAPUMP_IMP_FULL_DATABASE role.
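Before running impdp, a DBA must make a directory object and the required privileges available. A minimal setup sketch, run via sqlplus; the directory path, directory name, and user "scott" are illustrative assumptions, not from the original:

```shell
# Hypothetical DBA setup: directory object plus the privileges impdp needs.
# Path, directory name, and grantee are placeholders - adjust to your system.
sqlplus / as sysdba <<'EOF'
CREATE OR REPLACE DIRECTORY test_dir AS '/u01/app/oracle/expdp';
GRANT READ, WRITE ON DIRECTORY test_dir TO scott;
GRANT DATAPUMP_IMP_FULL_DATABASE TO scott;
EOF
```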
A schema import is specified using the SCHEMAS parameter; a full import is the default import mode. The impdp command accepts a number of parameters:
- directory : Name of the directory object pointing to the OS location where dump and log files are read and written.
- dumpfile : Name of the dump file. For multiple files, the substitution variable format <name>_%U.dmp can be used.
- logfile : Name of logfile.
- include : Specific object types can be included with this parameter.
- exclude : Specific object types can be excluded with this parameter.
- content : Limits what is loaded from the dump file. Possible values: METADATA_ONLY, DATA_ONLY, ALL (the default).
- query : Filters table rows by supplying a WHERE clause. For example: QUERY=SCOTT.DEPT:'"WHERE ROWNUM = 0"'
- network_link : Imports directly from a source database over a database link, without an intermediate dump file.
- flashback_time : Ensures data consistency as of the given timestamp (valid with NETWORK_LINK). Example value: SYSTIMESTAMP.
- flashback_scn : Same purpose as FLASHBACK_TIME, but uses an SCN instead of a timestamp.
- tables : List of tables can be provided using this parameter.
- attach : If an impdp job is running in the background, the client can reconnect to that job with this parameter.
- estimate : Estimates the disk space required (valid with NETWORK_LINK). Possible values: BLOCKS, STATISTICS.
- job_name : Name of the import job.
- nologfile : Suppresses log file creation (Y/N).
- parallel : Number of concurrent worker processes for the impdp job.
- parfile : Parameters can be given on the command line, or defined in a parameter file whose name is supplied with this parameter.
- schemas : List of schemas to be provided.
- status : Frequency, in seconds, at which job status is displayed to the client.
- tablespaces : List of tablespaces to be provided.
- transport_full_check : For transportable tablespaces, verifies that there are no dependencies between objects inside and outside the transportable set.
- transport_tablespaces : Imports the metadata of transportable tablespaces. This is the method through which a multi-terabyte database can be migrated/moved/copied to another server.
- full : Import everything from source (Y).
- remap_datafile : Redefine datafile references in all DDL statements.
- remap_schema : Objects from one schema are loaded into another schema.
- remap_tablespace : Tablespace object are remapped to another tablespace.
- reuse_datafiles : Datafiles are reinitialized if they already exist (default N).
- table_exists_action : Action to take if imported object already exists. Valid keywords: (SKIP), APPEND, REPLACE and TRUNCATE.
- transform : Metadata transform to apply (Y/N) to specific objects. Valid transform keywords: SEGMENT_ATTRIBUTES and STORAGE. ex. TRANSFORM=SEGMENT_ATTRIBUTES:N:TABLE.
- skip_unusable_indexes : Skip indexes that were set to the Index Unusable state.
- transport_datafiles : List of datafiles to be imported by transportable mode.
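Several of the parameters above are most convenient in a parameter file. A minimal sketch combining a few of them; the file name, schema, and parameter values are illustrative assumptions:

```shell
# Hypothetical parameter file for a schema-level import.
# Directory object, dump file name, and schema are placeholders.
cat > import.par <<'EOF'
DIRECTORY=TEST_DIR
DUMPFILE=T1_%U.dmp
LOGFILE=imp_t1.log
SCHEMAS=SCOTT
TABLE_EXISTS_ACTION=REPLACE
PARALLEL=4
EOF

impdp scott/********@orcl parfile=import.par
```

Keeping the parameters in a file avoids shell quoting problems (notably with QUERY) and makes the import repeatable.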
A typical impdp command to restore into a different schema than the source can be written as:
impdp oracle/********@orcl directory=TEST_DIR dumpfile=T1_%U.dmp logfile=impdpEMP_DEPT.log remap_schema=user:user2
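For long-running imports, the JOB_NAME and ATTACH parameters described above work together: name the job when starting it, then reattach from any session to monitor or control it. A sketch under assumed names (the job name and connect string are illustrative):

```shell
# Hypothetical: start the import as a named job in the background...
impdp oracle/********@orcl directory=TEST_DIR dumpfile=T1_%U.dmp \
  logfile=impdpEMP_DEPT.log remap_schema=user:user2 job_name=IMP_T1_JOB &

# ...then, later or from another terminal, reattach to check progress.
# Inside the attached session, STATUS shows progress and STOP_JOB/START_JOB
# pause and resume the job.
impdp oracle/********@orcl attach=IMP_T1_JOB
```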