Oracle Data Pump enables high-speed movement of data and metadata from one database to another. This technology is the basis for the following Oracle Database data movement utilities:

·         Data Pump Export (Export)

Export is a utility for unloading data and metadata into a set of operating system files called a dump file set. The dump file set is made up of one or more binary files that contain table data, database object metadata, and control information.

·         Data Pump Import (Import)

Import is a utility for loading an export dump file set into a database. You can also use Import to load a destination database directly from a source database with no intervening files, which allows export and import operations to run concurrently, minimizing total elapsed time.
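
For example, a schema could be imported directly over a database link, with no dump file involved. In the sketch below, the link name SRC_LINK (a link from the destination back to the source database) and the directory object used for the log file are assumptions for illustration; no DUMPFILE parameter is needed when NETWORK_LINK is used:

impdp scott/tiger@db10g schemas=SCOTT network_link=SRC_LINK directory=TEST_DIR logfile=impdpSCOTT_net.log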

Oracle Data Pump is made up of the following distinct parts:

·         The command-line clients expdp and impdp

These clients make calls to the DBMS_DATAPUMP package to perform Oracle Data Pump operations.

·         The DBMS_DATAPUMP PL/SQL package, also known as the Data Pump API

This API provides high-speed import and export functionality (a minimal usage sketch is shown after this list).

·         The DBMS_METADATA PL/SQL package, also known as the Metadata API

This API, which stores object definitions in XML, is used by all processes that load and unload metadata.

Oracle Data Pump also integrates with SQL*Loader and external tables: SQL*Loader is integrated with the External Table API and the Data Pump API. Clients such as Database Control can also use the Oracle Data Pump infrastructure.
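
As a minimal sketch of the DBMS_DATAPUMP API described above, the following anonymous PL/SQL block starts a schema-mode export. The directory object TEST_DIR, the file names, and the SCOTT schema are assumptions for illustration:

DECLARE
  l_handle NUMBER;
BEGIN
  -- Create a schema-mode export job; the job name is system generated.
  l_handle := DBMS_DATAPUMP.open(operation => 'EXPORT', job_mode => 'SCHEMA');

  -- Dump file and log file, written via the assumed TEST_DIR directory object.
  DBMS_DATAPUMP.add_file(handle => l_handle, filename => 'SCOTT_api.dmp',
                         directory => 'TEST_DIR');
  DBMS_DATAPUMP.add_file(handle => l_handle, filename => 'SCOTT_api.log',
                         directory => 'TEST_DIR',
                         filetype  => DBMS_DATAPUMP.ku$_file_type_log_file);

  -- Limit the job to the SCOTT schema.
  DBMS_DATAPUMP.metadata_filter(handle => l_handle, name => 'SCHEMA_EXPR',
                                value  => '= ''SCOTT''');

  -- Start the job and detach from it; the job continues in the background.
  DBMS_DATAPUMP.start_job(l_handle);
  DBMS_DATAPUMP.detach(l_handle);
END;
/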

    REUSE_DUMPFILES

    The REUSE_DUMPFILES parameter can be used to prevent errors from being issued if the export attempts to write to a dump file that already exists.

    REUSE_DUMPFILES={Y | N}

    When set to "Y", any existing dump files will be overwritten. When the default value of "N" is used, an error is issued if the dump file already exists.

    expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
     reuse_dumpfiles=y
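
    These examples assume that the TEST_DIR directory object already exists and that the exporting user can read and write it. A minimal sketch, run as a privileged user and using an assumed operating system path:

    CREATE OR REPLACE DIRECTORY test_dir AS '/u01/app/oracle/dpdump/';
    GRANT READ, WRITE ON DIRECTORY test_dir TO test;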
     

    TRANSPORTABLE

    The TRANSPORTABLE parameter is similar to the TRANSPORT_TABLESPACES parameter available in previous releases, in that it only exports and imports metadata about a table, relying on you to manually transfer the relevant tablespace datafiles. The export operation lists the tablespaces that must be transferred. The syntax is shown below.

    TRANSPORTABLE = {ALWAYS | NEVER}

    The value ALWAYS turns on the transportable mode, while the default value of NEVER indicates this is a regular export/import.

    The following restrictions apply during exports using the TRANSPORTABLE parameter:

    • This parameter is only applicable during table-level exports.
    • The user performing the operation must have the EXP_FULL_DATABASE privilege.
    • Tablespaces containing the source objects must be read-only (see the sketch after this list).
    • The COMPATIBLE initialization parameter must be set to 11.0.0 or higher.
    • The default tablespace of the user performing the export must not be the same as any of the tablespaces being transported.
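
    Before the export, the tablespace holding the source objects can be made read-only. A minimal sketch, assuming the table data lives in a tablespace named TEST_TS:

    ALTER TABLESPACE test_ts READ ONLY;
    -- Once the datafiles have been copied to the destination, the source
    -- tablespace can be returned to read/write:
    ALTER TABLESPACE test_ts READ WRITE;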

    Some extra restrictions apply during import operations:

    • The NETWORK_LINK parameter must be specified during the import operation. This parameter is set to a valid database link to the source schema (a sketch of creating such a link follows this list).
    • The schema performing the import must have both EXP_FULL_DATABASE and IMP_FULL_DATABASE privileges.
    • The TRANSPORT_DATAFILES parameter is used to identify the datafiles holding the table data.
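
    A database link suitable for the NETWORK_LINK parameter can be created in the destination database. The link name DB11G matches the import example below; the credentials and TNS alias are assumptions:

    CREATE DATABASE LINK db11g CONNECT TO test IDENTIFIED BY test USING 'DB11G';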

    Examples of the export and import operations are shown below.

    expdp system tables=TEST1.TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
     transportable=ALWAYS
     
    impdp system tables=TEST1.TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log
     transportable=ALWAYS network_link=DB11G transport_datafiles='/u01/oradata/DB11G/test01.dbf'
     

     

    REMAP_TABLE

    This parameter allows a table to be renamed during import operations performed using the TRANSPORTABLE method. It can also be used to alter the base table name used during PARTITION_OPTIONS imports. The syntax is shown below.

    REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename

    An example is shown below.

    impdp test/test tables=TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log
     remap_table=TEST.TAB1:TAB2
    Note: existing tables are not renamed, only tables created by the import.

     SKIP_CONSTRAINT_ERRORS

    During import operations using the external table access method, setting the DATA_OPTIONS parameter to SKIP_CONSTRAINT_ERRORS allows load operations to continue through non-deferred constraint violations, with any violations logged for future reference. Without this, the default action would be to roll back the whole operation. The syntax is shown below.

    DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS

    An example is shown below.

    impdp test/test tables=TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log
     data_options=SKIP_CONSTRAINT_ERRORS

    This parameter has no impact on deferred constraints, which still cause the operation to be rolled back once a violation is detected. If the object being loaded has existing unique indexes or constraints, the APPEND hint will not be used, which may adversely affect performance.

     

    Table Exports/Imports

    The TABLES parameter is used to specify the tables that are to be exported. The following is an example of the table export and import syntax:

    expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log
    impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log


    Schema Exports/Imports

    The OWNER parameter of exp has been replaced by the SCHEMAS parameter which is used to specify the schemas to be exported. The following is an example of the schema export and import syntax:

    expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
    impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=impdpSCOTT.log

     

    Database Exports/Imports

    The FULL parameter indicates that a complete database export is required. The following is an example of the full database export and import syntax:

    expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log
    impdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=impdpDB10G.log

     

    The INCLUDE and EXCLUDE parameters can be used to limit the export/import to specific objects. When the INCLUDE parameter is used, only those objects specified by it will be included in the export. When the EXCLUDE parameter is used, all objects except those specified by it will be included in the export:

    expdp scott/tiger@db10g schemas=SCOTT include=TABLE:"IN ('EMP', 'DEPT')" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log

     

    expdp scott/tiger@db10g schemas=SCOTT exclude=TABLE:"= 'BONUS'" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
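
    Depending on the operating system, the quotation marks in INCLUDE and EXCLUDE filters may need to be escaped on the command line. Putting the parameters in a parameter file avoids this. A minimal sketch, using an assumed file called scott_inc.par containing:

    schemas=SCOTT
    include=TABLE:"IN ('EMP', 'DEPT')"
    directory=TEST_DIR
    dumpfile=SCOTT.dmp
    logfile=expdpSCOTT.log

    expdp scott/tiger@db10g parfile=scott_inc.par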

     

    TABLE_EXISTS_ACTION

    The original imp would allow rows to be appended to existing tables if IGNORE=Y was specified. The TABLE_EXISTS_ACTION parameter for Data Pump impdp provides four options:

    1. SKIP is the default: a table is skipped if it already exists.
    2. APPEND will append rows if the target table's geometry is compatible. This is the default when the user specifies CONTENT=DATA_ONLY.
    3. TRUNCATE will truncate the table, then load rows from the source if the geometries are compatible and truncation is possible. For example, it is not possible to truncate a table if it is the target of referential constraints.
    4. REPLACE will drop the existing table, then create and load it from the source.

    The TABLE_EXISTS_ACTION=APPEND parameter allows data to be imported into existing tables.
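
    For example, data could be appended to the tables created by the earlier table-mode export (connection and directory details as in the previous examples):

    impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log
     table_exists_action=APPEND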
