Category: Oracle
2010-04-01 19:50:05
Oracle Data Pump technology enables very high-speed movement of data and metadata from one database to another. This technology is the basis for Oracle's data movement utilities, Data Pump Export and Data Pump Import.
Data Pump enables you to specify whether a job should move a subset of the data and metadata. This is done using data filters and metadata filters, which are implemented through Export and Import parameters.
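As a sketch of a job that applies both kinds of filter (the directory object, dump file, and schema below are hypothetical), EXCLUDE is a metadata filter and QUERY is a data filter:

    expdp scott DIRECTORY=dpump_dir1 DUMPFILE=scott_subset.dmp \
          EXCLUDE=INDEX QUERY=emp:'"WHERE deptno = 10"'

The exact quoting of the QUERY clause depends on the operating system shell; putting the parameters in a parameter file avoids most of the escaping.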
11.2.1 Data Pump Export
Data Pump Export (hereinafter referred to as Export for ease of reading) is a utility for unloading data and metadata into a set of operating system files called a dump file set. The dump file set can be moved to another system and loaded by the Data Pump Import utility.
The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format, which can be read only by Data Pump Import. During an import operation, the Data Pump Import utility uses these files to locate each database object in the dump file set.
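As a minimal sketch of invoking Export (the directory object and file names are hypothetical), a schema-mode job that writes a dump file set might look like:

    expdp scott DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp LOGFILE=scott.log SCHEMAS=scott

The directory object must already exist in the database and map to a path the server process can write to.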
11.2.2 Data Pump Import
Data Pump Import (hereinafter referred to as Import for ease of reading) is a utility for loading an export dump file set into a target system. The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format.
Import can also be used to load a target database directly from a source database with no intervening files, which allows export and import operations to run concurrently, minimizing total elapsed time. This is known as network import.
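A network import is requested with the NETWORK_LINK parameter, which names a database link to the source database; a minimal sketch (the link and schema names are hypothetical):

    impdp scott DIRECTORY=dpump_dir1 NETWORK_LINK=source_db SCHEMAS=scott LOGFILE=net_imp.log

No dump file is written in this mode; DIRECTORY is still needed for the log file.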
Import also enables you to see all of the SQL DDL that the Import job will be executing, without actually executing the SQL. This is implemented through the Import SQLFILE parameter.
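For example (file names are hypothetical), the following writes the DDL that the job would execute into a script without running it:

    impdp scott DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp SQLFILE=scott_ddl.sql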
11.3 Overview of the Data Pump API
The Data Pump API provides a high-speed mechanism to move all or part of the data and metadata for a site from one database to another. To use the Data Pump API, you use the procedures provided in the DBMS_DATAPUMP PL/SQL package. The Data Pump Export and Data Pump Import utilities are based on the Data Pump API.
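A minimal PL/SQL sketch of a schema export driven through DBMS_DATAPUMP (the directory object, dump file, and schema are hypothetical):

    DECLARE
      h         NUMBER;
      job_state VARCHAR2(30);
    BEGIN
      -- Define a schema-mode export job
      h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
      -- Name the dump file the job will write
      DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'scott.dmp',
                             directory => 'DPUMP_DIR1');
      -- Restrict the job to one schema
      DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR',
                                    value => 'IN (''SCOTT'')');
      DBMS_DATAPUMP.START_JOB(h);
      -- Block until the job completes
      DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
      DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || job_state);
    END;
    /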
11.4 Overview of the Metadata API
The Metadata application programming interface (API) provides a means for you to do the following:

- Retrieve an object's metadata as XML
- Transform the XML in a variety of ways, including transforming it into SQL DDL
- Submit the XML to re-create the object
To use the Metadata API, you use the procedures provided in the DBMS_METADATA PL/SQL package. For the purposes of the Metadata API, every entity in the database is modeled as an object that belongs to an object type. For example, the table scott.emp is an object and its object type is TABLE. When you fetch an object's metadata you must specify the object type.
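For example, fetching the creation DDL for the scott.emp table mentioned above:

    SET LONG 10000   -- SQL*Plus setting so the full CLOB is displayed
    SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP', 'SCOTT') FROM DUAL;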
11.5 Overview of SQL*Loader
SQL*Loader loads data from external files into tables of an Oracle database. It has a powerful data parsing engine that puts little limitation on the format of the data in the datafile. You can use SQL*Loader to do the following:

- Load data across a network, from a client machine into a remote database
- Load data from multiple datafiles during the same load session
- Load data into multiple tables during the same load session
- Specify the character set of the data
- Selectively load data, based on record values
- Manipulate the data with SQL functions before loading
A typical SQL*Loader session takes as input a control file, which controls the behavior of SQL*Loader, and one or more datafiles. The output of SQL*Loader is an Oracle database (where the data is loaded), a log file, a bad file, and potentially, a discard file.
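A minimal sketch of such a session (the table, columns, and file names are hypothetical). First, a control file, emp.ctl:

    -- Load comma-separated records from emp.dat into the emp table
    LOAD DATA
    INFILE 'emp.dat'
    INTO TABLE emp
    FIELDS TERMINATED BY ','
    (empno, ename, deptno)

Then the invocation, which produces the log, bad, and (if any records are rejected by selection criteria) discard files:

    sqlldr scott CONTROL=emp.ctl LOG=emp.log BAD=emp.bad DISCARD=emp.dsc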
11.6 Overview of External Tables
The external tables feature is a complement to existing SQL*Loader functionality. It lets you access data in external sources as if it were in a table in the database. External tables can be written to using the ORACLE_DATAPUMP access driver. Neither data manipulation language (DML) operations nor index creation are allowed on an external table. Therefore, SQL*Loader may be the better choice in data loading situations that require additional indexing of the staging table.
To use the external tables feature, you must have some knowledge of the file format and record format of the datafiles on your platform. You must also know enough about SQL to be able to create an external table and perform queries against it.
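A hedged sketch of an external table over a comma-separated file, using the read-only ORACLE_LOADER access driver (the directory object, file, and columns are hypothetical), followed by an ordinary query against it:

    CREATE TABLE emp_ext (
      empno  NUMBER(4),
      ename  VARCHAR2(10),
      deptno NUMBER(2)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER              -- read-only access driver
      DEFAULT DIRECTORY ext_dir       -- directory object created beforehand
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('emp.dat')
    );

    SELECT ename FROM emp_ext WHERE deptno = 10;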
11.7 Overview of LogMiner
Oracle LogMiner enables you to query redo log files through a SQL interface. All changes made to user data or to the database dictionary are recorded in the Oracle redo log files. Therefore, redo log files contain all the necessary information to perform recovery operations.
LogMiner functionality is available through a command-line interface or through the Oracle LogMiner Viewer graphical user interface (GUI). The LogMiner Viewer is a part of Oracle Enterprise Manager.
The following are some of the potential uses for data contained in redo log files:

- Pinpointing when a logical corruption of the database, such as an erroneous update, may have begun
- Determining the actions needed for fine-grained recovery, such as undoing a single transaction
- Performance tuning and capacity planning through trend analysis
- Performing post-auditing, to track which statements were executed, when, and by whom
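A minimal sketch of a LogMiner session (the redo log path is hypothetical, and appropriate privileges are required to query the view):

    BEGIN
      -- Register a redo log file with the session
      DBMS_LOGMNR.ADD_LOGFILE(
        logfilename => '/u01/oradata/orcl/redo01.log',
        options     => DBMS_LOGMNR.NEW);
      -- Start LogMiner, resolving names through the online catalog
      DBMS_LOGMNR.START_LOGMNR(
        options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    END;
    /

    -- The mined redo is now exposed through a SQL interface
    SELECT scn, username, sql_redo
      FROM v$logmnr_contents;

    EXECUTE DBMS_LOGMNR.END_LOGMNR;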
11.8 Overview of DBVERIFY Utility
DBVERIFY is an external command-line utility that performs a physical data structure integrity check. It can be used on offline or online databases, as well as on backup files. You use DBVERIFY primarily when you need to ensure that a backup database (or datafile) is valid before it is restored, or as a diagnostic aid when you have encountered data corruption problems.
Because DBVERIFY can be run against an offline database, integrity checks are significantly faster.
DBVERIFY checks are limited to cache-managed blocks (that is, data blocks). Because DBVERIFY is only for use with datafiles, it will not work against control files or redo logs.
There are two command-line interfaces to DBVERIFY. With the first interface, you specify disk blocks of a single datafile for checking. With the second interface, you specify a segment for checking.
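A minimal sketch of both interfaces (the file name, credentials, and segment identifier are hypothetical):

    # First interface: check the blocks of a single datafile
    dbv FILE=/u01/oradata/orcl/users01.dbf BLOCKSIZE=8192

    # Second interface: check one segment, given as tsn.segfile.segblock
    dbv USERID=sys/password SEGMENT_ID=5.2.130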
11.9 Overview of DBNEWID Utility
DBNEWID is a database utility that can change the internal, unique database identifier (DBID) and the database name (DBNAME) for an operational database. The DBNEWID utility lets you change any of the following:

- Only the DBNAME of a database
- Only the DBID of a database
- Both the DBNAME and the DBID of a database
Therefore, you can manually create a copy of a database and give it a new DBNAME and DBID by re-creating the control file, and you can register a seed database and a manually copied database together in the same RMAN repository.
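A minimal sketch of renaming a database copy (the credentials and new name are hypothetical; the target database must be mounted, not open):

    # Changes the DBNAME and generates a new DBID in one pass
    nid TARGET=SYS/password DBNAME=newdb

Afterward the database is opened with RESETLOGS, and the DB_NAME initialization parameter must be updated to match the new name.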