Category: Oracle

2005-05-17 19:35:14

Tim Gorman, SageLogix

Introduction

This paper explains the configuration and usage of the Log Miner feature.  This feature was first introduced in Oracle8i (v8.1.5) and was updated considerably in Oracle9i (v9.2.0).

The initial functionality of Log Miner was simply diagnostic, for the purpose of reading redo log files and interpreting the contents.  From its initial release, Log Miner was capable of reading individual redo records and reconstructing the data into SQL statements.  Used this way, Log Miner can be utilized to:

·         Debug the actions of one or more sessions

·         Provide a high-level audit-trail for one or more sessions

·         Recover deleted or modified data as it existed prior to the commit of the transaction

To start a Log Miner “session”, the user would specify a set of specific redo log files and read from them via the SQL interface.  Thus, redo log files from the current database or other databases (i.e. Oracle8 v8.0 or higher) could be read and interpreted from any database.  For the purpose of translating data dictionary IDs into symbolic names (i.e. OBJECT_ID into OBJECT_OWNER, OBJECT_TYPE, and OBJECT_NAME or USER_ID into USER_NAME), an extract of information from the originating database’s data dictionary was necessary.

Later on, as Oracle9i was introduced, Log Miner became a component of other products used for the replication of data, namely:

·         Oracle Data Guard (Logical Standby Database)

·         Oracle Streams

In order to support these new features, the basic capability of reconstructing SQL statements from the binary information in the redo records is utilized, but Log Miner itself was enhanced to better support a continuous stream of information flowing through it.  Instead of interpreting a set of redo log files explicitly specified by the user, Log Miner must be capable of continuously reading redo log files as they are generated.  Moreover, the older method of passing data dictionary information, via generated flat-files, is completely impractical for this mode of operation.  Instead, the very same stream of redo log files now also communicates this data dictionary information continuously, simply and elegantly.

How to use Log Miner in Oracle8i for diagnosis?

There are four steps to using Log Miner in Oracle8i:

1.       Periodically capturing data dictionary information into flat-file extracts

2.       Creating a list of redo log files

3.       Starting a Log Miner session against that list of redo log files

4.       Querying data from the files using the SQL API

Capturing data dictionary information extracts

If you were to use a standard operating-system DUMP utility, you would see precious little recognizable text in redo log files.  Redo log files contain both data as well as metadata about the data structures holding the data, but the metadata (a.k.a. data dictionary data) is represented only in terms of numeric IDs, not human-recognizable symbolic strings.  For example, a schema named SCOTT might have a USERNAME = ‘SCOTT’ and USERID = 1327.  It is the USERID value that is stored in the redo log files.  Similarly, a table named EMP owned by the schema named SCOTT might have OBJECT_TYPE = ‘TABLE’ and OBJECT_NAME = ‘EMP’ and OBJECT_ID = 20887.  It is the OBJECT_ID value that is stored in the redo log files.

Displaying this numeric data is not very useful for most human beings, and leaving the task of translating these ID values into symbolic names to the end user would not be user-friendly, to say the least.

Furthermore, if you were running Log Miner from a database named TEST and reading redo log files generated by another database entirely, you would have to perform that numeric-to-symbolic translation against the data dictionary in the other database.  If that other database were unavailable for any reason, you would be out of luck.  Again, not very user friendly.

Last, the data dictionary of any database only contains current metadata about the database.  In contrast, redo log files contain data from many points in time.  Objects and schemas referenced in the redo log files may no longer exist in the databases that generated the logs, and the data dictionary of that database would no longer have information for translating numeric IDs to symbolic names.

For these reasons, the DBMS_LOGMNR_D package supplied with Oracle8i contains a packaged procedure named BUILD to extract snapshots of data dictionary information into flat files.  These extracts can then be used by a subsequent Log Miner session.

Creating these flat file extracts of the data dictionary is the only real prerequisite to using Log Miner.  Generally, the decision to use Log Miner for diagnostic purposes is made after an anomaly has occurred and has been logged.  In this situation, it is very much like closing the barn door after the cow has already escaped, especially if the object to be diagnosed has since been dropped.  For this reason, I highly recommend considering now whether you anticipate ever running Log Miner against a database.  If you feel that there is a possibility that you might, then start running DBMS_LOGMNR_D.BUILD on a periodic basis now.  Using a job-scheduling program such as DBMS_JOB, start a daily extract:

 

SQL> variable jobno number
SQL> exec dbms_job.submit(:jobno, -
         'begin -
            dbms_logmnr_d.build(dictionary_filename => ''lmd_''||to_char(sysdate,''dy'')||''.dct'', -
                                dictionary_location => ''/d010/orabkup/PRD''); -
          end;', -
         sysdate+(1/1440), -
         'sysdate + 1');
SQL> select job, what, next_date, next_sec, broken, failures
  2  from user_jobs where job = :jobno;
SQL> exec dbms_job.run(:jobno)

 

The first line of this script declares a SQL*Plus variable to hold the number of the job after it is created.  The second line calls the DBMS_JOB.SUBMIT procedure to create a job and store the job number in the variable JOBNO.  The third through sixth lines (lines 2 through 5 of the DBMS_JOB.SUBMIT call) build the SQL statement to be executed by the job.  In this case, the SQL statement is really an “anonymous PL/SQL block” containing the call to the DBMS_LOGMNR_D.BUILD procedure.

The parameters to the BUILD procedure are DICTIONARY_FILENAME and DICTIONARY_LOCATION.  The filename comprises the prefix “LMD_”, the first three characters of the day of the week (derived from the Oracle SQL expression to_char(sysdate, 'dy')), and the file-extension “.dct”.  Thus, if this job is run on a Sunday, then the data dictionary extract will be stored in a file named “lmd_sun.dct”.  On Monday, the filename would be “lmd_mon.dct”.

Using this file-naming scheme would allow a maximum of seven data dictionary extract files to be stored, one for each day of the week.  The BUILD procedure will overwrite any existing files.

The directory in which this file will be created must be accessible to the UTL_FILE package, as that is the mechanism used to create the flat file.  Thus, the initialization parameter UTL_FILE_DIR must either contain the directory name “/d010/orabkup/PRD” or the wildcard character “*”.
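For example, a minimal initialization-parameter entry along these lines (the directory name follows the earlier example; an instance restart is required for the change to take effect) would permit UTL_FILE to write to that directory:

 

UTL_FILE_DIR = /d010/orabkup/PRD

 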

Calling DBMS_LOGMNR_D.BUILD from Oracle8 v8.0.x databases:

The SQL script “dbmslmd.sql” is used to create the DBMS_LOGMNR_D package.  This script resides in the ORACLE_HOME subdirectory “rdbms/admin”.  Normally, the package is created at database creation time, as the script is called from the standard “catproc.sql” script when you are running Oracle8i software.

But what if you are running Log Miner against redo log files created by an Oracle8 v8.0.x database?  Even though Log Miner is a feature of Oracle8i v8.1.x and above, redo log files created by earlier Oracle8 v8.0.x databases can still be read.

But obviously Oracle8 v8.0.x software does not include the DBMS_LOGMNR_D package.  In this case, you can steal the file “dbmslmd.sql” from an Oracle8i v8.1.x software distribution and run it against the Oracle8 v8.0.x database.

Only the DBMS_LOGMNR_D package can be created on an Oracle8 v8.0.x database, and the presence of the UTL_FILE package (installed automatically by default) is a prerequisite.  The DBMS_LOGMNR package cannot be created on an Oracle8 v8.0.x database.
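As a sketch, having copied “dbmslmd.sql” from an Oracle8i distribution into a working directory (the path shown here is purely illustrative), connect to the Oracle8 v8.0.x database as the SYS user and run it from SQL*Plus:

 

SQL> @/tmp/dbmslmd.sql

 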

Creating a list of redo log files

In Oracle8i, the procedure ADD_LOGFILE is called to make a list of redo log files that will be read during the Log Miner session.  Redo log files can either be active online redo log files or inactive archived redo log files.  Naturally, it is easier and safer to read archived redo log files, since they will not be reused by the database instance and overwritten.

To create a list of ten archived redo log files:

 

SQL> exec dbms_logmnr.add_logfile('/d010/oraarch/PRD/1_1150.arc', dbms_logmnr.new)
SQL> exec dbms_logmnr.add_logfile('/d010/oraarch/PRD/1_1151.arc', dbms_logmnr.addfile)
SQL> exec dbms_logmnr.add_logfile('/d010/oraarch/PRD/1_1152.arc', dbms_logmnr.addfile)
SQL> exec dbms_logmnr.add_logfile('/d011/oraarch/PRD/1_1153.arc', dbms_logmnr.addfile)
SQL> exec dbms_logmnr.add_logfile('/d011/oraarch/PRD/1_1154.arc', dbms_logmnr.addfile)
SQL> exec dbms_logmnr.add_logfile('/d011/oraarch/PRD/1_1155.arc', dbms_logmnr.addfile)
SQL> exec dbms_logmnr.add_logfile('/d011/oraarch/PRD/1_1156.arc', dbms_logmnr.addfile)
SQL> exec dbms_logmnr.add_logfile('/d012/oraarch/PRD/1_1157.arc', dbms_logmnr.addfile)
SQL> exec dbms_logmnr.add_logfile('/d012/oraarch/PRD/1_1158.arc', dbms_logmnr.addfile)
SQL> exec dbms_logmnr.add_logfile('/d012/oraarch/PRD/1_1159.arc', dbms_logmnr.addfile)

 

The first call to ADD_LOGFILE, using the option NEW, created a new list.  The subsequent nine calls to ADD_LOGFILE, each using the ADDFILE option, added to the list.

While the DBMS_LOGMNR_D package relies on the UTL_FILE package to perform file I/O, the DBMS_LOGMNR package does not.  It uses I/O mechanisms built into the Oracle server processes to read from the redo log files in the list, similar to the way the database recovery mechanism works.  So, the directories in which the redo log files reside do not have to be included in the UTL_FILE_DIR parameter, as is the case with the data dictionary extract file.

Starting a Log Miner session

Once a list of redo log files has been created using the ADD_LOGFILE procedure, a Log Miner session can be started, using the START_LOGMNR procedure in the DBMS_LOGMNR package.

A Log Miner session exists within a database session.  Queries performed on the V$LOGMNR_CONTENTS view within a Log Miner session act as requests to perform I/O on the redo log files.  The Oracle server process supporting the database session performs all of the I/O for the Log Miner session from the redo log files.  It then populates the V$LOGMNR_CONTENTS view with the requested information, using information read from the data dictionary extract to translate IDs into symbolic names along the way.

In Oracle8i, the START_LOGMNR procedure has the following parameters:

·         DICTFILENAME

The parameter DICTFILENAME is used to specify the flat file in which the data dictionary information is stored.  This information will be used during the Log Miner session to translate the numeric ID information in the redo log files into symbolic names that people can understand.

·         OPTIONS

The parameter OPTIONS has only two possible values in Oracle8i.  These values are persistent variables defined in the DBMS_LOGMNR package:  SKIP_CORRUPT and USE_COLMAP.  The SKIP_CORRUPT flag is used when one (or more) of the redo log files is known to contain corrupted redo records.  In this case, Log Miner will bypass the corrupted redo record instead of giving an error message and terminating the current operation.  The USE_COLMAP flag is used to map the names of columns to specific tables that might be referenced in the redo records.  The flat file “logmnr.opt” should reside in the same directory as the data dictionary extract; it contains the mapping information for columns.  The “logmnr.opt” mapping file contains entries of the format:

 

colmap = SCOTT EMP (1, EMPNO, 2, SAL, 3, JOB, 4, MGR, 5, COMM);

 

which indicates that the table EMP in the schema SCOTT will have the column EMPNO mapped to the columns PH1_NAME, PH1_REDO, and PH1_UNDO in the V$LOGMNR_CONTENTS view.  Also, the column SAL will be mapped into the corresponding columns PH2_NAME, PH2_REDO, and PH2_UNDO.  In Oracle8i, there are only five sets of these “placeholder” columns in the V$LOGMNR_CONTENTS view, so only five columns can be specified.  These columns can then be used to query for specific data values in specific column names.

·         STARTSCN and ENDSCN

·         STARTTIME and ENDTIME

These parameters can be used to further restrict the range of redo records that can be queried during the Log Miner session.  Of course, the list of redo log files represents a restriction, but using these parameters, we can additionally restrict by a range of SCNs (a.k.a. system change numbers) or by time.
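Putting these parameters together, a Log Miner session might be started as follows (this sketch assumes the Sunday extract file produced by the naming scheme described earlier, and skips any corrupt redo records encountered):

 

SQL> exec dbms_logmnr.start_logmnr( -
         dictfilename => '/d010/orabkup/PRD/lmd_sun.dct', -
         options      => dbms_logmnr.skip_corrupt)

 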

Once the Log Miner session has been started, then only the database session hosting it can view redo log file contents using the V$LOGMNR_CONTENTS view.

Querying data using the SQL API

The Log Miner API in Oracle8i comprises only one V$ view, named V$LOGMNR_CONTENTS:

 

Name                    Null?    Type         Description/comments

----------------------- -------- ------------ ---------------------------------------------------

SCN                              NUMBER       system change number of the redo record

TIMESTAMP                        DATE         approximate time of the redo record

THREAD#                          NUMBER       redo log thread (i.e. OPS or RAC)

LOG_ID                           NUMBER       log sequence number within the redo thread

XIDUSN                           NUMBER       rollback segment number in which the txn resided

XIDSLT                           NUMBER       slot in the RBS’s txn table in which txn resided

XIDSQN                           NUMBER       # of times slot in txn table in RBS reused

RBASQN                           NUMBER       log sequence number of redo byte address

RBABLK                           NUMBER       block within redo log file

RBABYTE                          NUMBER       byte offset within 512-byte block within redo log file

UBAFIL                           NUMBER       file ID # within rollback segment of undo info

UBABLK                           NUMBER       block ID # within rollback segment of undo info

UBAREC                           NUMBER       slot # within block within rollback segment of undo

UBASQN                           NUMBER       # of times slot within block within RBS reused

ABS_FILE#                        NUMBER       absolute file ID # of data

REL_FILE#                        NUMBER       relative file ID # of data

DATA_BLK#                        NUMBER       block ID # within file of data

DATA_OBJ#                        NUMBER       OBJECT_ID within DBA_OBJECTS of data

DATA_OBJD#                       NUMBER       DATA_OBJECT_ID within DBA_OBJECTS of data

SEG_OWNER                        VARCHAR2(32) schema name owning data object

SEG_NAME                         VARCHAR2(32) name of data object

SEG_TYPE                         NUMBER       type of data object

SEG_TYPE_NAME                    VARCHAR2(32) name of type of data object

TABLE_SPACE                      VARCHAR2(32) name of tablespace in which data object resides

ROW_ID                           VARCHAR2(19) full ROWID value

SESSION#                         NUMBER       value of V$SESSION.SID of txn

SERIAL#                          NUMBER       value of V$SESSION.SERIAL# of txn

USERNAME                         VARCHAR2(32) username executing the txn

SESSION_INFO                     VARCHAR2(4000) additional info

ROLLBACK                         NUMBER       0 = txn committed, 1 = txn rolled back

OPERATION                        VARCHAR2(32) type of SQL operation

SQL_REDO                         VARCHAR2(4000) text of SQL statement reconstructed from redo

SQL_UNDO                         VARCHAR2(4000) text of SQL statement reconstructed from undo

RS_ID                            VARCHAR2(32) rollback segment name

SSN                              NUMBER       SQL sequence number

CSF                              NUMBER       SQL continuation flag (0 = not continued)

INFO                             VARCHAR2(32) additional info

STATUS                           NUMBER       txn status

PH1_NAME                         VARCHAR2(32) COLMAP placeholder name #1

PH1_REDO                         VARCHAR2(4000) COLMAP placeholder redo info #1

PH1_UNDO                         VARCHAR2(4000) COLMAP placeholder undo info #1

(PH2_*, PH3_*, and PH4_* placeholder columns follow the same pattern)

PH5_NAME                         VARCHAR2(32) COLMAP placeholder name #5

PH5_REDO                         VARCHAR2(4000) COLMAP placeholder redo info #5

PH5_UNDO                         VARCHAR2(4000) COLMAP placeholder undo info #5

 

The use of the USE_COLMAP option, the “logmnr.opt” file, and the “placeholder” columns allows one to query the V$LOGMNR_CONTENTS view based on the before and/or after values of specific columns, such as:

 

SELECT *
FROM   V$LOGMNR_CONTENTS
WHERE  SEG_OWNER = 'SCOTT'
AND    SEG_NAME = 'EMP'
AND    PH1_NAME = 'ENAME'
AND    PH1_REDO = 'SMITH'
AND    PH1_UNDO = 'JONES';

 

This query would be searching for any transactions upon the SCOTT.EMP table where the ENAME column was changed from ‘JONES’ to ‘SMITH’, for example.

Queries performed upon this view when a Log Miner session has not been started within the current database session will result in the error message “ORA-00942: table or view does not exist”.

Queries performed on this view during a Log Miner session will initiate read I/O directly to the redo log files in the list.  Since there are no indexes upon the redo log files, the performance of very specific queries may be quite slow, as the entire range of redo records and files may need to be scanned.

Ending a Log Miner session

Ending the database session also ends the Log Miner session.  Alternatively, the Log Miner session can be ended by a call to the END_LOGMNR procedure in the DBMS_LOGMNR package.
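For example, releasing the session’s redo log file list and associated memory explicitly:

 

SQL> exec dbms_logmnr.end_logmnr

 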

How to use Log Miner in Oracle9i for diagnosis?

Using Log Miner for diagnosis in Oracle9i is similar to using it in Oracle8i, except that there are more options.

Specifically, the following options exist:

·         Data dictionary extracts can still be captured in flat files as with Oracle8i, but in Oracle9i they can also be captured into the redo log file stream itself.

a)       Additionally in Oracle9i, if you are reading redo log files generated by the same database that is running the Log Miner session, you can use the online data dictionary itself, instead of an extract to flat file or the redo log file stream.  This is just a shortcut for the situation where using the current information in the data dictionary is sufficient for your purposes.

·         It is not necessary to specify the file name of every redo log file to be read by the Log Miner session, as in Oracle8i.  Instead, it is possible in Oracle9i to specify only the first redo log file and then have the Log Miner session assume that the next file will have the same filename in the same location/directory, just using the next log sequence number.

·         An Oracle9i Log Miner session can continuously update its data dictionary information by scanning the incoming redo log file stream for DDL statements, instead of relying on a single static image of data dictionary information, as Oracle8i Log Miner does.

We shall discuss the enhanced capabilities for Oracle9i Log Miner in the following sections.
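As a sketch of the simplest of these shortcuts, a session mining redo generated by the current database can skip the extract step entirely and use the online data dictionary (a list of redo log files must still have been created with ADD_LOGFILE first):

 

SQL> exec dbms_logmnr.start_logmnr( -
         options => dbms_logmnr.dict_from_online_catalog)

 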

Capturing data dictionary information extracts

The DBMS_LOGMNR_D package has been greatly enhanced for Oracle9i.  Instead of having a single capability, capturing data dictionary extracts to flat files, the package can now perform additional duties:

·         Capture data dictionary extracts to flat files (as with Oracle8i)

·         Capture data dictionary extracts into the redo log file stream

·         Change the location and storage of internal tables used for capturing data dictionary extracts into the redo log file stream.  By default, these tables are located in the SYSTEM tablespace, and they can be moved into another pre-existing tablespace

There are two procedures belonging to the DBMS_LOGMNR_D package:

Procedure BUILD

The parameters expected by this procedure now vary based on the value of the new OPTIONS parameter.  If the value of OPTIONS is DBMS_LOGMNR_D.STORE_AS_FLAT_FILE (the default), then the procedure will expect to have values provided for the DICTIONARY_FILENAME and DICTIONARY_LOCATION parameters, just as with Oracle8i.

However, if the value of the OPTIONS parameter is DBMS_LOGMNR_D.STORE_IN_REDO_LOGS, then no other parameters are needed.  In this case, the call to the BUILD procedure will populate a series of tables belonging to the SYS schema whose names all begin with the prefix “LOGMNR”.

Since INSERT statements will be utilized to populate these tables, redo will be generated as a result.  The redo records from these operations will be written to a redo log file, which in turn will be read by a Log Miner session.  The Log Miner session, if configured properly, will recognize the changes to the data values in these tables, and update its in-memory translation map of data dictionary information accordingly.
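A minimal sketch of capturing the data dictionary into the redo log file stream (assuming the database meets the prerequisites for this mode, such as running in ARCHIVELOG mode):

 

SQL> exec dbms_logmnr_d.build(options => dbms_logmnr_d.store_in_redo_logs)

 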

This is a wonderfully elegant way of solving the problem of continuous data dictionary updates!

Thus, there is no longer a need for the flat file data dictionary extracts.  Moreover, a Log Miner session can have its data dictionary translation map updated without stopping and restarting the Log Miner session, as was necessary in Oracle8i Log Miner.
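On the mining side, a sketch of a session that reads its dictionary from the redo stream and keeps its translation map current by tracking DDL (after the relevant redo log files, including those containing the dictionary capture, have been added to the list):

 

SQL> exec dbms_logmnr.start_logmnr( -
         options => dbms_logmnr.dict_from_redo_logs + dbms_logmnr.ddl_dict_tracking)

 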

Procedure SET_TABLESPACE

New with Oracle9i, this procedure is intended to move the tables associated with Log Miner out of the all-important SYSTEM tablespace.

This routine is expected to be run only once during the life of the database, but it is OK to run it several times if the Log Miner tables need to be moved several times.
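As a sketch (the tablespace name LOGMNRTS is purely illustrative; the target tablespace must already exist):

 

SQL> exec dbms_logmnr_d.set_tablespace('LOGMNRTS')

 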

Identifying redo log files

The importance of the ADD_LOGFILE procedure is less pronounced in Oracle9i than it was in Oracle8i.  Whereas it was required to use this procedure to specify the name of every redo log file to be included in the Log Miner session, in Oracle9i this is no longer required.

Instead, if the first redo log file is specified, then the Log Miner session can be instructed to assume that subsequent files will reside in the same location/directory with the same filename format, only with an incrementing log sequence number.
