Subject: |
LogMiner Utility Release 8.1.x - 10g |
|
Doc ID: |
291686.1 |
Type: |
WHITE PAPER |
|
Modified Date: |
16-JUN-2009 |
Status: |
PUBLISHED |
Checked for relevance on 16-Oct-2008
LogMiner Utility Release 8.1.x - 10g
Product: [RDBMS server Utilities LogMiner] Minimum Version:[8.1.7.0.0 - 10.1.0.2.0]
An Oracle Corporation White Paper
Abstract
This paper provides simple methods for using LogMiner and gathers information about LogMiner releases 8i through 10g in one place. LogMiner offers easy methods to read online as well as offline (archived) redo log files. This document covers release-specific features, restrictions, setup commands in detail, test cases, and known issues/bugs. It also covers the methods available to analyze redo logs in Oracle releases 8i through 10g.
Document History
Author : Shrikant Gavali
Create Date : 15-Apr-2005
Last Update Date : 16-Oct-2008
Table of Contents
1.1 What is LogMiner?
The Oracle LogMiner utility enables you to query redo logs through a SQL interface.
Redo logs contain information about the history of activity on a database.
- LogMiner is a utility that helps you extract data from redo logs (online or archived).
- When you run LogMiner, you specify the names of the redo logs that you want to analyze.
- LogMiner retrieves information from those redo logs and returns it through the V$LOGMNR_CONTENTS view.
- LogMiner runs only in Oracle version 8.1 or higher, but you can use it to analyze redo logs from a release 8.0 or later database.
- The block size (DB_BLOCK_SIZE) of the analyzing instance must be identical to the block size of the instance that produced the logs.
- If this is not the case, you will receive an error indicating the archive log is corrupted (when it probably is not).
- By default, LogMiner returns all rows in SCN order unless you have used the COMMITTED_DATA_ONLY option.
- The COMMITTED_DATA_ONLY option specifies that only committed transactions should be retrieved.
- SCN order is the order normally applied in media recovery.
1.2 Redo Logs and Dictionary Files
Before you begin using LogMiner, it is important to understand how LogMiner works with redo logs and dictionary files.
This will help you to get accurate results and to plan the use of your system resources.
1.2.1 Redo Logs
1.2.2 Dictionary Options
1.2.3 Tracking DDL Statements
1.2.1 Redo Logs
- When you run LogMiner, you specify the names of redo logs that you want to analyze.
- LogMiner retrieves information from those redo logs and returns it through the V$LOGMNR_CONTENTS view.
- To get correct results, you must enable at least minimal supplemental logging.
- To check the mined/analyzed data, query the V$LOGMNR_CONTENTS view.
- When a SQL select operation is executed against the V$LOGMNR_CONTENTS view, the redo log files are read sequentially.
- Translated information from the redo log files is returned as rows in the V$LOGMNR_CONTENTS view.
- This continues until either the filter criteria specified at startup are met or the end of the redo log file is reached.
* Keep the following things in mind about redo logs:
- The redo logs must be from a release 8.0 or later Oracle database.
- However, several of the LogMiner features introduced as of release 9.0.1 only work with redo logs produced on an Oracle9i or later database.
- Support for LOB and LONG data types is available as of release 9.2, but only for redo logs generated on a release 9.2 Oracle database.
- The redo logs must use a database character set that is compatible with the character set of the database on which LogMiner is running.
- In general, the analysis of redo logs requires a dictionary that was generated from the same database that generated the redo logs.
- If you are using the online catalog as the LogMiner dictionary, you can only analyze redo logs from the database on which LogMiner is running.
- LogMiner must be running on the same hardware platform that generated the redo logs being analyzed. However, it does not have to be on the same system.
- It is important to specify the correct redo logs when running LogMiner.
- To determine which redo logs are being analyzed in the current LogMiner session, query the V$LOGMNR_LOGS view, which contains one row for each redo log.
1.2.2 LogMiner Dictionary Options
- To analyze the contents of the redo log files completely, it is recommended to use a dictionary file, which you can create using the DBMS_LOGMNR_D PL/SQL package.
- LogMiner uses a dictionary file, which is a special file that contains information about the database from which it was created and the time it was created.
- LogMiner requires a dictionary to translate internal object identifiers and data types to object names and external data formats.
- The dictionary file is not required, but it is recommended.
- Without a dictionary file, the equivalent SQL statements will use Oracle internal object IDs for the object names and present column values as hex data.
For example, instead of the SQL statement:
INSERT INTO emp (name, salary) VALUES ('John Doe', 50000);
LogMiner will display:
insert into Object#2581 (col#1, col#2) values (hextoraw('4a6f686e20446f65'), hextoraw('c306'));
- You can use this dictionary file to mine the redo logs of that database in a separate database instance without being connected to the source database.
* LogMiner gives you the following three options for supplying the dictionary:
1.2.2.1 Using the Online Catalog
1.2.2.2 Extracting a LogMiner Dictionary to the Redo Log Files
1.2.2.3 Extracting the LogMiner Dictionary to a Flat File
1.2.2.4 Decision Tree for Choosing a LogMiner Dictionary
1.2.2.1 Using the Online Catalog
You can use this option:
- When you have access to the source database from which the redo log files were created.
- When no changes have been made to the column definitions of the tables involved.
- This is the most efficient and easy-to-use option.
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
1.2.2.2 Extracting a LogMiner Dictionary to the Redo Log Files
You can use this option:
- If you do NOT expect to have access to the source database.
SQL> EXECUTE DBMS_LOGMNR_D.BUILD (OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
- This operation takes time, so run it during off-peak hours.
- It is faster than extracting to a flat file.
- Depending on the size of the dictionary, it may span multiple redo logs. If the relevant redo log files have been archived, you can find out which redo log files contain the START and END of an extracted dictionary using the following queries:
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN = 'YES';
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_END = 'YES';
1.2.2.3 Extracting the LogMiner Dictionary to a Flat File
As per the 10g documentation:
- This option is maintained for backward compatibility.
- This option does not guarantee transactional consistency.
- Oracle recommends that you use either the online catalog or extract the dictionary to the redo log files instead of extracting it to a flat file.
SQL> EXECUTE DBMS_LOGMNR_D.BUILD ('dictionary.ora', '/oracle/database/', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
You could also specify a filename and location without specifying the STORE_IN_FLAT_FILE option. The result would be the same.
1.2.2.4 Decision Tree for Choosing a LogMiner Dictionary
- Please refer to the Oracle 10g Database Utilities manual: Figure 19-2 shows a decision tree to help you select a LogMiner dictionary, depending on your situation.
1.2.3 Tracking DDL Statements
- If your source dictionary is a flat file dictionary or a dictionary in the redo logs, then you can use the DDL_DICT_TRACKING option.
- By default DDL tracking is disabled.
- To enable it, use the OPTIONS parameter to specify DDL_DICT_TRACKING when you start LogMiner.
For example:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS => DBMS_LOGMNR.DDL_DICT_TRACKING);
or:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS =>DBMS_LOGMNR.DDL_DICT_TRACKING + DBMS_LOGMNR.DICT_FROM_REDO_LOGS);
To see only DDL statements, use query:
SQL> SELECT USERNAME, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE USERNAME = 'SCOTT' AND OPERATION = 'DDL';
* Keep the following in mind when you use the DDL_DICT_TRACKING option:
- The DDL_DICT_TRACKING option is NOT valid with the DICT_FROM_ONLINE_CATALOG option.
- The DDL_DICT_TRACKING option requires that the database be open.
The following are not supported in 8i:
- Index-organized tables
- Clustered tables/indexes
- Non-scalar data types
- Chained rows
- LogMiner does not handle direct path insert operations, even though such operations are logged.
From release 9.2:
- You have the option of accessing LogMiner functionality through the Oracle LogMiner Viewer graphical user interface (GUI),
- The LogMiner Viewer is a part of Oracle Enterprise Manager
- By default, all LogMiner tables are created in the SYSTEM tablespace.
- From release 9.2, you can re-create all LogMiner tables in an alternate tablespace using the following command (substitute your own tablespace name):
SQL> EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE ('<new_tablespace_name>');
- LogMiner returns all rows in SCN order unless you have used the COMMITTED_DATA_ONLY option to specify that only committed transactions should be retrieved. SCN order is the order normally applied in media recovery.
4.1 Tracking DDL Statements using option DDL_DICT_TRACKING
4.2 Executing Reconstructed SQL Statements
4.3 Formatting of Returned Data
4.4 Extracting Actual Data Values from Redo Logs
4.5 NULL Returns From the MINE_VALUE Function
4.6 Filtering Data That is Returned
- 4.6.1 Showing Only Committed Transactions
- 4.6.2 Skipping Redo Corruptions
- 4.6.3 Filtering Data By Time
- 4.6.4 Filtering Data By SCN
4.7 The summary of LogMiner settings
4.8 Supplemental Logging
- 4.8.1 Database-Level Identification Key Logging
- 4.8.2 Table-Level Identification Key Logging
4.1 Tracking DDL Statements using option DDL_DICT_TRACKING
- DDL tracking is disabled by default
- To enable it, use the OPTIONS parameter to specify DDL_DICT_TRACKING when you start LogMiner, for example:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS =>DBMS_LOGMNR.DDL_DICT_TRACKING);
Note: Be aware that the DDL_DICT_TRACKING option to the DBMS_LOGMNR.START_LOGMNR procedure is not supported for flat file dictionaries created prior to Oracle9i. If you attempt to use the DDL_DICT_TRACKING option with a LogMiner dictionary extracted to a flat file prior to Oracle9i, the ORA-01330 error (problem loading a required build table) is returned.
4.2 Executing Reconstructed SQL Statements
- By default, SQL_REDO and SQL_UNDO statements are ended with a semicolon. You can disable the semicolon, using option DBMS_LOGMNR.NO_SQL_DELIMITER when you start LogMiner.
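For example, to suppress the trailing semicolon, you could start LogMiner as follows:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS => DBMS_LOGMNR.NO_SQL_DELIMITER);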
4.3 Formatting of returned data
- Sometimes a query can return a large number of columns containing reconstructed SQL statements, which can look odd and be hard to read.
- LogMiner provides the DBMS_LOGMNR.PRINT_PRETTY_SQL option to address this problem. It formats the reconstructed SQL statements as follows, which makes them easier to read:
insert into "SCOTT"."EMP" values
"EMPNO": 5505,
"ENAME": "Parker",
"SAL": 9000,
"DEPTNO": NULL;
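To enable this formatting, specify the option when you start LogMiner:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS => DBMS_LOGMNR.PRINT_PRETTY_SQL);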
4.4 Extracting actual data values from Redo logs
- LogMiner allows you to query data based on actual data values
- LogMiner data extraction from redo logs is performed using two mine functions:
DBMS_LOGMNR.MINE_VALUE
DBMS_LOGMNR.COLUMN_PRESENT
These functions are part of the DBMS_LOGMNR package. The support for these mine functions is provided by the REDO_VALUE and UNDO_VALUE columns in the V$LOGMNR_CONTENTS view. The following is an example of how you could use the MINE_VALUE function:
SQL> SELECT SQL_REDO FROM V$LOGMNR_CONTENTS
WHERE OPERATION = 'UPDATE' AND
DBMS_LOGMNR.COLUMN_PRESENT (REDO_VALUE, 'SCOTT.EMP.SAL') = 1 AND
DBMS_LOGMNR.COLUMN_PRESENT (UNDO_VALUE, 'SCOTT.EMP.SAL') = 1 AND
(DBMS_LOGMNR.MINE_VALUE (REDO_VALUE, 'SCOTT.EMP.SAL') > DBMS_LOGMNR.MINE_VALUE (UNDO_VALUE, 'SCOTT.EMP.SAL'));
4.5 NULL Returns from the MINE_VALUE function
If the MINE_VALUE function returns a NULL value, it means either:
- The specified column is not present in the redo or undo portion of the data, or
- The specified column is present and has a NULL value.
- To distinguish between these two cases, use the DBMS_LOGMNR.COLUMN_PRESENT function which returns 1 if the column is present in the redo or undo portion of the data. Otherwise, it returns 0.
Please see the following sample queries:
SQL> SELECT SQL_REDO FROM V$LOGMNR_CONTENTS
WHERE OPERATION = 'UPDATE' AND
DBMS_LOGMNR.MINE_VALUE (REDO_VALUE, 'HR.EMPLOYEES.SALARY') > DBMS_LOGMNR.MINE_VALUE (UNDO_VALUE, 'HR.EMPLOYEES.SALARY');
Or:
SQL> SELECT SQL_REDO FROM V$LOGMNR_CONTENTS
WHERE OPERATION = 'UPDATE' AND
DBMS_LOGMNR.COLUMN_PRESENT (REDO_VALUE, 'HR.EMPLOYEES.SALARY') = 1 AND DBMS_LOGMNR.COLUMN_PRESENT (UNDO_VALUE, 'HR.EMPLOYEES.SALARY') = 1;
4.6 Filtering Data that is returned
There are several methods you can use to limit the information that is returned to the V$LOGMNR_CONTENTS view, as well as the speed at which it is returned.
Specify these options when you start LogMiner:
- 4.6.1 Showing Only Committed Transactions
- 4.6.2 Skipping Redo Corruptions
- 4.6.3 Filtering Data By Time
- 4.6.4 Filtering Data By SCN
4.6.1 Showing only Committed transactions
Use this option to extract only committed rows.
This enables you to filter out rolled-back transactions, transactions that are in progress, and internal operations.
To enable this option, specify it when you start LogMiner, as follows:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS => DBMS_LOGMNR.COMMITTED_DATA_ONLY);
4.6.2 Skipping Redo Corruptions
This option helps you to skip any corruptions in the redo log files during select operations from the V$LOGMNR_CONTENTS view. For every corrupt redo record encountered, a row is returned that contains the value CORRUPTED_BLOCKS in the OPERATION column, 1343 in the STATUS column, and the number of blocks skipped in the INFO column.
Be aware that the skipped records may include changes to ongoing transactions in the corrupted blocks; such changes will not be reflected in the data returned from the V$LOGMNR_CONTENTS view.
The default is for the select operation to terminate at the first corruption it encounters in the redo log file.
The following example shows how this option works. First, add the redo log files of interest:
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (logfilename => '/usr/oracle/data/db1arch_1_16_482701534.log', options => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR();
SQL> SELECT operation, status FROM V$LOGMNR_CONTENTS;
ERROR at line 3:
ORA-00368: checksum error in redo log block
ORA-00353: log corruption near block 6 change 73528 time 11/06/2002
ORA-00334: archived log: /usr/oracle/data/dbarch1_16_482701534.log
Restart LogMiner, this time specifying the SKIP_CORRUPTION option:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (options => DBMS_LOGMNR.SKIP_CORRUPTION);
SQL> SELECT operation, status, info FROM V$LOGMNR_CONTENTS;
OPERATION STATUS INFO
-------------------------------- ------------ ------------------------------------------
CORRUPTED_BLOCKS 1343 corrupt blocks 4 to 19 skipped
4.7 The summary of LogMiner Options
The following list is a summary of LogMiner settings that you can specify with the OPTIONS parameter to DBMS_LOGMNR.START_LOGMNR:
DICT_FROM_ONLINE_CATALOG
DICT_FROM_REDO_LOGS
CONTINUOUS_MINE
COMMITTED_DATA_ONLY
SKIP_CORRUPTION
NO_SQL_DELIMITER
PRINT_PRETTY_SQL
STARTTIME, ENDTIME
STARTSCN, ENDSCN
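Several of these settings can be combined with + in a single call. The following sketch shows one such combination (the time values are illustrative only):
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR ( -
STARTTIME => TO_DATE('01-Jan-2005 08:00:00', 'DD-Mon-YYYY HH24:MI:SS'), -
ENDTIME => TO_DATE('01-Jan-2005 09:00:00', 'DD-Mon-YYYY HH24:MI:SS'), -
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);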
4.8 Supplemental Logging
The data needed for instance and media recovery is automatically recorded in the redo log files.
However, a redo-based application may require that additional columns be logged in the redo log files. The process of logging these additional columns is called supplemental logging.
In Oracle release 9.0.1, supplemental logging was always on; it was not available at all in releases prior to 9.0.1.
In release 9.2 and 10g, Oracle Database does not provide any supplemental logging by default, which means that the following LogMiner features are NOT supported by default:
- Index clusters, chained rows, and migrated rows
- Direct-path inserts (also require that ARCHIVELOG mode be enabled)
- Extracting the LogMiner dictionary into the redo log files
- DDL tracking
- Generating SQL_REDO and SQL_UNDO with identification key information
- LONG and LOB data types
Therefore, to make full use of LogMiner features, you must enable supplemental logging. The following are examples of situations in which additional columns may be needed:
An application that applies reconstructed SQL statements to a different database must identify the update statement by a set of columns that uniquely identifies the row (for example, a primary key), not by the ROWID shown in the reconstructed SQL returned by the V$LOGMNR_CONTENTS view, because the ROWID of one database will be different from, and therefore meaningless in, another database.
An application may require that the before-image of the whole row be logged, not just the modified columns, so that tracking of row changes is more efficient.
There are two types of supplemental log groups that determine when columns in the log group are logged:
* Unconditional supplemental log groups
The before-images of specified columns are logged any time a row is updated, regardless of whether the update affected any of the specified columns. This is sometimes referred to as an ALWAYS log group.
* Conditional supplemental log groups
The before-images of all specified columns are logged only if at least one of the columns in the log group is updated. Supplemental log groups can be system-generated or user-defined.
In addition to the two types of supplemental logging, there are two levels of supplemental logging, as described in the following sections:
- Database-Level Supplemental Logging
- Table-Level Supplemental Logging
* Minimal Supplemental Logging
Minimal supplemental logging logs the minimal amount of information needed for LogMiner to identify, group, and merge the redo operations associated with DML changes.
It ensures that LogMiner has sufficient information to support chained rows and various storage arrangements, such as cluster tables.
To enable minimal supplemental logging, execute the following statement:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
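To verify that minimal supplemental logging is enabled, you can query V$DATABASE:
SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;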
* Database-Level Identification Key Logging
ALL system-generated unconditional supplemental log group:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
PRIMARY KEY system-generated unconditional supplemental log group: If a table does not have a primary key, but has one or more non-null unique index key constraints or index keys, then one of the unique index keys is chosen for logging as a means of uniquely identifying the row being updated.
If the table has neither a primary key nor a non-null unique index key, then all columns except LONG and LOB are supplementally logged; this is equivalent to specifying ALL supplemental logging for that row.
Therefore, Oracle recommends that when you use database-level primary key supplemental logging, all or most tables be defined to have primary or unique index keys.
To enable primary key logging at the database level, execute the following statement:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
UNIQUE index system-generated conditional supplemental log group:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
FOREIGN KEY system-generated conditional supplemental log group:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
* Table-Level Identification Key Logging
Following command works in 10g and above releases:
SQL> ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
If you run the above SQL statement, then whenever any column in table HR.EMPLOYEES is changed, the entire row containing that column (except columns for LOBs, LONGs, and ADTs) will be placed in the redo log file.
* User-defined unconditional supplemental log groups
SQL> ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG GROUP emp_parttime (EMPLOYEE_ID, LAST_NAME, DEPARTMENT_ID) ALWAYS;
* User-defined conditional supplemental log groups
SQL> ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG GROUP emp_fulltime (EMPLOYEE_ID, LAST_NAME,DEPARTMENT_ID);
- Simple and nested abstract datatypes (ADTs)
- Object Refs
- Collections (nested tables and VARRAYs)
- Index organized tables (IOTs)
- CREATE TABLE AS SELECT of a table with a clustered key
Support for LOB and LONG datatypes in redo log files is available only for redo log files generated on a release 9.2 or later Oracle database. LogMiner does not support the following datatypes and table storage attributes:
- BFILE datatype
- Simple and nested abstract datatypes (ADTs)
- Collections (nested tables and VARRAYs)
- Object refs
- XMLTYPE datatype
- Index-organized tables (IOTs) with LOB columns
- Tables using table compression
V$LOGMNR_CONTENTS
V$LOGMNR_DICTIONARY
V$LOGMNR_LOGS
V$LOGMNR_PARAMETERS
V$LOGMNR_CONTENTS contains information such as:
- Changes made to the database like Insert, Update, Delete and DDL.
- The SCN at which a change was made (SCN Column)
- The SCN at which a change was committed (Commit_SCN)
- The transaction to which a change belongs (XIDUSN, XIDSLT and XIDSQN columns)
- Table, Schema, Username
- SQL_REDO: Reconstructed SQL statement that is equivalent to the SQL used to generate the redo records.
- SQL_UNDO: Reconstructed SQL statement showing the SQL statement needed to undo the change. For DDL operations it shows NULL.
- SEG_OWNER, SEG_NAME, ROW_ID, TABLESPACE, etc.
You can query the appropriate USER_, ALL_, or DBA_LOG_GROUPS data dictionary view to determine whether any supplemental logging has already been specified.
- ALL_LOG_GROUP_COLUMNS (Only applicable in 10g)
Column : LOGGING_PROPERTY VARCHAR2(6) Indicates whether the column or attribute would be supplementally logged (LOG) or not (NO LOG)
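For example, to list the supplementally logged columns for a given schema (HR is used here only as an illustration):
SQL> SELECT OWNER, TABLE_NAME, COLUMN_NAME, LOGGING_PROPERTY FROM ALL_LOG_GROUP_COLUMNS WHERE OWNER = 'HR';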
In Oracle release 9.2.x:
- GV$DATABASE Columns : SUPPLEMENTAL_LOG_DATA_FK, SUPPLEMENTAL_LOG_DATA_ALL
- SUPPLEMENTAL_LOG_DATA_MIN VARCHAR2(3)
- SUPPLEMENTAL_LOG_DATA_PK VARCHAR2(3)
- SUPPLEMENTAL_LOG_DATA_UI VARCHAR2(3)
- V$DATABASE Columns
- SUPPLEMENTAL_LOG_DATA_MIN VARCHAR2(3)
- SUPPLEMENTAL_LOG_DATA_PK VARCHAR2(3)
- SUPPLEMENTAL_LOG_DATA_UI VARCHAR2(3)
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI from v$database;
SUP SUP SUP
--- --- ---
YES NO NO
ALL_LOG_GROUPS describes the log group definitions on the tables accessible to the current user.
Related Views:
- USER_LOG_GROUPS
- ALL_LOG_GROUPS
- DBA_LOG_GROUPS
Column Datatype NULL Description
OWNER VARCHAR2(30) NOT NULL Owner of the log group definition
LOG_GROUP_NAME VARCHAR2(30) NOT NULL Name of the log group definition
TABLE_NAME VARCHAR2(30) NOT NULL Name of the table on which the log group is defined
LOG_GROUP_TYPE VARCHAR2(19) Type of the log group:
- PRIMARY KEY LOGGING
- UNIQUE KEY LOGGING
- FOREIGN KEY LOGGING
- ALL COLUMN LOGGING
- USER LOG GROUP
ALWAYS VARCHAR2(11) Y indicates the log group is logged any time a row is updated; N indicates the log group is logged only when a member column is updated.
GENERATED VARCHAR2(14) Indicates whether the name of the supplemental log group was system generated (GENERATED NAME) or not (USER NAME)
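A sample query against these views (the schema name is illustrative):
SQL> SELECT OWNER, LOG_GROUP_NAME, TABLE_NAME, LOG_GROUP_TYPE, ALWAYS FROM DBA_LOG_GROUPS WHERE OWNER = 'HR';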
8. LogMiner Documentation
1) Oracle8i Administrator's Guide
Chapter-7 : Managing Archived Redo Logs
Topic : Using LogMiner to Analyze Online and Archived Redo Logs
2) Oracle9i Release 2 (9.2) Database Administrator’s Guide
Chapter-9 Using LogMiner to Analyze Redo Logs
3) Oracle10g Database Utilities
Part IV : Other Utilities
Chapter : 19 Using LogMiner to Analyze RedoLogs
This section describes the steps in a typical LogMiner session.
9.1 Enable Supplemental Logging
9.2 Extract the LogMiner Dictionary to a Flat file
9.3 Specify first archive file which you want to mine
9.4 Add Redo Log Files for mining
9.5 Remove redo Log which is added earlier
9.6 Start LogMiner
9.7 Query V$LOGMNR_CONTENTS
9.8 End the LogMiner Session
The LogMiner packages are owned by the SYS schema. Therefore, if you are not connected as user SYS, then you must include SYS. in your calls. For example:
SQL> EXECUTE SYS.DBMS_LOGMNR.START_LOGMNR;
9.1 Enable supplemental logging by issuing the command:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
You can use LogMiner in mount mode as well as open database mode.
9.2 Extract the LogMiner Dictionary to a Flat file
SQL> EXECUTE DBMS_LOGMNR_D.BUILD ('dictionary.ora', '/oracle/database/', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
Note: The directory /oracle/database must be contained in initialisation parameter UTL_FILE_DIR.
9.3 Create a list of redo log files. Specify the NEW option of the
DBMS_LOGMNR.ADD_LOGFILE procedure to signal that this is the beginning of a NEW list.
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '/oracle/logs/log1.f', OPTIONS => DBMS_LOGMNR.NEW);
9.4 If desired, add more redo log files by specifying the ADDFILE option of the DBMS_LOGMNR.ADD_LOGFILE procedure.
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '/oracle/logs/log2.f', OPTIONS => DBMS_LOGMNR.ADDFILE);
The OPTIONS parameter is optional when you are adding additional redo log files.
For example, you could simply enter the following:
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '/oracle/logs/log2.f');
9.5 If desired, remove redo log files by using the DBMS_LOGMNR.REMOVE_LOGFILE procedure.
For example, enter the following to remove the /oracle/logs/log2.f redo log file:
SQL> EXECUTE DBMS_LOGMNR.REMOVE_LOGFILE (LOGFILENAME => '/oracle/logs/log2.f');
9.6 Start LogMiner
After you have created a LogMiner dictionary file and specified which redo log files to analyze, you must start LogMiner.
Oracle recommends that you specify a LogMiner dictionary option. If you do not, LogMiner cannot translate internal object identifiers and datatypes to object names and external data formats. Therefore, it would return internal object IDs and present data as hex data.
Additionally, the MINE_VALUE and COLUMN_PRESENT functions cannot be used without a dictionary.
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (DICTFILENAME =>'/oracle/database/dictionary.ora');
9.7 Now you can query the V$LOGMNR_CONTENTS to see the mined rows.
9.8 You can End LogMiner session using:
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;
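Putting steps 9.1 through 9.8 together, a complete session looks like the following (the file names and paths are examples only):
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> EXECUTE DBMS_LOGMNR_D.BUILD ('dictionary.ora', '/oracle/database/', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '/oracle/logs/log1.f', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '/oracle/logs/log2.f', OPTIONS => DBMS_LOGMNR.ADDFILE);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (DICTFILENAME => '/oracle/database/dictionary.ora');
SQL> SELECT USERNAME, OPERATION, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS;
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;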
[10.1 Case-1 (Extracting the Dictionary to Flat File)]
1.1 Set the init.ora parameter UTL_FILE_DIR to the directory in which you want to create the dictionary file.
Example : utl_file_dir = 'G:\ORA817\LogMinerDIR'
1.2 Now generate the dictionary file using following command:
SQL> EXECUTE DBMS_LOGMNR_D.BUILD ('DICTIONARY817.ORA', 'G:\ORA817\LogMinerDIR', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
1.3 Now prepare the list of the archive files that were generated when the unwanted changes were made.
Example :
SQL> execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => 'F:\ORACLE\ORA92DATA\ORA92\ARCO01.LOG', OPTIONS => DBMS_LOGMNR.NEW);
SQL> execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => 'F:\ORACLE\ORA92DATA\ORA92\ARCO02.LOG', OPTIONS => DBMS_LOGMNR.ADDFILE);
SQL> execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => 'F:\ORACLE\ORA92DATA\ORA92\ARCO03.LOG', OPTIONS => DBMS_LOGMNR.ADDFILE);
SQL> execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => 'F:\ORACLE\ORA92DATA\ORA92\ARCO04.LOG', OPTIONS => DBMS_LOGMNR.ADDFILE);
SQL> execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => 'F:\ORACLE\ORA92DATA\ORA92\ARCO05.LOG', OPTIONS => DBMS_LOGMNR.ADDFILE);
1.4. Now you can start LogMiner as shown below:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (DICTFILENAME => 'G:\ORA817\LogMinerDIR\DICTIONARY817.ORA');
1.5. Now you can query V$LOGMNR_CONTENTS using the following queries:
SQL> select seg_owner, operation, sql_redo, sql_undo from v$logmnr_contents where SEG_NAME = '<table_name>';
SQL> select username, operation, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS where SEG_OWNER = '<schema_name>';
SQL> SELECT (XIDUSN || '.'|| XIDSLT || '.'|| XIDSQN) AS XID, USERNAME ,SQL_REDO,SQL_UNDO AS SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER = 'SCOTT';
[10.2 Case-2 (Extracting the Dictionary from ONLINE CATALOG)]
SQL> execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => 'F:\ORACLE\ORA92DATA\ORA92\REDO01.LOG', OPTIONS => DBMS_LOGMNR.NEW);
SQL> execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => 'F:\ORACLE\ORA92DATA\ORA92\REDO02.LOG', OPTIONS => DBMS_LOGMNR.addfile);
SQL> execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => 'F:\ORACLE\ORA92DATA\ORA92\REDO03.LOG', OPTIONS => DBMS_LOGMNR.addfile);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> select username, operation, SQL_REDO,SQL_UNDO FROM V$LOGMNR_CONTENTS where SEG_OWNER = 'SCOTT';
SQL> EXECUTE DBMS_LOGMNR.end_LOGMNR;
1. Check following things:
- Init.ora parameter UTL_FILE_DIR
- Archive log destination (Check the destination of offline redo logs)
- Archive log mode
- Check the location of online redo logs
2. Dictionary Option
- DICT_FROM_ONLINE_CATALOG
- DICT_FROM_REDO_LOGS
- STORE_IN_FLAT_FILE
3. Adding redo log file to LogMiner
- Add first log file
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '/oracle/logs/log1.f', OPTIONS => DBMS_LOGMNR.NEW);
- Add next files one by one, if you want mine few more files.
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '/oracle/logs/log2.f', OPTIONS => DBMS_LOGMNR.ADDFILE);
Or:
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => ’/oracle/logs/log2.f’);
- If you want to remove file from LogMiner which you added earlier:
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '/oracle/logs/log2.f', OPTIONS => DBMS_LOGMNR.REMOVEFILE);
4. Start LogMiner
- DICT_FROM_ONLINE_CATALOG
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS =>DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG)
Or:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS =>DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY);
- DICT_FROM_REDO_LOGS
If you specify DICT_FROM_REDO_LOGS, LogMiner expects to find a dictionary in the redo logs that you specified with the DBMS_LOGMNR.ADD_LOGFILE procedure.
- STORE_IN_FLAT_FILE
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (DICTFILENAME =>'/oracle/database/dictionary.ora');
5. Filter Options
- COMMITTED_DATA_ONLY
- SKIP_CORRUPTION
- STARTTIME and ENDTIME
- STARTSCN and ENDSCN
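For example, to mine only a specific SCN range (the SCN values shown are placeholders, not real values):
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR ( -
STARTSCN => 621047, -
ENDSCN => 625695, -
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);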
1. How to determine the cause of heavy redo generation
1. Set the following formatting options in SQL*Plus:
set linesize 1000
col username format a10
col OPERATION format a10
col seg_owner format a30
col seg_name format a40
2. select distinct username, SEG_OWNER, seg_name, operation from V$LOGMNR_CONTENTS where SEG_OWNER not in ('SYS','SYSTEM') order by username, SEG_OWNER, seg_name, operation;
3. SELECT OPERATION, COUNT(*) FROM V$LOGMNR_CONTENTS GROUP BY OPERATION;
4. select distinct SQL_REDO, SQL_UNDO from V$LOGMNR_CONTENTS where operation = 'UNSUPPORTED';
Unsupported operations could be due to one of the restrictions mentioned under the following sections:
- Please see 2. Restrictions in 8i, 5. Restrictions in Release 9.2, 6. Restrictions in 10.x.
5. select distinct SEG_NAME FROM V$LOGMNR_CONTENTS where operation = 'UNSUPPORTED' AND SEG_OWNER not in ('SYS','SYSTEM');
- For more information please refer to Note 300395.1, "Using LogMiner, How to determine the cause of lots of redo".
2. How to reproduce the "Version Mismatch" error?
- create table skg (c1 number, c2 varchar2(10));
- insert 2-3 rows.
- add the logfile to LogMiner
- start LogMiner with the DICT_FROM_ONLINE_CATALOG option
- select operation, sql_redo, sql_undo from v$logmnr_contents where SEG_OWNER = 'SKG';
- insert a few more rows, or do an update or delete
- select operation, sql_redo, sql_undo from v$logmnr_contents where SEG_OWNER = 'SKG';
- alter the table
- select operation, sql_redo, sql_undo from v$logmnr_contents where SEG_OWNER = 'SKG';
- perform more DML
- select operation, sql_redo, sql_undo from v$logmnr_contents where SEG_OWNER = 'SKG';
Now, you will get the "Version Mismatch" errors.
3. How to extract data in SCN order from RAC database?
If you have a RAC environment which has three instances and you want to extract data in SCN order using LogMiner from RAC database without missing the data, please do the following steps:
1. First decide your SCN range.
2. Query the following two views to find out which SEQUENCE#s belong to your SCN range.
Run the following query on each instance/node:
SQL> select GROUP#, THREAD#, sequence#, FIRST_CHANGE# from v$log where FIRST_CHANGE# >= <start_scn> and FIRST_CHANGE# <= <end_scn>;
SQL> SELECT THREAD#, SEQUENCE#, FIRST_CHANGE# FROM V$LOG_HISTORY where FIRST_CHANGE# >= <start_scn> and FIRST_CHANGE# <= <end_scn>;
Or:
Run the following queries to get details about all instances from one instance/node:
SQL> select INST_ID, GROUP#, THREAD#, sequence#, FIRST_CHANGE# from gv$log where FIRST_CHANGE# >= <start_scn> and FIRST_CHANGE# <= <end_scn>;
SQL> SELECT INST_ID, THREAD#, SEQUENCE#, FIRST_CHANGE# FROM GV$LOG_HISTORY where FIRST_CHANGE# >= <start_scn> and FIRST_CHANGE# <= <end_scn>;
3. Add all those sequences (archive files) to LogMiner; then you can start it with an SCN range, or start it without an SCN range and later query V$LOGMNR_CONTENTS with an ORDER BY SCN clause.
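For example, after adding the archive files from all threads and starting LogMiner, the rows from all instances can be viewed in SCN order with a query such as:
SQL> SELECT SCN, THREAD#, OPERATION, SQL_REDO FROM V$LOGMNR_CONTENTS ORDER BY SCN;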
4. Known Bugs
Bug.4149779 Gen RDBMS-1010 V1010 (80)
Abstract : ORACLE10G LOGMINER DOES NOT SHOW CORRECT ROWID IF LOG CREATED IN ORACLE9I
Bug.4096561 Gen RDBMS-9205 V9205 (80)
Abstract: LOGMINER CREATES PARTITIONS WITH NOLOGGING OPTION
Bug.4031430 Gen RDBMS-102 V102 (80)
Abstract: ORA-600 [17092] IN LOGMINER BUILDER PROCESS
Bug.3938909 Gen RDBMS-1010 V1010 (80)
Abstract: LOGMINER CANNOT SUPPORT CTAS FROM REDO GENERATED UNDER DIFFERENT WORDSIZE
Bug.3928031 Gen RDBMS-1020 V1020 (80)
Abstract: LOGMINER DOES NOT PROPERLY HANDLE RESTART IN THE FIRST FILE OF A NEW THREAD
Bug.3889289 Gen RDBMS-1020 V1020 (80)
Abstract: LOGMINER DOES NOT SUPPORT REDO GENERATED BY SQL*LOADER (SORTED INDEXES)
Bug.3617572 Gen RDBMS-1020 V1020 (80)
Abstract: CERTAIN INDEXES USED BY LOGMINER ARE PLACED IN THE WRONG TABLESPACE
Bug.3381899 Gen RDBMS-1010 V1010 (80)
Abstract: LOGMINER CANNOT SUPPORT MULTIPLE DDLS COMMITING AT THE SAME SCN
Bug.3171445 Gen RDBMS-1010 V1010 (80)
Abstract: LOGMINER ALLOWS GAP IN REDO LOGS OVERLAPPING THE DICTIONARY DUMP IN RAC
Bug.3113200 Gen RDBMS-1010 V1010 (80)
Abstract: LOGMINER DOES NOT WORK FOR INSERT /*+ APPEND */ INTO INDEX-ORGANIZED TABLE
Copyright © 2003 Oracle Corporation. All rights reserved. Oracle is a registered trademark of Oracle Corporation. Various product and service names referenced herein may be trademarks of Oracle Corporation. All other product and service names mentioned may be trademarks of their respective owners.
Disclaimer: This document is provided for information purposes only and the contents hereof are subject to change without notice. Oracle Corporation does not warrant that this document is error-free, nor does it provide any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. Oracle Corporation specifically disclaims any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without the prior written permission of Oracle Corporation.