Category: Oracle

2013-04-11 16:35:05

Oracle LogMiner, which is part of Oracle Database, enables you to query online and archived redo log files through a SQL interface. Redo log files contain information about the history of activity on a database.

LogMiner Benefits

All changes made to user data or to the database dictionary are recorded in the Oracle redo log files so that database recovery operations can be performed.

Because LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files, it can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis. The following list describes some key capabilities of LogMiner:

1> Pinpointing the time (SCN) of a logical error
Pinpointing when a logical corruption to a database, such as errors made at the application level, may have begun. These might include errors such as deleting the wrong rows because of incorrect values in a WHERE clause, updating rows with incorrect values, dropping the wrong index, and so forth. For example, a user application could mistakenly update a database to give all employees 100 percent salary increases rather than 10 percent increases, or a database administrator (DBA) could accidentally delete a critical system table. It is important to know exactly when an error was made so that you know when to initiate time-based or change-based recovery. This enables you to restore the database to the state it was in just before the corruption. See the LogMiner documentation for details about how to accomplish this. (It seems this part of the functionality could be done more simply with Flashback?)
2> Fine-grained recovery from logical errors
Determining what actions you would have to take to perform fine-grained recovery at the transaction level. If you fully understand and take into account existing dependencies, it may be possible to perform a table-specific undo operation to return the table to its original state. This is achieved by applying the table-specific reconstructed SQL statements that LogMiner provides, in the reverse order from which they were originally issued.
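As a sketch of that idea, the reconstructed undo statements can be pulled from V$LOGMNR_CONTENTS once a LogMiner session has been started; the schema and table names below are hypothetical placeholders:

```sql
-- Hypothetical example: fetch the undo SQL for one table, newest change first,
-- after DBMS_LOGMNR.START_LOGMNR has been run in this session.
SELECT scn, sql_undo
  FROM v$logmnr_contents
 WHERE seg_owner = 'SCOTT'                       -- schema to repair (assumption)
   AND seg_name  = 'DEPT'                        -- table to repair (assumption)
   AND operation IN ('INSERT', 'UPDATE', 'DELETE')
 ORDER BY scn DESC;                              -- apply in reverse order of issue
```

Executing the returned SQL_UNDO statements in that order is what "applying reconstructed statements in reverse" amounts to.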

3> Performance tuning and capacity planning
Performance tuning and capacity planning through trend analysis. You can determine which tables get the most updates and inserts. That information provides a historical perspective on disk access statistics, which can be used for tuning purposes.
4> Post-auditing
Performing postauditing. LogMiner can be used to track any data manipulation language (DML) and data definition language (DDL) statements executed on the database, the order in which they were executed, and who executed them.

Source: http://blog.csdn.net/tianlesoftware/article/details/5604497

LogMiner
LogMiner, available since Oracle 8i, is a tool for analyzing redo log information. It consists of two packages, DBMS_LOGMNR and DBMS_LOGMNR_D (the trailing D stands for dictionary). It can analyze both online redo log files and archived log files. The analysis requires a data dictionary; normally a dictionary file is generated first and then used, and from 10g onward the online data dictionary can be used as well.

LogMiner can also analyze redo log files from other databases, but it must use the data dictionary of the database that produced those logs; otherwise the output is unreadable garbage. In addition, the operating-system platform of the analyzed database should match the platform where the LogMiner database runs, and the database block size should be the same.

LogMiner, supplied with Oracle Database, analyzes the transactions recorded in the redo logs and archived logs.

1) Determine when a logical corruption occurred. Suppose a user mistakenly drops an important table, sales, with DROP TABLE. LogMiner can pinpoint the exact execution time and SCN of the mistake, and a time-based or SCN-based recovery can then fully restore the table's data.

2) Determine the fine-grained logical recovery to perform at the transaction level. Suppose several users executed and committed a series of DML operations on a table, and one user's DML was wrong. LogMiner can retrieve any user's DML operations together with the corresponding UNDO statements; executing the UNDO statements cancels the erroneous operations.

3) Perform follow-up auditing. LogMiner can track all DML, DDL, and DCL operations in an Oracle database, yielding the order in which they were executed, the users who executed them, and related information.

LogMiner is installed by running the following two scripts:

1) Create DBMS_LOGMNR: $ORACLE_HOME/rdbms/admin/dbmslm.sql

SQL> @dbmslm.sql

Package created.

Grant succeeded.

2) Create DBMS_LOGMNR_D: $ORACLE_HOME/rdbms/admin/dbmslmd.sql

SQL> @dbmslmd.sql

Procedure created.

No errors.

Grant succeeded.

PL/SQL procedure successfully completed.

Package created.

1.3  LogMiner Basic Objects

There are four basic objects in a LogMiner configuration that you should be familiar with: the source database, the mining database, the LogMiner dictionary, and the redo log files containing the data of interest:

1) The source database is the database that produces all the redo log files that you want LogMiner to analyze.

2) The mining database is the database that LogMiner uses when it performs the analysis.

3) The LogMiner dictionary allows LogMiner to provide table and column names, instead of internal object IDs, when it presents the redo log data that you request.

LogMiner uses the dictionary to translate internal object identifiers and datatypes to object names and external data formats. Without a dictionary, LogMiner returns internal object IDs and presents data as binary data.

-- The LogMiner dictionary translates internal object ID numbers and datatypes into object names and external data formats. Generate a LogMiner dictionary before analyzing redo and archived logs; otherwise the analysis results will be unreadable.

4) The redo log files contain the changes made to the database or database dictionary.

1.4  LogMiner Configuration Requirements

The following are requirements for the source and mining database, the data dictionary, and the redo log files that LogMiner will mine:

 Source and mining database

1) Both the source database and the mining database must be running on the same hardware platform.

       -- The source and mining databases must run on the same hardware platform;

2) The mining database can be the same as, or completely separate from, the source database.

       -- The mining database can be the source database itself or an entirely separate database;

3) The mining database must run the same release or a later release of the Oracle Database software as the source database.

       -- The mining database's release must not be lower than the source database's;

4) The mining database must use the same character set (or a superset of the character set) used by the source database.

       -- The mining database must use the same character set as the source database (or a superset of it).

LogMiner dictionary

1) The dictionary must be produced by the same source database that generates the redo log files that LogMiner will analyze.

       -- The LogMiner dictionary must be generated on the source database.

 

All redo log files:

1) Must be produced by the same source database.

       -- When analyzing multiple redo and archived logs, they must all come from the same source database;

2) Must be associated with the same database RESETLOGS SCN.

       -- When analyzing multiple redo and archived logs, they must share the same RESETLOGS SCN;

3) Must be from a release 8.0 or later Oracle Database. However, several of the LogMiner features introduced as of release 9.0.1 work only with redo log files produced on an Oracle9i or later database.

       -- The redo and archived logs being analyzed must come from Oracle release 8.0 or later.

LogMiner does not allow you to mix redo log files from different databases or to use a dictionary from a different database than the one that generated the redo log files to be analyzed.
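As a quick sanity check before mining, the character-set, RESETLOGS, and release requirements can be verified with simple dictionary queries, run on both the source and the mining database and compared:

```sql
-- Database character set: the mining database's set must equal (or be a
-- superset of) the source database's.
SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';

-- RESETLOGS SCN: all log files added to one LogMiner session must share this.
SELECT resetlogs_change# FROM v$database;

-- Software release: the mining database must not be older than the source.
SELECT banner FROM v$version WHERE banner LIKE 'Oracle%';
```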

1.5  Supplemental Logging

You must enable supplemental logging before generating log files that will be analyzed by LogMiner.

When you enable supplemental logging, additional information is recorded in the redo stream that is needed to make the information in the redo log files useful to you. Therefore, at the very least, you must enable minimal supplemental logging, as the following SQL statement shows:

      ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

To determine whether supplemental logging is enabled, query the V$DATABASE view, as the following SQL statement shows:

      SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

 If the query returns a value of YES or IMPLICIT, then minimal supplemental logging is enabled.

Redo logs are used for instance recovery and media recovery, and the data those operations need is recorded in the logs automatically. However, redo-based applications may also require other column data to be recorded in the redo logs; recording these additional columns is called supplemental logging.

By default, Oracle Database performs no supplemental logging, so by default LogMiner cannot support the following features:

1) index clusters, chained rows, and migrated rows;

2) direct-path inserts;

3) extraction of the LogMiner dictionary into the redo logs;

4) DDL tracking;

5) SQL_REDO and SQL_UNDO information for key columns;

6) LONG and LOB datatypes.

Therefore, to take full advantage of the features LogMiner provides, supplemental logging must be enabled. Enabling it at the database level looks like this:

SQL> conn /as sysdba

Connected.

SQL> alter database add supplemental log data;

Database altered.

Note: enabling this does not require a restart; the database can remain online.
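Minimal supplemental logging is enough for the walk-through below. As an optional further step, database-wide identification-key logging records primary-key and unique-key columns in every relevant redo record, so reconstructed SQL identifies rows by key rather than by ROWID; a sketch:

```sql
-- Optional: also log primary-key and unique-index columns for every change,
-- so SQL_REDO/SQL_UNDO can identify rows by key instead of ROWID.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;

-- Verify which kinds of key-column logging are enabled.
SELECT supplemental_log_data_pk, supplemental_log_data_ui FROM v$database;
```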

2. A Typical LogMiner Procedure
http://www.itpub.net/thread-1051633-1-1.html

SQL>  @?/rdbms/admin/dbmslm.sql

Package created.

SQL>  @?/rdbms/admin/dbmslmd.sql

Package created.

SQL> alter system set utl_file_dir='/u01/app/oracle/logminer/'  scope=spfile;

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup;
ORACLE instance started.

Total System Global Area  218103808 bytes
Fixed Size                  1218604 bytes
Variable Size              67110868 bytes
Database Buffers          146800640 bytes
Redo Buffers                2973696 bytes
Database mounted.
Database opened.
SQL> show parameter utl_file_dir;

NAME                                 TYPE        VALUE
------------------------------------ ----------- --------------------------------------------------------
utl_file_dir                         string      /u01/app/oracle/logminer/

SQL> alter database add supplemental log data;

Database altered.

SQL> execute dbms_logmnr_d.build(dictionary_filename =>'dictionary.ora',dictionary_location=>'/u01/app/oracle/logminer/');

PL/SQL procedure successfully completed.

SQL> exec dbms_logmnr.add_logfile(options =>dbms_logmnr.new,logfilename =>'/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2013_04_10/o1_mf_1_1_8p9jz21w_.arc');

PL/SQL procedure successfully completed.

SQL> execute dbms_logmnr.add_logfile(options =>dbms_logmnr.addfile,logfilename=>'/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2013_04_10/o1_mf_1_2_8p9k0o22_.arc');

PL/SQL procedure successfully completed.

SQL> execute dbms_logmnr.start_logmnr(dictfilename=>'/u01/app/oracle/logminer/dictionary.ora');

PL/SQL procedure successfully completed.

Viewing the analysis results: there are four views in total, described below:
V$LOGMNR_DICTIONARY ------- the data dictionary file in use
V$LOGMNR_PARAMETERS ------- the parameters currently set for LogMiner
V$LOGMNR_LOGS       ------- the log files being analyzed
V$LOGMNR_CONTENTS   ------- the contents of the log files

Seg_name  : table name
Scn       : SCN number
Sql_redo  : the SQL statement that was executed
Sql_undo  : the SQL statement that reverses sql_redo, used for recovery
Timestamp : the time the SQL was issued
Operation : the type of SQL, one of INSERT, UPDATE, START (set ...), COMMIT (commit), INTERNAL, etc.
Session#  : the number of the session that issued the operation

select operation,sql_redo,sql_undo from v$logmnr_contents;
SELECT sql_redo FROM v$logmnr_contents WHERE username='SYS' AND table_name='TT';
select scn,sql_redo from v$logmnr_contents where seg_owner='SYSTEM' and seg_name='ENDUSER' and upper(operation)='INSERT';
SELECT timestamp,sql_redo FROM v$logmnr_contents WHERE upper(sql_redo) LIKE '%INSERT%';

http://blog.csdn.net/tianlesoftware/article/details/5604497

4.1 Using the Source Database's Data Dictionary (Online Catalog) to Analyze DML

1. First perform some DML and DDL operations, so there is something to analyze below.

SQL> conn /as sysdba

Connected.

SQL> show parameter utl;

NAME                                 TYPE        VALUE                         

------------------------------------ ----------- --------

create_stored_outlines               string                                    

utl_file_dir                         string   

SQL> insert into scott.dept  values('80','Dave','AnQing');

1 row created.

SQL> update scott.dept set loc='shang hai' where deptno=70;

1 row updated.

SQL> commit;

Commit complete.

SQL> delete from scott.dept where deptno=40;

1 row deleted.

SQL> commit;

Commit complete.

SQL> alter table scott.dept add(phone varchar2(32));

Table altered.

SQL> insert into scott.dept values(50,'David','Dai','13888888888');

1 row created.

SQL> commit;

Commit complete.

SQL> alter table scott.dept add(address varchar2(300));

Table altered.

 

2. Switch the online redo log into an archived log, so that only the archived log needs to be analyzed

SQL> alter system switch logfile;

System altered.

 

3. Build the list of logs to analyze:

 

---- add the log file to be analyzed

SQL> execute dbms_logmnr.add_logfile(logfilename=>'D:/oracle/arch/TEST/ARCHIVELOG/2009_08_25/O1_MF_1_32_597FQD7B_.ARC',options=>dbms_logmnr.new);

PL/SQL procedure successfully completed.

--- keep adding files; dbms_logmnr.removefile removes one

SQL> execute dbms_logmnr.add_logfile(logfilename=>'D:/oracle/arch/TEST/ARCHIVELOG/2009_08_25/O1_MF_1_30_597B5P7B_.ARC',options=>dbms_logmnr.addfile);

PL/SQL procedure successfully completed.

 

4. Start LogMiner:

SQL> execute dbms_logmnr.start_logmnr(options=>dbms_logmnr.dict_from_online_catalog);

PL/SQL procedure successfully completed.

 

5. View the analysis results:

SQL> col username format a8

SQL> col sql_redo format a50

SQL> alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';

Session altered.

 

SQL> select username,scn,timestamp,sql_redo from v$logmnr_contents where seg_name='DEPT';

USERNAME        SCN TIMESTAMP           SQL_REDO

-------- ---------- ------------------- -----------------------------------

            1645927 2009-08-25 16:54:56 delete from "SCOTT"."DEPT" where "DEPTNO" = '40' and "DNAME" = 'OPERATIONS' and "LOC" = 'BOSTON' and "PHONE" IS NULL and "ADDRESS" IS NULL and ROWID = 'AAAMfNAAEAAAAAQAAD';

SYS         1645931 2009-08-25 16:54:57 alter table scott.dept add(phone varchar2(32)) ;

SYS         1645992 2009-08-25 16:56:33 alter table scott.dept add(address varchar2(300)) ;

 

6. End LogMiner:

SQL> execute dbms_logmnr.end_logmnr;

PL/SQL procedure successfully completed.

 

 

4.2 Extracting the LogMiner Dictionary to a Flat File to Analyze DDL

 

1. Perform some DDL operations, so there is something to analyze

SQL> conn scott/admin

Connected.

SQL> drop table emp;

Table dropped.

SQL> drop table dept;

Table dropped.

SQL> conn /as sysdba

Connected.

SQL> alter system switch logfile;

System altered.

 

2. To use a dictionary file, check whether the database has utl_file_dir configured; this parameter points to the directory for the dictionary file. After setting it, the database must be restarted

SQL> show user;

USER is "SYS"

SQL> show parameter utl;

NAME                                 TYPE        VALUE                         

------------------------------------ ----------- ------------

create_stored_outlines               string                                    

utl_file_dir                         string                          

SQL> alter system set utl_file_dir='D:/oracle/logminer' scope=spfile;

System altered.

SQL> shutdown immediate

Database closed.

Database dismounted.

ORACLE instance shut down.

SQL> startup

ORACLE instance started.

SQL> show parameter utl

NAME                                 TYPE        VALUE                         

------------------------------------ ----------- -----------

create_stored_outlines               string                                    

utl_file_dir                         string      D:/oracle/logminer     

 

3. Create the dictionary file:

SQL> execute dbms_logmnr_d.build ('dict.ora','D:/oracle/logminer',dbms_logmnr_d.store_in_flat_file);

PL/SQL procedure successfully completed.

 

4. Build the list of logs to analyze:

SQL> execute dbms_logmnr.add_logfile(logfilename=>'D:/oracle/arch/TEST/ARCHIVELOG/2009_08_25/O1_MF_1_32_597FQD7B_.ARC',options=>dbms_logmnr.new);

PL/SQL procedure successfully completed.

 

SQL> execute dbms_logmnr.add_logfile(logfilename=>'D:/oracle/arch/TEST/ARCHIVELOG/2009_08_25/O1_MF_1_30_597B5P7B_.ARC',options=>dbms_logmnr.addfile);

PL/SQL procedure successfully completed.

 

5. Start LogMiner:

SQL> execute dbms_logmnr.start_logmnr(dictfilename=>'D:/oracle/logminer/dict.ora',options=>dbms_logmnr.ddl_dict_tracking);

PL/SQL procedure successfully completed.

 

6. Query the analysis results:

 

SQL> select username,scn,timestamp,sql_redo from v$logmnr_contents  where lower(sql_redo) like '%table%';

USERNAME        SCN TIMESTAMP      SQL_REDO

-------- ---------- -------------- -----------------------------------

            1647538 25-8月-09     ALTER TABLE "SCOTT"."EMP" RENAME CONSTRAINT "PK_EMP" TO "BIN$f/mFjN+nTmaYjrb17YU80w==$0" ;

            1647550 25-8月-09     ALTER TABLE "SCOTT"."EMP" RENAME TO "BIN$E5UujHaTR+uItpLtzN0Ddw==$0" ;

            1647553 25-8月-09     drop table emp AS "BIN$E5UujHaTR+uItpLtzN0Ddw==$0" ;

            1647565 25-8月-09     ALTER TABLE "SCOTT"."DEPT" RENAME CONSTRAINT "PK_DEPT" TO "BIN$3imFL+/1SqONFCB7LoPcCg==$0" ;

            1647571 25-8月-09     ALTER TABLE "SCOTT"."DEPT" RENAME TO "BIN$kYKBLvltRb+vYaT6RkaRiA==$0";

            1647574 25-8月-09     drop table dept AS "BIN$kYKBLvltRb+vYaT6RkaRiA==$0" ;

                                              

Or other queries:

 

SQL> select username,scn,timestamp,sql_redo from v$logmnr_contents where username='SYS';

USERNAME        SCN TIMESTAMP      SQL_REDO

-------- ---------- -------------- -----------------------------------

SYS         1647487 25-8月-09     set transaction read write;

SYS         1647488 25-8月-09     alter user scott account unlock;

SYS         1647490 25-8月-09     Unsupported

SYS         1647492 25-8月-09     commit;

                         

7. End LogMiner:

SQL> execute dbms_logmnr.end_logmnr;

PL/SQL procedure successfully completed.

 

       Note: v$logmnr_contents holds the analyzed log contents, but only for the current session. To keep the analysis results long-term, persist them from the current session with a statement such as create table tablename as select * from v$logmnr_contents.
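A minimal sketch of that persistence step (the table name logmnr_backup is hypothetical):

```sql
-- Persist this session's LogMiner results before calling end_logmnr;
-- v$logmnr_contents is no longer available once the session ends.
CREATE TABLE logmnr_backup AS
  SELECT * FROM v$logmnr_contents;
```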
