Most of this comes from the official documentation :)
4 Create Logical Standby
4.1 Prerequisite Conditions for Creating a Logical Standby Database
Before building the logical standby (Data Guard), two things need to be confirmed: that the data types and storage attributes of the tables are supported, and that table rows in the primary database can be uniquely identified.
4.1.1 Determine Support for Data Types and Storage Attributes for Tables
Data types not supported in 10g:
- BFILE
- Collections (including VARRAYs and nested tables)
- Encrypted columns
- Multimedia data types (including Spatial, Image, and Context)
- ROWID, UROWID
- User-defined types
- XMLType
Unsupported storage type:
- Segment compression storage type
Unsupported PL/SQL packages:
PL/SQL supplied packages that modify system metadata are not supported by SQL Apply, so their effects do not appear on the logical standby. Examples include dbms_java, dbms_registry, dbms_alert, dbms_space_admin, dbms_refresh, dbms_redefinition, dbms_scheduler and dbms_aq.
There is specific support for dbms_job: job execution is suspended on the logical standby database, and jobs cannot be scheduled directly on the standby. However, jobs submitted on the primary are replicated to the standby, and after a switchover or failover the jobs that were originally scheduled on the old primary automatically run on the new primary.
Unsupported tables, sequences, and views:
It is very important to identify the unsupported tables, sequences, and views on the primary database before creating the standby, because objects with unsupported data types, as well as unsupported tables, sequences, and views, are silently skipped by SQL Apply on the primary's redo; no error is reported.
Some schemas are skipped by SQL Apply. To know exactly which schemas are skipped, query the DBA_LOGSTDBY_SKIP view:
SELECT owner FROM dba_logstdby_skip WHERE statement_opt = 'INTERNAL SCHEMA';
To check whether the primary database contains unsupported objects, query the DBA_LOGSTDBY_UNSUPPORTED view:
SELECT DISTINCT owner, table_name FROM dba_logstdby_unsupported
  ORDER BY owner, table_name;

SELECT column_name, data_type FROM dba_logstdby_unsupported
  WHERE owner = 'OE' AND table_name = 'CUSTOMERS';
By default, the following SQL statements are automatically skipped by SQL Apply:
- ALTER DATABASE
- ALTER MATERIALIZED VIEW
- ALTER MATERIALIZED VIEW LOG
- ALTER SESSION
- ALTER SYSTEM
- CREATE CONTROL FILE
- CREATE DATABASE
- CREATE DATABASE LINK
- CREATE PFILE FROM SPFILE
- CREATE MATERIALIZED VIEW
- CREATE MATERIALIZED VIEW LOG
- CREATE SCHEMA AUTHORIZATION
- CREATE SPFILE FROM PFILE
- DROP DATABASE LINK
- DROP MATERIALIZED VIEW
- DROP MATERIALIZED VIEW LOG
- EXPLAIN
- LOCK TABLE
- SET CONSTRAINTS
- SET ROLE
- SET TRANSACTION
Statement Options for Skipping SQL DDL Statements

Statement Option  | SQL Statements and Operations
------------------|-----------------------------------------------------------
ALTER SEQUENCE    | ALTER SEQUENCE
ALTER TABLE       | ALTER TABLE
COMMENT TABLE     | COMMENT ON TABLE table, view, materialized view
                  | COMMENT ON COLUMN table.column, view.column, materialized_view.column
DELETE TABLE      | DELETE FROM table, view
EXECUTE PROCEDURE | CALL
                  | Execution of any procedure or function or access to any variable, library, or cursor inside a package
GRANT DIRECTORY   | GRANT privilege ON directory
                  | REVOKE privilege ON directory
GRANT PROCEDURE   | GRANT privilege ON procedure, function, package
                  | REVOKE privilege ON procedure, function, package
GRANT SEQUENCE    | GRANT privilege ON sequence
                  | REVOKE privilege ON sequence
GRANT TABLE       | GRANT privilege ON table, view, materialized view
                  | REVOKE privilege ON table, view, materialized view
GRANT TYPE        | GRANT privilege ON TYPE
                  | REVOKE privilege ON TYPE
INSERT TABLE      | INSERT INTO table, view
LOCK TABLE        | LOCK TABLE table, view
SELECT SEQUENCE   | Any statement containing sequence.CURRVAL or sequence.NEXTVAL
SELECT TABLE      | SELECT FROM table, view, materialized view
UPDATE TABLE      | UPDATE table, view
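These statement options are values accepted by the stmt parameter of DBMS_LOGSTDBY.SKIP, which lets you skip additional statements or objects on the logical standby (stmt also accepts 'DML' for data changes). A minimal sketch, assuming a hypothetical HR.EMPLOYEES table that should not be maintained on the standby; SQL Apply must be stopped while the skip rules are added:

SQL> alter database stop logical standby apply;
SQL> execute dbms_logstdby.skip('DML', 'HR', 'EMPLOYEES');
SQL> execute dbms_logstdby.skip('ALTER TABLE', 'HR', 'EMPLOYEES');
SQL> alter database start logical standby apply immediate;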
4.1.2 Ensure Table Rows in the Primary Database Can Be Uniquely Identified
Step 1: Find tables without a unique logical identifier in the primary database.
Query the DBA_LOGSTDBY_NOT_UNIQUE view to
display a list of tables that SQL apply may not be able to uniquely identify.
For example:
SELECT owner, table_name FROM dba_logstdby_not_unique
  WHERE (owner, table_name) NOT IN
    (SELECT DISTINCT owner, table_name FROM dba_logstdby_unsupported)
  AND bad_column = 'Y';
Step 2: Add a disabled primary-key RELY constraint.
If your application ensures the rows in a table are unique, you can
create a disabled primary key RELY constraint on the table. This avoids the
overhead of maintaining a primary key on the primary database.
To create a disabled RELY constraint on a
primary database table, use the ALTER TABLE statement with a RELY DISABLE
clause. The following example creates a disabled RELY constraint on a table
named mytab, for which rows can be uniquely identified using the id and name
columns:
ALTER TABLE mytab ADD PRIMARY KEY (id, name) RELY DISABLE;
When you specify the RELY constraint, the
system will assume that rows are unique. Because you are telling the system to
rely on the information, but are not validating it on every modification done
to the table, you must be careful to select columns for the disabled RELY
constraint that will uniquely identify each row in the table. If such
uniqueness is not present, then SQL apply will not correctly maintain the
table.
To improve the performance of SQL Apply, add a unique constraint/index on those columns on the logical standby database so that rows can be identified there. Failure to do so results in full table scans during UPDATE or DELETE statements carried out on the table by SQL Apply.
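One way to add such an index later on the logical standby is to stop SQL Apply and temporarily disable the database guard for the session. A minimal sketch, reusing the hypothetical mytab(id, name) example from above (the index name is illustrative):

SQL> alter database stop logical standby apply;
SQL> alter session disable guard;
SQL> create unique index mytab_uq on mytab (id, name);
SQL> alter session enable guard;
SQL> alter database start logical standby apply immediate;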
4.2 Step-by-Step Instructions for Creating a Logical Standby Database
This section describes the tasks you perform to create a logical standby database. Table 4-2 provides a checklist of the tasks that you perform to create a logical standby database and specifies on which database you perform each task, with a reference to the section that describes the task in more detail.
Table 4-2 Creating a Logical Standby Database: a checklist mapping each task to the database (primary or standby) on which it is performed; the tasks themselves are walked through in Sections 4.2.1 to 4.2.4 below.
4.2.1 Create a Physical Standby Database
Already done earlier.
4.2.2 Stop Redo Apply on the Physical Standby Database
SQL> alter database recover managed standby database cancel;

Database altered.
4.2.3 Prepare the Primary Database to Support a Logical Standby Database
4.2.3.1 Prepare the Primary Database for Role Transitions
Primary unique name:

SYS@liuzhou> show parameter unique

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      prim

SYS@liuzhou>
Standby unique name:

SQL> show parameter unique

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      stdby
log_archive_dest settings on the primary:

SYS@liuzhou> show parameter log_archive_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest                     string
log_archive_dest_1                   string      location=/arch1/ valid_for=(al
                                                 l_logfiles,all_roles) db_uniqu
                                                 e_name=prim
log_archive_dest_10                  string
log_archive_dest_2                   string      service=stdby lgwr async valid
                                                 _for=(online_logfiles,primary_
                                                 role) db_unique_name=stdby
log_archive_dest_3                   string
Setting log_archive_dest_3 for logical standby:
SYS@liuzhou> alter system set log_archive_dest_3='location=/arch2/ valid_for=(standby_logfiles,standby_role) db_unique_name=stdby' scope=spfile;

System altered.

SYS@liuzhou> alter system set log_archive_dest_state_3=enable;

System altered.

SYS@liuzhou> r
  1* select name, value from v$parameter where name='log_archive_dest_3'

NAME                           VALUE
------------------------------ ------------------------------
log_archive_dest_3

SYS@liuzhou> select name, value from v$spparameter where name='log_archive_dest_3';

NAME                           VALUE
------------------------------ ------------------------------
log_archive_dest_3             location=/arch2/ valid_for=(st
                               andby_logfiles,standby_role) d
                               b_unique_name=stdby
4.2.3.2 Build a Dictionary in the Redo Data
A LogMiner dictionary must be built into
the redo data so that the LogMiner component of SQL apply can properly
interpret changes it sees in the redo. As part of building the LogMiner
Multiversioned Data Dictionary, supplemental logging is automatically set up to
log primary key and unique-constraint/index columns. The supplemental logging
information ensures each update contains enough information to logically
identify each row that is modified by the statement.
To build the LogMiner dictionary, issue the
following statement:
SYS@liuzhou> execute dbms_logstdby.build;

PL/SQL procedure successfully completed.
The dbms_logstdby.build procedure waits for all existing transactions to complete, so long-running transactions executed on the primary database will affect the timeliness of this command.
The dbms_logstdby.build procedure uses flashback query to obtain a consistent snapshot of the data dictionary that is then logged in the redo stream. Oracle recommends setting the UNDO_RETENTION initialization parameter to 3600 on both the primary and logical standby databases.
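A minimal sketch of that recommendation (run on both databases; scope=both assumes an spfile is in use):

SQL> alter system set undo_retention=3600 scope=both;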
4.2.4 Transition to a Logical Standby Database
4.2.4.1 Convert to a Logical Standby Database
SQL> alter database recover to logical standby liuzhou;

Database altered.
4.2.4.2 Adjust Initialization Parameters for the Logical Standby Database
SQL> alter system set log_archive_dest_3='location=/arch2/ valid_for=(standby_logfiles,standby_role) db_unique_name=prim' scope=spfile;

System altered.

SQL> alter system set log_archive_dest_state_3=enable;

System altered.
SQL> alter database open resetlogs;

Database altered.

SQL> alter database start logical standby apply immediate;

Database altered.
SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /arch1/
Oldest online log sequence     0
Next log sequence to archive   1
Current log sequence           1
SQL> ed
Wrote file afiedt.buf

  1  select local.thread#, local.sequence# from
  2  (select thread#, sequence# from v$archived_log where dest_id=1)
  3  local where
  4  local.sequence# not in
  5* (select sequence# from v$archived_log where dest=2 and thread# = local.thread#)
SQL> l 5
  5* (select sequence# from v$archived_log where dest=2 and thread# = local.thread#)
SQL> c /dest/dest_id/
  5* (select sequence# from v$archived_log where dest_id=2 and thread# = local.thread#)
SQL> l
  1  select local.thread#, local.sequence# from
  2  (select thread#, sequence# from v$archived_log where dest_id=1)
  3  local where
  4  local.sequence# not in
  5* (select sequence# from v$archived_log where dest_id=2 and thread# = local.thread#)
SQL> r
  1  select local.thread#, local.sequence# from
  2  (select thread#, sequence# from v$archived_log where dest_id=1)
  3  local where
  4  local.sequence# not in
  5* (select sequence# from v$archived_log where dest_id=2 and thread# = local.thread#)

no rows selected
Check the database role:
SQL> select database_role from v$database;

DATABASE_ROLE
----------------
LOGICAL STANDBY
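The data guard setting can also be checked at this point; a small sketch (GUARD_STATUS is a column of V$DATABASE, and on a logical standby it is normally ALL):

SQL> select guard_status from v$database;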
Later on, I monitored it for a while: the alert logs looked normal and log switching worked fine.
Primary:

Tue Sep 27 05:21:57 2011
alter database stop logical standby apply
Tue Sep 27 05:21:57 2011
ALTER DATABASE STOP LOGICAL STANDBY APPLY
Tue Sep 27 05:21:58 2011
LOGSTDBY status: ORA-16128: User initiated stop apply successfully completed
LOGSTDBY status: ORA-16112: log mining and apply stopping
Tue Sep 27 05:21:58 2011
Completed: alter database stop logical standby apply

Tue Sep 27 05:22:15 2011
alter database start logical standby apply
Tue Sep 27 05:22:15 2011
ALTER DATABASE START LOGICAL STANDBY APPLY (liuzhou)
Tue Sep 27 05:22:15 2011
No optional part
Attempt to start background Logical Standby process
LSP0 started with pid=19, OS id=7800
LOGSTDBY status: ORA-16111: log mining and apply setting up
Tue Sep 27 05:22:15 2011
LOGMINER: Parameters summary for session# = 1
LOGMINER: Number of processes = 3, Transaction Chunk Size = 201
LOGMINER: Memory Size = 30M, Checkpoint interval = 150M
Tue Sep 27 05:22:15 2011
Completed: alter database start logical standby apply
Standby:
However, after a while, the standby reported quite a few errors.
~~~~~~~~~~~~
Problem 1: Maximum number of sessions exceeded
~~~~~~~~~~~~
Tue Sep 27 07:02:23 2011
Errors in file /u01/app/oracle/admin/liuzhou/udump/liuzhou_rfs_6073.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-00018: maximum number of sessions exceeded
ORA-00604: error occurred at recursive SQL level 1
ORA-00018: maximum number of sessions exceeded
RFS LogMiner: Encountered error [604] while processing [STAT1] event
Tue Sep 27 07:02:23 2011
Errors in file /u01/app/oracle/admin/liuzhou/udump/liuzhou_rfs_6073.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-00018: maximum number of sessions exceeded
ORA-00604: error occurred at recursive SQL level 1
ORA-00018: maximum number of sessions exceeded
RFS LogMiner: Encountered error [604] while processing [GAP] event
Tue Sep 27 07:02:23 2011
Errors in file /u01/app/oracle/admin/liuzhou/udump/liuzhou_rfs_6073.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-00018: maximum number of sessions exceeded
ORA-00604: error occurred at recursive SQL level 1
ORA-00018: maximum number of sessions exceeded
Tue Sep 27 07:02:25 2011
ksvcreate: Process(m001) creation failed
A quick search suggested that the sessions and processes parameters were set too low.
Quoted advice found online:

"Do select * from v$resource_limit and paste here, but you must not have shut down the database, because the data in that v$ view is from instance startup.
The default value for sessions is 1.1 * processes + 5.
Increase the initialization parameter sessions in init.ora and restart the database."
After increasing these two parameters on the standby, the problem was solved.
A more rigorous approach would also keep the corresponding parameters on the primary in line with the standby; a sketch of the change follows.
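A minimal sketch of checking current utilization and raising the limits (the values 300 and 335 are only illustrative; 335 follows the 1.1 * processes + 5 rule of thumb quoted above):

SQL> select resource_name, current_utilization, max_utilization, limit_value
  2  from v$resource_limit
  3  where resource_name in ('processes', 'sessions');

SQL> alter system set processes=300 scope=spfile;
SQL> alter system set sessions=335 scope=spfile;
SQL> shutdown immediate
SQL> startup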
~~~~~~~~~~~~~~
Problem 2: Log mining not enabled
~~~~~~~~~~~~~~
Primary:

SYS@liuzhou> alter system archive log current;

System altered.

LNS1 started with pid=13, OS id=4382
Sun Sep 25 21:50:30 2011
Thread 1 advanced to log sequence 39
Current log# 1 seq# 39 mem# 0: /u01/app/oracle/oradata/liuzhou/redo01.log
Sun Sep 25 21:50:32 2011
LNS: Standby redo logfile selected for thread 1 sequence 39 for destination LOG_ARCHIVE_DEST_2
Sun Sep 25 21:51:05 2011
ARC1: Standby redo logfile selected for thread 1 sequence 38 for destination LOG_ARCHIVE_DEST_2
~~~~~~~~~

Standby:

Mon Sep 26 19:27:07 2011
RFS[6]: Possible network disconnect with primary database
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[7]: Assigned to RFS process 657
RFS[7]: Identified database type as 'logical standby'
Primary database is in MAXIMUM PERFORMANCE mode
Mon Sep 26 19:27:24 2011
RFS LogMiner: Client enabled and ready for notification
Primary database is in MAXIMUM PERFORMANCE mode
RFS[7]: Successfully opened standby log 4: '/u01/app/oracle/oradata/liuzhou/stdbylog04a.log'
Mon Sep 26 19:28:30 2011
RFS[1]: Successfully opened standby log 3: '/u01/app/oracle/oradata/liuzhou/stdbylog03a.log'
Mon Sep 26 19:28:33 2011
RFS LogMiner: Client enabled and ready for notification
Mon Sep 26 19:28:36 2011
RFS LogMiner: Registered logfile [/arch1/1_38_749293317.arch] to LogMiner session id [1]
SQL> alter database stop logical standby apply;

Database altered.

Mon Sep 26 19:33:16 2011
alter database stop logical standby apply
Mon Sep 26 19:33:16 2011
ALTER DATABASE STOP LOGICAL STANDBY APPLY
Completed: alter database stop logical standby apply

SQL> alter database start logical standby apply immediate;

Database altered.

Mon Sep 26 19:33:56 2011
alter database start logical standby apply immediate
Mon Sep 26 19:33:57 2011
ALTER DATABASE START LOGICAL STANDBY APPLY (liuzhou)
Mon Sep 26 19:33:57 2011
with optional part
IMMEDIATE
LSP0 started with pid=21, OS id=726
Mon Sep 26 19:33:57 2011
Attempt to start background Logical Standby process
LOGSTDBY Parameter: DISABLE_APPLY_DELAY =
LOGSTDBY Parameter: REAL_TIME =
Completed: alter database start logical standby apply immediate
Mon Sep 26 19:33:59 2011
LOGSTDBY status: ORA-16111: log mining and apply setting up
Mon Sep 26 19:33:59 2011
LOGMINER: Parameters summary for session# = 1
LOGMINER: Number of processes = 3, Transaction Chunk Size = 201
LOGMINER: Memory Size = 30M, Checkpoint interval = 150M
LOGMINER: session# = 1, builder process P001 started with pid=23 OS id=730
LOGMINER: session# = 1, preparer process P002 started with pid=24 OS id=732
LOGMINER: session# = 1, reader process P000 started with pid=22 OS id=728
Mon Sep 26 19:34:06 2011
LOGMINER: Begin mining logfile: /arch1/1_32_749293317.arch
Mon Sep 26 19:34:06 2011
LOGMINER: Turning ON Log Auto Delete
LOGSTDBY Analyzer process P003 started with pid=25 OS id=734
Mon Sep 26 19:34:12 2011
LOGMINER: End mining logfile: /arch1/1_32_749293317.arch
Mon Sep 26 19:34:12 2011
LOGMINER: Begin mining logfile: /arch1/1_33_749293317.arch
LOGSTDBY Apply process P005 started with pid=27 OS id=738
LOGSTDBY Apply process P007 started with pid=29 OS id=742
LOGSTDBY Apply process P004 started with pid=26 OS id=736
LOGSTDBY Apply process P006 started with pid=28 OS id=740
LOGSTDBY Apply process P008 started with pid=30 OS id=744
Mon Sep 26 19:34:14 2011
LOGMINER: End mining logfile: /arch1/1_33_749293317.arch
Mon Sep 26 19:34:14 2011
LOGMINER: Begin mining logfile: /arch1/1_34_749293317.arch
Mon Sep 26 19:34:24 2011
LOGMINER: Log Auto Delete - deleting: /arch1/1_32_749293317.arch
Deleted file /arch1/1_32_749293317.arch
Mon Sep 26 19:34:25 2011
LOGMINER: End mining logfile: /arch1/1_34_749293317.arch
Mon Sep 26 19:34:25 2011
LOGMINER: Begin mining logfile: /arch1/1_35_749293317.arch
Mon Sep 26 19:34:25 2011
LOGMINER: End mining logfile: /arch1/1_35_749293317.arch
Mon Sep 26 19:34:25 2011
LOGMINER: Begin mining logfile: /arch1/1_36_749293317.arch
Mon Sep 26 19:34:25 2011
LOGMINER: End mining logfile: /arch1/1_36_749293317.arch
Mon Sep 26 19:34:25 2011
LOGMINER: Begin mining logfile: /arch1/1_37_749293317.arch
Mon Sep 26 19:34:25 2011
LOGMINER: End mining logfile: /arch1/1_37_749293317.arch
Mon Sep 26 19:34:25 2011
LOGMINER: Begin mining logfile: /arch1/1_38_749293317.arch
Mon Sep 26 19:34:25 2011
LOGMINER: Log Auto Delete - deleting: /arch1/1_33_749293317.arch
Deleted file /arch1/1_33_749293317.arch
Mon Sep 26 19:34:25 2011
LOGMINER: Log Auto Delete - deleting: /arch1/1_34_749293317.arch
Deleted file /arch1/1_34_749293317.arch
Mon Sep 26 19:34:37 2011
LOGMINER: End mining logfile: /arch1/1_38_749293317.arch
Mon Sep 26 19:34:37 2011
LOGMINER: Begin mining logfile: /u01/app/oracle/oradata/liuzhou/stdbylog04a.log
SQL> select sequence#,first_change#,next_change#,timestamp,applied from dba_logstdby_log;

 SEQUENCE# FIRST_CHANGE# NEXT_CHANGE# TIMESTAMP APPLIED
---------- ------------- ------------ --------- --------
        32        349188       354354 27-SEP-11 CURRENT
        33        354354       357832 24-SEP-11 NO
        34        357832       370418 25-SEP-11 NO
        35        370418       371936 25-SEP-11 NO
        36        371936       371941 25-SEP-11 NO
        37        371941       371944 25-SEP-11 NO
        38        371944       390090 26-SEP-11 NO

7 rows selected.
SQL> select sequence#,first_change#,next_change#,timestamp,applied from dba_logstdby_log;

 SEQUENCE# FIRST_CHANGE# NEXT_CHANGE# TIMESTAMP APPLIED
---------- ------------- ------------ --------- --------
        35        370418       371936 25-SEP-11 YES
        36        371936       371941 25-SEP-11 YES
        37        371941       371944 25-SEP-11 YES
        38        371944       390090 26-SEP-11 YES
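After restarting SQL Apply, the registered logs were mined and applied as expected. To keep watching apply progress afterwards, a query such as the following can be used (a sketch; APPLIED_SCN and NEWEST_SCN are columns of DBA_LOGSTDBY_PROGRESS, and the gap between them should shrink over time):

SQL> select applied_scn, newest_scn from dba_logstdby_progress;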