Category: Oracle

2011-05-17 11:25:11

Knowledge of Oracle's internal structures is necessary for advanced performance tuning and troubleshooting, and it is also what makes Oracle dump files intelligible. Oracle uses hidden parameters as well; it is advisable to know them but not to play with them. Understanding the X$ tables and kernel subsystems is required for advanced performance tuning, and internals knowledge helps a lot in understanding rollback segment (RBS) functionality and online backup functionality. Oracle's documentation on tuning the database, and books such as 'Oracle8i Backup & Recovery' by Rama Velpuri and Adkoli and 'Oracle8i Internal Services for Waits, Latches, Locks, and Memory' by Steve Adams, deal with Oracle internals contextually; a good reading helps, and the Metalink documents also help in understanding them.

As I understand it, the following aspects are essential to a basic knowledge of the internals: locks, latches, enqueues, semaphores, controlfile structures, redolog files, and I/O. The list is not exhaustive.
LOCKS AND ENQUEUES
Enqueues are locking mechanisms that support queuing and concurrency: requests are queued and serviced in first-in-first-out (FIFO) order. Locks are mechanisms that prevent destructive interaction between transactions accessing the same resource.
Resources include two general types of objects: user objects, such as tables and rows (structures and data), and system objects not visible to users, such as shared data structures in memory and data dictionary rows.
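The FIFO servicing of lock requests can be sketched as a toy model. This Python sketch is illustrative only; the Enqueue class and session names are hypothetical, not Oracle structures:

```python
from collections import deque

class Enqueue:
    """Toy model of FIFO lock servicing on a single resource."""
    def __init__(self):
        self.holder = None          # session currently holding the lock
        self.waiters = deque()      # first-in-first-out wait queue

    def request(self, session):
        if self.holder is None:
            self.holder = session          # free: granted immediately
        else:
            self.waiters.append(session)   # busy: queued in arrival order

    def release(self):
        # hand the lock to the next waiter in FIFO order, if any
        self.holder = self.waiters.popleft() if self.waiters else None

q = Enqueue()
for s in ("holder", "waiter1", "waiter2"):
    q.request(s)
q.release()   # "waiter1" is serviced before "waiter2"
```

The point of the sketch is only the ordering guarantee: waiters are granted the resource strictly in arrival order.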

This script lists the users currently holding locks that block other users, the users being blocked, and the offending objects (tables, etc.).


set heading on

ttitle 'User Blocking and Waiting for Other Users'

select distinct o.object_name,
       sh.username||'('||sh.sid||')' "Holder",
       sw.username||'('||sw.sid||')' "Waiter",
       decode(lh.lmode, 1, 'null', 2, 'row share',
                        3, 'row exclusive', 4, 'share',
                        5, 'share row exclusive', 6, 'exclusive') "Lock Type"
from   all_objects o,
       v$session sw,
       v$lock lw,
       v$session sh,
       v$lock lh
where  lh.id1 = o.object_id
and    lh.id1 = lw.id1
and    sh.sid = lh.sid
and    sw.sid = lw.sid
and    sh.lockwait is null
and    sw.lockwait is not null
and    lh.type = 'TM'
and    lw.type = 'TM'
/


This script shows the operating system process that is being locked out.

select ses.username||'('||sid||')' users, acc.owner owner,
acc.object object, ses.lockwait, prc.osuser os_process
from v$process prc, v$access acc, v$session ses
where prc.addr = ses.paddr
and ses.sid = acc.sid
and ses.lockwait is not null;

This script shows the SQL that people who are currently being locked are trying to run.

select ses.username||'('||sid||')' users, acc.owner owner,
acc.object object, ses.lockwait, txt.sql_text sqltext
from v$sqltext txt, v$access acc, v$session ses
where txt.address = ses.sql_address
and txt.hash_value = ses.sql_hash_value
and ses.sid = acc.sid
and ses.lockwait is not null;


LATCHES

Latches are an internal Oracle mechanism used to protect data structures in the SGA from simultaneous access. Atomic hardware instructions such as TEST-AND-SET are used to implement them. Latches are more restrictive than locks in that they are always exclusive. Latches are never queued: a process spins or sleeps until it obtains the latch, or times out.
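The spin-then-sleep behaviour can be sketched in Python, using Lock.acquire(blocking=False) as a stand-in for an atomic TEST-AND-SET. All tuning values here are illustrative, not Oracle's:

```python
import threading
import time

def acquire_latch(latch, spins=2000, max_sleeps=10):
    """Sketch of latch acquisition: spin on a try-acquire primitive
    (standing in for TEST-AND-SET), then sleep with growing backoff,
    and finally time out."""
    for sleep_count in range(max_sleeps):
        for _ in range(spins):                 # spin phase
            if latch.acquire(blocking=False):  # atomic try-get
                return True
        time.sleep(0.001 * (sleep_count + 1))  # sleep phase, longer each round
    return False                               # timed out

latch = threading.Lock()
got = acquire_latch(latch)                           # free latch: acquired at once
blocked = acquire_latch(latch, spins=5, max_sleeps=2)  # already held: times out
latch.release()
```

Note the contrast with enqueues: there is no wait queue at all; every contender simply retries until it wins or gives up.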

SEMAPHORES

Semaphores are a UNIX operating system facility used to control waiting. Oracle uses semaphores on HP-UX and Solaris to synchronize shadow processes and background processes.

The number of semaphores used by Oracle is equal to the number of processes defined in the initialization parameter file.

On AIX it is not a semaphore but a post/wait driver that serializes the tasks.
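The post/wait pattern that both mechanisms implement can be illustrated with Python's threading.Semaphore, as a toy stand-in for the OS facility; all names are illustrative:

```python
import threading

# Post/wait with a semaphore: one process sleeps until another posts.
sem = threading.Semaphore(0)   # count 0: acquirers block until a release
events = []

def waiter():
    sem.acquire()              # wait(): sleep until posted
    events.append("woken")

t = threading.Thread(target=waiter)
t.start()
events.append("posting")
sem.release()                  # post(): wake exactly one waiter
t.join()
```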

Hidden Parameters

The following SQL lets you find all the hidden parameters for your database. It must be executed as the SYS user; if executed as SYSTEM or any other user with the DBA role, it may fail with ORA-00942: table or view does not exist.

select *
from SYS.X$KSPPI
where substr(KSPPINM,1,1) = '_';

The following query displays parameter names with their current value:

select a.ksppinm "Parameter",
b.ksppstvl "Session Value",
c.ksppstvl "Instance Value"
from x$ksppi a,
x$ksppcv b,
x$ksppsv c
where a.indx = b.indx and a.indx = c.indx
and substr(ksppinm,1,1)='_'
order by a.ksppinm;

X$ Tables

To get a list of the X$ tables, the following query may be used:

select distinct table_name from V$INDEXED_FIXED_COLUMN where table_name like 'X$%';

The list may not be complete or accurate, but it represents an attempt to figure out what information these tables contain. One should generally not write queries against these tables, as they are internal to Oracle and Oracle may change them without any prior notification. You may get a list of 178 tables in Oracle 9.0.1, 155 in Oracle 8.1.6, and so on.

SUB-SYSTEMS

Kernel Subsystems:


OPI Oracle Program Interface
KK Compilation Layer - Parse SQL, compile PL/SQL
KX Execution Layer - Bind and execute SQL and PL/SQL
K2 Distributed Execution Layer - 2PC handling
NPI Network Program Interface
KZ Security Layer - Validate privs
KQ Query Layer
RPI Recursive Program Interface
KA Access Layer
KD Data Layer
KT Transaction Layer
KC Cache Layer
KS Services Layer
KJ Lock Manager Layer
KG Generic Layer
KV Kernel Variables (e.g. X$KVIS and X$KVII)
S or ODS Operating System Dependencies


Setting up events:

The following events are frequently used by DBAs and Oracle Support to diagnose problems:

10046 trace name context forever, level 4
Trace SQL statements and show bind variables in trace output.
10046 trace name context forever, level 8
This shows wait events in the SQL trace files
10046 trace name context forever, level 12

This shows both bind variable names and wait events in the SQL trace files

1401 trace name errorstack, level 12

1401 trace name errorstack, level 4

1401 trace name processstate

These dump out trace information if an ORA-01401 "inserted value too large for column" error occurs. The 1401 can be replaced by any other Oracle Server error code that you want to trace.

The following events are examples only. They may be version specific, so please check with Oracle before using them:

10210 trace name context forever, level 10
10211 trace name context forever, level 10
10231 trace name context forever, level 10
These events check for database block corruption
10049 trace name context forever, level 2
Memory protect cursor
10210 trace name context forever, level 2
Data block check
10211 trace name context forever, level 2
Index block check
10235 trace name context forever, level 1
Memory heap check
10262 trace name context forever, level 300
Allow 300 bytes memory leak for connections

Use the Unix oerr command to get the description of an event; for example, type "oerr ora 10053" at the command prompt to get the details of event 10053.
How can one dump internal database structures?
The following (mostly undocumented) commands can be used to obtain information about internal database structures.

-- Dump control file contents

alter session set events 'immediate trace name CONTROLF level 10'
/

-- Dump file headers
alter session set events 'immediate trace name FILE_HDRS level 10'
/

-- Dump redo log headers
alter session set events 'immediate trace name REDOHDR level 10'
/

-- Dump the system state
alter session set events 'immediate trace name SYSTEMSTATE level 10'
/

-- Dump the process state
alter session set events 'immediate trace name PROCESSSTATE level 10'
/

-- Dump Library Cache details
alter session set events 'immediate trace name library_cache level 10'
/

-- Dump optimizer statistics whenever a SQL statement is parsed (hint: change statement or flush pool)
alter session set events '10053 trace name context forever, level 1'
/

-- Dump a database block (File/ Block must be converted to DBA address)
-- Convert file and block number to a DBA (database block address). Eg:

variable x number
exec :x := dbms_utility.make_data_block_address(1,12);

print x

alter session set events 'immediate trace name blockdump level 50360894'
/
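dbms_utility.make_data_block_address packs a relative file number and a block number into a single DBA; in Oracle 8 and later the low 22 bits hold the block number and the higher bits the file number. A Python sketch of the packing (illustrative, not Oracle code):

```python
BLOCK_BITS = 22   # low 22 bits of a DBA hold the block number (Oracle 8+)

def make_data_block_address(file_no, block_no):
    """Pack (relative file#, block#) into a DBA, mirroring
    dbms_utility.make_data_block_address."""
    return (file_no << BLOCK_BITS) | block_no

def split_dba(dba):
    """Recover (file#, block#) from a DBA, like
    dbms_utility.data_block_address_file / data_block_address_block."""
    return dba >> BLOCK_BITS, dba & ((1 << BLOCK_BITS) - 1)

dba = make_data_block_address(1, 12)   # the (1,12) example above
origin = split_dba(50360894)           # decode the blockdump level in the text
```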

These examples are taken from


Oracle Internals - 2 (Oracle Controlfile Structures )

This document tries to discuss the controlfile and its structures as part of the discussions initiated in Oracle Internals -1.

Readers should note that in preparing this document many books were referred to, and information was collected from many web sites that is not generally found in the Oracle documentation. Not all sources are listed here, but they are acknowledged with honor and dignity.

What Does a Controlfile Consist Of?

The controlfile has to be looked at in two parts: part one is the header, and part two is its contents/structures.

The header block contains (1) the controlfile block size and (2) the number of blocks in the controlfile. When Oracle mounts the database, it reads the header and checks these values. If they do not match, Oracle returns an error message indicating corruption of the controlfile.

What is the block size of Controlfile?

The initialization parameter file for every database determines the default block size via the db_block_size parameter. This Oracle data block size is also the block size of the controlfile.

How many Blocks are there in the Controlfile?

This is tallied against the size of the controlfile in bytes at the OS level. Oracle data blocks are multiples of OS blocks, and OS block sizes are determinable and specific, as documented by the OS vendors. So comparing the controlfile's size as computed from its header with its size as shown at the OS level tallies the number of controlfile blocks.

NOTE:
This default size cannot be changed once the database is created. If the Oracle data block size is to be modified, the database must be exported and imported into a database created with a different data block size. There are several restrictions, varying by platform and Oracle version, so changing the data block size after the database is created is not advised. Oracle9i permits tablespaces to be created with different block sizes, with some restrictions: the SYSTEM, TEMP and RBS/UNDO tablespaces cannot use a block size other than the default.
After the header block, all controlfile blocks occur in pairs: two physical blocks represent each logical block. This is essential to the mechanism of controlfile transactions.


Here is an extract from Steve Adams on controlfile transactions:

Sessions must hold an exclusive lock on the CF enqueue for the duration of controlfile transactions. This prevents concurrent controlfile transactions, and in-flux controlfile reads, because a shared lock on the CF enqueue is needed for controlfile reads. However, there is also a need for recoverability should a process, instance or system failure occur during a controlfile transaction.

For the first record section of the controlfile, the database information entry section, this requirement is trivial, because the database information entry only takes about 210 bytes and is therefore guaranteed to always fit into a single controlfile block that can be written atomically. Therefore changes to the database entry can be implicitly committed as they are written, without any recoverability concerns.

Maintaining all the information in duplicate provides recoverability for changes to the other controlfile records sections. Two physical blocks represent each logical block. One contains the current information, and the other contains either an old copy of the information, or a pending version that is yet to be committed. To keep track of which physical copy of each logical block contains the current information, Oracle maintains a block version bitmap with the database information entry in the first record section of the controlfile.

To read information from the controlfile, a session must first read the block version bitmap to determine which physical block to read. Then if a change must be made to the logical block, the change is first written to the alternate physical block for that logical block, and then committed by atomically rewriting the block containing the block version bitmap with the bit representing that logical block flipped. When changes need to be made to multiple records in the same controlfile block, such as when updating the checkpoint SCN in all online datafiles, those changes are buffered and then written together. Note that each controlfile transaction requires at least 4 serial I/O operations against the controlfile, and possibly more if multiple blocks are affected, or if the controlfile is multiplexed and asynchronous I/O is not available. So controlfile transactions are potentially expensive in terms of I/O latency.

Whenever a controlfile transaction is committed, the controlfile sequence number is incremented. This number is recorded with the block version bitmap and database information entry in the first record section of the controlfile. It is used in the cache header of each controlfile block in place of an SCN to detect possible split blocks from hot backups. It is also used in queries that perform multiple controlfile reads to ensure that a consistent snapshot of the controlfile has been seen. If not, an ORA-00235 error is returned.

The controlfile transaction mechanism is not used for updates to the checkpoint heartbeat. Instead the size of the checkpoint progress record is overstated as half of the available space in a controlfile block, so that one physical block is allocated to the checkpoint progress record section per thread. Then, instead of using pairs of physical blocks to represent each logical block, each checkpoint progress record is maintained in its own physical block so that checkpoint heartbeat writes can be performed and committed atomically without affecting any other data.
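The mechanism described above (write the alternate physical copy, then commit by atomically rewriting the block version bitmap) can be modelled as a toy shadow-paging scheme. This Python sketch is illustrative only; all names are hypothetical:

```python
class ControlfileSketch:
    """Toy model of controlfile transactions: each logical block is
    backed by a pair of physical blocks, and a block version bitmap
    says which copy is current."""
    def __init__(self, nblocks):
        self.phys = [[None, None] for _ in range(nblocks)]  # physical pairs
        self.bitmap = [0] * nblocks    # current-copy bit per logical block

    def read(self, lbn):
        # readers consult the bitmap to pick the current physical copy
        return self.phys[lbn][self.bitmap[lbn]]

    def write(self, lbn, data):
        alt = 1 - self.bitmap[lbn]
        self.phys[lbn][alt] = data     # 1) write the alternate copy first
        self.bitmap[lbn] = alt         # 2) the atomic bitmap flip commits

cf = ControlfileSketch(4)
cf.write(2, "v1")
cf.write(2, "v2")   # "v1" survives in the other slot until the next write
```

A crash between steps 1 and 2 leaves the bitmap pointing at the old, consistent copy, which is the recoverability property the extract describes.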


The view V$CONTROLFILE_RECORD_SECTION contains all the structures a controlfile has in it.
To know the addresses of these structures, the X$KCCRS columns can be mapped. The SQL must be executed as the SYS user; otherwise a "table or view does not exist" error may pop up.

select addr,
indx,
decode (indx,0,'DATABASE',
1,'CKPT PROGRESS',
2,'REDO THREAD',
3,'REDO LOG',
4,'DATAFILE',
5,'FILENAME',
6,'TABLESPACE',
7,'TEMPORARY FILENAME',
8,'RMAN CONFIGURATION',
9,'LOG HISTORY',
10,'OFFLINE RANGE',
11,'ARCHIVED LOG',
12,'BACKUP SET',
13,'BACKUP PIECE',
14,'BACKUP DATAFILE',
15,'BACKUP REDOLOG',
16,'DATAFILE COPY',
17,'BACKUP CORRUPTION',
18,'COPY CORRUPTION',
19,'DELETED OBJECT',
20,'PROXY COPY',
21,'RESERVED4'),
INST_ID,
RSLBN,
RSRSZ,
RSNUM,
RSNUS,
RSIOL,
RSILW,
RSRLW
from sys.x$kccrs;

This may throw enough light for the reader to understand the role of the controlfile when RMAN is used with no catalog. How updates to the controlfile take place is not covered in this part of the paper, as it is thought fit to discuss it in the redolog files and backup-and-recovery section.

Size Of The Controlfile

Space in the controlfile is reused by overwriting information already contained in it, and the user can exercise a degree of control over this. The initialization parameter control_file_record_keep_time sets the minimum number of days that must elapse before a reusable controlfile record slot can be reused; the default is 7 days.

If all the slots in a record section are in use and that number of days has not yet elapsed since the timestamp on the earliest entry, then Oracle dynamically expands the record section (and thus the controlfile too) to make more slots available, up to a maximum of 65535 slots per section, or the controlfile size limit. (The controlfile size limit is based on the number of blocks that can be represented in the block version bitmap, and is thus most unlikely to be reached.) Informational "kccrsz" messages about the dynamic expansion of the controlfile (or the failure to do so) may be seen in the instance's alert log.

The control_file_record_keep_time parameter can also be set to zero to prevent keep-time-related controlfile expansion, if that suits the database's maintenance requirements. Controlfile backups are advocated along with datafile backups because the resizing takes place under the protection of the CF enqueue: if an instance failure or a system failure occurs during a resize, the controlfile may get corrupted.
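The reuse-or-expand decision described above can be sketched as follows. The function and structure names are hypothetical; only the 7-day default and the 65535-slot cap come from the text:

```python
KEEP_SECONDS = 7 * 86400   # control_file_record_keep_time default: 7 days

def place_record(slots, record, now, keep=KEEP_SECONDS, max_slots=65535):
    """Toy model of a reusable record section: reuse the oldest expired
    slot, otherwise expand the section up to the per-section limit."""
    for i, (stamp, _) in enumerate(slots):
        if now - stamp >= keep:
            slots[i] = (now, record)     # slot older than keep time: reuse
            return "reused"
    if len(slots) < max_slots:
        slots.append((now, record))      # all slots fresh: grow ("kccrsz")
        return "expanded"
    raise RuntimeError("record section full")   # size limit reached

section = [(0, "old entry")]
outcome = place_record(section, "new entry", now=8 * 86400)
```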

The contents of the current controlfile can be dumped in text form to a process trace file in the user_dump_dest directory using the CONTROLF dump. For example, the following is issued against the database from a user session:

alter session set events 'immediate trace name controlf level 3';

(This is for 8.1.6.3 version)
DUMP OF CONTROL FILES, Seq # 22327 = 0x5737
FILE HEADER:
Software vsn=135266304=0x8100000, Compatibility Vsn=134217728=0x8000000
Db Id=2607436010=0x9b6a50ea, Db Name='SRINIVAS'
Control Seq=22327=0x5737, File size=364=0x16c
File Number=0, Blksiz=8192, File Type=1 CONTROL

The above is part of the header information produced by that SQL statement.

(This is against the 9.0.1.3.1 version)
DUMP OF CONTROL FILES, Seq # 10624 = 0x2980
FILE HEADER:
Software vsn=150994944=0x9000000, Compatibility Vsn=134217728=0x8000000
Db Id=3228358760=0xc06cd868, Db Name='KNOW9I'
Control Seq=10624=0x2980, File size=366=0x16e
File Number=0, Blksiz=4096, File Type=1 CONTROL
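The header fields in the two dumps make the earlier tally concrete: File size (in blocks) multiplied by Blksiz should equal the controlfile's size at the OS level. A quick check (Python, purely arithmetic):

```python
def controlfile_bytes(file_size_blocks, blksiz):
    """Expected OS-level size of a controlfile from its header fields."""
    return file_size_blocks * blksiz

size_8163 = controlfile_bytes(364, 8192)   # 8.1.6.3 dump: 364 blocks of 8K
size_9013 = controlfile_bytes(366, 4096)   # 9.0.1.3.1 dump: 366 blocks of 4K
```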


Oracle Internals - 3 (Redolog Files)

Oracle has three important structures that constitute the database, and the Oracle internals maintain their structural details: (1) datafiles, (2) controlfiles and (3) redolog files.

Oracle allows the multiplexing of controlfiles and redolog files. The user defines the multiplexed controlfile destinations while creating the initialization parameter file for the proposed database, and the 'CREATE DATABASE' command defines the redolog groups, their members and their destinations.

Multiplexing, mirroring or otherwise securing the database files is done with additional software: RAID to stripe, mirror, set parity and/or duplex; EMC to replicate the datafiles; and other software that can copy or duplex datafiles. Oracle provides the capability of backing up the datafiles, or any set of files identified as a backup set, to two destinations using RMAN, by identifying different channels through parallelization or by setting duplex levels.

As the aim of this article is to discuss Oracle internals, these issues receive only a passing reference here; they are discussed at the appropriate places.

Redo Log Buffers

The log_buffer initialization parameter defines the size, in bytes, of the redo buffers allocated in the SGA. Although the size of redo entries is measured in bytes, LGWR writes the redo to the log files on disk in blocks. The size of redo log blocks is fixed in the Oracle source code and is operating system specific. Oracle's documentation uses the term "operating system block size" to refer to the log block size. Normally it is the smallest unit of I/O supported by the operating system for raw I/O, but on some operating systems it is the smallest possible unit of file-system-based I/O.

The default log block sizes vary by operating system, and LGWR writes the redo buffers to the log files in those respective sizes. On some operating systems the block size is modifiable.

The log block size is the unit for the settings of the log_checkpoint_interval, _log_io_size and max_dump_file_size parameters, so it is an important constant to know. To find your log block size (which normally matches the operating system block size), use the following SQL, logged in as the SYS user:

select max(lebsz) from sys.x$kccle;
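Because LGWR writes whole log blocks, a given amount of redo occupies the ceiling of its byte count divided by the log block size, with the last block padded out. A sketch; the 512-byte default below is a common log block size, not universal, so check the query above on your platform:

```python
def log_blocks_needed(redo_bytes, log_block_size=512):
    """Fixed-size log blocks LGWR must write for a given amount of redo;
    a partial final block still costs one full block write."""
    return -(-redo_bytes // log_block_size)   # ceiling division

one_write = log_blocks_needed(100)   # 100 bytes of redo: one full block
```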

The value of log_buffer (as set in the parameter file) should be reflected in the V$SGA and V$SGASTAT views under the name 'log_buffer'. The following SQL statements can be issued to check whether the totals match:

select sum(value) from v$sga;
select sum(bytes) from v$sgastat;

The discrepancy, if it exists, is because of the log buffer guard pages.

On platforms that support memory protection there are guard pages on each side of the log buffer. These are one memory protection unit in size - normally 4K or 8K. Oracle uses an mprotect() system call during instance startup to set the permissions on the guard pages to PROT_NONE. Thus any attempt to read from or write to either of the guard pages will return the EACCES error and cause the process to abort, even if the effective user id of the process is oracle (Steve).

Up to 7.x there was a perfect match between these two. As Steve says, if the difference is 4K or 8K, it is because of the log buffer guard pages. If it exceeds 8K, what is it? The answer is still being sought. Here is one example:

SQL> select sum(value) from v$sga;

SUM(VALUE)
----------
137020380

SQL> select sum(bytes) from v$sgastat;

SUM(BYTES)
----------
136990084

SQL> select (137020380-136990084)/1024 from dual;

(137020380-136990084)/1024
--------------------------
29.5859375
