Category: Oracle
2008-06-16 17:10:21
Oracle9i Real Application Clusters (RAC) on HP-UX
Authors: Rebecca Kühn, Rainer Marekwia, HP/Oracle Cooperative Technology Center
Co-Authors: Sandy Gruver, HP Alliance Team US; Troy Anthony, Oracle RAC Development
Contents:
1. Module Objectives
2. Overview: What is Oracle9i Real Application Clusters?
3. Oracle 9i Real Application Clusters: Cache Fusion technology
4. New HP Cluster Interconnect technology
5. HP/Oracle Hardware and Software Requirements
5.1 General Notes
5.2 System Requirements
5.3 HP-UX Operating System Patches
5.4 Kernel parameters
5.5
6.
6.1
6.2
6.3
6.4
6.5
6.6
Once Install is selected, the OUI installs the Oracle RAC software onto the local node and then copies the software to the other nodes selected earlier. This will take some time. During the installation process, the OUI does not display messages indicating that components are being installed on other nodes; I/O activity may be the only indication that the process is continuing.
1. Module Objectives
Purpose
This module focuses on what Oracle9i Real Application Clusters (RAC) is and how it can be properly configured on HP-UX to tolerate failures with minimal downtime. Oracle9i Real Application Clusters is an important Oracle9i feature that addresses high availability and scalability issues.
Objectives
Upon completion of this module, you should be able to:
2. Overview: What is Oracle9i Real Application Clusters?
Oracle9i Real Application Clusters is a computing environment that harnesses the processing power of multiple, interconnected computers. Oracle9i Real Application Clusters software and a collection of hardware known as a "cluster" unite the processing power of each component into a single, robust computing environment. A cluster generally comprises two or more computers, or "nodes."
In Oracle9i Real Application Clusters (RAC) environments, all nodes concurrently execute transactions against the same database. Oracle9i Real Application Clusters coordinates each node's access to the shared data to provide consistency and integrity.
Oracle9i Real Application Clusters serves as an important component of robust high availability solutions. A properly configured Oracle9i Real Application Clusters environment can tolerate failures with minimal downtime.
Oracle9i Real Application Clusters is also applicable for many other system types. For example, data warehousing applications accessing read-only data are prime candidates for Oracle9i Real Application Clusters. In addition, Oracle9i Real Application Clusters successfully manages increasing numbers of online transaction processing systems as well as hybrid systems that combine the characteristics of both read-only and read/write applications.
Harnessing the power of multiple nodes offers obvious advantages. If you divide a large task into sub-tasks and distribute the sub-tasks among multiple nodes, you can complete the task faster than if only one node did the work. This type of parallel processing is clearly more efficient than sequential processing. It also provides increased performance for processing larger workloads and for accommodating growing user populations. Oracle9i Real Application Clusters can effectively scale your applications to meet increasing data processing demands. As you add resources, Oracle9i Real Application Clusters can exploit them and extend their processing powers beyond the limits of the individual components.
From a functional perspective, RAC is equivalent to single-instance Oracle. What the RAC environment offers are significant improvements in availability, scalability and reliability.
In recent years, the requirement for highly available systems that can scale on demand has fostered the development of ever more robust cluster solutions. Prior to Oracle9i, HP and Oracle, with the combination of Oracle Parallel Server and HP ServiceGuard OPS edition, provided cluster solutions that led the industry in functionality, high availability, management and services. Now, with the release of Oracle9i Real Application Clusters (RAC) and its new Cache Fusion architecture based on an ultra-high-bandwidth, low-latency cluster interconnect, RAC cluster solutions have become more scalable without the need for data and application partitioning.
The information contained in this document covers the installation and configuration of Oracle Real Application Clusters in a typical environment: a two-node HP cluster running the HP-UX operating system.
3. Oracle 9i Real Application Clusters: Cache Fusion technology
Oracle 9i cache fusion utilizes the collection of caches made available by all nodes in the cluster to satisfy database requests. Requests for a data block are satisfied first by a local cache, then by a remote cache before a disk read is needed. Similarly, update operations are performed first via the local node and then the remote node caches in the cluster, resulting in reduced disk I/O. Disk I/O operations are only done when the data block is not available in the collective caches or when an update transaction performs a commit operation.
Oracle 9i Cache Fusion thus provides Oracle users with an expanded database cache for queries and updates, with reduced disk I/O synchronization, which speeds up database operations overall.
However, the improved performance depends greatly on the efficiency of the inter-node message passing mechanism, which handles the data block transfers between nodes.
The efficiency of inter-node messaging depends on three primary factors:
4. New HP Cluster Interconnect technology
5. HP/Oracle Hardware and Software Requirements
5.1 General Notes
For additional information and latest updates please refer to the Oracle9i Release Note Release 1 (9.0.1) for HP 9000 Series HP-UX (Part Number A90357-01) at
5.2 System Requirements
To check the available physical memory:
$ /usr/sbin/dmesg | grep "Physical:"
To check the configured swap space (requires root privileges):
$ /usr/sbin/swapinfo -a
To determine whether the system runs a 64-bit kernel:
$ /bin/getconf KERNEL_BITS
To check the operating system version:
$ uname -a
Create the following symbolic links so that the Oracle installer can find the X Window System libraries under their expected names (this requires write access to /usr/lib, typically root):
$ cd /usr/lib
$ ln -s /usr/lib/libX11.3 libX11.sl
$ ln -s /usr/lib/libXIE.2 libXIE.sl
$ ln -s /usr/lib/libXext.3 libXext.sl
$ ln -s /usr/lib/libXhp11.3 libXhp11.sl
$ ln -s /usr/lib/libXi.3 libXi.sl
$ ln -s /usr/lib/libXm.4 libXm.sl
$ ln -s /usr/lib/libXp.2 libXp.sl
$ ln -s /usr/lib/libXt.3 libXt.sl
$ ln -s /usr/lib/libXtst.2 libXtst.sl
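A quick way to confirm that all nine links above are in place is a small loop. This is only a sketch: /usr/lib is the path used in this section, and the LIBDIR variable is merely an override hook so the check can be pointed at another directory.

```shell
#!/bin/sh
# Sketch: verify the nine X11 library links created above.
# LIBDIR defaults to /usr/lib, the path used in this section.
LIBDIR="${LIBDIR:-/usr/lib}"
found=0
for lib in libX11 libXIE libXext libXhp11 libXi libXm libXp libXt libXtst; do
    if [ -h "$LIBDIR/$lib.sl" ]; then
        found=$((found + 1))
    else
        echo "missing: $LIBDIR/$lib.sl"
    fi
done
echo "$found of 9 links present"
```

Any link reported missing should be created with the corresponding `ln -s` command before starting the installer.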
5.3 HP-UX Operating System Patches
11.0 (64bit):
11i (64bit):
Optional Patch: For DSS applications running on machines with more than 16 CPUs, we recommend installation of the HP-UX patch PHKL_22266. This patch addresses performance issues with the HP-UX Operating System.
HP provides patch bundles at
Individual patches can be downloaded from
To determine which operating system patches are installed, enter the following command:
$ /usr/sbin/swlist -l patch
To determine if a specific operating system patch has been installed, enter the following command:
$ /usr/sbin/swlist -l patch patch_number
To determine which operating system bundles are installed, enter the following command:
$ /usr/sbin/swlist -l bundle
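The per-patch `swlist` check above can be scripted for a whole list of required patches. A minimal sketch, using PHKL_22266 from the note above as the example entry; the guard keeps the script harmless on machines without swlist (i.e. non-HP-UX systems):

```shell
#!/bin/sh
# Sketch: check a list of required patches with swlist (HP-UX only).
# PHKL_22266 is the optional patch mentioned above; extend PATCHES
# with whatever your configuration requires.
PATCHES="PHKL_22266"
if [ -x /usr/sbin/swlist ]; then
    for p in $PATCHES; do
        if /usr/sbin/swlist -l patch "$p" >/dev/null 2>&1; then
            echo "$p: installed"
        else
            echo "$p: NOT installed"
        fi
    done
else
    echo "swlist not found (not an HP-UX system)"
fi
```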
5.4 Kernel parameters
Kernel Parameter   Setting                            Purpose
KSI_ALLOC_MAX      (NPROC * 8)                        System-wide limit of queued signals that can be allocated.
MAXDSIZ            1073741824 bytes                   Maximum data segment size for 32-bit processes; setting this too low may cause processes to run out of memory.
MAXDSIZ_64         2147483648 bytes                   Maximum data segment size for 64-bit processes; setting this too low may cause processes to run out of memory.
MAXSSIZ            134217728 bytes                    Maximum stack segment size for 32-bit processes.
MAXSSIZ_64BIT      1073741824 bytes                   Maximum stack segment size for 64-bit processes.
MAXSWAPCHUNKS      (available memory)/2               Maximum number of swap chunks; SWCHUNK (the swap chunk size, in 1 KB blocks) is 2048 by default.
MAXUPRC            (NPROC + 2)                        Maximum number of user processes.
MSGMAP             (NPROC + 2)                        Maximum number of message map entries.
MSGMNI             NPROC                              Number of message queue identifiers.
MSGSEG             (NPROC * 4)                        Number of segments available for messages.
MSGTQL             NPROC                              Number of message headers.
NCALLOUT           (NPROC + 16)                       Maximum number of pending timeouts.
NCSIZE             ((8 * NPROC + 2048) + VX_NCSIZE)   Directory Name Lookup Cache (DNLC) space needed for inodes; VX_NCSIZE is 1024 by default.
NFILE              (15 * NPROC + 2048)                Maximum number of open files.