
Oracle9i Real Application Clusters
(RAC) on HP-UX

Authors: Rebecca Kühn, Rainer Marekwia (HP/Oracle Cooperative Technology Center); Co-Authors: Sandy Gruver (HP Alliance Team US), Troy Anthony (Oracle RAC Development)

Contents:

1. Module Objectives

2. Overview: What is Oracle9i Real Application Clusters?

3. Oracle9i Real Application Clusters: Cache Fusion technology

4. New HP Cluster Interconnect technology

5. HP/Oracle Hardware and Software Requirements
5.1 General Notes
5.2 System Requirements
5.3 HP-UX Operating System Patches
5.4 Kernel parameters
5.5

6.
6.1
6.2
6.3
6.4
6.5
6.6


1. Module Objectives

Purpose

This module focuses on what Oracle9i Real Application Clusters (RAC) is and how it can be properly configured on HP-UX to tolerate failures with minimal downtime. Oracle9i Real Application Clusters is an important Oracle9i feature that addresses high availability and scalability issues.

Objectives

Upon completion of this module, you should be able to:

  • Describe what Oracle9i Real Application Clusters is and how it can be used
  • Understand the hardware requirements
  • Understand how to configure the HP Cluster and create the raw devices
  • Examine and verify the cluster configuration
  • Understand the creation of the Oracle9i Real Application Clusters database using DBCA

2. Overview: What is Oracle9i Real Application Clusters?

Oracle9i Real Application Clusters is a computing environment that harnesses the processing power of multiple interconnected computers. The Oracle9i Real Application Clusters software, together with a collection of hardware known as a "cluster," unites the processing power of each component into a single, robust computing environment. A cluster generally comprises two or more computers, or "nodes."

In Oracle9i Real Application Clusters (RAC) environments, all nodes concurrently execute transactions against the same database. Oracle9i Real Application Clusters coordinates each node's access to the shared data to provide consistency and integrity.

Oracle9i Real Application Clusters serves as an important component of robust high availability solutions. A properly configured Oracle9i Real Application Clusters environment can tolerate failures with minimal downtime.

Oracle9i Real Application Clusters is also applicable for many other system types. For example, data warehousing applications accessing read-only data are prime candidates for Oracle9i Real Application Clusters. In addition, Oracle9i Real Application Clusters successfully manages increasing numbers of online transaction processing systems as well as hybrid systems that combine the characteristics of both read-only and read/write applications.

Harnessing the power of multiple nodes offers obvious advantages. If you divide a large task into sub-tasks and distribute the sub-tasks among multiple nodes, you can complete the task faster than if only one node did the work. This type of parallel processing is clearly more efficient than sequential processing. It also provides increased performance for processing larger workloads and for accommodating growing user populations. Oracle9i Real Application Clusters can effectively scale your applications to meet increasing data processing demands. As you add resources, Oracle9i Real Application Clusters can exploit them and extend their processing powers beyond the limits of the individual components.

From a functional perspective, RAC is equivalent to single-instance Oracle. What the RAC environment offers is significant improvement in availability, scalability, and reliability.

In recent years, the requirement for highly available systems, able to scale on demand, has fostered the development of more and more robust cluster solutions. Prior to Oracle9i, HP and Oracle, with the combination of Oracle Parallel Server and HP ServiceGuard OPS edition, provided cluster solutions that led the industry in functionality, high availability, management, and services. Now, with the release of Oracle9i Real Application Clusters (RAC) and its new Cache Fusion architecture, based on ultra-high bandwidth, low-latency cluster interconnect technology, RAC cluster solutions have become more scalable without the need for data and application partitioning.

The information contained in this document covers the installation and configuration of Oracle Real Application Clusters in a typical environment: a two-node HP cluster running the HP-UX operating system.

3. Oracle9i Real Application Clusters: Cache Fusion technology

Oracle9i Cache Fusion utilizes the collection of caches made available by all nodes in the cluster to satisfy database requests. Requests for a data block are satisfied first from the local cache, then from a remote cache, before a disk read is needed. Similarly, update operations are performed first against the local node's cache and then against the remote node caches in the cluster, resulting in reduced disk I/O. Disk I/O operations are only performed when the data block is not available in the collective caches or when an update transaction performs a commit operation.

Oracle9i Cache Fusion thus provides Oracle users with an expanded database cache for queries and updates, with reduced disk I/O synchronization, which speeds up database operations overall.
However, the improved performance depends greatly on the efficiency of the inter-node message-passing mechanism, which handles the data block transfers between nodes.

The efficiency of inter-node messaging depends on three primary factors:

  • The number of messages required for each synchronization sequence. Oracle9i's Global Cache Manager (GCM), comprising Global Cache Services (GCS) and Global Enqueue Services (GES), coordinates fast block transfer between nodes with two inter-node messages and one intra-node message. If the data is in a remote cache, an inter-node message is sent to the Lock Manager Services (LMS) on the remote node. The GCM and Cache Fusion processes then update the in-memory lock structure and send the block to the requesting process.
  • The frequency of synchronization (the less frequent the better). The cache fusion architecture reduces the frequency of the inter-node communication by dynamically migrating locks to a node that shows a frequent access pattern for a particular data block. This dynamic lock allocation increases the likelihood of local cache access thus reducing the need for inter-node communication. At a node level, a cache fusion lock controls access to data blocks from other nodes in the cluster.
  • The latency of inter-node communication. This is a critical component in Oracle 9i RAC as it determines the speed of data block transfer between nodes. An efficient transfer method must utilize minimal CPU resources, support high availability as well as highly scalable growth without bandwidth constraints.


4. New HP Cluster Interconnect technology

  • HyperFabric
    HyperFabric is a high-speed cluster interconnect fabric that supports both the industry-standard TCP/UDP over IP and HP's proprietary Hyper Messaging Protocol (HMP). HyperFabric extends the scalability and reliability of TCP/UDP by providing transparent load balancing of connection traffic across multiple network interface cards (NICs) and transparent failover of traffic from one card to another without invocation of MC/ServiceGuard. The HyperFabric NIC incorporates a network processor that implements HP's Hyper Messaging Protocol and provides lower latency and lower host CPU utilization for standard TCP/UDP benchmarks over HyperFabric when compared to Gigabit Ethernet. Hewlett-Packard released HyperFabric in 1998 with a link rate of 2.56 Gbps over copper. In 2001, Hewlett-Packard released HyperFabric 2 with a link rate of 4.0 Gbps over fiber, with compatibility support for the copper HyperFabric interface. Both HyperFabric products support clusters of up to 64 nodes.
  • HyperFabric Switches
    Hewlett-Packard provides the fastest cluster interconnect via its proprietary HyperFabric switches, the latest product being HyperFabric 2, a new set of hardware components with fiber connectors that enable a low-latency, high-bandwidth system interconnect. With fiber interfaces, HyperFabric 2 provides higher speed (up to 4 Gbps in full duplex) over longer distances (up to 200 meters). HyperFabric 2 also provides excellent scalability by supporting up to 16 hosts via point-to-point connectivity and up to 64 hosts via fabric switches. It is backward compatible with previous versions of HyperFabric and is available on IA-64 and PA-RISC servers.
  • Hyper Messaging Protocol (HMP)
    Hewlett-Packard, in cooperation with Oracle, has designed a cluster interconnect product specifically tailored to meet the needs of enterprise-class parallel database applications. HP's Hyper Messaging Protocol significantly expands on the feature set provided by TCP/UDP by providing a true Reliable Datagram model for both remote direct memory access (RDMA) and traditional message semantics. Coupled with OS-bypass capability and the hardware support for protocol offload provided by HyperFabric, HMP provides high bandwidth, low latency, and extremely low CPU utilization, with an interface and feature set optimized for business-critical parallel applications such as Oracle9i RAC.

5. HP/Oracle Hardware and Software Requirements

5.1 General Notes

For additional information and the latest updates, please refer to the Oracle9i Release Notes, Release 1 (9.0.1) for HP 9000 Series HP-UX (Part Number A90357-01).

  • Each node uses the HP-UX 11.x operating system software. Issue the command "uname -r" at the operating system prompt to verify the release being used (see the combined check after this list).
  • Oracle9i RAC is only available in a 64-bit flavour. To determine whether you have a 64-bit configuration on an HP-UX 11.0 installation, enter: /bin/getconf KERNEL_BITS
  • Oracle9i 9.0.1 RAC is supported by ServiceGuard OPS Edition 11.09 and 11.13.
  • Starting with ServiceGuard OPS Edition 11.13, a 16-node 9i RAC cluster is supported with SLVM.
  • Software mirroring with HP-UX MirrorDisk/UX with SLVM is supported in a 2-node configuration only.
  • Support for the HP HyperFabric product is provided.
  • A total of 127 RAC instances per cluster is supported.
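
As a quick sanity check, the OS release and word-size checks above can be run together; a minimal sketch (the expected values in the comments assume a supported 64-bit configuration):

$ uname -r                  # expect B.11.00 or B.11.11
$ /bin/getconf KERNEL_BITS  # expect 64; Oracle9i RAC requires a 64-bit kernel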

5.2 System Requirements

  • RAM: Minimum 256 MB. Use the following command to verify the amount of memory installed on your system:

$ /usr/sbin/dmesg | grep "Physical:"

  • Swap Space: Minimum 2 x RAM or 400 MB, whichever is greater. Use the following command to determine the amount of swap space installed on your system:

$ /usr/sbin/swapinfo -a (requires root privileges)

  • CD-ROM drive: A CD-ROM drive capable of reading CD-ROM discs in the ISO 9660 format with Rock Ridge extensions.
  • Available Disk Space: 3 GB (see the bdf sketch at the end of this section)
  • Temporary Disk Space: The Oracle Universal Installer requires up to 512 MB of space in the /tmp directory.
  • Operating System: HP-UX version 11.0 or 11i (11.11). To determine whether you have a 64-bit configuration on an HP-UX 11.0 or HP-UX 11i installation, enter the following command:

$ /bin/getconf KERNEL_BITS

  • To determine your current operating system information, enter the following command:

$ uname -a

  • JRE: Oracle applications use JRE 1.1.8.
  • JDK: Oracle HTTP Server Powered by Apache uses JDK 1.2.2.05.
  • Due to a known HP bug (Doc.id. KBRC00003627), the default HP-UX 64 operating system installation does not create a few required X-library symbolic links. These links must be created manually before starting Oracle9i installation. To create these links, you must have superuser privileges, as the links are to be created in the /usr/lib directory. After enabling superuser privileges, run the following commands to create the required links:

$ cd /usr/lib

$ ln -s /usr/lib/libX11.3 libX11.sl
$ ln -s /usr/lib/libXIE.2 libXIE.sl
$ ln -s /usr/lib/libXext.3 libXext.sl
$ ln -s /usr/lib/libXhp11.3 libXhp11.sl
$ ln -s /usr/lib/libXi.3 libXi.sl
$ ln -s /usr/lib/libXm.4 libXm.sl
$ ln -s /usr/lib/libXp.2 libXp.sl
$ ln -s /usr/lib/libXt.3 libXt.sl
$ ln -s /usr/lib/libXtst.2 libXtst.sl
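
To confirm the links were created, a simple listing can be used; each entry should appear as a symbolic link pointing at its versioned library:

$ ls -l /usr/lib/libX*.sl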

  • X Server system software. (Refer to the installation guide for more information on the X Server System and emulator issues.)
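
To verify the disk space figures above, HP-UX's bdf command reports free space per file system; a minimal sketch (using /opt for the Oracle software is an assumption, substitute the volume that will hold your ORACLE_HOME):

$ bdf /tmp    # the OUI needs up to 512 MB here
$ bdf /opt    # the Oracle software volume needs about 3 GB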


5.3 HP-UX Operating System Patches

11.0 (64-bit):

  • Sep 2001 HP-UX patch bundle
  • PHCO_23963
  • PHCO_24148
  • PHCO_23919
  • PHKL_23226
  • PHNE_24034
  • PHSS_23440
  • HyperFabric driver: 11.00.12 (HP-UX 11.0); required only if your system has an older HyperFabric driver version.

11i (64-bit):

  • Sep 2001 HP-UX patch bundle
  • PHCO_23094
  • PHCO_23772
  • PHSS_23441
  • PHNE_23502

Optional Patch: For DSS applications running on machines with more than 16 CPUs, we recommend installation of the HP-UX patch PHKL_22266. This patch addresses performance issues with the HP-UX Operating System.

HP provides patch bundles at

Individual patches can be downloaded from

To determine which operating system patches are installed, enter the following command:

$ /usr/sbin/swlist -l patch

To determine if a specific operating system patch has been installed, enter the following command:

$ /usr/sbin/swlist -l patch patch_number

To determine which operating system bundles are installed, enter the following command:

$ /usr/sbin/swlist -l bundle
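
To check the whole required patch list at once, the swlist query above can be wrapped in a shell loop; a minimal sketch for the 11.0 (64-bit) list (relying on swlist returning a non-zero exit status for a patch that is not installed):

for p in PHCO_23963 PHCO_24148 PHCO_23919 PHKL_23226 PHNE_24034 PHSS_23440
do
    # Report each required patch as installed or missing
    if /usr/sbin/swlist -l patch "$p" > /dev/null 2>&1
    then
        echo "$p installed"
    else
        echo "$p MISSING"
    fi
done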

5.4 Kernel parameters


Kernel Parameter | Setting | Purpose
KSI_ALLOC_MAX | (NPROC * 8) | Defines the system-wide limit of queued signals that can be allocated.
MAXDSIZ | 1073741824 bytes | Maximum data segment size for 32-bit systems. Setting this value too low may cause processes to run out of memory.
MAXDSIZ_64BIT | 2147483648 bytes | Maximum data segment size for 64-bit systems. Setting this value too low may cause processes to run out of memory.
MAXSSIZ | 134217728 bytes | Maximum stack segment size in bytes for 32-bit systems.
MAXSSIZ_64BIT | 1073741824 bytes | Maximum stack segment size in bytes for 64-bit systems.
MAXSWAPCHUNKS | (available memory)/2 | Maximum number of swap chunks, where SWCHUNK is the swap chunk size (1 KB blocks); SWCHUNK is 2048 by default.
MAXUPRC | (NPROC + 2) | Maximum number of user processes.
MSGMAP | (NPROC + 2) | Maximum number of message map entries.
MSGMNI | NPROC | Number of message queue identifiers.
MSGSEG | (NPROC * 4) | Number of segments available for messages.
MSGTQL | NPROC | Number of message headers.
NCALLOUT | (NPROC + 16) | Maximum number of pending timeouts.
NCSIZE | ((8 * NPROC + 2048) + VX_NCSIZE) | Directory Name Lookup Cache (DNLC) space needed for inodes; VX_NCSIZE is 1024 by default.
NFILE | (15 * NPROC + 2048) | Maximum number of open files.
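
On HP-UX 11.0 and 11i, the current values of these parameters can be inspected with kmtune (changes are typically made through SAM, followed by a kernel rebuild and reboot); a minimal sketch, assuming the usual lowercase parameter names:

$ /usr/sbin/kmtune -q maxdsiz
$ /usr/sbin/kmtune -q maxdsiz_64bit
$ /usr/sbin/kmtune -q nfile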