Category: LINUX

2008-07-13 08:57:06

20. Install Oracle 11g Clusterware Software

Perform the following installation procedures from only one of the Oracle RAC nodes in the cluster (linux1)! The Oracle Clusterware software will be installed to both Oracle RAC nodes in the cluster by the Oracle Universal Installer.

You are now ready to install the "cluster" part of the environment: the Oracle Clusterware. In a previous section, you downloaded and extracted the install files for Oracle Clusterware to linux1 in the directory /home/oracle/orainstall/clusterware. This is the only node from which you need to perform the install.

During the installation of Oracle Clusterware, you will be asked for the nodes to configure in the Oracle RAC cluster. After installing the software on the local node, the installer will copy the required software to all remaining nodes using the remote access configured earlier in the section "Configure RAC Nodes for Remote Access using SSH".

So, what exactly is the Oracle Clusterware responsible for? It contains all of the cluster and database configuration metadata along with several system management features for RAC. It allows the DBA to register and invite an Oracle instance (or instances) to the cluster. During normal operation, Oracle Clusterware will send messages (via a special ping operation) to all nodes configured in the cluster, often called the "heartbeat." If the heartbeat fails for any of the nodes, it checks with the Oracle Clusterware configuration files (on the shared disk) to distinguish between a real node failure and a network failure.

After installing Oracle Clusterware, the Oracle Universal Installer (OUI) used to install the Oracle Database software (next section) will automatically recognize these nodes. Like the Oracle Clusterware install you will be performing in this section, the Oracle Database software install only needs to be run from one node. The OUI will copy the software packages to all nodes configured in the RAC cluster.

Oracle Clusterware Shared Files

The two shared files (actually file groups) used by Oracle Clusterware will be stored on the Oracle Cluster File System, Release 2 (OCFS2) you created earlier. The two shared Oracle Clusterware file groups are:

  • Oracle Cluster Registry (OCR)

    • File 1 : /u02/oradata/orcl/OCRFile
    • File 2 : /u02/oradata/orcl/OCRFile_mirror
    • Size : (2 * 250 MB) = 500 MB

  • CRS Voting Disk

    • File 1 : /u02/oradata/orcl/CSSFile
    • File 2 : /u02/oradata/orcl/CSSFile_mirror1
    • File 3 : /u02/oradata/orcl/CSSFile_mirror2
    • Size : (3 * 20 MB) = 60 MB

Note: It is not possible to use Automatic Storage Management (ASM) for the two shared Oracle Clusterware files: Oracle Cluster Registry (OCR) or the CRS Voting Disk files. The problem is that these files need to be in place and accessible BEFORE any Oracle instances can be started. For ASM to be available, the ASM instance would need to be run first.

Also note that the two shared files could be stored on the OCFS2, shared RAW devices, or another vendor's clustered file system.
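
Before launching the installer, it is worth a quick sanity check that the OCFS2 file system is mounted and that the directory which will hold the OCR and voting disk files exists and is writable by the oracle user. The mount point and path below are the ones used throughout this guide; run the check on both Oracle RAC nodes as the oracle user account:

$ mount | grep ocfs2
$ ls -ld /u02/oradata/orcl
$ touch /u02/oradata/orcl/.rw_test && rm /u02/oradata/orcl/.rw_test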

Verifying Terminal Shell Environment

Before starting the Oracle Universal Installer, you should first verify that you are logged onto the server you will be running the installer from (i.e. linux1), then run the xhost command as root from the console to allow X Server connections. Next, log in as the oracle user account. If you are using a remote client to connect to the node performing the installation (SSH or Telnet to linux1 from a workstation configured with an X Server), you will need to set the DISPLAY variable to point to your local workstation. Finally, verify remote access / user equivalence to all nodes in the cluster:

Verify Server and Enable X Server Access

# hostname
linux1

# xhost +
access control disabled, clients can connect from any host

Login as the oracle User Account and Set DISPLAY (if necessary)

# su - oracle
$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=:0.0
$ export DISPLAY

Verify Remote Access / User Equivalence

Verify you are able to run the Secure Shell commands (ssh or scp) on the Linux server you will be running the Oracle Universal Installer from against all other Linux servers in the cluster without being prompted for a password. When using the secure shell method, user equivalence will need to be enabled for the terminal shell session before attempting to run the OUI. To enable user equivalence for the current terminal shell session, perform the following steps, remembering to enter the passphrase for the RSA key you generated when prompted:

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Enter passphrase for /home/oracle/.ssh/id_rsa: xxxxx
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)

$ ssh linux1 "date;hostname"
Fri Oct 12 14:44:31 EDT 2007
linux1

$ ssh linux2 "date;hostname"
Fri Oct 12 14:45:02 EDT 2007
linux2

Installing Oracle Clusterware

Perform the following tasks to install the Oracle Clusterware:

$ cd ~oracle
$ /home/oracle/orainstall/clusterware/runInstaller

Screen Name Response
Welcome Screen Click Next
Specify Inventory directory and credentials Accept the default values:
   Inventory directory: /u01/app/oraInventory
   Operating System group name: oinstall
Specify Home Details Set the Name and Path for the ORACLE_HOME (actually the $ORA_CRS_HOME that I will be using in this article) as follows:
   Name: OraCrs11g_home
   Path: /u01/app/crs
Product-Specific Prerequisite Checks The installer will run through a series of checks to determine if the node meets the minimum requirements for installing and configuring the Oracle Clusterware software. If any of the checks fail, you will need to manually verify the check that failed by clicking on the checkbox. For my installation, all checks passed with no problems.

Click Next to continue.

Specify Cluster Configuration Cluster Name: linux_cluster

Public Node Name Private Node Name Virtual Host Name
linux1 linux1-priv linux1-vip
linux2 linux2-priv linux2-vip

Specify Network Interface Usage
Interface Name Subnet Interface Type
eth0 192.168.1.0 Public
eth1 192.168.2.0 Private
Specify OCR Location Starting with Oracle Database 10g Release 2 (10.2) with RAC, Oracle Clusterware provides for the creation of a mirrored OCR file, enhancing cluster reliability. By enabling a multiple OCR file configuration, the redundant OCR files allow you to configure a RAC database with multiple OCR files on independent shared physical disks. For the purpose of this example, I chose to mirror the OCR file by keeping the default option of "Normal Redundancy":

Specify OCR Location: /u02/oradata/orcl/OCRFile
Specify OCR Mirror Location: /u02/oradata/orcl/OCRFile_mirror

Specify Voting Disk Location Starting with Oracle Database 10g Release 2 (10.2) with RAC, CSS has been modified to allow you to configure CSS with multiple voting disks. In Release 1 (10.1), you could configure only one voting disk. By enabling a multiple voting disk configuration, the redundant voting disks allow you to configure a RAC database with multiple voting disks on independent shared physical disks. Note that to take advantage of the benefits of multiple voting disks, you must configure at least three voting disks. For the purpose of this example, I chose to mirror the voting disk by keeping the default option of "Normal Redundancy":

Voting Disk Location: /u02/oradata/orcl/CSSFile
Additional Voting Disk 1 Location: /u02/oradata/orcl/CSSFile_mirror1
Additional Voting Disk 2 Location: /u02/oradata/orcl/CSSFile_mirror2

Summary

Click Install to start the installation!

Execute Configuration scripts After the installation has completed, you will be prompted to run the orainstRoot.sh and root.sh scripts. Open a new console window on each Oracle RAC node in the cluster (starting with the node you are performing the install from) as the "root" user account.

Navigate to the /u01/app/oraInventory directory and run orainstRoot.sh on all nodes in the RAC cluster one at a time starting with the node you are performing the install from.

Note: After executing orainstRoot.sh on both nodes, verify that the permissions of the file "/etc/oraInst.loc" are 644 (-rw-r--r--) and that it is owned by root. Problems can occur during the installation of Oracle if the oracle user account does not have read permissions to this file - "the location of the oraInventory directory cannot be determined". For example, during the Oracle Clusterware post-installation process (while running the Oracle Clusterware Verification Utility), the following error will occur: "CRS is not installed on any of the nodes." If the permissions on /etc/oraInst.loc are not set correctly, it is possible you didn't run orainstRoot.sh on both nodes before running root.sh. Also, the umask setting may be off - it should be 0022. If the permissions on /etc/oraInst.loc are not set correctly, run the following on both nodes in the Oracle RAC cluster to correct the problem:

# chmod 644 /etc/oraInst.loc
# ls -l /etc/oraInst.loc
-rw-r--r-- 1 root root 56 Oct 12 21:52 /etc/oraInst.loc


Within the same new console window on each Oracle RAC node in the cluster (starting with the node you are performing the install from), stay logged in as the "root" user account.

Navigate to the /u01/app/crs directory and locate the root.sh file. Run the root.sh file on all nodes in the RAC cluster one at a time starting with the node you are performing the install from.

If the Oracle Clusterware home directory is a subdirectory of the ORACLE_BASE directory (which it should never be!), you will receive several warnings regarding permissions while running the root.sh script on both nodes. Any warnings regarding directories not being owned by root can be safely ignored.

The root.sh script can take a while to run. When running root.sh on the last node, you will receive output similar to the following, signifying a successful install:

...
Cluster Synchronization Services is active on these nodes.
        linux1
        linux2
Cluster Synchronization Services is active on all the nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps

Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...


Done.

Go back to the OUI and acknowledge the "Execute Configuration scripts" dialog window.

End of Installation At the end of the installation, exit from the OUI.

Verify Oracle Clusterware Installation

After the installation of Oracle Clusterware, we can run through several tests to verify the install was successful. Run the following commands on both nodes in the RAC cluster.

Check Cluster Nodes

$ $ORA_CRS_HOME/bin/olsnodes -n
linux1 1
linux2 2

Confirm Oracle Clusterware Function

$ $ORA_CRS_HOME/bin/crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.linux1.gsd application    0/5    0/0    ONLINE    ONLINE    linux1
ora.linux1.ons application    0/3    0/0    ONLINE    ONLINE    linux1
ora.linux1.vip application    0/0    0/0    ONLINE    ONLINE    linux1
ora.linux2.gsd application    0/5    0/0    ONLINE    ONLINE    linux2
ora.linux2.ons application    0/3    0/0    ONLINE    ONLINE    linux2
ora.linux2.vip application    0/0    0/0    ONLINE    ONLINE    linux2

Check CRS Status

$ $ORA_CRS_HOME/bin/crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy

Check Oracle Clusterware Auto-Start Scripts

$ ls -l /etc/init.d/init.*
-rwxr-xr-x 1 root root  2236 Oct 12 22:08 /etc/init.d/init.crs
-rwxr-xr-x 1 root root  5290 Oct 12 22:08 /etc/init.d/init.crsd
-rwxr-xr-x 1 root root 49416 Oct 12 22:08 /etc/init.d/init.cssd
-rwxr-xr-x 1 root root  3859 Oct 12 22:08 /etc/init.d/init.evmd
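
Optionally, the Cluster Verification Utility can also validate the completed Clusterware stack. The following is a sketch that reuses the staged copy of the Clusterware software from earlier in this guide; adjust the path and node list for your environment:

$ cd /home/oracle/orainstall/clusterware
$ ./runcluvfy.sh stage -post crsinst -n linux1,linux2 -verbose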

 


21. Install Oracle Database 11g Software

Perform the following installation procedures from only one of the Oracle RAC nodes in the cluster (linux1)! The Oracle Database software will be installed to both Oracle RAC nodes in the cluster by the Oracle Universal Installer.

After successfully installing the Oracle Clusterware software, the next step is to install the Oracle Database 11g Release 1 (11.1.0.6.0) with RAC.

For the purpose of this example, you will forgo the "Create Database" option when installing the Oracle Database software. You will create the database using the Database Configuration Assistant (DBCA) after all installs have been completed.

Like the Oracle Clusterware install (previous section), the Oracle Database software install only needs to be run from one node. The OUI will copy the software packages to all nodes configured in the RAC cluster.

Verifying Terminal Shell Environment

As discussed in the previous section (Install Oracle 11g Clusterware Software), the terminal shell environment needs to be configured for remote access and user equivalence to all nodes in the cluster before running the Oracle Universal Installer. Note that you can utilize the same terminal shell session used in the previous section, in which case you do not have to take any of the actions described below with regard to setting up remote access and the DISPLAY variable:

Login as the oracle User Account and Set DISPLAY (if necessary)

# su - oracle
$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=:0.0
$ export DISPLAY

Verify Remote Access / User Equivalence

Verify you are able to run the Secure Shell commands (ssh or scp) on the Linux server you will be running the Oracle Universal Installer from against all other Linux servers in the cluster without being prompted for a password. When using the secure shell method, user equivalence will need to be enabled for the terminal shell session before attempting to run the OUI. To enable user equivalence for the current terminal shell session, perform the following steps, remembering to enter the passphrase for the RSA key you generated when prompted:

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Enter passphrase for /home/oracle/.ssh/id_rsa: xxxxx
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)

$ ssh linux1 "date;hostname"
Fri Oct 12 14:44:31 EDT 2007
linux1

$ ssh linux2 "date;hostname"
Fri Oct 12 14:45:02 EDT 2007
linux2

Run the Oracle Cluster Verification Utility

Before installing the Oracle Database software, you should run the following database pre-installation check using the Cluster Verification Utility (CVU).

Instructions for configuring the CVU were covered in an earlier section of this article.

$ cd /home/oracle/orainstall/clusterware
$ ./runcluvfy.sh stage -pre dbinst -n linux1,linux2 -r 11gR1 -verbose

Review the CVU report. Note that all of the checks performed by CVU should be reported as passed before continuing with the Oracle RAC installation.

Install Oracle Database 11g Release 1 Software

Install the Oracle Database 11g Release 1 software as follows:

$ cd ~oracle
$ /home/oracle/orainstall/database/runInstaller

Screen Name Response
Welcome Screen Click Next
Select Installation Type Select the type of installation you would like to perform: "Enterprise Edition", "Standard Edition", or "Custom".

For the purpose of this article, I selected the "Custom" option.

Install Location Set the Oracle Base, Name and Path as follows:

   Oracle Base: /u01/app/oracle

   Name: OraDb11g_home1
   Path: /u01/app/oracle/product/11.1.0/db_1

Specify Hardware Cluster Installation Mode Select the "Cluster Installation" option, then click "Select All" to select all available servers: linux1 and linux2.

If the installation stops here and the status of any of the RAC nodes is "Node not reachable", perform the following checks:

  • Ensure Oracle Clusterware is running on the node in question.
  • Ensure you are able to reach the node in question from the node you are performing the installation from.
Product-Specific Prerequisite Checks The installer will run through a series of checks to determine if the nodes meet the minimum requirements for installing and configuring the Oracle Database software. If any of the checks fail, you will need to manually verify the check that failed by clicking on the checkbox.

It is possible to receive an error about the available swap space not meeting its minimum requirements:

Checking available swap space requirements...
Expected result: 3036MB
Actual Result: 1983MB

In most cases this warning can be safely ignored. Simply click the check-box for "Checking available swap space requirements..." and click Next to continue.
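
If you would rather satisfy the check than ignore it, one option is to add a temporary swap file on the node that is short on swap. The commands below are a generic sketch (run as root; the 1 GB size and /tmp location are arbitrary, and the file can be removed with swapoff and rm after the install):

# dd if=/dev/zero of=/tmp/swapfile bs=1M count=1024
# chmod 600 /tmp/swapfile
# mkswap /tmp/swapfile
# swapon /tmp/swapfile
# swapon -s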

Available Product Components Select the components that you plan on using for your database environment.
Privileged Operating System Groups Select the UNIX groups that will be used for each of the Oracle group names as follows:

   Database Administrator (OSDBA) Group: dba
   Database Operator (OSOPER) Group: oper
   ASM Administrator (OSASM) Group: asm

Create Database Select the option to "Install database Software only".

Remember that we will create the clustered database as a separate step using DBCA.

Oracle Configuration Manager Registration I kept the default option to not enable Oracle Configuration Manager.
Summary

Click on Install to start the installation!

Root Script Window - Run root.sh After the installation has completed, you will be prompted to run the root.sh script. It is important to keep in mind that the root.sh script will need to be run on all nodes in the RAC cluster one at a time starting with the node you are running the database installation from.

Navigate to the /u01/app/oracle/product/11.1.0/db_1 directory and run root.sh.

After running the root.sh script on all nodes in the cluster, go back to the OUI and acknowledge the "Execute Configuration scripts" dialog window.

End of installation At the end of the installation, exit from the OUI.

 


22. Install Oracle Database 11g Examples (formerly Companion)

Perform the following installation procedures from only one of the Oracle RAC nodes in the cluster (linux1)! The Oracle Database 11g Examples will be installed to both Oracle RAC nodes in the cluster by the Oracle Universal Installer.

After successfully installing the Oracle Database software, the next step is to install the Oracle Database 11g Examples (11.1.0.6.0). Please keep in mind that this is an optional step.

Verifying Terminal Shell Environment

As discussed in the previous section (Install Oracle Database 11g Software), the terminal shell environment needs to be configured for remote access and user equivalence to all nodes in the cluster before running the Oracle Universal Installer. Note that you can utilize the same terminal shell session used in the previous section, in which case you do not have to take any of the actions described below with regard to setting up remote access and the DISPLAY variable:

Login as the oracle User Account and Set DISPLAY (if necessary)

# su - oracle
$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=:0.0
$ export DISPLAY

Verify Remote Access / User Equivalence

Verify you are able to run the Secure Shell commands (ssh or scp) on the Linux server you will be running the Oracle Universal Installer from against all other Linux servers in the cluster without being prompted for a password. When using the secure shell method, user equivalence will need to be enabled for the terminal shell session before attempting to run the OUI. To enable user equivalence for the current terminal shell session, perform the following steps, remembering to enter the passphrase for the RSA key you generated when prompted:

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Enter passphrase for /home/oracle/.ssh/id_rsa: xxxxx
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)

$ ssh linux1 "date;hostname"
Fri Oct 12 14:44:31 EDT 2007
linux1

$ ssh linux2 "date;hostname"
Fri Oct 12 14:45:02 EDT 2007
linux2

Install Oracle Database 11g Examples

Install the Oracle Database 11g Examples as follows:

$ cd ~oracle
$ /home/oracle/orainstall/examples/runInstaller

Screen Name Response
Welcome Screen Click Next
Specify Home Details Set the destination for the ORACLE_HOME Name and Path to that of the previous Oracle11g Database software install as follows:
   Name: OraDb11g_home1
   Path: /u01/app/oracle/product/11.1.0/db_1
Specify Hardware Cluster Installation Mode The Cluster Installation option will be selected along with all of the available nodes in the cluster by default. Stay with these default options and click Next to continue.

If the installation stops here and the status of any of the RAC nodes is "Node not reachable", perform the following checks:

  • Ensure Oracle Clusterware is running on the node in question.
  • Ensure you are able to reach the node in question from the node you are performing the installation from.
Product-Specific Prerequisite Checks The installer will run through a series of checks to determine if the node meets the minimum requirements for installing and configuring the Oracle Examples CD. If any of the checks fail, you will need to manually verify the check that failed by clicking on the checkbox. For my installation, all checks passed with no problems.

Click on Next to continue.

Summary On the Summary screen, click Install to start the installation!
End of installation At the end of the installation, exit from the OUI.



23. Create TNS Listener Process

Perform the following configuration procedures from only one of the Oracle RAC nodes in the cluster (linux1)! The Network Configuration Assistant (NETCA) will set up the TNS listener in a clustered configuration on both Oracle RAC nodes in the cluster.

The DBCA requires the Oracle TNS Listener process to be configured and running on all nodes in the RAC cluster before it can create the clustered database.

The process of creating the TNS listener only needs to be performed on one node in the cluster. All changes will be made and replicated to all nodes in the cluster. On one of the nodes (I will be using linux1), bring up the NETCA and run through the process of creating a new TNS listener and configuring the node for local access.

Verifying Terminal Shell Environment

As discussed in the previous section (Install Oracle Database 11g Examples), the terminal shell environment needs to be configured for remote access and user equivalence to all nodes in the cluster before running the Network Configuration Assistant (NETCA). Note that you can utilize the same terminal shell session used in the previous section, in which case you do not have to take any of the actions described below with regard to setting up remote access and the DISPLAY variable:

Login as the oracle User Account and Set DISPLAY (if necessary)

# su - oracle
$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=:0.0
$ export DISPLAY

Verify Remote Access / User Equivalence

Verify you are able to run the Secure Shell commands (ssh or scp) on the Linux server you will be running the Oracle Universal Installer from against all other Linux servers in the cluster without being prompted for a password. When using the secure shell method, user equivalence will need to be enabled for the terminal shell session before attempting to run the OUI. To enable user equivalence for the current terminal shell session, perform the following steps, remembering to enter the passphrase for the RSA key you generated when prompted:

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Enter passphrase for /home/oracle/.ssh/id_rsa: xxxxx
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)

$ ssh linux1 "date;hostname"
Fri Oct 12 14:44:31 EDT 2007
linux1

$ ssh linux2 "date;hostname"
Fri Oct 12 14:45:02 EDT 2007
linux2

Run the Network Configuration Assistant

To start the NETCA, run the following:

$ netca &

The following table walks you through the process of creating a new Oracle listener for our RAC environment.

Screen Name Response
Select the Type of Oracle Net Services Configuration Select "Cluster Configuration".
Select the nodes to configure Select all of the nodes: linux1 and linux2.
Type of Configuration Select "Listener configuration".
Listener Configuration - Next 6 Screens The following screens are now like any other normal listener configuration. You can simply accept the default parameters for the next six screens:
   What do you want to do: Add
   Listener name: LISTENER
   Selected protocols: TCP
   Port number: 1521
   Configure another listener: No
   Listener configuration complete! [ Next ]
You will be returned to this Welcome (Type of Configuration) Screen.
Type of Configuration Select "Naming Methods configuration".
Naming Methods Configuration The following screens are:
   Selected Naming Methods: Local Naming
   Naming Methods configuration complete! [ Next ]
You will be returned to this Welcome (Type of Configuration) Screen.
Type of Configuration Click Finish to exit the NETCA.

The Oracle TNS listener process should now be running on all nodes in the RAC cluster:

$ hostname
linux1

$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_LINUX1

$ $ORA_CRS_HOME/bin/crs_stat ora.linux1.LISTENER_LINUX1.lsnr
NAME=ora.linux1.LISTENER_LINUX1.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on linux1

=====================

$ hostname
linux2

$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_LINUX2

$ $ORA_CRS_HOME/bin/crs_stat ora.linux2.LISTENER_LINUX2.lsnr
NAME=ora.linux2.LISTENER_LINUX2.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on linux2

 


24. Create the Oracle Cluster Database

The database creation process should only be performed from one of the Oracle RAC nodes in the cluster (linux1)!

We will use the Oracle Database Configuration Assistant (DBCA) to create the clustered database.

Before executing the DBCA, make sure that $ORACLE_HOME and $PATH are set appropriately for the $ORACLE_BASE/product/11.1.0/db_1 environment.
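
For example, assuming the directory layout used throughout this guide, the following would set both variables for the current shell session:

$ export ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1
$ export PATH=$ORACLE_HOME/bin:$PATH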

You should also verify that all services we have installed up to this point (Oracle TNS listener, Oracle Clusterware processes, etc.) are running before attempting to start the clustered database creation process.
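
A quick way to confirm this from the current node, using the same commands introduced earlier in this article:

$ $ORA_CRS_HOME/bin/crsctl check crs
$ $ORA_CRS_HOME/bin/crs_stat -t
$ ps -ef | grep lsnr | grep -v 'grep'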

Verifying Terminal Shell Environment

As discussed in the previous section (Create TNS Listener Process), the terminal shell environment needs to be configured for remote access and user equivalence to all nodes in the cluster before running the Database Configuration Assistant (DBCA). Note that you can utilize the same terminal shell session used in the previous section, in which case you do not have to take any of the actions described below with regard to setting up remote access and the DISPLAY variable:

Login as the oracle User Account and Set DISPLAY (if necessary)

# su - oracle
$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=:0.0
$ export DISPLAY

Verify Remote Access / User Equivalence

Verify you are able to run the Secure Shell commands (ssh or scp) on the Linux server you will be running the Oracle Universal Installer from against all other Linux servers in the cluster without being prompted for a password. When using the secure shell method, user equivalence will need to be enabled for the terminal shell session before attempting to run the OUI. To enable user equivalence for the current terminal shell session, perform the following steps, remembering to enter the passphrase for the RSA key you generated when prompted:

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Enter passphrase for /home/oracle/.ssh/id_rsa: xxxxx
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)

$ ssh linux1 "date;hostname"
Fri Oct 12 14:44:31 EDT 2007
linux1

$ ssh linux2 "date;hostname"
Fri Oct 12 14:45:02 EDT 2007
linux2

Run the Oracle Cluster Verification Utility

Before creating the Oracle clustered database, you should run the following database configuration check using the Cluster Verification Utility (CVU).

Instructions for configuring the CVU were covered in an earlier section of this article.

$ cd /home/oracle/orainstall/clusterware
$ ./runcluvfy.sh stage -pre dbcfg -n linux1,linux2 -d ${ORACLE_HOME} -verbose

Review the CVU report. Note that all of the checks performed by CVU should be reported as passed before continuing with creating the Oracle clustered database.

Create the Clustered Database

To start the database creation process, run the following:

$ dbca &

Screen Name Response
Welcome Screen Select Oracle Real Application Clusters database.
Operations Select Create a Database.
Node Selection Click on the Select All button to select all servers: linux1 and linux2.
Database Templates Select Custom Database.
Database Identification Select:
   Global Database Name: orcl.idevelopment.info
   SID Prefix: orcl

I used idevelopment.info for the database domain. You may use any domain. Keep in mind that this domain does not have to be a valid DNS domain.

Management Option Leave the default options here, which is to Configure Enterprise Manager / Configure Database Control for local management.
Database Credentials I selected to Use the Same Administrative Password for All Accounts. Enter the password (twice) and make sure the password does not start with a digit.
Storage Options For this guide, we will select to use Automatic Storage Management (ASM).
Create ASM Instance Supply the SYS password to use for the new ASM instance.

Also, starting with Oracle10g Release 2, the ASM instance server parameter file (SPFILE) needs to be on a shared disk. You will need to first select and then modify the default entry for "Create server parameter file (SPFILE)" to reside on the OCFS2 partition as follows: /u02/oradata/orcl/dbs/spfile+ASM.ora. All other options can stay at their defaults.

You will then be prompted with a dialog box asking if you want to create and start the ASM instance. Select the OK button to acknowledge this dialog.

The OUI will now create and start the ASM instance on all nodes in the RAC cluster.

ASM Disk Groups

To start, click the Create New button. This will bring up the "Create Disk Group" window with the four volumes we configured earlier using ASMLib.

If the ASM volumes we created earlier in this article do not show up in the "Select Member Disks" window: (ORCL:VOL1, ORCL:VOL2, ORCL:VOL3, and ORCL:VOL4) then click on the "Change Disk Discovery Path" button and input "ORCL:VOL*".

For the first "Disk Group Name" I used ORCL_DATA1. Select the first two ASM volumes (ORCL:VOL1 and ORCL:VOL2) in the "Select Member Disks" window. Keep the "Redundancy" setting to Normal.

After verifying all values in this window are correct, click the [OK] button. This will present the "ASM Disk Group Creation" dialog. When the ASM Disk Group Creation process is finished, you will be returned to the "ASM Disk Groups" windows.

Click the Create New button again. For the second "Disk Group Name" I used FLASH_RECOVERY_AREA. Select the last two ASM volumes (ORCL:VOL3 and ORCL:VOL4) in the "Select Member Disks" window. Keep the "Redundancy" setting to Normal.

After verifying all values in this window are correct, click the [OK] button. This will present the "ASM Disk Group Creation" dialog. When the ASM Disk Group Creation process is finished, you will be returned to the "ASM Disk Groups" window with two disk groups created and selected.

Select only one of the disk groups by using the checkbox next to the newly created Disk Group Name "ORCL_DATA1" (ensure that the disk group for "FLASH_RECOVERY_AREA" is not selected) and click [Next] to continue.

Database File Locations

Keep the default option which is to use Oracle Managed Files:

Database Area: +ORCL_DATA1

Recovery Configuration Check the option for Specify Flash Recovery Area.

For the Flash Recovery Area, click the [Browse] button and select the disk group name +FLASH_RECOVERY_AREA.

My disk group has a size of about 118GB. When defining the Flash Recovery Area size, use the entire volume minus 10% (118 GB less 10% is roughly 106 GB). I used a Flash Recovery Area Size of 106 GB (108544 MB).
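
The arithmetic behind that figure, as a quick shell sketch (the 118 GB value is specific to my disk group):

$ echo $(( 118 * 90 / 100 ))    # usable GB after holding back roughly 10%
106
$ echo $(( 106 * 1024 ))        # the same value in MB, as entered in the DBCA field
108544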

Database Content I left all of the Database Components (and destination tablespaces) set to their default values, although it is perfectly OK to select the Sample Schemas. This option is available since we installed the Oracle Database 11g Examples.
Initialization Parameters Change any parameters for your environment. I left them all at their default settings.
Security Settings Oracle highly recommends using the new enhanced security settings for Oracle 11g. For the purpose of this article, I chose the default setting to Keep the enhanced 11g default security settings. Note that these security settings can be modified after the database creation using DBCA.
Automatic Maintenance Tasks I kept the default to Enable automatic maintenance tasks.
Database Storage Change any parameters for your environment. I left them all at their default settings.
Creation Options Keep the default option Create Database selected. I also always select to Generate Database Creation Scripts. Click Finish to start the database creation process. After acknowledging the database creation report and script generation dialog, the database creation will start.

Click OK on the "Summary" screen.

End of Database Creation At the end of the database creation, exit from the DBCA.

When exiting the DBCA you will not receive any feedback from the dialog window for around 30-60 seconds. After a while, another dialog will come up indicating that it is starting the cluster database and all of its instances. This may take several minutes to complete. When finished, all windows and dialog boxes will disappear.

When the DBCA has completed, you will have a fully functional Oracle RAC cluster running!

 


25. Post-Installation Tasks - (Optional)

This chapter describes several optional tasks that can be applied to your new Oracle 11g in order to enhance availability as well as database management.

Enabling Archive Logs in a RAC Environment

Whether a single instance or clustered database, Oracle tracks and logs all changes to database blocks in online redolog files. In an Oracle RAC environment, each instance will have its own set of online redolog files known as a thread. Each Oracle instance will use its group of online redologs in a circular manner. Once an online redolog fills, Oracle moves to the next one. If the database is in "Archive Log Mode", Oracle will make a copy of the online redo log before it gets reused. A thread must contain at least two online redologs (or online redolog groups). The same holds true for a single instance configuration. The single instance must contain at least two online redologs (or online redolog groups).

The size of an online redolog file is completely independent of another instance's redolog size. Although in most configurations the size is the same, it may be different depending on the workload and backup / recovery considerations for each node. It is also worth mentioning that each instance has exclusive write access to its own online redolog files. In a correctly configured RAC environment, however, each instance can read another instance's current online redolog file to perform instance recovery if that instance was terminated abnormally. It is therefore a requirement that online redo logs be located on a shared storage device (just like the database files).

As already mentioned, Oracle writes to its online redolog files in a circular manner. When the current online redolog fills, Oracle will switch to the next one. To facilitate media recovery, Oracle allows the DBA to put the database into "Archive Log Mode" which makes a copy of the online redolog after it fills (and before it gets reused). This is a process known as archiving.

The Database Configuration Assistant (DBCA) allows users to configure a new database to be in archive log mode, however most DBAs opt to bypass this option during initial database creation. In cases like this, where the database is in noarchivelog mode, it is a simple task to put the database into archive log mode. Note however that this will require a short database outage. From one of the nodes in the Oracle RAC configuration, use the following steps to put a RAC-enabled database into archive log mode. For the purpose of this article, I will use the node linux1, which runs the orcl1 instance:

  1. Login to one of the nodes (i.e. linux1) and disable the cluster instance parameter by setting cluster_database to FALSE from the current instance:
    $ sqlplus "/ as sysdba"
    SQL> alter system set cluster_database=false scope=spfile sid='orcl1';

  2. Shutdown all instances accessing the clustered database:
    $ srvctl stop database -d orcl

  3. Using the local instance, MOUNT the database:
    $ sqlplus "/ as sysdba"
    SQL> startup mount

  4. Enable archiving:
    SQL> alter database archivelog;

  5. Re-enable support for clustering by modifying the instance parameter cluster_database to TRUE from the current instance:
    SQL> alter system set cluster_database=true scope=spfile sid='orcl1';

  6. Shutdown the local instance:
    SQL> shutdown immediate

  7. Bring all instances back up using srvctl:
    $ srvctl start database -d orcl

  8. Login to the local instance and verify Archive Log Mode is enabled:
    $ sqlplus "/ as sysdba"
    SQL> archive log list
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence     58
    Next log sequence to archive   59
    Current log sequence           59

After enabling Archive Log Mode, each instance in the RAC configuration can automatically archive redologs!
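
To confirm from a single session that both instances are archiving, you can query gv$instance; the ARCHIVER column should report STARTED for each instance (the instance and host names in this configuration are the ones used throughout this guide):

SQL> select instance_name, host_name, archiver from gv$instance;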

Download and Install Custom Oracle Database Scripts

DBAs rely on Oracle's data dictionary views and dynamic performance views in order to support and better manage their databases. Although these views provide a simple and easy mechanism to query critical information regarding the database, it helps to have a collection of accurate and readily available SQL scripts to query these views.

In this section you will download and install a collection of Oracle DBA scripts that can be used to manage many aspects of your database including space management, performance, backups, security, and session management. The Oracle DBA scripts archive can be downloaded using the following link http://www.idevelopment.info/data/Oracle/DBA_scripts/common.zip. As the oracle user account, download the common.zip archive to the $ORACLE_BASE directory of each node in the cluster. For the purpose of this example, the common.zip archive will be copied to /u01/app/oracle. Next, unzip the archive file to the $ORACLE_BASE directory.

For example, perform the following on both nodes in the Oracle RAC cluster as the oracle user account:

$ mv common.zip /u01/app/oracle
$ cd /u01/app/oracle
$ unzip common.zip

The final step is to verify (or set) the appropriate environment variable for the current UNIX shell to ensure the Oracle SQL scripts can be run from within SQL*Plus while in any directory. For UNIX, verify the following environment variable is set and included in your login shell script:

ORACLE_PATH=$ORACLE_BASE/common/oracle/sql:.:$ORACLE_HOME/rdbms/admin
export ORACLE_PATH

Note: The ORACLE_PATH environment variable should already be set in the .bash_profile login script that was created in an earlier section.

Now that the Oracle DBA scripts have been unzipped and the UNIX environment variable ($ORACLE_PATH) has been set to the appropriate directory, you should be able to run any of the SQL scripts in the $ORACLE_BASE/common/oracle/sql directory while logged into SQL*Plus. For example, to query tablespace information while logged into the Oracle database as a DBA user:

SQL> @dba_tablespaces

Status    Tablespace Name TS Type      Ext. Mgt.  Seg. Mgt.    Tablespace Size    Used (in bytes) Pct. Used
--------- --------------- ------------ ---------- --------- ------------------ ------------------ ---------
ONLINE    SYSAUX          PERMANENT    LOCAL      AUTO             692,846,592        659,816,448        95
ONLINE    UNDOTBS1        UNDO         LOCAL      MANUAL           367,001,600         20,250,624         6
ONLINE    USERS           PERMANENT    LOCAL      AUTO               5,242,880             65,536         1
ONLINE    SYSTEM          PERMANENT    LOCAL      MANUAL           734,003,200        732,233,728       100
ONLINE    EXAMPLE         PERMANENT    LOCAL      AUTO             157,286,400         83,886,080        53
ONLINE    UNDOTBS2        UNDO         LOCAL      MANUAL           209,715,200         24,313,856        12
ONLINE    TEMP            TEMPORARY    LOCAL      MANUAL            48,234,496         39,845,888        83
                                                            ------------------ ------------------ ---------
avg                                                                                                      50
sum                                                              2,214,330,368      1,560,412,160

7 rows selected.

To obtain a list of all available Oracle DBA scripts while logged into SQL*Plus, run the help.sql script:

SQL> @help.sql

========================================
Automatic Shared Memory Management
========================================
asmm_components.sql

========================================
Automatic Storage Management
========================================
asm_alias.sql
asm_clients.sql
asm_diskgroups.sql
asm_disks.sql
asm_disks_perf.sql
asm_drop_files.sql
asm_files.sql
asm_files2.sql
asm_templates.sql

< --- SNIP --- >

perf_top_sql_by_buffer_gets.sql
perf_top_sql_by_disk_reads.sql

========================================
Workspace Manager
========================================
wm_create_workspace.sql
wm_disable_versioning.sql
wm_enable_versioning.sql
wm_freeze_workspace.sql
wm_get_workspace.sql
wm_goto_workspace.sql
wm_merge_workspace.sql
wm_refresh_workspace.sql
wm_remove_workspace.sql
wm_unfreeze_workspace.sql
wm_workspaces.sql

Create Shared Oracle Password Files

In this section, I present the steps required to configure a shared Oracle password file between all instances in the Oracle clustered database. The password file for each instance on UNIX is located at $ORACLE_HOME/dbs/orapw<SID> and contains a list of all database users that have SYSDBA privileges. When a database user is granted the SYSDBA role, the instance records this in the password file for the instance you are logged into. But what about the other instances in the cluster? The database password files on the other instances do not get updated and will not contain the user who was just granted the SYSDBA role. Therefore a program like RMAN that tries to log in as this new user with SYSDBA privileges will fail if it uses an instance whose password file does not contain that user.

A common solution to this problem is to place a single database password file on a shared / clustered file system and then create symbolic links from each of the instances to this single version of the database password file. Since the environment described in this article makes use of the Oracle Clustered File System (OCFS2), we will use it to store the single version of the database password file.

Note: In this section, we will also be including the Oracle password file for the ASM instance.

  1. Create the database password directory on the clustered file system mounted on /u02/oradata/orcl. Perform the following from only one node in the cluster as the oracle user account - (linux1):
    $ mkdir -p /u02/oradata/orcl/dbs

  2. From one node in the cluster (linux1), move the database password files to the database password directory on the clustered file system. Choose a node that contains a database password file with the most recent SYSDBA additions. In most cases, this will not matter since any missing entries can easily be added by granting them the SYSDBA role (plus the fact that this is a fresh install, it is unlikely you have created any SYSDBA users at this point!). Note that the database server does not need to be shut down while performing the following actions. From linux1 as the oracle user account:
    $ mv $ORACLE_HOME/dbs/orapw+ASM1 /u02/oradata/orcl/dbs/orapw+ASM
    $ mv $ORACLE_HOME/dbs/orapworcl1 /u02/oradata/orcl/dbs/orapworcl
    
    $ ln -s /u02/oradata/orcl/dbs/orapw+ASM $ORACLE_HOME/dbs/orapw+ASM1
    $ ln -s /u02/oradata/orcl/dbs/orapworcl $ORACLE_HOME/dbs/orapworcl1
  3. From the second node in the cluster (linux2):
    $ rm $ORACLE_HOME/dbs/orapw+ASM2
    $ rm $ORACLE_HOME/dbs/orapworcl2
    
    $ ln -s /u02/oradata/orcl/dbs/orapw+ASM $ORACLE_HOME/dbs/orapw+ASM2
    $ ln -s /u02/oradata/orcl/dbs/orapworcl $ORACLE_HOME/dbs/orapworcl2

Now, when a user is granted the SYSDBA role, all instances will have access to the same password file:

SQL> GRANT sysdba TO scott;
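
Since every instance now resolves its password file through the shared copy, a newly granted SYSDBA user should be visible from any node. As a quick check (a sketch; run it while connected as SYSDBA on each instance), query v$pwfile_users and confirm the SCOTT entry shows SYSDBA = TRUE:

SQL> select username, sysdba, sysoper from v$pwfile_users;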

26. Verify TNS Networking Files

Ensure that the TNS networking files are configured on both Oracle RAC nodes in the cluster!

listener.ora

We already covered how to create a TNS listener configuration file (listener.ora) for a clustered environment in an earlier section. The listener.ora file should be properly configured and no modifications should be needed.

For clarity, I have included a copy of the listener.ora file from both Oracle RAC nodes, along with a copy of my tnsnames.ora file that was configured by Oracle, elsewhere in this guide. These files should already be configured on both Oracle RAC nodes in the cluster.

You can include any of these entries on other client machines that need access to the clustered database.

Connecting to Clustered Database From an External Client

This is an optional step, but I like to perform it in order to verify my TNS files are configured correctly. Use another machine (i.e. a Windows machine connected to the network) that has Oracle installed and add the TNS entries (in the tnsnames.ora) from either of the nodes in the cluster that were created for the clustered database.

Note: Verify that the machine you are connecting from can resolve all host names exactly how they appear in the listener.ora and tnsnames.ora files. For the purpose of this document, the machine you are connecting from should be able to resolve the following host names in the local hosts file or through DNS:

192.168.1.100    linux1
192.168.1.101    linux2
192.168.1.200    linux1-vip
192.168.1.201    linux2-vip

Then try to connect to the clustered database using all available service names defined in the tnsnames.ora file:

C:\> sqlplus system/manager@orcl2
C:\> sqlplus system/manager@orcl1
C:\> sqlplus system/manager@orcl
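
For reference, the clustered entry in the generated tnsnames.ora typically has the following shape. The VIP host names and service name below are the ones used throughout this guide; your generated file may differ slightly, so treat this as a sketch rather than a drop-in replacement:

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl.idevelopment.info)
    )
  )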

 


27. Create / Alter Tablespaces

When creating the clustered database, we left all tablespaces set to their default size. If you are using a large drive for the shared storage, you may want to make a sizable testing database.

Below are several optional SQL commands for modifying and creating all tablespaces for the test database. Please keep in mind that the database file names (OMF files) used in this example may differ from what the Oracle Database Configuration Assistant (DBCA) creates for your environment. When working through this section, substitute the data file names that were created in your environment where appropriate. The following query can be used to determine the file names for your environment:

SQL> select tablespace_name, file_name
  2  from dba_data_files
  3  union
  4  select tablespace_name, file_name
  5  from dba_temp_files;

TABLESPACE_NAME     FILE_NAME
--------------- --------------------------------------------------
EXAMPLE         +ORCL_DATA1/orcl/datafile/example.263.635876607
SYSAUX          +ORCL_DATA1/orcl/datafile/sysaux.260.635876529
SYSTEM          +ORCL_DATA1/orcl/datafile/system.259.635876483
TEMP            +ORCL_DATA1/orcl/tempfile/temp.262.635876577
UNDOTBS1        +ORCL_DATA1/orcl/datafile/undotbs1.261.635876549
UNDOTBS2        +ORCL_DATA1/orcl/datafile/undotbs2.264.635876629
USERS           +ORCL_DATA1/orcl/datafile/users.265.635876657

7 rows selected.

$ sqlplus "/ as sysdba"

SQL> create user scott identified by tiger default tablespace users;
SQL> grant dba, resource, connect to scott;

SQL> alter database datafile '+ORCL_DATA1/orcl/datafile/users.265.635876657' resize 1024m;
SQL> alter tablespace users add datafile '+ORCL_DATA1' size 1024m autoextend off;

SQL> create tablespace indx datafile '+ORCL_DATA1' size 1024m
  2  autoextend on next 50m maxsize unlimited
  3  extent management local autoallocate
  4  segment space management auto;

SQL> alter database datafile '+ORCL_DATA1/orcl/datafile/system.259.635876483' resize 800m;

SQL> alter database datafile '+ORCL_DATA1/orcl/datafile/sysaux.260.635876529' resize 1024m;

SQL> alter database datafile '+ORCL_DATA1/orcl/datafile/undotbs1.261.635876549' resize 1024m;

SQL> alter database datafile '+ORCL_DATA1/orcl/datafile/undotbs2.264.635876629' resize 1024m;

SQL> alter database tempfile '+ORCL_DATA1/orcl/tempfile/temp.262.635876577' resize 1024m;

Here is a snapshot of the tablespaces I have defined for my test database environment:

Status    Tablespace Name TS Type      Ext. Mgt.  Seg. Mgt.    Tablespace Size    Used (in bytes) Pct. Used
--------- --------------- ------------ ---------- --------- ------------------ ------------------ ---------
ONLINE    SYSAUX          PERMANENT    LOCAL      AUTO           1,073,741,824        660,275,200        61
ONLINE    UNDOTBS1        UNDO         LOCAL      MANUAL         1,073,741,824         20,250,624         2
ONLINE    USERS           PERMANENT    LOCAL      AUTO           2,147,483,648            131,072         0
ONLINE    SYSTEM          PERMANENT    LOCAL      MANUAL           838,860,800        732,430,336        87
ONLINE    EXAMPLE         PERMANENT    LOCAL      AUTO             157,286,400         83,886,080        53
ONLINE    INDX            PERMANENT    LOCAL      AUTO           1,073,741,824             65,536         0
ONLINE    UNDOTBS2        UNDO         LOCAL      MANUAL         1,073,741,824         22,544,384         2
ONLINE    TEMP            TEMPORARY    LOCAL      MANUAL         1,073,741,824         39,845,888         4
                                                            ------------------ ------------------ ---------
avg                                                                                                      26
sum                                                              8,512,339,968      1,559,429,120

8 rows selected.

 


28. Verify the RAC Cluster & Database Configuration

The following RAC verification checks should be performed on both Oracle RAC nodes in the cluster! For this article, however, I will only be performing checks from linux1.

This section provides several srvctl commands and SQL queries you can use to validate your Oracle RAC configuration.

There are five node-level tasks defined for SRVCTL:

  • Adding and deleting node-level applications
  • Setting and unsetting the environment for node-level applications
  • Administering node applications
  • Administering ASM instances
  • Starting and stopping a group of programs that includes virtual IP addresses, listeners, Oracle Notification Services, and Oracle Enterprise Manager agents (for maintenance purposes).

Status of all instances and services

$ srvctl status database -d orcl
Instance orcl1 is running on node linux1
Instance orcl2 is running on node linux2

Status of a single instance

$ srvctl status instance -d orcl -i orcl2
Instance orcl2 is running on node linux2

Status of node applications on a particular node

$ srvctl status nodeapps -n linux1
VIP is running on node: linux1
GSD is running on node: linux1
Listener is running on node: linux1
ONS daemon is running on node: linux1

Status of an ASM instance

$ srvctl status asm -n linux1
ASM instance +ASM1 is running on node linux1.

List all configured databases

$ srvctl config database
orcl

Display configuration for our RAC database

$ srvctl config database -d orcl
linux1 orcl1 /u01/app/oracle/product/11.1.0/db_1
linux2 orcl2 /u01/app/oracle/product/11.1.0/db_1

Display the configuration for node applications - (VIP, GSD, ONS, Listener)

$ srvctl config nodeapps -n linux1 -a -g -s -l
VIP exists.: /linux1-vip/192.168.1.200/255.255.255.0/eth0
GSD exists.
ONS daemon exists.
Listener exists.

Display the configuration for the ASM instance(s)

$ srvctl config asm -n linux1
+ASM1 /u01/app/oracle/product/11.1.0/db_1

All running instances in the cluster

SELECT
    inst_id
  , instance_number inst_no
  , instance_name inst_name
  , parallel
  , status
  , database_status db_status
  , active_state state
  , host_name host
FROM gv$instance
ORDER BY inst_id;

 INST_ID  INST_NO INST_NAME  PAR STATUS  DB_STATUS    STATE     HOST
-------- -------- ---------- --- ------- ------------ --------- -------
       1        1 orcl1      YES OPEN    ACTIVE       NORMAL    linux1
       2        2 orcl2      YES OPEN    ACTIVE       NORMAL    linux2

All database files and the disk group they reside in

select name from v$datafile
union
select member from v$logfile
union
select name from v$controlfile
union
select name from v$tempfile;

NAME
-------------------------------------------
+FLASH_RECOVERY_AREA/orcl/controlfile/current.256.635876425
+FLASH_RECOVERY_AREA/orcl/onlinelog/group_1.257.635876443
+FLASH_RECOVERY_AREA/orcl/onlinelog/group_2.258.635876469
+FLASH_RECOVERY_AREA/orcl/onlinelog/group_3.259.635885447
+FLASH_RECOVERY_AREA/orcl/onlinelog/group_4.260.635885473
+ORCL_DATA1/orcl/controlfile/current.256.635876421
+ORCL_DATA1/orcl/datafile/example.263.635876607
+ORCL_DATA1/orcl/datafile/indx.270.635995051
+ORCL_DATA1/orcl/datafile/sysaux.260.635876529
+ORCL_DATA1/orcl/datafile/system.259.635876483
+ORCL_DATA1/orcl/datafile/undotbs1.261.635876549
+ORCL_DATA1/orcl/datafile/undotbs2.264.635876629
+ORCL_DATA1/orcl/datafile/users.265.635876657
+ORCL_DATA1/orcl/datafile/users.269.635994857
+ORCL_DATA1/orcl/onlinelog/group_1.257.635876431
+ORCL_DATA1/orcl/onlinelog/group_2.258.635876457
+ORCL_DATA1/orcl/onlinelog/group_3.266.635885435
+ORCL_DATA1/orcl/onlinelog/group_4.267.635885461
+ORCL_DATA1/orcl/tempfile/temp.262.635876577

19 rows selected.

All ASM disks that belong to the 'ORCL_DATA1' disk group

SELECT path
FROM   v$asm_disk
WHERE  group_number IN (select group_number
                        from v$asm_diskgroup
                        where name = 'ORCL_DATA1');

PATH
----------------------------------
ORCL:VOL1
ORCL:VOL2

 


29. Starting / Stopping the Cluster

At this point, everything has been installed and configured for Oracle RAC 11g. We have all of the required software installed and configured plus we have a fully functional clustered database.

After all the work done up to this point, you may well ask, "OK, so how do I start and stop services?" If you have followed the instructions in this guide, all services—including Oracle Clusterware, all Oracle instances, Enterprise Manager Database Console, and so on—should start automatically on each reboot of the Linux nodes.

There are times, however, when you might want to shut down a node and manually start it back up. Or you may find that Enterprise Manager is not running and need to start it. This section provides the commands responsible for starting and stopping the cluster environment.

Ensure that you are logged in as the oracle UNIX user. We will run all commands in this section from linux1:

# su - oracle
$ hostname
linux1

Stopping the Oracle RAC 11g Environment

The first step is to stop the Oracle instance. Once the instance (and related services) is down, bring down the ASM instance. Finally, shut down the node applications (Virtual IP, GSD, TNS Listener, and ONS).

$ export ORACLE_SID=orcl1
$ emctl stop dbconsole
$ srvctl stop instance -d orcl -i orcl1
$ srvctl stop asm -n linux1
$ srvctl stop nodeapps -n linux1

Starting the Oracle RAC 11g Environment

The first step is to start the node applications (Virtual IP, GSD, TNS Listener, and ONS). Once the node applications are successfully started, bring up the ASM instance. Finally, bring up the Oracle instance (and its related services) and the Enterprise Manager Database Console.

$ export ORACLE_SID=orcl1
$ srvctl start nodeapps -n linux1
$ srvctl start asm -n linux1
$ srvctl start instance -d orcl -i orcl1
$ emctl start dbconsole
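
To verify that everything came back up on linux1, the same status commands can be used in reverse; a minimal sketch (commands only):

$ srvctl status nodeapps -n linux1           # VIP, GSD, listener, and ONS should be running
$ srvctl status asm -n linux1                # ASM instance +ASM1 should be running
$ srvctl status instance -d orcl -i orcl1    # database instance orcl1 should be running
$ emctl status dbconsole                     # Enterprise Manager Database Console status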

Start/Stop All Instances with SRVCTL

Start/stop all the instances and their enabled services. I have included this step just for fun as a way to bring down all instances!

$ srvctl start database -d orcl
$ srvctl stop database -d orcl
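
To see which instances these cluster-wide commands actually touched, check the database status afterwards. As an aside, srvctl stop database also accepts a shutdown mode through the -o flag; treat the exact option values as something to confirm against the srvctl documentation for your release. A sketch:

$ srvctl status database -d orcl                 # lists the state of orcl1 and orcl2
$ srvctl stop database -d orcl -o immediate      # optionally request an immediate shutdown on all instances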

 


30. Troubleshooting

Confirm the RAC Node Name is Not Listed in Loopback Address

Ensure that the node names (linux1 or linux2) are not included in the loopback address entry in the /etc/hosts file. If the machine name is listed in the loopback address entry as shown below:

127.0.0.1 linux1 localhost.localdomain localhost

it will need to be removed as shown below:

127.0.0.1 localhost.localdomain localhost

If the RAC node name is listed for the loopback address, you will receive the following error during the RAC installation:

ORA-00603: ORACLE server session terminated by fatal error
or
ORA-29702: error occurred in Cluster Group Service operation
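
A quick way to verify this on both nodes before starting the install is to pull the loopback entry out of /etc/hosts and confirm that neither RAC node name appears on that line. A minimal sketch, run from linux1 and relying on the SSH user equivalence configured earlier:

$ grep "^127.0.0.1" /etc/hosts                  # check the local node
$ ssh linux2 'grep "^127.0.0.1" /etc/hosts'     # check the remote node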

Setting the Correct Date and Time on All Cluster Nodes

During the installation of Oracle Clusterware, the Database, and the Examples, the Oracle Universal Installer (OUI) first installs the software to the local node running the installer (i.e. linux1). The software is then copied remotely to all of the remaining nodes in the cluster (i.e. linux2). During the remote copy process, the OUI will execute the UNIX "tar" command on each of the remote nodes to extract the files that were archived and copied over. If the date and time on the node performing the install is greater than that of the node it is copying to, the OUI will throw an error from the "tar" command indicating it is attempting to extract files stamped with a time in the future:
Error while copying directory 
    /u01/app/crs with exclude file list 'null' to nodes 'linux2'.
[PRKC-1002 : All the submitted commands did not execute successfully]
---------------------------------------------
linux2:
   /bin/tar: ./bin/lsnodes: time stamp 2007-10-14 09:21:34 is 735 s in the future
   /bin/tar: ./bin/olsnodes: time stamp 2007-10-14 09:21:34 is 735 s in the future
   ...(more errors on this node)

Please note that although this would seem like a severe error from the OUI, it can safely be disregarded as a warning. The "tar" command DOES actually extract the files; however, when you perform a listing of the files (using ls -l) on the remote node, they will be missing the time field until the time on the server is greater than the timestamp of the file.

Before starting any of the above noted installations, ensure that each member node of the cluster is set as closely as possible to the same date and time. Oracle strongly recommends using the Network Time Protocol feature of most operating systems for this purpose, with both Oracle RAC nodes using the same reference Network Time Protocol server.

Accessing a Network Time Protocol server, however, may not always be an option. In this case, when manually setting the date and time for the nodes in the cluster, ensure that the date and time on the node you are performing the software installations from (linux1) is earlier than on all other nodes in the cluster (linux2). I generally use a 20-second difference, as shown in the following example:

Setting the date and time from linux1:

# date -s "10/09/2007 23:00:00"

Setting the date and time from linux2:

# date -s "10/09/2007 23:00:20"

The two-node RAC configuration described in this article does not make use of a Network Time Protocol server.
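
When setting the clocks by hand like this, it is worth double-checking the skew between the two nodes just before launching the OUI. A minimal sketch using the SSH user equivalence configured earlier (linux2 should report a time slightly ahead of linux1):

$ date ; ssh linux2 date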

Openfiler - Logical Volumes Not Active on Boot

One issue that I have run into several times occurs when using a USB drive connected to the Openfiler server. When the Openfiler server is rebooted, the system is able to recognize the USB drive; however, it is not able to load the logical volumes and writes the following message to /var/log/messages - (also available through dmesg):
iSCSI Enterprise Target Software - version 0.4.14
iotype_init(91) register fileio
iotype_init(91) register blockio
iotype_init(91) register nullio
open_path(120) Can't open /dev/rac1/crs -2
fileio_attach(268) -2
open_path(120) Can't open /dev/rac1/asm1 -2
fileio_attach(268) -2
open_path(120) Can't open /dev/rac1/asm2 -2
fileio_attach(268) -2
open_path(120) Can't open /dev/rac1/asm3 -2
fileio_attach(268) -2
open_path(120) Can't open /dev/rac1/asm4 -2
fileio_attach(268) -2

Please note that I am not suggesting that this only occurs with USB drives connected to the Openfiler server. It may occur with other types of drives; however, I have only seen it with USB drives!

If you do receive this error, you should first check the status of all logical volumes using the lvscan command from the Openfiler server:

# lvscan
inactive          '/dev/rac1/crs' [2.00 GB] inherit
inactive          '/dev/rac1/asm1' [115.94 GB] inherit
inactive          '/dev/rac1/asm2' [115.94 GB] inherit
inactive          '/dev/rac1/asm3' [115.94 GB] inherit
inactive          '/dev/rac1/asm4' [115.94 GB] inherit

Notice that the status for each of the logical volumes is set to inactive - (the status for each logical volume on a working system would be set to ACTIVE).

I currently know of two methods to get Openfiler to automatically load the logical volumes on reboot, both of which are described below.

Method 1

First, shut down both of the Oracle RAC nodes in the cluster - (linux1 and linux2). Then, from the Openfiler server, manually set each of the logical volumes to ACTIVE. Note that this has to be repeated after every reboot of the Openfiler server:

# lvchange -a y /dev/rac1/crs
# lvchange -a y /dev/rac1/asm1
# lvchange -a y /dev/rac1/asm2
# lvchange -a y /dev/rac1/asm3
# lvchange -a y /dev/rac1/asm4

Another method to set the status to active for all logical volumes is to use the Volume Group change command as follows:

# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "rac1" using metadata type lvm2

# vgchange -ay
  5 logical volume(s) in volume group "rac1" now active

After setting each of the logical volumes to active, use the lvscan command again to verify the status:

# lvscan
  ACTIVE            '/dev/rac1/crs' [2.00 GB] inherit
  ACTIVE            '/dev/rac1/asm1' [115.94 GB] inherit
  ACTIVE            '/dev/rac1/asm2' [115.94 GB] inherit
  ACTIVE            '/dev/rac1/asm3' [115.94 GB] inherit
  ACTIVE            '/dev/rac1/asm4' [115.94 GB] inherit

As a final test, reboot the Openfiler server to ensure each of the logical volumes will be set to ACTIVE after the boot process. After you have verified that each of the logical volumes will be active on boot, check that the iSCSI target service is running:

# service iscsi-target status
ietd (pid 2668) is running...
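
If the iSCSI Enterprise Target software is in use (as in the log messages shown earlier), the exported volumes can also be checked through its /proc interface. I believe the paths below are correct for IET, but treat them as an assumption to verify on your Openfiler release:

# cat /proc/net/iet/volume     # each LUN should now list its backing /dev/rac1/* path
# cat /proc/net/iet/session    # active initiator sessions, once the RAC nodes are restarted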

Finally, restart each of the Oracle RAC nodes in the cluster - (linux1 and linux2).

Method 2

This method was kindly provided by Martin (the author of the MJONES customization shown below). His workaround amends the /etc/rc.sysinit script to wait for the USB disk (/dev/sda in my example) to be detected. After making the changes to the /etc/rc.sysinit script (described below), verify the external drives are powered on and then reboot the Openfiler server.

The following is a small portion of the /etc/rc.sysinit script on the Openfiler server with the changes (between the MJONES comment markers) proposed by Martin:

..............................................................
# LVM2 initialization, take 2
        if [ -c /dev/mapper/control ]; then
                if [ -x /sbin/multipath.static ]; then
                        modprobe dm-multipath >/dev/null 2>&1
                        /sbin/multipath.static -v 0
                        if [ -x /sbin/kpartx ]; then
                                /sbin/dmsetup ls --target multipath --exec "/sbin/kpartx -a"
                        fi
                fi
 

                if [ -x /sbin/dmraid ]; then
                        modprobe dm-mirror > /dev/null 2>&1
                        /sbin/dmraid -i -a y
                fi

#-----
#-----  MJONES - Customisation Start
#-----

       # Check if /dev/sda is ready
         while [ ! -e /dev/sda ]
         do
             echo "Device /dev/sda for first USB Drive is not yet ready."
             echo "Waiting..."
             sleep 5
         done
         echo "INFO - Device /dev/sda for first USB Drive is ready."

#-----
#-----  MJONES - Customisation END
#-----
                if [ -x /sbin/lvm.static ]; then
                        if /sbin/lvm.static vgscan > /dev/null 2>&1 ; then
                                action $"Setting up Logical Volume
Management:" /sbin/lvm.static vgscan --mknodes --ignorelockingfailure &&
/sbin/lvm.static vgchange -a y --ignorelockingfailure
                        fi
                fi
        fi
 

# Clean up SELinux labels
if [ -n "$SELINUX" ]; then
   for file in /etc/mtab /etc/ld.so.cache ; do
      [ -r $file ] && restorecon $file  >/dev/null 2>&1
   done
fi
..............................................................
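
One caveat with the wait loop above is that it blocks the boot process indefinitely if the USB drive never appears. A hedged variant of the same idea (my own sketch, not part of Martin's original change) gives up after roughly two minutes and lets the boot continue:

        # Hypothetical variant: wait up to ~120 seconds for /dev/sda before continuing the boot
        WAITED=0
        while [ ! -e /dev/sda ] && [ $WAITED -lt 120 ]; do
            echo "Device /dev/sda for first USB Drive is not yet ready. Waiting..."
            sleep 5
            WAITED=$((WAITED + 5))
        done
        if [ -e /dev/sda ]; then
            echo "INFO - Device /dev/sda for first USB Drive is ready."
        else
            echo "WARNING - /dev/sda never appeared; logical volumes may need to be activated manually."
        fi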

Finally, restart each of the Oracle RAC nodes in the cluster - (linux1 and linux2).

OCFS2 - o2cb_ctl: Unable to access cluster service while creating node

While configuring the nodes for OCFS2 using ocfs2console, it is possible to run into the error:

o2cb_ctl: Unable to access cluster service while creating node

This error does not show up when you start up ocfs2console for the first time. The message comes up when there is a problem with the cluster configuration, or if you did not save the cluster configuration initially while setting it up using ocfs2console. This is a bug!

The work-around is to exit from the ocfs2console, unload the o2cb module and remove the ocfs2 cluster configuration file /etc/ocfs2/cluster.conf. I also like to remove the /config directory. After removing the ocfs2 cluster configuration file, restart the ocfs2console program.

For example:

# /etc/init.d/o2cb offline ocfs2
# /etc/init.d/o2cb unload
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK

# rm -f /etc/ocfs2/cluster.conf
# rm -rf /config

# ocfs2console &

This time, it will add the nodes!
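
Before re-running ocfs2console, it can be reassuring to confirm that the o2cb stack really is unloaded and that the old configuration file is gone; a minimal sketch:

# /etc/init.d/o2cb status              # the driver and cluster should be reported as unloaded / offline
# ls -l /etc/ocfs2/cluster.conf        # should fail with "No such file or directory"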

OCFS2 - Adjusting the O2CB Heartbeat Threshold

With previous versions of this article (using FireWire as opposed to iSCSI for the shared storage), I was able to install and configure OCFS2, format the new volume, and finally install Oracle Clusterware (with its two required shared files, the voting disk and OCR file, located on the new OCFS2 volume). While I was able to install Oracle Clusterware and see the shared drive using FireWire, I was receiving many lock-ups and hangs after the Clusterware software had been running on both nodes for about 15 minutes. Which node would hang always varied (either linux1 or linux2 in my example), and it did not matter whether the I/O load was high or nonexistent when the crash (hang) occurred.

After looking through the trace files for OCFS2, it was apparent that access to the voting disk was too slow (exceeding the O2CB heartbeat threshold) and causing the Oracle Clusterware software (and the node) to crash. On the console would be a message similar to the following:

...
Index 0: took 0 ms to do submit_bio for read
Index 1: took 3 ms to do waiting for read completion
Index 2: took 0 ms to do bio alloc write
Index 3: took 0 ms to do bio add page write
Index 4: took 0 ms to do submit_bio for write
Index 5: took 0 ms to do checking slots
Index 6: took 4 ms to do waiting for write completion
Index 7: took 1993 ms to do msleep
Index 8: took 0 ms to do allocating bios for read
Index 9: took 0 ms to do bio alloc read
Index 10: took 0 ms to do bio add page read
Index 11: took 0 ms to do submit_bio for read
Index 12: took 10006 ms to do waiting for read completion
(13,3):o2hb_stop_all_regions:1888 ERROR: stopping heartbeat on all active regions.
Kernel panic - not syncing: ocfs2 is very sorry to be fencing this system by panicing

The solution I used was to increase the O2CB heartbeat threshold from its default value of 31 (which used to be 7 in previous versions of OCFS2) to a value of 61. Some setups may require an even higher setting. This is a configurable parameter that is used to compute the time it takes for a node to "fence" itself. During the installation and configuration of OCFS2, we adjusted this value in an earlier section. If you encounter a kernel panic from OCFS2 and need to increase the heartbeat threshold, use the same procedure described there.

The following describes how to manually adjust the O2CB heartbeat threshold.

First, let's see how to determine what the O2CB heartbeat threshold is currently set to. This can be done by querying the /proc file system as follows:

# cat /proc/fs/ocfs2_nodemanager/hb_dead_threshold
31

We see that the value is 31, but what does this value represent? Well, it is used in the formula below to determine the fence time (in seconds):

[fence time in seconds] = (O2CB_HEARTBEAT_THRESHOLD - 1) * 2

So, with an O2CB heartbeat threshold of 31, we would have a fence time of:

(31 - 1) * 2 = 60 seconds

If we want a larger threshold (say 120 seconds), we would need to adjust O2CB_HEARTBEAT_THRESHOLD to 61 as shown below:

(61 - 1) * 2 = 120 seconds
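
Rearranging the formula, the threshold needed for a desired fence time is (fence time / 2) + 1. A throwaway shell calculation (my own sketch) for a 120-second fence time:

$ FENCE=120
$ echo $(( FENCE / 2 + 1 ))
61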

Let's see now how to manually increase the O2CB heartbeat threshold from 31 to 61. This task will need to be performed on all Oracle RAC nodes in the cluster. We first need to modify the file /etc/sysconfig/o2cb and set O2CB_HEARTBEAT_THRESHOLD to 61:

#
# This is a configuration file for automatic startup of the O2CB
# driver.  It is generated by running /etc/init.d/o2cb configure.
# Please use that method to modify this file
#

# O2CB_ENABELED: 'true' means to load the driver on boot.
O2CB_ENABLED=true

# O2CB_BOOTCLUSTER: If not empty, the name of a cluster to start.
O2CB_BOOTCLUSTER=ocfs2

# O2CB_HEARTBEAT_THRESHOLD: Iterations before a node is considered dead.
O2CB_HEARTBEAT_THRESHOLD=61

# O2CB_IDLE_TIMEOUT_MS: Time in ms before a network connection is considered dead.
O2CB_IDLE_TIMEOUT_MS=30000

# O2CB_KEEPALIVE_DELAY_MS: Max time in ms before a keepalive packet is sent
O2CB_KEEPALIVE_DELAY_MS=2000

# O2CB_RECONNECT_DELAY_MS: Min time in ms between connection attempts
O2CB_RECONNECT_DELAY_MS=2000

After modifying the file /etc/sysconfig/o2cb, we need to alter the o2cb configuration. Again, this should be performed on all Oracle RAC nodes in the cluster.

# umount /u02/oradata/orcl/
# /etc/init.d/o2cb offline ocfs2
# /etc/init.d/o2cb unload
# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [n]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2
Specify heartbeat dead threshold (>=7) [31]: 61
Specify network idle timeout in ms (>=5000) [30000]: 30000
Specify network keepalive delay in ms (>=1000) [2000]: 2000
Specify network reconnect delay in ms (>=2000) [2000]: 2000
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK

We can now check again to make sure the settings took effect for the o2cb cluster stack:

# cat /proc/fs/ocfs2_nodemanager/hb_dead_threshold
61
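
Since the change has to be made on every Oracle RAC node, it is worth confirming the value on linux2 as well. Assuming the same change was applied there (and that you have SSH access as a suitable user), it should also report 61; otherwise, simply run the cat command locally on linux2:

# ssh linux2 cat /proc/fs/ocfs2_nodemanager/hb_dead_threshold
61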

It is important to note that the value of 61 I used for the O2CB heartbeat threshold may not work for all configurations. In some cases, the O2CB heartbeat threshold value may have to be increased to as high as 601 in order to prevent OCFS2 from panicking the kernel.

 


31. Conclusion

Oracle 11g RAC allows the DBA to configure a database solution with superior fault tolerance and load balancing. Those DBAs, however, who want to become more familiar with the features and benefits of Oracle 11g RAC will find that configuring even a small RAC cluster costs in the range of US$15,000 to US$20,000.

This article has hopefully given you an economical solution for setting up and configuring an inexpensive Oracle 11g Release 1 RAC cluster using Oracle Enterprise Linux and iSCSI technology. The RAC solution presented in this article can be put together for around US$2,600 and provides the DBA with a fully functional Oracle 11g Release 1 RAC cluster. While the hardware used for this article should be stable enough for educational purposes, it should never be considered for a production environment.



32. Acknowledgements

An article of this magnitude and complexity is generally not the work of one person alone. Although I was able to author and successfully demonstrate the validity of the components that make up this configuration, there are several other individuals that deserve credit in making this article a success.

First, I would like to thank Bane from the Server BDE Team at Oracle. Bane not only introduced me to Openfiler, but shared with me his experience and knowledge of the product and how to best utilize it for Oracle RAC. His research and hard work made the task of configuring Openfiler seamless. Bane was also involved with hardware recommendations and testing.

A special thanks to K Gopalakrishnan for his assistance with this article; much of the content regarding the history of Oracle RAC can be found in his very popular book on Oracle RAC. That book comes highly recommended for both DBAs and developers who want to successfully implement Oracle RAC and fully understand how many of the advanced services like Cache Fusion and the Global Resource Directory operate.

I would next like to thank Oracle ACE Werner Puschitz for his outstanding work on "Installing Oracle Database 10g with Real Application Cluster (RAC) on Red Hat Enterprise Linux Advanced Server 3". This article, along with several others of his, provided information on Oracle RAC that could not be found in any other Oracle documentation. Without his hard work and research into issues like configuring OCFS2 and ASMLib, this article may never have come to fruition. If you are interested in technical articles on Linux internals and in-depth Oracle configurations written by Werner Puschitz, please visit his website.

Also, thanks for the comments and suggestions I received on using Oracle's Cluster Verification Utility (CVU).

Lastly, I would like to express my appreciation to the vendors who generously supplied the hardware for this article.
