
Install Oracle Clusterware 11g


  Perform the following installation procedures from only one of the Oracle RAC nodes in the cluster (linux1)! The Oracle Clusterware software will be installed to both Oracle RAC nodes in the cluster by the Oracle Universal Installer.


Overview

We are ready to install the cluster part of the environment - the Oracle Clusterware. In a previous section, we downloaded and extracted the install files for Oracle Clusterware to linux1 in the directory /home/oracle/orainstall/clusterware. This is the only node we need to perform the install from. During the installation of Oracle Clusterware, you will be asked which nodes to configure in the RAC cluster. Once the actual installation starts, it will copy the required software to all nodes using the remote access we configured in the section "Configure RAC Nodes for Remote Access using SSH".

So, what exactly is the Oracle Clusterware responsible for? It contains all of the cluster and database configuration metadata along with several system management features for RAC. It allows the DBA to register and invite an Oracle instance (or instances) to the cluster. During normal operation, Oracle Clusterware will send messages (via a special ping operation) to all nodes configured in the cluster - often called the heartbeat. If the heartbeat fails for any of the nodes, it checks with the Oracle Clusterware configuration files (on the shared disk) to distinguish between a real node failure and a network failure.

After installing Oracle Clusterware, the Oracle Universal Installer (OUI) used to install the Oracle Database software (next section) will automatically recognize these nodes. Like the Oracle Clusterware install we will be performing in this section, the Oracle Database software install only needs to be run from one node. The OUI will copy the software packages to all nodes configured in the RAC cluster.


Oracle Clusterware Shared Files

The two shared files (actually file groups) used by Oracle Clusterware will be stored on the Oracle Cluster File System, Release 2 (OCFS2) we created earlier. The two shared Oracle Clusterware file groups are:

  • Oracle Cluster Registry (OCR)

    • File 1 : /u02/oradata/orcl/OCRFile
    • File 2 : /u02/oradata/orcl/OCRFile_mirror
    • Size : (2 * 250MB) = 500MB

  • CRS Voting Disk

    • File 1 : /u02/oradata/orcl/CSSFile
    • File 2 : /u02/oradata/orcl/CSSFile_mirror1
    • File 3 : /u02/oradata/orcl/CSSFile_mirror2
    • Size : (3 * 20MB) = 60MB

  It is not possible to use Automatic Storage Management (ASM) for the two shared Oracle Clusterware files: Oracle Cluster Registry (OCR) or the CRS Voting Disk files. The problem is that these files need to be in place and accessible BEFORE any Oracle instances can be started. For ASM to be available, the ASM instance would need to be running first.

  The two shared files could be stored on the OCFS2, shared RAW devices, or another vendor's clustered file system.
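
  Once Oracle Clusterware has been installed (later in this section), the integrity and location of both shared file groups can be verified from either node. A minimal sketch, assuming $ORA_CRS_HOME points to the Clusterware home used throughout this article:

$ # report the OCR locations and perform an integrity check
$ $ORA_CRS_HOME/bin/ocrcheck

$ # list the configured voting disks
$ $ORA_CRS_HOME/bin/crsctl query css votedisk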


Verifying Terminal Shell Environment

Before starting the Oracle Universal Installer, you should first verify that you are logged onto the server you will be running the installer from (i.e. linux1), then run the xhost command as root from the console to allow X Server connections. Next, login as the oracle user account. If you are using a remote client to connect to the node performing the installation (SSH / Telnet to linux1 from a workstation configured with an X Server), you will need to set the DISPLAY variable to point to your local workstation. Finally, verify remote access / user equivalence to all nodes in the cluster:

Verify Server and Enable X Server Access

# hostname
linux1

# xhost +
access control disabled, clients can connect from any host

Login as the oracle User Account and Set DISPLAY (if necessary)

# su - oracle

$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=:0.0
$ export DISPLAY

Verify Remote Access / User Equivalence

Verify you are able to run the Secure Shell commands (ssh or scp) on the Linux server you will be running the Oracle Universal Installer from against all other Linux servers in the cluster without being prompted for a password.

When using the secure shell method, user equivalence will need to be enabled on any new terminal shell session before attempting to run the OUI. To enable user equivalence for the current terminal shell session, perform the following steps remembering to enter the pass phrase for the RSA key you generated when prompted:

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Enter passphrase for /home/oracle/.ssh/id_rsa: xxxxx
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)

$ ssh linux1 "date;hostname"
Sat Dec 15 23:50:13 EST 2007
linux1

$ ssh linux2 "date;hostname"
Sat Dec 15 23:50:45 EST 2007
linux2


Install the Oracle Clusterware software

The following tasks are used to install the Oracle Clusterware software:
$ cd ~oracle
$ /home/oracle/orainstall/clusterware/runInstaller

Screen Name - Response

Welcome Screen - Click Next.

Specify Inventory directory and credentials - Accept the default values:
   Inventory directory: /u01/app/oraInventory
   Operating System group name: oinstall

Specify Home Details - Set the Name and Path for the ORACLE_HOME (actually the $ORA_CRS_HOME that I will be using in this article) as follows:
   Name: OraCrs11g_home
   Path: /u01/app/crs

Product-Specific Prerequisite Checks - The installer will run through a series of checks to determine if the node meets the minimum requirements for installing and configuring the Oracle Clusterware software. If any of the checks fail, you will need to manually verify the check that failed by clicking on the checkbox. For my installation, all checks passed with no problems.

Click Next to continue.

Specify Cluster Configuration - Cluster Name: linux_cluster

   Public Node Name    Private Node Name    Virtual Node Name
   linux1              linux1-priv          linux1-vip
   linux2              linux2-priv          linux2-vip

Specify Network Interface Usage

   Interface Name    Subnet         Interface Type
   eth0              192.168.1.0    Public
   eth1              192.168.2.0    Private
Specify OCR Location - Starting with Oracle Database 10g Release 2 (10.2) with RAC, Oracle Clusterware provides for the creation of a mirrored OCR file, enhancing cluster reliability. By enabling a multiple OCR file configuration, the redundant OCR files allow you to configure a RAC database with multiple OCR files on independent shared physical disks. For the purpose of this example, I chose to mirror the OCR file by keeping the default option of "Normal Redundancy":

Specify OCR Location: /u02/oradata/orcl/OCRFile
Specify OCR Mirror Location: /u02/oradata/orcl/OCRFile_mirror

Specify Voting Disk Location - Starting with Oracle Database 10g Release 2 (10.2) with RAC, CSS has been modified to allow you to configure CSS with multiple voting disks. In Release 1 (10.1), you could configure only one voting disk. By enabling a multiple voting disk configuration, the redundant voting disks allow you to configure a RAC database with multiple voting disks on independent shared physical disks. Note that to take advantage of the benefits of multiple voting disks, you must configure at least three voting disks. For the purpose of this example, I chose to mirror the voting disk by keeping the default option of "Normal Redundancy":

Voting Disk Location: /u02/oradata/orcl/CSSFile
Additional Voting Disk 1 Location: /u02/oradata/orcl/CSSFile_mirror1
Additional Voting Disk 2 Location: /u02/oradata/orcl/CSSFile_mirror2

Summary - Click Install to start the installation!

Execute Configuration Scripts - After the installation has completed, you will be prompted to run the orainstRoot.sh and root.sh scripts. Open a new console window on both Oracle RAC nodes in the cluster (starting with the node you are performing the install from) as the "root" user account.

Navigate to the /u01/app/oraInventory directory and run orainstRoot.sh ON BOTH NODES in the RAC cluster one at a time starting with the node you are performing the install from.

NOTE: After executing the orainstRoot.sh on both nodes, verify the permissions of the file "/etc/oraInst.loc" are 644 (-rw-r--r--) and owned by root. Problems can occur during the installation of Oracle if the oracle user account does not have read permissions to this file - (the location of the oraInventory directory cannot be determined). For example, during the Oracle Clusterware post-installation process (while running the Oracle Clusterware Verification Utility), the following error will occur: "CRS is not installed on any of the nodes." If the permissions to /etc/oraInst.loc are not set correctly, it is possible you didn't run orainstRoot.sh on both nodes before running root.sh. Also, the umask setting may be off - it should be 0022. Run the following on both nodes in the RAC cluster to correct this problem:

# chmod 644 /etc/oraInst.loc
# ls -l /etc/oraInst.loc
-rw-r--r-- 1 root root 56 Dec 16 00:05 /etc/oraInst.loc


Within the same new console window on each node in the RAC cluster (starting with the node you are performing the install from), stay logged in as the "root" user account.

Navigate to the /u01/app/crs directory and locate the root.sh file on both of the Oracle RAC nodes in the cluster. Run the root.sh file ON BOTH NODES in the RAC cluster ONE AT A TIME, starting with the node you are performing the install from.

If the Oracle Clusterware home directory is a subdirectory of the ORACLE_BASE directory (which it should never be!), you will receive several warnings regarding permissions while running the root.sh script on both nodes. Any warnings regarding directories not being owned by root can be safely ignored.

The root.sh script can take a while to run. When running root.sh on the last node, you will receive output similar to the following, signifying a successful install:

...
Cluster Synchronization Services is active on these nodes.
        linux1
        linux2
Cluster Synchronization Services is active on all the nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps

Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...


Done.

Go back to the OUI and acknowledge the "Execute Configuration scripts" dialog window.

  During the "Configuration Assistants" dialog, the final tool, "Oracle Cluster Verification Utility" will fail on the CentOS platform if you did not install redhat-release stub package.

This error is documented in the section "Pre-Installation Tasks for Oracle Clusterware 11g" and can be safely ignored. Simply acknowledge the error dialog box by clicking the [OK] button and then click [Next] on the "Configuration Assistants" screen. Acknowledge the final error dialog box which indicates one or more "Recommended" configuration assistants have not completed successfully.

End of installation - At the end of the installation, exit from the OUI.


Verify Oracle Clusterware Installation

After the installation of Oracle Clusterware, we can run through several tests to verify the install was successful. Run the following commands on both nodes in the RAC cluster.

Check Cluster Nodes

$ $ORA_CRS_HOME/bin/olsnodes -n
linux1  1
linux2  2
Confirm Oracle Clusterware Function
$ $ORA_CRS_HOME/bin/crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.linux1.gsd application    0/5    0/0    ONLINE    ONLINE    linux1
ora.linux1.ons application    0/3    0/0    ONLINE    ONLINE    linux1
ora.linux1.vip application    0/0    0/0    ONLINE    ONLINE    linux1
ora.linux2.gsd application    0/5    0/0    ONLINE    ONLINE    linux2
ora.linux2.ons application    0/3    0/0    ONLINE    ONLINE    linux2
ora.linux2.vip application    0/0    0/0    ONLINE    ONLINE    linux2
Check CRS Status
$ $ORA_CRS_HOME/bin/crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
Check Oracle Clusterware Auto-Start Scripts
$ ls -l /etc/init.d/init.*
-rwxr-xr-x 1 root root  2236 Dec 16 00:12 /etc/init.d/init.crs
-rwxr-xr-x 1 root root  5290 Dec 16 00:12 /etc/init.d/init.crsd
-rwxr-xr-x 1 root root 49416 Dec 16 00:12 /etc/init.d/init.cssd
-rwxr-xr-x 1 root root  3859 Dec 16 00:12 /etc/init.d/init.evmd
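
Check the Clusterware Installation with the CVU (Optional)

As an additional check, the Cluster Verification Utility can validate the completed Clusterware installation with its post-crsinst stage. A minimal sketch, run from linux1 as the oracle user and assuming the same staging directory used earlier in this article:

$ cd /home/oracle/orainstall/clusterware
$ ./runcluvfy.sh stage -post crsinst -n linux1,linux2 -verbose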



Install Oracle Database 11g


  Perform the following installation procedures from only one of the Oracle RAC nodes in the cluster (linux1)! The Oracle Database software will be installed to both Oracle RAC nodes in the cluster by the Oracle Universal Installer.


Overview

After successfully installing the Oracle Clusterware software, the next step is to install the Oracle Database 11g Release 1 (11.1.0.6.0) with RAC.

  For the purpose of this example, we will forgo the "Create Database" option when installing the Oracle Database software. We will create the database using the Database Configuration Assistant (DBCA) after all installs have been completed.

Like the Oracle Clusterware install (previous section), the Oracle Database software install only needs to be run from one node. The OUI will copy the software packages to all nodes configured in the RAC cluster.


Verifying Terminal Shell Environment

As discussed in the previous section (Install Oracle 11g Clusterware Software), the terminal shell environment needs to be configured for remote access and user equivalence to all nodes in the cluster before running the Oracle Universal Installer. Note that you can utilize the same terminal shell session used in the previous section, in which case you do not have to take any of the actions described below with regard to setting up remote access and the DISPLAY variable:

Login as the oracle User Account and Set DISPLAY (if necessary)

# su - oracle

$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=:0.0
$ export DISPLAY

Verify Remote Access / User Equivalence

Verify you are able to run the Secure Shell commands (ssh or scp) on the Linux server you will be running the Oracle Universal Installer from against all other Linux servers in the cluster without being prompted for a password.

When using the secure shell method, user equivalence will need to be enabled on any new terminal shell session before attempting to run the OUI. To enable user equivalence for the current terminal shell session, perform the following steps remembering to enter the pass phrase for the RSA key you generated when prompted:

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Enter passphrase for /home/oracle/.ssh/id_rsa: xxxxx
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)

$ ssh linux1 "date;hostname"
Sat Dec 15 23:50:13 EST 2007
linux1

$ ssh linux2 "date;hostname"
Sat Dec 15 23:50:45 EST 2007
linux2


Run the Oracle Cluster Verification Utility

Before installing the Oracle Database Software, we should run the following database pre-installation check using the Cluster Verification Utility (CVU).

  Instructions for configuring CVU can be found in the section "Prerequisites for Using Cluster Verification Utility" discussed earlier in this article.

$ cd /home/oracle/orainstall/clusterware
$ ./runcluvfy.sh stage -pre dbinst -n linux1,linux2 -r 11gR1 -verbose
Review the CVU report. Note that all of the checks performed by CVU should be reported as passed before continuing with the Oracle RAC installation.


Install the Oracle Database 11g Release 1 Software

Install the Oracle Database 11g Release 1 software as follows:
$ cd ~oracle
$ /home/oracle/orainstall/database/runInstaller

Screen Name - Response

Welcome Screen - Click Next.

Select Installation Type - I selected the Enterprise Edition option. If you need other components like Oracle Label Security or if you want to simply customize the environment, select Custom.

Install Location - Set the Oracle Base, Software Location Name and Software Location Path as follows:

   Install Location
   Oracle Base: /u01/app/oracle

   Software Location
   Name: OraDb11g_home1
   Location: /u01/app/oracle/product/11.1.0/db_1

Specify Hardware Cluster Installation Mode - Select the Cluster Installation option then select all nodes available. Click Select All to select all servers: linux1 and linux2.

  If the installation stops here and the status of any of the RAC nodes is "Node not reachable", perform the following checks:

  • Ensure the Oracle Clusterware is running on the node in question.
  • Ensure you are able to reach the node in question from the node you are performing the installation from.
Product-Specific Prerequisite Checks - The installer will run through a series of checks to determine if the nodes meet the minimum requirements for installing and configuring the Oracle Database software. If any of the checks fail, you will need to manually verify the check that failed by clicking on the checkbox.

It is possible to receive an error about the available swap space not meeting its minimum requirements:

Checking available swap space requirements...
Expected result: 3036MB
Actual Result: 1983MB

In most cases, the swap space already configured will be adequate for the install even though the check (as shown above) reports a shortfall, and the warning can be safely ignored. Simply click the check-box for "Checking available swap space requirements..." and click Next to continue. (If you would rather satisfy the check, see the swap sketch at the end of this screen walkthrough.)

Available Product Components (Custom Database Installs Only) - Select the components that you plan on using for your database environment.

Select Configuration Option - Select the option to Install Software Only.

Remember that we will create the clustered database as a separate step using DBCA.

Privileged Operating System Groups - Select the UNIX groups that will be used for each of the Oracle group names as follows:

   Database Administrator (OSDBA) Group: dba
   Database Operator (OSOPER) Group: oper
   ASM Administrator (OSASM) Group: asm

Oracle Configuration Manager Registration - I kept the default option to not enable Oracle Configuration Manager.
Summary - Click Install to start the installation!

Root Script Window (Run root.sh) - After the installation has completed, you will be prompted to run the root.sh script. It is important to keep in mind that the root.sh script will need to be run ON BOTH NODES in the RAC cluster ONE AT A TIME starting with the node you are running the database installation from.

Navigate to the /u01/app/oracle/product/11.1.0/db_1 directory and run root.sh.

After running the root.sh script on both nodes in the cluster, go back to the OUI and acknowledge the "Execute Configuration scripts" dialog window.

End of installation - At the end of the installation, exit from the OUI.
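
If the swap space prerequisite check mentioned above is the only failure and you prefer to satisfy it rather than ignore it, swap can be inspected and temporarily extended with a swap file. A minimal sketch as the root user on the node that is short on swap; the file location (/tmp/swapfile) and 1GB size are only examples:

# # report current swap, then add a temporary 1GB swap file
# swapon -s
# dd if=/dev/zero of=/tmp/swapfile bs=1M count=1024
# mkswap /tmp/swapfile
# swapon /tmp/swapfile

The temporary swap file can be removed with swapoff and rm once the installation completes.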



Install Oracle Database 11g Examples (formerly Companion)


  Perform the following installation procedures from only one of the Oracle RAC nodes in the cluster (linux1)! The Oracle Database 11g Examples will be installed to both Oracle RAC nodes in the cluster by the Oracle Universal Installer.


Overview

After successfully installing the Oracle Database software, the next step is to install the Oracle Database 11g Examples (11.1.0.6.0). Please keep in mind that this is an optional step.


Verifying Terminal Shell Environment

As discussed in the previous section (Install Oracle Database 11g Software), the terminal shell environment needs to be configured for remote access and user equivalence to all nodes in the cluster before running the Oracle Universal Installer. Note that you can utilize the same terminal shell session used in the previous section, in which case you do not have to take any of the actions described below with regard to setting up remote access and the DISPLAY variable:

Login as the oracle User Account and Set DISPLAY (if necessary)

# su - oracle

$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=:0.0
$ export DISPLAY

Verify Remote Access / User Equivalence

Verify you are able to run the Secure Shell commands (ssh or scp) on the Linux server you will be running the Oracle Universal Installer from against all other Linux servers in the cluster without being prompted for a password.

When using the secure shell method, user equivalence will need to be enabled on any new terminal shell session before attempting to run the OUI. To enable user equivalence for the current terminal shell session, perform the following steps remembering to enter the pass phrase for the RSA key you generated when prompted:

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Enter passphrase for /home/oracle/.ssh/id_rsa: xxxxx
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)

$ ssh linux1 "date;hostname"
Sat Dec 15 23:50:13 EST 2007
linux1

$ ssh linux2 "date;hostname"
Sat Dec 15 23:50:45 EST 2007
linux2


Install the Oracle Database 11g Examples Software

Install the Oracle Database 11g Examples as follows:
$ cd ~oracle
$ /home/oracle/orainstall/examples/runInstaller

Screen Name - Response

Welcome Screen - Click Next.

Specify Home Details - Set the destination for the ORACLE_HOME Name and Path to that of the previous Oracle 11g Database software install as follows:
   Name: OraDb11g_home1
   Path: /u01/app/oracle/product/11.1.0/db_1
Specify Hardware Cluster Installation Mode - The Cluster Installation option will be selected along with all of the available nodes in the cluster by default. Stay with these default options and click Next to continue.

  If the installation stops here and the status of any of the RAC nodes is "Node not reachable", perform the following checks:

  • Ensure the Oracle Clusterware is running on the node in question.
  • Ensure you are able to reach the node in question from the node you are performing the installation from.
Product-Specific Prerequisite Checks - The installer will run through a series of checks to determine if the node meets the minimum requirements for installing and configuring the Oracle Examples CD. If any of the checks fail, you will need to manually verify the check that failed by clicking on the checkbox. For my installation, all checks passed with no problems.

Click Next to continue.

Summary - On the Summary screen, click Install to start the installation!

End of installation - At the end of the installation, exit from the OUI.



Create TNS Listener Process


  Perform the following configuration procedures from only one of the Oracle RAC nodes in the cluster (linux1)! The Network Configuration Assistant (NETCA) will set up the TNS listener in a clustered configuration on both Oracle RAC nodes in the cluster.


Overview

The Database Configuration Assistant (DBCA) requires the Oracle TNS Listener process to be configured and running on all nodes in the RAC cluster before it can create the clustered database.

The process of creating the TNS listener only needs to be performed from one of the nodes in the RAC cluster. All changes will be made and replicated to both Oracle RAC nodes in the cluster. On one of the nodes (I will be using linux1), bring up the Network Configuration Assistant (NETCA) and run through the process of creating a new TNS listener process and also configuring the node for local access.


Verifying Terminal Shell Environment

As discussed in the previous section (Install Oracle Database 11g Examples), the terminal shell environment needs to be configured for remote access and user equivalence to all nodes in the cluster before running the Network Configuration Assistant (NETCA). Note that you can utilize the same terminal shell session used in the previous section, in which case you do not have to take any of the actions described below with regard to setting up remote access and the DISPLAY variable:

Login as the oracle User Account and Set DISPLAY (if necessary)

# su - oracle

$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=:0.0
$ export DISPLAY

Verify Remote Access / User Equivalence

Verify you are able to run the Secure Shell commands (ssh or scp) on the Linux server you will be running the Oracle Universal Installer from against all other Linux servers in the cluster without being prompted for a password.

When using the secure shell method, user equivalence will need to be enabled on any new terminal shell session before attempting to run the OUI. To enable user equivalence for the current terminal shell session, perform the following steps remembering to enter the pass phrase for the RSA key you generated when prompted:

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Enter passphrase for /home/oracle/.ssh/id_rsa: xxxxx
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)

$ ssh linux1 "date;hostname"
Sat Dec 15 23:50:13 EST 2007
linux1

$ ssh linux2 "date;hostname"
Sat Dec 15 23:50:45 EST 2007
linux2


Run the Network Configuration Assistant

To start the NETCA, run the following:
$ netca &

The following table walks you through the process of creating a new Oracle listener for our RAC environment.

Screen Name - Response

Select the Type of Oracle Net Services Configuration - Select Cluster configuration.

Select the nodes to configure - Select all of the nodes: linux1 and linux2.

Type of Configuration - Select Listener configuration.

Listener Configuration - Next 6 Screens - The following screens are now like any other normal listener configuration. You can simply accept the default parameters for the next six screens:
   What do you want to do: Add
   Listener name: LISTENER
   Selected protocols: TCP
   Port number: 1521
   Configure another listener: No
   Listener configuration complete! [ Next ]
You will be returned to this Welcome (Type of Configuration) Screen.

Type of Configuration - Select Naming Methods configuration.

Naming Methods Configuration - The following screens are:
   Selected Naming Methods: Local Naming
   Naming Methods configuration complete! [ Next ]
You will be returned to this Welcome (Type of Configuration) Screen.

Type of Configuration - Click Finish to exit the NETCA.


Verify TNS Listener Configuration

The Oracle TNS listener process should now be running on both nodes in the RAC cluster:
$ hostname
linux1

$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_LINUX1

$ $ORA_CRS_HOME/bin/crs_stat ora.linux1.LISTENER_LINUX1.lsnr
NAME=ora.linux1.LISTENER_LINUX1.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on linux1

=====================

$ hostname
linux2

$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_LINUX2

$ $ORA_CRS_HOME/bin/crs_stat ora.linux2.LISTENER_LINUX2.lsnr
NAME=ora.linux2.LISTENER_LINUX2.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on linux2



Create the Oracle Cluster Database


  The database creation process should only be performed from one of the Oracle RAC nodes in the cluster (linux1)!


Overview

We will be using the Oracle Database Configuration Assistant (DBCA) to create the clustered database.

Before executing the Database Configuration Assistant, make sure that $ORACLE_HOME and $PATH are set appropriately for the $ORACLE_BASE/product/11.1.0/db_1 environment.

You should also verify that all services we have installed up to this point (Oracle TNS listener, Oracle Clusterware processes, etc.) are running before attempting to start the clustered database creation process.
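
A minimal sketch of these pre-flight checks from linux1 as the oracle user, assuming the directory layout and $ORA_CRS_HOME convention used throughout this article:

$ # confirm the database home is the one on the PATH
$ echo $ORACLE_HOME
/u01/app/oracle/product/11.1.0/db_1
$ which dbca
/u01/app/oracle/product/11.1.0/db_1/bin/dbca

$ # confirm the Clusterware resources (VIP, GSD, ONS, listeners) are ONLINE on both nodes
$ $ORA_CRS_HOME/bin/crs_stat -t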


Verifying Terminal Shell Environment

As discussed in the previous section (Create TNS Listener Process), the terminal shell environment needs to be configured for remote access and user equivalence to all nodes in the cluster before running the Database Configuration Assistant (DBCA). Note that you can utilize the same terminal shell session used in the previous section, in which case you do not have to take any of the actions described below with regard to setting up remote access and the DISPLAY variable:

Login as the oracle User Account and Set DISPLAY (if necessary)

# su - oracle

$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=:0.0
$ export DISPLAY

Verify Remote Access / User Equivalence

Verify you are able to run the Secure Shell commands (ssh or scp) on the Linux server you will be running the Oracle Universal Installer from against all other Linux servers in the cluster without being prompted for a password.

When using the secure shell method, user equivalence will need to be enabled on any new terminal shell session before attempting to run the OUI. To enable user equivalence for the current terminal shell session, perform the following steps remembering to enter the pass phrase for the RSA key you generated when prompted:

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Enter passphrase for /home/oracle/.ssh/id_rsa: xxxxx
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)

$ ssh linux1 "date;hostname"
Sat Dec 15 23:50:13 EST 2007
linux1

$ ssh linux2 "date;hostname"
Sat Dec 15 23:50:45 EST 2007
linux2


Run the Oracle Cluster Verification Utility

Before creating the Oracle clustered database, we should run the following database configuration check using the Cluster Verification Utility (CVU).

  Instructions for configuring CVU can be found in the section "Prerequisites for Using Cluster Verification Utility" discussed earlier in this article.

$ cd /home/oracle/orainstall/clusterware
$ ./runcluvfy.sh stage -pre dbcfg -n linux1,linux2 -d ${ORACLE_HOME} -verbose
Review the CVU report. Note that all of the checks performed by CVU should be reported as passed before continuing with creating the Oracle clustered database.


Create the Clustered Database

To start the database creation process, run the following:

$ dbca &
Screen Name - Response

Welcome Screen - Select Oracle Real Application Clusters database.

Operations - Select Create a Database.

Node Selection - Click the Select All button to select all servers: linux1 and linux2.

Database Templates - Select Custom Database.

Database Identification - Select:
   Global Database Name: orcl.idevelopment.info
   SID Prefix: orcl

  I used idevelopment.info for the database domain. You may use any domain. Keep in mind that this domain does not have to be a valid DNS domain.

Management Option - Leave the default options here, which is to Configure Enterprise Manager / Configure Database Control for local management.

Database Credentials - I selected to Use the Same Administrative Password for All Accounts. Enter the password (twice) and make sure the password does not start with a digit.

Storage Options - For this article, we will select to use Automatic Storage Management (ASM).

Create ASM Instance - Supply the SYS password to use for the new ASM instance. Also, starting with Oracle 10g Release 2, the ASM instance server parameter file (SPFILE) needs to be on a shared disk. You will need to modify the default entry for "Create server parameter file (SPFILE)" to reside on the OCFS2 partition as follows: /u02/oradata/orcl/dbs/spfile+ASM.ora. All other options can stay at their defaults.

You will then be prompted with a dialog box asking if you want to create and start the ASM instance. Select the OK button to acknowledge this dialog.

The OUI will now create and start the ASM instance on all nodes in the RAC cluster.

ASM Disk Groups - To start, click the Create New button. This will bring up the "Create Disk Group" window with the four volumes we configured earlier using ASMLib.

If the ASM volumes we created earlier in this article do not show up in the "Select Member Disks" window: (ORCL:VOL1, ORCL:VOL2, ORCL:VOL3, and ORCL:VOL4) then click on the "Change Disk Discovery Path" button and input "ORCL:VOL*".

For the first "Disk Group Name", I used the string ORCL_DATA1. Select the first two ASM volumes (ORCL:VOL1 and ORCL:VOL2) in the "Select Member Disks" window. Keep the "Redundancy" setting to Normal.

After verifying all values in this window are correct, click the OK button. This will present the "ASM Disk Group Creation" dialog. When the ASM Disk Group Creation process is finished, you will be returned to the "ASM Disk Groups" windows.

Click the Create New button again. For the second "Disk Group Name", I used the string FLASH_RECOVERY_AREA. Select the last two ASM volumes (ORCL:VOL3 and ORCL:VOL4) in the "Select Member Disks" window. Keep the "Redundancy" setting to Normal.

After verifying all values in this window are correct, click the OK button. This will present the "ASM Disk Group Creation" dialog.

When the ASM Disk Group Creation process is finished, you will be returned to the "ASM Disk Groups" window with two disk groups created and selected. Select only one of the disk groups by using the checkbox next to the newly created Disk Group Name ORCL_DATA1 (ensure that the disk group for FLASH_RECOVERY_AREA is not selected) and click Next to continue.

Database File Locations - I selected to use the default which is Use Oracle-Managed Files:
   Database Area: +ORCL_DATA1

Recovery Configuration - Check the option for Specify Flash Recovery Area.

For the Flash Recovery Area, click the [Browse] button and select the disk group name +FLASH_RECOVERY_AREA.

My disk group has a size of about 118GB. When defining the Flash Recovery Area size, use the entire volume minus 10%: 118GB less 10% is roughly 106GB. I used a Flash Recovery Area Size of 106GB (108544 MB).

Database Content - I left all of the Database Components (and destination tablespaces) set to their default values, although it is perfectly OK to select the Sample Schemas. This option is available since we installed the Oracle Database 11g Examples.

Initialization Parameters - Change any parameters for your environment. I left them all at their default settings.

Security Settings - Oracle highly recommends using the new enhanced security settings for Oracle 11g. For the purpose of this article, I chose the default setting to Keep the enhanced 11g default security settings. Note that these security settings can be modified after the database creation using DBCA.

Automatic Maintenance Tasks - I kept the default to Enable automatic maintenance tasks.

Database Storage - Change any parameters for your environment. I left them all at their default settings.

Creation Options - Keep the default option Create Database selected. I also always select to Generate Database Creation Scripts. Click Finish to start the database creation process. After acknowledging the database creation report and script generation dialog, the database creation will start.

Click OK on the "Summary" screen.

End of Database Creation - At the end of the database creation, exit from the DBCA.

NOTE: When exiting the DBCA, you will not receive any feedback from the dialog window for around 30-60 seconds. After a while, another dialog will come up indicating that it is starting the cluster database and all of its instances. This may take several minutes to complete. When finished, all windows and dialog boxes will disappear.

When the Oracle Database Configuration Assistant has completed, you will have a fully functional Oracle RAC cluster running!

$ $ORA_CRS_HOME/bin/crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    linux1
ora....X1.lsnr application    ONLINE    ONLINE    linux1
ora.linux1.gsd application    ONLINE    ONLINE    linux1
ora.linux1.ons application    ONLINE    ONLINE    linux1
ora.linux1.vip application    ONLINE    ONLINE    linux1
ora....SM2.asm application    ONLINE    ONLINE    linux2
ora....X2.lsnr application    ONLINE    ONLINE    linux2
ora.linux2.gsd application    ONLINE    ONLINE    linux2
ora.linux2.ons application    ONLINE    ONLINE    linux2
ora.linux2.vip application    ONLINE    ONLINE    linux2
ora.orcl.db    application    ONLINE    ONLINE    linux1
ora....l1.inst application    ONLINE    ONLINE    linux1
ora....l2.inst application    ONLINE    ONLINE    linux2
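
In addition to crs_stat, you can confirm that both database instances are open using srvctl and a quick query against gv$instance. A minimal sketch; the output shown is what you would expect with the configuration used in this article:

$ srvctl status database -d orcl
Instance orcl1 is running on node linux1
Instance orcl2 is running on node linux2

$ sqlplus "/ as sysdba"
SQL> select inst_id, instance_name, status from gv$instance;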



Post-Installation Tasks - (Optional)

This chapter describes several optional tasks that can be applied to your new Oracle 11g environment in order to enhance availability as well as database management.


Enabling Archive Logs in a RAC Environment

Whether a single instance or clustered database, Oracle tracks and logs all changes to database blocks in online redolog files. In an Oracle RAC environment, each instance will have its own set of online redolog files known as a thread. Each Oracle instance will use its group of online redologs in a circular manner. Once an online redolog fills, Oracle moves to the next one. If the database is in "Archive Log Mode", Oracle will make a copy of the online redo log before it gets reused. A thread must contain at least two online redologs (or online redolog groups). The same holds true for a single instance configuration. The single instance must contain at least two online redologs (or online redolog groups).

The size of an online redolog file is completely independent of another instance's redolog size. Although in most configurations the size is the same, it may be different depending on the workload and backup / recovery considerations for each node. It is also worth mentioning that each instance has exclusive write access to its own online redolog files. In a correctly configured RAC environment, however, each instance can read another instance's current online redolog file to perform instance recovery if that instance was terminated abnormally. It is therefore a requirement that online redo logs be located on a shared storage device (just like the database files).
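
To see how the online redologs are divided into per-instance threads, you can query gv$log from any open instance. A minimal sketch; group numbers and sizes will reflect your own configuration:

SQL> select inst_id, thread#, group#, bytes from gv$log order by inst_id, group#;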

As already mentioned, Oracle writes to its online redolog files in a circular manner. When the current online redolog fills, Oracle will switch to the next one. To facilitate media recovery, Oracle allows the DBA to put the database into "Archive Log Mode" which makes a copy of the online redolog after it fills (and before it gets reused). This is a process known as archiving.

The Database Configuration Assistant (DBCA) allows users to configure a new database to be in archive log mode; however, most DBAs opt to bypass this option during initial database creation. In cases like this, where the database is in noarchive log mode, it is a simple task to put the database into archive log mode. Note, however, that this will require a short database outage. From one of the nodes in the Oracle RAC configuration, perform the following tasks to put the RAC-enabled database into archive log mode. For the purpose of this article, I will use the node linux1, which runs the orcl1 instance:

  1. Login to one of the nodes (i.e. linux1) and disable the cluster instance parameter by setting cluster_database to FALSE from the current instance:
    $ sqlplus "/ as sysdba"
    SQL> alter system set cluster_database=false scope=spfile sid='orcl1';

  2. Shutdown all instances accessing the clustered database:
    $ srvctl stop database -d orcl

  3. Using the local instance, MOUNT the database:
    $ sqlplus "/ as sysdba"
    SQL> startup mount

  4. Enable archiving:
    SQL> alter database archivelog;

  5. Re-enable support for clustering by modifying the instance parameter cluster_database to TRUE from the current instance:
    SQL> alter system set cluster_database=true scope=spfile sid='orcl1';

  6. Shutdown the local instance:
    SQL> shutdown immediate

  7. Bring all instances back up using srvctl:
    $ srvctl start database -d orcl

  8. Login to the local instance and verify Archive Log Mode is enabled:
    $ sqlplus "/ as sysdba"
    SQL> archive log list
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence     53
    Next log sequence to archive   54
    Current log sequence           54

After enabling Archive Log Mode, each instance in the RAC configuration can automatically archive redologs!
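
As a quick cluster-wide confirmation, you can also check the log mode reported by every instance with a single query (a sketch; expect ARCHIVELOG for both instances):

SQL> select inst_id, log_mode from gv$database;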


Download and Install Custom Oracle Database Scripts

DBAs rely on Oracle's data dictionary views and dynamic performance views in order to support and better manage their databases. Although these views provide a simple and easy mechanism to query critical information regarding the database, it helps to have a collection of accurate and readily available SQL scripts to query these views.

In this section you will download and install a collection of Oracle DBA scripts that can be used to manage many aspects of your database including space management, performance, backups, security, and session management. The Oracle DBA scripts archive can be downloaded using the following link http://www.idevelopment.info/data/Oracle/DBA_scripts/common.zip. As the oracle user account, download the common.zip archive to the $ORACLE_BASE directory of each node in the cluster. For the purpose of this example, the common.zip archive will be copied to /u01/app/oracle. Next, unzip the archive file to the $ORACLE_BASE directory.

For example, perform the following on both nodes in the Oracle RAC cluster as the oracle user account:

$ mv common.zip /u01/app/oracle
$ cd /u01/app/oracle
$ unzip common.zip
The final step is to verify (or set) the appropriate environment variable for the current UNIX shell to ensure the Oracle SQL scripts can be run from SQL*Plus while in any directory. For UNIX verify the following environment variable is set and included in your login shell script:
ORACLE_PATH=$ORACLE_BASE/common/oracle/sql:.:$ORACLE_HOME/rdbms/admin
export ORACLE_PATH

  Note that the ORACLE_PATH environment variable should already be set in the .bash_profile login script that was created in the section Create Login Script for oracle User Account.

Now that the Oracle DBA scripts have been unzipped and the UNIX environment variable ($ORACLE_PATH) has been set to the appropriate directory, you should be able to run any of the SQL scripts in $ORACLE_BASE/common/oracle/sql while logged into SQL*Plus. For example, to query tablespace information while logged into the Oracle database as a DBA user:

SQL> @dba_tablespaces

Status    Tablespace Name TS Type      Ext. Mgt.  Seg. Mgt.    Tablespace Size    Used (in bytes) Pct. Used
--------- --------------- ------------ ---------- --------- ------------------ ------------------ ---------
ONLINE    SYSAUX          PERMANENT    LOCAL      AUTO             606,470,144        579,534,848        96
ONLINE    UNDOTBS1        UNDO         LOCAL      MANUAL           335,544,320        241,500,160        72
ONLINE    USERS           PERMANENT    LOCAL      AUTO               5,242,880             65,536         1
ONLINE    SYSTEM          PERMANENT    LOCAL      MANUAL           734,003,200        726,728,704        99
ONLINE    EXAMPLE         PERMANENT    LOCAL      AUTO             157,286,400         83,886,080        53
ONLINE    UNDOTBS2        UNDO         LOCAL      MANUAL           209,715,200          5,308,416         3
ONLINE    TEMP            TEMPORARY    LOCAL      MANUAL            41,943,040         39,845,888        95
                                                            ------------------ ------------------ ---------
avg                                                                                                      60
sum                                                              2,090,205,184      1,676,869,632

7 rows selected.
To obtain a list of all available Oracle DBA scripts while logged into SQL*Plus, run the help.sql script:
SQL> @help.sql

========================================
Automatic Shared Memory Management
========================================
asmm_components.sql

========================================
Automatic Storage Management
========================================
asm_alias.sql
asm_clients.sql
asm_diskgroups.sql
asm_disks.sql
asm_disks_perf.sql
asm_drop_files.sql
asm_files.sql
asm_files2.sql
asm_templates.sql

< --- SNIP --- >

perf_top_sql_by_buffer_gets.sql
perf_top_sql_by_disk_reads.sql

========================================
Workspace Manager
========================================
wm_create_workspace.sql
wm_disable_versioning.sql
wm_enable_versioning.sql
wm_freeze_workspace.sql
wm_get_workspace.sql
wm_goto_workspace.sql
wm_merge_workspace.sql
wm_refresh_workspace.sql
wm_remove_workspace.sql
wm_unfreeze_workspace.sql
wm_workspaces.sql


Create Shared Oracle Password Files

In this section, I present the steps required to configure a shared Oracle password file between all instances in the Oracle clustered database. On UNIX, the password file for each instance is located at $ORACLE_HOME/dbs/orapw<SID> and contains a list of all database users that have SYSDBA privileges. When a database user is granted the SYSDBA role, the instance records this in the database password file for the instance you are logged into. But what about the other instances in the cluster? The database password files on the other instances do not get updated and will not contain the user who was just granted the SYSDBA role. Therefore a program like RMAN that tries to log in as this new user with SYSDBA privileges will fail if it connects to an instance whose password file does not contain that user.

A common solution to this problem is to place a single database password file on a shared / clustered file system and then create symbolic links from each of the instances to this single version of the database password file. Since the environment described in this article makes use of the Oracle Clustered File System (OCFS2), we will use it to store the single version of the database password file.

  In this section, we will also be including the Oracle password file for the ASM instance.

  1. Create the database password directory on the clustered file system mounted on /u02/oradata/orcl. Perform the following from only one node in the cluster as the oracle user account - (linux1):
    $ mkdir -p /u02/oradata/orcl/dbs
  2. From one node in the cluster (linux1), move the database password files to the database password directory on the clustered file system. Choose a node that contains a database password file with the most recent SYSDBA additions. In most cases, this will not matter since any missing entries can be easily added by granting them the SYSDBA role - (plus the fact that this is a fresh install and it is unlikely you have created any SYSDBA users at this point!). Note that the database server does not need to be shut down while performing the following actions. From linux1 as the oracle user account:

    $ mv $ORACLE_HOME/dbs/orapw+ASM1 /u02/oradata/orcl/dbs/orapw+ASM
    $ mv $ORACLE_HOME/dbs/orapworcl1 /u02/oradata/orcl/dbs/orapworcl
    
    $ ln -s /u02/oradata/orcl/dbs/orapw+ASM $ORACLE_HOME/dbs/orapw+ASM1
    $ ln -s /u02/oradata/orcl/dbs/orapworcl $ORACLE_HOME/dbs/orapworcl1
  3. From the second node in the cluster (linux2):
    $ rm $ORACLE_HOME/dbs/orapw+ASM2
    $ rm $ORACLE_HOME/dbs/orapworcl2
    
    $ ln -s /u02/oradata/orcl/dbs/orapw+ASM $ORACLE_HOME/dbs/orapw+ASM2
    $ ln -s /u02/oradata/orcl/dbs/orapworcl $ORACLE_HOME/dbs/orapworcl2

Now, when a user is granted the SYSDBA role, all instances will have access to the same password file:

SQL> GRANT sysdba TO scott;
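
Because every instance now points at the same shared file, the grant is immediately visible cluster-wide. A quick check (a sketch) from either instance:

SQL> select inst_id, username, sysdba from gv$pwfile_users;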