Category: Oracle
2007-12-20 21:06:27
Perform the following configuration tasks on the network storage server (openfiler1)! Openfiler administration is performed using the Openfiler Storage Control Center, a browser-based tool served over an https connection on port 446 (for example, https://openfiler1:446/).
From the Openfiler Storage Control Center home page, login as an administrator. The default administration login credentials for Openfiler are:
- Username: openfiler
- Password: password
The first page the administrator sees is the [Accounts] / [Authentication] screen. Configuring user accounts and groups is not necessary for this article and will therefore not be discussed.
To use Openfiler as an iSCSI storage server, we have to perform three major tasks; set up iSCSI services, configure network access, and create physical storage.
Services
To control services, we use the Openfiler Storage Control Center and navigate to [Services] / [Enable/Disable]:
Figure 6: Enable iSCSI Openfiler Service
To enable the iSCSI service, click on 'Enable' under the 'iSCSI target' service name. After that, the 'iSCSI target' status should change to 'Enabled'.
The ietd program implements the user level part of iSCSI Enterprise Target software for building an iSCSI storage system on Linux. With the iSCSI target enabled, we should be able to SSH into the Openfiler server and see the iscsi-target service running:
[root@openfiler1 ~]# service iscsi-target status
ietd (pid 3784) is running...
Network Access Restriction
The next step is to configure network access in Openfiler so both Oracle RAC nodes (linux1 and linux2) have permissions to our iSCSI volumes through the storage (private) network.
iSCSI volumes will be created in the next section! Again, this task can be completed using the Openfiler Storage Control Center by navigating to [General] / [Local Networks]. The Local Networks screen allows an administrator to set up networks and/or hosts that will be allowed to access resources exported by the Openfiler appliance. For the purpose of this article, we will add both Oracle RAC nodes individually rather than allowing the entire 192.168.2.0 network to have access to Openfiler resources.
When entering each of the Oracle RAC nodes, note that the 'Name' field is just a logical name used for reference only. As a convention when entering nodes, I simply use the node name defined for that IP address. Next, when entering the actual node in the 'Network/Host' field, always use its IP address even though its host name may already be defined in your /etc/hosts file or DNS. Lastly, when entering actual hosts in our Class C network, use a subnet mask of 255.255.255.255.
It is important to remember that you will be entering the IP address of the private network (eth1) for each of the RAC nodes in the cluster.
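For illustration, the resulting Local Networks entries would look something like the following. The private IP addresses shown here (192.168.2.100 for linux1-priv and 192.168.2.101 for linux2-priv) are assumptions for this example only; substitute the eth1 addresses defined in your own /etc/hosts file:

Name          Network/Host    Netmask
linux1-priv   192.168.2.100   255.255.255.255
linux2-priv   192.168.2.101   255.255.255.255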
The following image shows the results of adding both Oracle RAC nodes:
Figure 7: Configure Openfiler Host Access for Oracle RAC Nodes
Physical Storage
Storage devices like internal IDE/SATA/SCSI disks, external USB or FireWire drives, or any other storage can be connected to the Openfiler server and served to the clients. Once these devices are discovered at the OS level, the Openfiler Storage Control Center can be used to set up and manage all that storage.
In this section, we will be creating the five iSCSI volumes to be used as shared storage by both of the Oracle RAC nodes in the cluster. This involves multiple steps that will be performed on the internal SATA 500GB hard drive connected to the Openfiler server.
For the purpose of this article, I have a 500GB SATA hard drive dedicated for all shared storage needs. On the Openfiler server my 500GB SATA hard drive was configured on /dev/sda (with description ATA Maxtor 6H500F0). To see this and to start the process of creating our iSCSI volumes, navigate to [Volumes] / [Physical Storage Mgmt.] from the Openfiler Storage Control Center:
Figure 8: Openfiler Physical Storage
Partitioning the Physical Disk
The first step we will perform is to create a single primary partition on the /dev/sda internal hard drive. By clicking on the /dev/sda link, we are presented with the options to 'Edit' or 'Create' a partition. Since we will be creating a single primary partition that spans the entire disk, most of the options can be left at their default settings; the only modification is to change the 'Partition Type' from 'Extended partition' to 'Physical volume'. Here are the values I specified to create the primary partition on /dev/sda:
Mode: Primary
Partition Type: Physical volume
Starting Cylinder: 1
Ending Cylinder: 60801
The size now shows 465.76 GB. To accept that, we click on the Create button. This results in a new partition (/dev/sda1) on our internal hard drive:
Figure 9: Partition the Physical Volume
Volume Group Management
The next step is to create a Volume Group. We will be creating a single volume group named rac1 that contains the newly created primary partition. From the Openfiler Storage Control Center, navigate to [Volumes] / [Volume Group Mgmt.]. There we would see any existing volume groups, or none as in our case. Using the Volume Group Management screen, enter the name of the new volume group (rac1), click on the checkbox in front of /dev/sda1 to select that partition, and finally click on the 'Add volume group' button. After that we are presented with the list that now shows our newly created volume group named "rac1":
Figure 10: New Volume Group Created
Logical Volumes
We can now create the five logical volumes in the newly created volume group (rac1). From the Openfiler Storage Control Center, navigate to [Volumes] / [Create New Volume]. There we will see the newly created volume group (rac1) along with its block storage statistics. Also available at the bottom of this screen is the option to create a new volume in the selected volume group. Use this screen to create the following five logical (iSCSI) volumes. After creating each logical volume, the application will point you to the "List of Existing Volumes" screen. You will then need to click back to the "Create New Volume" tab to create the next logical volume until all five iSCSI volumes are created:
iSCSI / Logical Volumes
Volume Name   Volume Description    Required Space (MB)   Filesystem Type
crs           Oracle Clusterware    2,048                 iSCSI
asm1          Oracle ASM Volume 1   118,720               iSCSI
asm2          Oracle ASM Volume 2   118,720               iSCSI
asm3          Oracle ASM Volume 3   118,720               iSCSI
asm4          Oracle ASM Volume 4   118,720               iSCSI

In effect we have created five iSCSI disks that can now be presented to iSCSI clients (linux1 and linux2) on the network. The "List of Existing Volumes" screen should look as follows:
Figure 11: New Logical (iSCSI) Volumes
Grant Access Rights to New Logical Volumes
Before an iSCSI client can have access to the newly created iSCSI volumes, it needs to be granted the appropriate permissions. A while back, we configured Openfiler with two hosts (the Oracle RAC nodes) that can be configured with access rights to resources. We now need to grant both of the Oracle RAC nodes access to each of the newly created iSCSI volumes. From the Openfiler Storage Control Center, navigate to [Volumes] / [List of Existing Volumes]. This will present the screen shown in the previous section. For each of the logical volumes, click on the 'Edit' link (under the Properties column). This will bring up the 'Edit properties' screen for that volume. Scroll to the bottom of this screen, change both hosts from 'Deny' to 'Allow' and click the 'Update' button:
Figure 12: Grant Host Access to Logical (iSCSI) Volumes
Perform this task for all five logical volumes.
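As an optional cross-check (assuming the standard LVM2 command-line tools are available on the Openfiler appliance, which manages its volumes with LVM), the new volume group and its five logical volumes can also be listed from an SSH session on openfiler1:

[root@openfiler1 ~]# vgdisplay rac1
[root@openfiler1 ~]# lvscan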
Make iSCSI Targets Available to Clients
Every time a new logical volume is added, we need to restart the associated service on the Openfiler server. In our case we created five iSCSI logical volumes, so we have to restart the iSCSI target (iscsi-target) service. This will make the new iSCSI targets available to all clients on the network that have privileges to access them.
To restart the iSCSI target service, use the Openfiler Storage Control Center and navigate to [Services] / [Enable/Disable]. The iSCSI target service should already be enabled (several sections back). If so, disable the service and then enable it again. (See Figure 6.)
The same task can be achieved through an SSH session on the Openfiler server:
[root@openfiler1 ~]# service iscsi-target restart
Stopping iSCSI target service:  [ OK ]
Starting iSCSI target service:  [ OK ]
Configure iSCSI Volumes on Oracle RAC Nodes
Configure the iSCSI initiator on both Oracle RAC nodes in the cluster! Creating the partitions, however, should only be executed on one of the nodes in the RAC cluster. An iSCSI client can be any system (Linux, Unix, MS Windows, Apple Mac, etc.) for which iSCSI support (a driver) is available. In our case, the clients are two Linux servers (linux1 and linux2) running CentOS 5.1.
In this section we will be configuring the iSCSI software initiator on both of the Oracle RAC nodes. CentOS 5.1 includes the iSCSI software initiator which can be found in the iscsi-initiator-utils RPM. This is a change from previous versions of CentOS (4.x) which included the Linux iscsi-sfnet software driver developed as part of the Linux-iSCSI Project. All iSCSI management tasks like discovery and logins will use the command-line interface iscsiadm which is included with Open-iSCSI.
The iSCSI software initiator will be configured to automatically log in to the network storage server (openfiler1) and discover the iSCSI volumes created in the previous section. We will then go through the steps of creating persistent local SCSI device names (i.e. /dev/iscsi/asm1) for each of the iSCSI target names discovered, using udev. Having a consistent local SCSI device name, and knowing which iSCSI target it maps to, is required in order to know which volume (device) is to be used for OCFS2 and which volumes belong to ASM. Before we can do any of this, however, we must first install the iSCSI initiator software!
Installing the iSCSI (initiator) Service
With CentOS 5.1, the Open-iSCSI iSCSI software initiator does not get installed by default. The software is included in the iscsi-initiator-utils package which can be found on CD #1. To determine if this package is installed (which in most cases, it will not be), perform the following on both Oracle RAC nodes:
# rpm -qa | grep iscsi-initiator-utils
If the iscsi-initiator-utils package is not installed, load CD #1 into each of the Oracle RAC nodes and perform the following:
# mount -r /dev/cdrom /media/cdrom
# cd /media/cdrom/CentOS
# rpm -Uvh iscsi-initiator-utils-6.2.0.865-0.8.el5.i386.rpm
# cd /
# eject
Configure the iSCSI (initiator) Service
After verifying that the iscsi-initiator-utils package is installed on both Oracle RAC nodes, start the iscsid service and enable it to automatically start when the system boots. We will also configure the iscsi service to automatically start, which logs into the iSCSI targets needed at system startup.
# service iscsid start
Turning off network shutdown. Starting iSCSI daemon: [ OK ]
[ OK ]
# chkconfig iscsid on
# chkconfig iscsi on
Now that the iSCSI service is started, use the iscsiadm command-line interface to discover all available targets on the network storage server. This should be performed on both Oracle RAC nodes to verify the configuration is functioning properly:
# iscsiadm -m discovery -t sendtargets -p openfiler1-priv
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:rac1.asm1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:rac1.asm2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:rac1.asm3
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:rac1.asm4
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:rac1.crs
Manually Login to iSCSI Targets
At this point the iSCSI initiator service has been started and each of the Oracle RAC nodes was able to discover the available targets from the network storage server. The next step is to manually login to each of the available targets, which can be done using the iscsiadm command-line interface. This needs to be run on both Oracle RAC nodes. Note that I had to specify the IP address and not the host name of the network storage server (openfiler1-priv) - I believe this is required given the discovery (above) shows the targets using the IP address.
# iscsiadm -m node -T iqn.2006-01.com.openfiler:rac1.asm1 -p 192.168.2.195 -l
# iscsiadm -m node -T iqn.2006-01.com.openfiler:rac1.asm2 -p 192.168.2.195 -l
# iscsiadm -m node -T iqn.2006-01.com.openfiler:rac1.asm3 -p 192.168.2.195 -l
# iscsiadm -m node -T iqn.2006-01.com.openfiler:rac1.asm4 -p 192.168.2.195 -l
# iscsiadm -m node -T iqn.2006-01.com.openfiler:rac1.crs -p 192.168.2.195 -l
Configure Automatic Login
The next step is to ensure the client will automatically login to each of the targets listed above when the machine is booted (or when the iSCSI initiator service is started/restarted). As with the manual login process described above, perform the following on both Oracle RAC nodes:
# iscsiadm -m node -T iqn.2006-01.com.openfiler:rac1.asm1 -p 192.168.2.195 --op update -n node.startup -v automatic
# iscsiadm -m node -T iqn.2006-01.com.openfiler:rac1.asm2 -p 192.168.2.195 --op update -n node.startup -v automatic
# iscsiadm -m node -T iqn.2006-01.com.openfiler:rac1.asm3 -p 192.168.2.195 --op update -n node.startup -v automatic
# iscsiadm -m node -T iqn.2006-01.com.openfiler:rac1.asm4 -p 192.168.2.195 --op update -n node.startup -v automatic
# iscsiadm -m node -T iqn.2006-01.com.openfiler:rac1.crs -p 192.168.2.195 --op update -n node.startup -v automatic
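To confirm that the node records were updated, display one of them and look for the node.startup attribute, which should now read automatic. This is just a quick hedged sanity check; the exact layout of the record can vary between Open-iSCSI releases:

# iscsiadm -m node -T iqn.2006-01.com.openfiler:rac1.crs -p 192.168.2.195 | grep node.startup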
Create Persistent Local SCSI Device Names
In this section, we will go through the steps to create persistent local SCSI device names for each of the iSCSI target names. This will be done using udev. Having a consistent local SCSI device name, and knowing which iSCSI target it maps to, is required in order to know which volume (device) is to be used for OCFS2 and which volumes belong to ASM.
When either of the Oracle RAC nodes boots and the iSCSI initiator service is started, it will automatically login to each of the configured targets in a random fashion and map them to the next available local SCSI device names. For example, the target iqn.2006-01.com.openfiler:rac1.asm1 may get mapped to /dev/sda. I can determine the current mappings for all targets by looking at the /dev/disk/by-path directory:
# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:rac1.asm1 -> ../../sda
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:rac1.asm2 -> ../../sdb
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:rac1.asm3 -> ../../sdc
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:rac1.asm4 -> ../../sdd
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:rac1.crs -> ../../sde
Using the output from the above listing, we can establish the following current mappings:
Current iSCSI Target Name to Local SCSI Device Name Mappings
iSCSI Target Name                      SCSI Device Name
iqn.2006-01.com.openfiler:rac1.asm1    /dev/sda
iqn.2006-01.com.openfiler:rac1.asm2    /dev/sdb
iqn.2006-01.com.openfiler:rac1.asm3    /dev/sdc
iqn.2006-01.com.openfiler:rac1.asm4    /dev/sdd
iqn.2006-01.com.openfiler:rac1.crs     /dev/sde

This mapping, however, may change every time the Oracle RAC node is rebooted. For example, after a reboot it may be determined that the iSCSI target iqn.2006-01.com.openfiler:rac1.asm1 gets mapped to the local SCSI device /dev/sdd. It is therefore impractical to rely on the local SCSI device name given there is no way to predict the iSCSI target mappings after a reboot.
What we need is a consistent device name we can reference (i.e. /dev/iscsi/asm1) that will always point to the appropriate iSCSI target through reboots. This is where the Dynamic Device Management tool named udev comes in. udev provides a dynamic device directory using symbolic links that point to the actual device using a configurable set of rules. When udev receives a device event (for example, the client logging in to an iSCSI target), it matches its configured rules against the available device attributes provided in sysfs to identify the device. Rules that match may provide additional device information or specify a device node name and multiple symlink names and instruct udev to run additional programs (a SHELL script for example) as part of the device event handling process.
The first step is to create a new rules file. The file will be named /etc/udev/rules.d/55-openiscsi.rules and contain a single rule, a series of name=value pairs used to match the device events we are interested in. It will also define a call-out shell script (/etc/udev/scripts/iscsidev.sh) to handle the event.
Create the following rules file /etc/udev/rules.d/55-openiscsi.rules on both Oracle RAC nodes:
/etc/udev/rules.d/55-openiscsi.rules
# /etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b", SYMLINK+="iscsi/%c/part%n"

We now need to create the UNIX shell script that will be called when this event is received. Let's first create a separate directory on both Oracle RAC nodes where udev scripts can be stored:
# mkdir -p /etc/udev/scripts
Next, create the UNIX shell script /etc/udev/scripts/iscsidev.sh on both Oracle RAC nodes:
/etc/udev/scripts/iscsidev.sh
#!/bin/sh

# FILE: /etc/udev/scripts/iscsidev.sh

BUS=${1}
HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
target_name=$(cat ${file})

# This is not an open-iscsi drive
if [ -z "${target_name}" ]; then
    exit 1
fi

echo "${target_name##*.}"

After creating the UNIX shell script, change it to executable:
# chmod 755 /etc/udev/scripts/iscsidev.sh
Now that udev is configured, restart the iSCSI service on both Oracle RAC nodes:
# service iscsi stop
Logout session [sid: 1, target: iqn.2006-01.com.openfiler:rac1.asm1, portal: 192.168.2.195,3260]
Logout session [sid: 2, target: iqn.2006-01.com.openfiler:rac1.asm2, portal: 192.168.2.195,3260]
Logout session [sid: 3, target: iqn.2006-01.com.openfiler:rac1.asm3, portal: 192.168.2.195,3260]
Logout session [sid: 4, target: iqn.2006-01.com.openfiler:rac1.asm4, portal: 192.168.2.195,3260]
Logout session [sid: 5, target: iqn.2006-01.com.openfiler:rac1.crs, portal: 192.168.2.195,3260]
Stopping iSCSI daemon: /etc/init.d/iscsi: line 33:  3277 Killed  /etc/init.d/iscsid stop

# service iscsi start
iscsid dead but pid file exists
Turning off network shutdown. Starting iSCSI daemon: [ OK ]
[ OK ]
Setting up iSCSI targets:
Login session [iface: default, target: iqn.2006-01.com.openfiler:rac1.crs, portal: 192.168.2.195,3260]
Login session [iface: default, target: iqn.2006-01.com.openfiler:rac1.asm3, portal: 192.168.2.195,3260]
Login session [iface: default, target: iqn.2006-01.com.openfiler:rac1.asm4, portal: 192.168.2.195,3260]
Login session [iface: default, target: iqn.2006-01.com.openfiler:rac1.asm2, portal: 192.168.2.195,3260]
Login session [iface: default, target: iqn.2006-01.com.openfiler:rac1.asm1, portal: 192.168.2.195,3260]
[ OK ]
Let's see if our hard work paid off:
# ls -l /dev/iscsi/*
/dev/iscsi/asm1:
total 0
lrwxrwxrwx 1 root root 9 Dec 12 18:25 part -> ../../sde

/dev/iscsi/asm2:
total 0
lrwxrwxrwx 1 root root 9 Dec 12 18:25 part -> ../../sdd

/dev/iscsi/asm3:
total 0
lrwxrwxrwx 1 root root 9 Dec 12 18:25 part -> ../../sdb

/dev/iscsi/asm4:
total 0
lrwxrwxrwx 1 root root 9 Dec 12 18:25 part -> ../../sdc

/dev/iscsi/crs:
total 0
lrwxrwxrwx 1 root root 9 Dec 12 18:25 part -> ../../sda

The listing above shows that udev did the job it was supposed to do! We now have a consistent set of local device names that can be used to reference the iSCSI targets. For example, we can safely assume that the device name /dev/iscsi/asm1/part will always reference the iSCSI target iqn.2006-01.com.openfiler:rac1.asm1. We now have a consistent iSCSI target name to local device name mapping, which is described in the following table:
iSCSI Target Name to Local Device Name Mappings
iSCSI Target Name                      Local Device Name
iqn.2006-01.com.openfiler:rac1.asm1    /dev/iscsi/asm1/part
iqn.2006-01.com.openfiler:rac1.asm2    /dev/iscsi/asm2/part
iqn.2006-01.com.openfiler:rac1.asm3    /dev/iscsi/asm3/part
iqn.2006-01.com.openfiler:rac1.asm4    /dev/iscsi/asm4/part
iqn.2006-01.com.openfiler:rac1.crs     /dev/iscsi/crs/part
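A quick way to see which underlying SCSI device a given udev symlink currently resolves to is readlink (part of coreutils). With the mappings shown in the listing above, for example:

# readlink -f /dev/iscsi/asm1/part
/dev/sde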
Create Partitions on iSCSI Volumes
We now need to create a single primary partition on each of the iSCSI volumes that spans the entire size of the volume. As mentioned earlier in this article, I will be using Oracle's Cluster File System, Release 2 (OCFS2) to store the two files to be shared for Oracle's Clusterware software. We will then be using Automatic Storage Management (ASM) to create four ASM volumes: two for all physical database files (data/index files, online redo log files, and control files) and two for the Flash Recovery Area (RMAN backups and archived redo log files).
The following table lists the five iSCSI volumes and what file systems they will support:
Oracle Shared Drive Configuration
File System Type   iSCSI Target (short) Name   Size     Mount Point   ASM Diskgroup Name      File Types
OCFS2              crs                         2 GB     /u02                                  Oracle Cluster Registry (OCR) File (~250 MB), Voting Disk (~20 MB)
ASM                asm1                        118 GB   ORCL:VOL1     +ORCL_DATA1             Oracle Database Files
ASM                asm2                        118 GB   ORCL:VOL2     +ORCL_DATA1             Oracle Database Files
ASM                asm3                        118 GB   ORCL:VOL3     +FLASH_RECOVERY_AREA    Oracle Flash Recovery Area
ASM                asm4                        118 GB   ORCL:VOL4     +FLASH_RECOVERY_AREA    Oracle Flash Recovery Area
Total                                          474 GB

As shown in the table above, we will need to create a single Linux primary partition on each of the five iSCSI volumes. The fdisk command is used in Linux for creating (and removing) partitions. For each of the five iSCSI volumes, you can use the default values when creating the primary partition as the default action is to use the entire disk. You can safely ignore any warnings that may indicate the device does not contain a valid DOS partition (or Sun, SGI or OSF disklabel).
In this example, I will be running the fdisk command from linux1 to create a single primary partition on each iSCSI target using the local device names created by udev in the previous section:
- /dev/iscsi/asm1/part
- /dev/iscsi/asm2/part
- /dev/iscsi/asm3/part
- /dev/iscsi/asm4/part
- /dev/iscsi/crs/part
Creating the single partition on each of the iSCSI volumes must only be run from one of the nodes in the Oracle RAC cluster! (i.e. linux1)

# ---------------------------------------
# fdisk /dev/iscsi/asm1/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-15134, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-15134, default 15134): 15134

Command (m for help): p

Disk /dev/iscsi/asm1/part: 124.4 GB, 124486942720 bytes
255 heads, 63 sectors/track, 15134 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

                 Device Boot   Start     End      Blocks   Id  System
/dev/iscsi/asm1/part1              1   15134  121563823+   83  Linux

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

# ---------------------------------------
# fdisk /dev/iscsi/asm2/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-15134, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-15134, default 15134): 15134

Command (m for help): p

Disk /dev/iscsi/asm2/part: 124.4 GB, 124486942720 bytes
255 heads, 63 sectors/track, 15134 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

                 Device Boot   Start     End      Blocks   Id  System
/dev/iscsi/asm2/part1              1   15134  121563823+   83  Linux

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

# ---------------------------------------
# fdisk /dev/iscsi/asm3/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-15134, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-15134, default 15134): 15134

Command (m for help): p

Disk /dev/iscsi/asm3/part: 124.4 GB, 124486942720 bytes
255 heads, 63 sectors/track, 15134 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

                 Device Boot   Start     End      Blocks   Id  System
/dev/iscsi/asm3/part1              1   15134  121563823+   83  Linux

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

# ---------------------------------------
# fdisk /dev/iscsi/asm4/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-15134, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-15134, default 15134): 15134

Command (m for help): p

Disk /dev/iscsi/asm4/part: 124.4 GB, 124486942720 bytes
255 heads, 63 sectors/track, 15134 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

                 Device Boot   Start     End      Blocks   Id  System
/dev/iscsi/asm4/part1              1   15134  121563823+   83  Linux

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

# ---------------------------------------
# fdisk /dev/iscsi/crs/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1009, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1009, default 1009): 1009

Command (m for help): p

Disk /dev/iscsi/crs/part: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes

                Device Boot   Start     End    Blocks   Id  System
/dev/iscsi/crs/part1              1    1009   2095662   83  Linux

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Verify New Partitions
After creating all required partitions from linux1, you should now inform the kernel of the partition changes using the following command as the "root" user account from all remaining nodes in the Oracle RAC cluster (linux2). Note that the mapping of iSCSI target names discovered from Openfiler to local SCSI device names may be different on the two Oracle RAC nodes. This is not a concern and will not cause any problems since we will not be using the local SCSI device names but rather the local device names created by udev in the previous section.
From linux2, run the following commands:
# partprobe

# fdisk -l

Disk /dev/hda: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot   Start    End      Blocks   Id  System
/dev/hda1   *        1     13      104391   83  Linux
/dev/hda2           14   4863    38957625   8e  Linux LVM

Disk /dev/sda: 124.4 GB, 124486942720 bytes
255 heads, 63 sectors/track, 15134 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot   Start     End      Blocks   Id  System
/dev/sda1            1   15134  121563823+   83  Linux

Disk /dev/sdb: 124.4 GB, 124486942720 bytes
255 heads, 63 sectors/track, 15134 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot   Start     End      Blocks   Id  System
/dev/sdb1            1   15134  121563823+   83  Linux

Disk /dev/sdc: 124.4 GB, 124486942720 bytes
255 heads, 63 sectors/track, 15134 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot   Start     End      Blocks   Id  System
/dev/sdc1            1   15134  121563823+   83  Linux

Disk /dev/sdd: 124.4 GB, 124486942720 bytes
255 heads, 63 sectors/track, 15134 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot   Start     End      Blocks   Id  System
/dev/sdd1            1   15134  121563823+   83  Linux

Disk /dev/sde: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes

   Device Boot   Start    End    Blocks   Id  System
/dev/sde1            1   1009   2095662   83  Linux

As a final step you should run the following command on both Oracle RAC nodes to verify that udev created the new symbolic links for each new partition:
# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:rac1.asm1 -> ../../sde
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:rac1.asm1-part1 -> ../../sde1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:rac1.asm2 -> ../../sdd
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:rac1.asm2-part1 -> ../../sdd1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:rac1.asm3 -> ../../sdb
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:rac1.asm3-part1 -> ../../sdb1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:rac1.asm4 -> ../../sdc
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:rac1.asm4-part1 -> ../../sdc1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:rac1.crs -> ../../sda
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:rac1.crs-part1 -> ../../sda1

The listing above shows that udev did indeed create new device names for each of the new partitions. We will be using these new device names when configuring the volumes for OCFS2 and ASMlib later in this guide:
- /dev/iscsi/asm1/part1
- /dev/iscsi/asm2/part1
- /dev/iscsi/asm3/part1
- /dev/iscsi/asm4/part1
- /dev/iscsi/crs/part1
Create "oracle" User and Directories
Perform the following tasks on both Oracle RAC nodes in the cluster! In this section we will create the oracle UNIX user account, recommended O/S groups, and all required directories. The following O/S groups will be created:
Description                           Oracle Privilege   Oracle Group Name   UNIX Group Name
Oracle Inventory and Software Owner                                          oinstall
Database Administrator                SYSDBA             OSDBA               dba
Database Operator                     SYSOPER            OSOPER              oper
ASM Administrator                     SYSASM             OSASM               asm
OSDBA Group for ASM                                                          asmdba

The oracle user account will own the Oracle Clusterware, Oracle RAC Database, and ASM software. The UID and GID must be consistent across all of the Oracle RAC nodes.
Note that members of the UNIX group oinstall are considered the "owners" of the Oracle software. Members of the dba group can administer Oracle databases, for example starting up and shutting down databases. New to Oracle 11g is the SYSASM privilege that is specifically intended for performing ASM administration tasks. Using the SYSASM privilege instead of the SYSDBA privilege provides a clearer division of responsibility between ASM administration and database administration. OSASM is a new operating system group that is used exclusively for ASM. Members of the OSASM group can connect as SYSASM using operating system authentication and have full access to ASM. The final group (asmdba) is the OSDBA Group for ASM. You must create an OSDBA group for ASM to provide access to the ASM instance. This is necessary if OSASM and OSDBA are different groups. In this article, we are creating the oracle user account to have all responsibilities!
This guide adheres to the Optimal Flexible Architecture (OFA) for naming conventions used in creating the directory structure.
Create Groups and User for Oracle
Let's start this section by creating the recommended UNIX groups and the oracle user account:
# groupadd -g 501 oinstall
# groupadd -g 502 dba
# groupadd -g 503 oper
# groupadd -g 504 asm
# groupadd -g 506 asmdba
# useradd -m -u 501 -g oinstall -G dba,oper,asm -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle
# id oracle
uid=501(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),503(oper),504(asm)
Set the password for the oracle account:
# passwd oracle
Changing password for user oracle.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.
Verify That the User nobody Exists
Before installing the Oracle software, complete the following procedure to verify that the user nobody exists on the system:
- To determine if the user exists, enter the following command:
# id nobody
uid=99(nobody) gid=99(nobody) groups=99(nobody)
If this command displays information about the nobody user, then you do not have to create that user.
- If the user nobody does not exist, then enter the following command to create it:
# /usr/sbin/useradd nobody
- Repeat this procedure on all the other Oracle RAC nodes in the cluster.
Create the Oracle Base Directory
The next step is to create a new directory that will be used to store the Oracle Database software. When configuring the oracle user's environment (later in this section) we will be assigning the location of this directory to the $ORACLE_BASE environment variable.
The following assumes that the directories are being created in the root file system. Please note that this is being done for the sake of simplicity and is not recommended as a general practice. Normally, these directories would be created on a separate file system.
After the directory is created, you must then specify the correct owner, group, and permissions for it. Perform the following on both Oracle RAC nodes:
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/app
# chmod -R 775 /u01/app
At the end of this procedure, you will have the following:
- /u01 owned by root.
- /u01/app owned by oracle:oinstall with 775 permissions. This ownership and permissions enable the OUI to create the oraInventory directory, in the path /u01/app/oraInventory.
- /u01/app/oracle owned by oracle:oinstall with 775 permissions.
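A quick way to confirm the ownership and permissions described above is a long directory listing:

# ls -ld /u01 /u01/app /u01/app/oracle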
Create the Oracle Clusterware Home Directory
Next, create a new directory that will be used to store the Oracle Clusterware software. When configuring the oracle user's environment (later in this section) we will be assigning the location of this directory to the $ORA_CRS_HOME environment variable.
As noted in the previous section, the following assumes that the directories are being created in the root file system. This is being done for the sake of simplicity and is not recommended as a general practice. Normally, these directories would be created on a separate file system.
After the directory is created, you must then specify the correct owner, group, and permissions for it. Perform the following on both Oracle RAC nodes:
# mkdir -p /u01/app/crs
# chown -R oracle:oinstall /u01/app/crs
# chmod -R 775 /u01/app/crs
At the end of this procedure, you will have the following:
- /u01/app/crs owned by oracle:oinstall with 775 permissions. These permissions are required for Oracle Clusterware installation and are changed during the installation process.
Create Mount Point for OCFS2 / Clusterware
Let's now create the mount point for the Oracle Cluster File System, Release 2 (OCFS2) that will be used to store the two Oracle Clusterware shared files.
Perform the following on both Oracle RAC nodes:
# mkdir -p /u02
# chown -R oracle:oinstall /u02
# chmod -R 775 /u02
Create Login Script for the oracle User Account
To ensure that the environment is set up correctly for the "oracle" UNIX userid on both Oracle RAC nodes, use the following .bash_profile:
When you are setting the Oracle environment variables for each Oracle RAC node, be sure to assign each RAC node a unique Oracle SID! For this example, I used:
- linux1 : ORACLE_SID=orcl1
- linux2 : ORACLE_SID=orcl2
Login to each node as the oracle user account:
# su - oracle
.bash_profile for Oracle User
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi

alias ls="ls -FA"

export JAVA_HOME=/usr/local/java

# User specific environment and startup programs
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db_1
export ORA_CRS_HOME=/u01/app/crs
export ORACLE_PATH=$ORACLE_BASE/common/oracle/sql:.:$ORACLE_HOME/rdbms/admin

# Each RAC node must have a unique ORACLE_SID. (i.e. orcl1, orcl2,...)
export ORACLE_SID=orcl1

export PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
export PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
export PATH=${PATH}:$ORACLE_BASE/common/oracle/bin
export ORACLE_TERM=xterm
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORA_NLS10=$ORACLE_HOME/nls/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export CLASSPATH=$ORACLE_HOME/JRE
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export THREADS_FLAG=native
export TEMP=/tmp
export TMPDIR=/tmp
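On linux2, the .bash_profile is identical except for the ORACLE_SID line, which must reflect that node's instance:

export ORACLE_SID=orcl2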
Configure the Linux Servers for Oracle
Perform the following configuration procedures on both Oracle RAC nodes in the cluster!
The kernel parameters discussed in this section will need to be defined on both Oracle RAC nodes in the cluster every time the machine is booted. This section provides very detailed information about setting those kernel parameters required for Oracle. Instructions for placing them in a startup script (/etc/sysctl.conf) are included in section "All Startup Commands for Both Oracle RAC Nodes".
Overview
This section focuses on configuring both Oracle RAC Linux servers - getting each one prepared for the Oracle RAC 11g installation. This includes verifying enough swap space, setting shared memory and semaphores, setting the maximum number of file handles, setting the IP local port range, setting shell limits for the oracle user, activating all kernel parameters for the system, and finally how to verify the correct date and time for both nodes in the cluster.
Throughout this section you will notice that there are several different ways to configure (set) these parameters. For the purpose of this article, I will be making all changes permanent (through reboots) by placing all commands in the /etc/sysctl.conf file.
Swap Space Considerations
- Installing Oracle Database 11g Release 1 requires a minimum of 1GB of memory.
(An inadequate amount of swap during the installation will cause the Oracle Universal Installer to either "hang" or "die")
I highly recommend installing 2GB of memory for both Oracle RAC nodes. Although 1GB will work, it is extremely tight.
- To check the amount of memory you have, type:
# cat /proc/meminfo | grep MemTotal
MemTotal:      2074164 kB
- To check the amount of swap you have allocated, type:
# cat /proc/meminfo | grep SwapTotal
SwapTotal:     2031608 kB
- If you have less than 2GB of memory (between your RAM and SWAP), you can add temporary swap space by creating a temporary swap file. This way you do not have to use a raw device or, even more drastic, rebuild your system.
As root, make a file that will act as additional swap space, let's say about 500MB:
# dd if=/dev/zero of=tempswap bs=1k count=500000
Now we should change the file permissions:
# chmod 600 tempswap
Finally we format the "partition" as swap and add it to the swap space:
# mke2fs tempswap
# mkswap tempswap
# swapon tempswap
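To verify the temporary swap file is active, and to remove it again once the Oracle installation is complete (the file name tempswap matches the one created above and is assumed to still be in root's current directory):

# swapon -s
# free -m

# swapoff tempswap
# rm tempswap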
Configuring Kernel Parameters and Shell Limits
The kernel parameters and shell limits presented in this section are recommended values only, as documented by Oracle. For production database systems, Oracle recommends that you tune these values to optimize the performance of the system.
On both Oracle RAC nodes, verify that the kernel parameters shown in this section are set to values greater than or equal to the recommended values. Also note that when setting the four semaphore values, all four values need to be entered on one line.
Setting Shared Memory
Shared memory allows processes to access common structures and data by placing them in a shared memory segment. This is the fastest form of Inter-Process Communication (IPC) available, mainly because no kernel involvement occurs when data is being passed between the processes. With shared memory, data does not need to be copied between processes.
Oracle makes use of shared memory for its Shared Global Area (SGA), which is an area of memory that is shared by all Oracle background and foreground processes. Adequate sizing of the SGA is critical to Oracle performance since it is responsible for holding the database buffer cache, shared SQL, access paths, and so much more.
To determine all shared memory limits, use the following:
# ipcs -lm

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 4194303
max total shared memory (kbytes) = 1073741824
min seg size (bytes) = 1

Setting SHMMAX
The SHMMAX parameter defines the maximum size (in bytes) of a single shared memory segment. The Oracle SGA is comprised of shared memory, and incorrectly setting SHMMAX could limit the size of the SGA. When setting SHMMAX, keep in mind that the size of the SGA should fit within one shared memory segment. An inadequate SHMMAX setting could result in the following:
ORA-27123: unable to attach to shared memory segment
You can determine the value of SHMMAX by performing the following:
# cat /proc/sys/kernel/shmmax
4294967295
NOTE: For most Linux systems, the default value for SHMMAX is 32MB. This size is often too small to configure the Oracle SGA. The default value for SHMMAX in CentOS 5, however, is 4GB. Note that this value of 4GB is not the "normal" default value for SHMMAX in a Linux environment; CentOS 5 inserts the following two entries in the file /etc/sysctl.conf:
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 4294967295

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 268435456
I highly recommend removing these two values from both Oracle RAC nodes and replacing them with the recommended values documented by Oracle. All recommended Oracle kernel parameter values are documented in this section.
Oracle recommends sizing the SHMMAX parameter as the lesser of (4GB - 1 byte) or half the size of physical memory (in bytes). Given my nodes are configured with 2GB of physical RAM, I will configure SHMMAX to 1GB:
- You can alter the default setting for SHMMAX without rebooting the machine by making the changes directly to the /proc file system (/proc/sys/kernel/shmmax) by using the following command:
# sysctl -w kernel.shmmax=1073741823
- You should then make this change permanent by inserting the kernel parameter in the /etc/sysctl.conf startup file:
# echo "kernel.shmmax=1073741823" >> /etc/sysctl.confSetting SHMMNI
Setting SHMMNI
We now look at the SHMMNI parameter. This kernel parameter is used to set the maximum number of shared memory segments system wide. The default value for this parameter is 4096. You can determine the value of SHMMNI by performing the following:
# cat /proc/sys/kernel/shmmni
4096
The default setting for SHMMNI should be adequate for our Oracle 11g Release 1 RAC installation.

Setting SHMALL
Finally, we look at the SHMALL shared memory kernel parameter. This parameter controls the total amount of shared memory (in pages) that can be used at one time on the system. In short, the value of this parameter should always be at least:
ceil(SHMMAX/PAGE_SIZE)
The default size of SHMALL is 2097152 and can be queried using the following command:
# cat /proc/sys/kernel/shmall
2097152
The default setting for SHMALL should be adequate for our Oracle 11g Release 1 RAC installation.
The page size in Red Hat Linux on the i386 platform is 4096 bytes. You can, however, use bigpages which supports the configuration of larger memory page sizes.
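As a sanity check with the SHMMAX value used in this article (1073741823) and a 4096-byte page size, the minimum SHMALL works out to 262144 pages, which is comfortably below the default of 2097152:

# echo $(( (1073741823 + 4095) / 4096 ))
262144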
Setting Semaphores
Now that you have configured your shared memory settings, it is time to configure your semaphores. The best way to describe a "semaphore" is as a counter that is used to provide synchronization between processes (or threads within a process) for shared resources like shared memory. Semaphore sets are supported in UNIX System V where each one is a counting semaphore. When an application requests semaphores, it does so using "sets".
To determine all semaphore limits, use the following:
# ipcs -ls

------ Semaphore Limits --------
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767

You can also use the following command:
# cat /proc/sys/kernel/sem
250     32000   32      128

Setting SEMMSL
The SEMMSL kernel parameter is used to control the maximum number of semaphores per semaphore set.
Oracle recommends setting SEMMSL to the largest PROCESSES instance parameter setting in the init.ora file for all databases on the Linux system, plus 10. Also, Oracle recommends setting SEMMSL to a value of no less than 100.
Setting SEMMNI
The SEMMNI kernel parameter is used to control the maximum number of semaphore sets in the entire Linux system.
Oracle recommends setting SEMMNI to a value of no less than 100.
Setting SEMMNS
The SEMMNS kernel parameter is used to control the maximum number of semaphores (not semaphore sets) in the entire Linux system.
Oracle recommends setting SEMMNS to the sum of the PROCESSES instance parameter setting for each database on the system, adding the largest PROCESSES setting twice, and then finally adding 10 for each Oracle database on the system.
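As a purely hypothetical illustration of this formula (the PROCESSES values are assumed for the example and are not taken from this article), consider two databases with PROCESSES set to 100 and 150. SEMMNS would then need to be at least (100 + 150) + (150 * 2) + (10 * 2) = 570, far below the default of 32000:

# echo $(( (100 + 150) + (150 * 2) + (10 * 2) ))
570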
Use the following calculation to determine the maximum number of semaphores that can be allocated on a Linux system. It will be the lesser of:
SEMMNS  -or-  (SEMMSL * SEMMNI)

Setting SEMOPM
The SEMOPM kernel parameter is used to control the number of semaphore operations that can be performed per semop system call.
The semop system call (function) provides the ability to perform operations on multiple semaphores with one semop system call. Since a semaphore set can contain up to SEMMSL semaphores, it is recommended to set SEMOPM equal to SEMMSL.
Oracle recommends setting the SEMOPM to a value of no less than 100.
Setting Semaphore Kernel Parameters
Finally, we see how to set all semaphore parameters. In the following, the only parameter I care about changing (raising) is SEMOPM. All other default settings should be sufficient for our example installation.
- You can alter the default setting for all semaphore settings without rebooting the machine by making the changes directly to the /proc file system (/proc/sys/kernel/sem) by using the following command:
# sysctl -w kernel.sem="250 32000 100 128"
- You should then make this change permanent by inserting the kernel parameter in the /etc/sysctl.conf startup file:
# echo "kernel.sem=250 32000 100 128" >> /etc/sysctl.conf
Setting File Handles
When configuring the Red Hat Linux server, it is critical to ensure that the maximum number of file handles is large enough. The setting for file handles denotes the number of open files that you can have on the Linux system.
Use the following command to determine the maximum number of file handles for the entire system:
# cat /proc/sys/fs/file-max
205733
Oracle recommends that the file handles for the entire system be set to at least 65536.
- You can alter the default setting for the maximum number of file handles without rebooting the machine by making the changes directly to the /proc file system (/proc/sys/fs/file-max) using the following:
# sysctl -w fs.file-max=65536
- You should then make this change permanent by inserting the kernel parameter in the /etc/sysctl.conf startup file:
# echo "fs.file-max=65536" >> /etc/sysctl.conf
You can query the current usage of file handles by using the following:
# cat /proc/sys/fs/file-nr
825     0       65536
The file-nr file displays three parameters:
- Total allocated file handles
- Currently used file handles
- Maximum file handles that can be allocated
If you need to increase the value in /proc/sys/fs/file-max, then make sure that the ulimit is set properly. Usually for Linux 2.4 and 2.6 it is set to unlimited. Verify the ulimit setting by issuing the ulimit command:
# ulimit
unlimited
Setting IP Local Port Range
Configure the system to allow a local port range of 1024 through 65000.
Use the following command to determine the value of ip_local_port_range:
# cat /proc/sys/net/ipv4/ip_local_port_range
32768   61000
The default value for ip_local_port_range is ports 32768 through 61000. Oracle recommends a local port range of 1024 to 65000.
- You can alter the default setting for the local port range without rebooting the machine by making the changes directly to the /proc file system (/proc/sys/net/ipv4/ip_local_port_range) by using the following command:
# sysctl -w net.ipv4.ip_local_port_range="1024 65000"
- You should then make this change permanent by inserting the kernel parameter in the /etc/sysctl.conf startup file:
# echo "net.ipv4.ip_local_port_range = 1024 65000" >> /etc/sysctl.conf
Setting Shell Limits for the oracle User
To improve the performance of the software on Linux systems, Oracle recommends you increase the following shell limits for the oracle user:
Shell Limit                                              Item in limits.conf   Hard Limit
Maximum number of open file descriptors                  nofile                65536
Maximum number of processes available to a single user   nproc                 16384

To make these changes, run the following as root:
- Append the shell limit entries for the oracle user to /etc/security/limits.conf.
- Update the default shell startup files for the "oracle" UNIX account by appending the pam_limits entry to /etc/pam.d/login.
- For the Bourne, Bash, or Korn shell, add the corresponding ulimit lines to the /etc/profile file.
- For the C shell (csh or tcsh), add the corresponding limit lines to the /etc/csh.login file.
A sketch of the typical entries is shown below.
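The following is a sketch of the typical entries. The hard limits come from the table above; the soft limits (nproc 2047, nofile 1024) and the pam_limits, /etc/profile, and /etc/csh.login snippets follow Oracle's commonly documented recommendations and should be verified against the installation guide for your release:

cat >> /etc/security/limits.conf <<'EOF'
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF

cat >> /etc/pam.d/login <<'EOF'
session required pam_limits.so
EOF

cat >> /etc/profile <<'EOF'
if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
EOF

cat >> /etc/csh.login <<'EOF'
if ( $USER == "oracle" ) then
    limit maxproc 16384
    limit descriptors 65536
endif
EOF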
Activating All Kernel Parameters for the SystemAt this point, we have covered all of the required Linux kernel parameters needed for a successful Oracle installation and configuration. Within each section above, we configured the Linux system to persist each of the kernel parameters through reboots on system startup by placing them all in the /etc/sysctl.conf file.We could reboot at this point to ensure all of these parameters are set in the kernel or we could simply "run" the /etc/sysctl.conf file by running the following command as root. Perform this on both Oracle RAC nodes in the cluster!
# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144
kernel.shmmax = 1073741823
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
Setting the Correct Date and Time on Both Oracle RAC Nodes
During the installation of Oracle Clusterware, the Database, and the Companion CD, the Oracle Universal Installer (OUI) first installs the software to the local node running the installer (i.e. linux1). The software is then copied remotely to all of the remaining nodes in the cluster (i.e. linux2). During the remote copy process, the OUI will execute the UNIX "tar" command on each of the remote nodes to extract the files that were archived and copied over. If the date and time on the node performing the install is greater than that of the node it is copying to, the OUI will throw an error from the "tar" command indicating it is attempting to extract files stamped with a time in the future:
Error while copying directory /u01/app/crs with exclude file list 'null' to nodes 'linux2'.
[PRKC-1002 : All the submitted commands did not execute successfully]
---------------------------------------------
linux2:
   /bin/tar: ./bin/lsnodes: time stamp 2007-12-13 09:21:34 is 735 s in the future
   /bin/tar: ./bin/olsnodes: time stamp 2007-12-13 09:21:34 is 735 s in the future
   ...(more errors on this node)
Please note that although this would seem like a severe error from the OUI, it can safely be disregarded as a warning. The "tar" command DOES actually extract the files; however, when you perform a listing of the files (using ls -l) on the remote node, they will be missing the time field until the time on the server is greater than the timestamp of the file.
Before starting any of the above noted installations, ensure that each member node of the cluster is set as closely as possible to the same date and time. Oracle strongly recommends using the Network Time Protocol feature of most operating systems for this purpose, with all nodes using the same reference Network Time Protocol server.
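If an NTP server is reachable from your network, a minimal way to enable it on CentOS 5 might look like the following (the server name pool.ntp.org is only a placeholder; substitute your organization's time source):

# echo "server pool.ntp.org" >> /etc/ntp.conf
# chkconfig ntpd on
# service ntpd restart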
Accessing a Network Time Protocol server, however, may not always be an option. In this case, when manually setting the date and time for the nodes in the cluster, ensure that the date and time of the node you are performing the software installations from (linux1) is less than all other nodes in the cluster (linux2). I generally use a 20 second difference as shown in the following example:
Setting the date and time from linux1:
# date -s "12/13/2007 01:12:00"Setting the date and time from linux2:
# date -s "12/13/2007 01:12:20"The two-node RAC configuration described in this article does not make use of a Network Time Protocol server.