Category: LINUX

2010-05-07 16:18:29

 

Configuring SAN storage multipath software on Linux

1. Use the ntsysv command to enable the multipath service at boot - select the following service (a non-interactive chkconfig alternative is sketched below):

  • multipathd
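
    If ntsysv is not available, the service can be enabled non-interactively with chkconfig; a minimal sketch (run as root):

    chkconfig multipathd on           # start multipathd in the default runlevels at boot
    chkconfig --list multipathd       # verify the runlevel settings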

    2. Start the service:

    [root@mail init.d]# service multipathd start
    Starting multipathd daemon:

    3. Edit the configuration file /etc/multipath.conf:

    vi /etc/multipath.conf

    # This is a basic configuration file with some examples, for device mapper
    # multipath.
    # For a complete list of the default configuration values, see
    # /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.defaults
    # For a list of configuration options with descriptions, see
    # /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.annotated


    # Blacklist all devices by default. Remove this to enable multipathing
    # on the default devices.
    blacklist {
    #        devnode "*"
    }

    ## By default, devices with vendor = "IBM" and product = "S/390.*" are
    ## blacklisted. To enable mulitpathing on these devies, uncomment the
    ## following lines.
    #blacklist_exceptions {
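
    The excerpt above is truncated. As a minimal sketch (an assumption, not the vendor-supplied defaults), multipathing can be enabled for the SAN LUNs with friendly mpathN names while keeping the local HP Smart Array (cciss) disks excluded:

    defaults {
            user_friendly_names yes
    }
    blacklist {
            # keep local HP Smart Array disks out of multipathing
            devnode "^cciss!c[0-9]d[0-9]*"
    }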

    4. Use multipath -F to flush the existing multipath maps.

    5. Use multipath -v2 to re-create the maps (verbose output, level 2).

    6. Use multipath -ll to display the resulting multipath topology:

    [root@mail /]# multipath -ll
    mpath0 (3600508b40006ea6e0001a000002a0000) dm-2 HP,HSV210
    [size=500G][features=1 queue_if_no_path][hwhandler=0]
    \_ round-robin 0 [prio=100][active]
    \_ 0:0:2:1 sdc 8:32  [active][ready]
    \_ 0:0:3:1 sdd 8:48  [active][ready]
    \_ round-robin 0 [prio=20][enabled]
    \_ 0:0:0:1 sda 8:0   [active][ready]
    \_ 0:0:1:1 sdb 8:16  [active][ready]

    7. Use fdisk -l to review the multipathed devices:

    [root@mail /]# fdisk -l

    Disk /dev/cciss/c0d0: 146.7 GB, 146778685440 bytes
    255 heads, 63 sectors/track, 17844 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

               Device Boot      Start         End      Blocks   Id  System
    /dev/cciss/c0d0p1   *           1          13      104391   83  Linux
    /dev/cciss/c0d0p2              14       17844   143227507+  8e  Linux LVM

    Disk /dev/cciss/c0d1: 733.9 GB, 733909245952 bytes
    255 heads, 63 sectors/track, 89226 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

               Device Boot      Start         End      Blocks   Id  System
    /dev/cciss/c0d1p1   *           1       89226   716707813+  83  Linux

    Disk /dev/sda: 536.8 GB, 536870912000 bytes
    255 heads, 63 sectors/track, 65270 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

    Disk /dev/sda doesn't contain a valid partition table

    Disk /dev/sdb: 536.8 GB, 536870912000 bytes
    255 heads, 63 sectors/track, 65270 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

    Disk /dev/sdb doesn't contain a valid partition table

    Disk /dev/sdc: 536.8 GB, 536870912000 bytes
    255 heads, 63 sectors/track, 65270 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

    Disk /dev/sdc doesn't contain a valid partition table

    Disk /dev/sdd: 536.8 GB, 536870912000 bytes
    255 heads, 63 sectors/track, 65270 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

    Disk /dev/sdd doesn't contain a valid partition table

    Disk /dev/dm-2: 536.8 GB, 536870912000 bytes
    255 heads, 63 sectors/track, 65270 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

    Disk /dev/dm-2 doesn't contain a valid partition table

    8. Create a filesystem on the multipath device, for example (choose ext2/ext3 as required; /dev/dm-2 corresponds to the persistent name /dev/mapper/mpath0 shown by multipath -ll above):

    mkfs -t ext3 /dev/dm-2

    9. Mount the filesystem and it is ready for use.
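
    For example, a minimal sketch assuming the filesystem was created on mpath0 (the persistent name for dm-2 shown by multipath -ll above) and a hypothetical mount point /san:

    mkdir -p /san
    mount /dev/mapper/mpath0 /san
    # optional /etc/fstab entry so the volume is mounted at boot:
    # /dev/mapper/mpath0  /san  ext3  defaults  0 0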

  • Configuring raw devices (multipath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5 [ID 564580.1]  

      Modified: 24-JAN-2010     Type: HOWTO     Status: PUBLISHED

    In this Document

         A Bit About Udev and Device Name Persistency
         Multipath, Raw and Udev
         Configuring raw devices (multipath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5
         1a. Whitelist SCSI devices
         1b. List all SCSI (Clusterware) devices
         1c. Obtain Clusterware device unique SCSI identifiers


    Applies to:

    Linux OS - Version: 5.0 to 5.0
    Linux x86
    Linux x86-64
    Linux Itanium
    Linux Kernel - Version: 5.0 to 5.0

    Goal

    This article is intended for Oracle on Linux Database and System Administrators, particularly those intending to install (or migrate to) Oracle Real Application Clusters 10g Release 2 (10.2.0) on Red Hat/Oracle Enterprise Linux 5 (EL5). The article is intended to focus on the configuration of raw devices against multipathed devices on EL5 in preparation for RAC Clusterware usage, rather than on multipathing or installation of the Clusterware.

    Examples were taken from a working system of the following configuration:
    • Enterprise Linux 5 (GA) - 2.6.18-8.el5
    • Oracle Clusterware 10g Release 2 (10.2.0)
    • Shared storage for Clusterware files served via iSCSI
    Note: a separate Note describes configuration of raw devices against singlepathed devices (see References); this Note describes configuration of raw devices against multipathed devices.

    Solution

    Deprecation of Support for Raw Devices

    In versions prior to EL5, applications such as Oracle could access unstructured data on block devices by binding to them via character raw devices, such as /dev/raw/raw1, using the raw(8) command. Persistent device assignments could be configured using the /etc/sysconfig/rawdevices file in conjunction with the rawdevices service.

    Support for raw devices was initially deprecated in the Linux 2.6 kernel (EL5 < U4) in favour of direct I/O (O_DIRECT) access; however, it was later undeprecated from EL5 U4 (initscripts-8.45.30-2).

    For details of the deprecation and undeprecation of support for rawio, refer to Linux kernel/version documentation including:
    • /usr/share/doc/kernel-doc-2.6.18/Documentation/feature-removal-schedule.txt
    • Red Hat Enterprise Linux 4/5 Release notes
    Both the /etc/sysconfig/rawdevices file (EL4) and /etc/udev/rules.d/60-raw.rules file (EL5) similarly discuss deprecation of raw.

    OCFS2, Oracle's Cluster Filesystem version 2, is an extent-based, POSIX-compliant file system that provides shared, O_DIRECT file access. For certified ports and distributions, Oracle extends free support of OCFS2 to users with an Oracle database license, for storing Oracle datafiles, redo logs, archive logs, control files, voting disks (CRS), the cluster registry (OCR), etc., along with a shared Oracle home.

    A Bit About Udev and Device Name Persistency

    Unlike devlabel in the 2.4 kernel, udev (the 2.6 kernel device file naming scheme) dynamically creates device file names at boot time. This gives rise to the possibility that device file names may change - a device named /dev/sdd before a reboot may, say, be named /dev/sdf afterwards. Without specific configuration, if udev is left to name devices dynamically, devices referred to, or inadvertently accessed by, their arbitrary kernel-assigned names (e.g. Oracle Clusterware files; Cluster Registry, Voting disks, etc.) can become corrupted.
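
    For example, the persistent by-id symlink and the stable SCSI identifier for a device can be compared with its current kernel-assigned name (a sketch assuming the device is currently /dev/sdd):

    # ls -l /dev/disk/by-id/ | grep sdd     # persistent scsi-<wwid> symlink pointing at the current kernel name
    # scsi_id -g -u -s /block/sdd           # the wwid itself, stable across reboots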

    Multipath, Raw and Udev

    The necessity for high availability access to storage is well understood. For singlepath environments, raw devices can easily be configured via udev rules, as described in the companion singlepath Note (see References). For multipath environments, however, configuration of raw devices against multipathed devices via udev is more complex. In fact, significant modification of the default udev rules can introduce supportability issues. Therefore, other means are recommended to configure raw devices against multipathed devices while keeping multipath device naming persistent.

    Configuring raw devices (multipath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5

    The following procedure outlines the steps necessary to configure persistent multipath device naming and creation of raw devices (including permissions) in preparation for Oracle 10gR2 (10.2.0) Clusterware devices. From Oracle11g Release 1 (11.1.0), Clusterware files may be placed on either block or raw devices located on shared disk partitions, therefore the following procedure only strictly applies when using Oracle 10gR2 (10.2.0) and multipathing.

    Therefore, take this opportunity to consider whether you wish to proceed using 10gR2 or 11gR1 Clusterware to manage your 10gR2 databases - multipath device configuration for Oracle 11g Clusterware is described in the 11g Note listed in the References. The following procedure may also be used as a basis for configuring raw devices on EL4 (Update 2 or higher). Unless otherwise stated, all steps should be performed on each cluster node and as a privileged user.

    Assumptions

    The following procedure assumes the following have already occurred:
    • Clusterware devices have been created on supported shared storage
    • Clusterware devices have been appropriately sized according to Oracle10g Release 2 (10.2.0) RAC documentation
    • Clusterware devices have been partitioned
    • All cluster nodes have multipath access to shared devices
    • Cluster nodes are configured to satisfy Oracle Universal Installer (OUI) requirements

    1. Configure SCSI_ID to Return Unique Device Identifiers

    1a. Whitelist SCSI devices

    Before udev can be configured to explicitly name devices, scsi_id(8) should first be configured to return SCSI device identifiers. Modify the /etc/scsi_id.config file - add the options=-g parameter, replacing any existing options=-b parameter/value pair, for example:

    # grep -v ^# /etc/scsi_id.config
    vendor="ATA",options=-p 0x80
    options=-g
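
    With options=-g in place, scsi_id should now return an identifier for a whitelisted device; a quick check (assuming /dev/sdb is one of the candidate devices - the identifier shown is the one from the example output in Step 1c.):

    # scsi_id -g -u -s /block/sdb
    1494554000000000000000000010000005c3900000d000000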

    1b. List all SCSI (Clusterware) devices

    Clusterware devices must be visible and accessible to all cluster nodes. Typically, cluster node operating systems need to be updated in order to see newly provisioned (or modified) devices on shared storage, e.g. by running '/sbin/partprobe' or '/sbin/sfdisk -r' against the devices, or by simply rebooting. Resolve any issues preventing cluster nodes from correctly seeing or accessing the Clusterware devices before proceeding.
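
    For example, partition tables can be rescanned on all candidate devices without rebooting (a sketch assuming the devices are /dev/sdb through /dev/sdm, as in the listing below):

    # for d in /dev/sd[b-m]; do /sbin/partprobe $d; done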

    Run the fdisk(8) and/or 'cat /proc/partitions' commands to ensure Clusterware devices are visible, for example:

    # cat /proc/partitions
    major minor  #blocks  name

       8     0    6291456 sda
       8     1    5735173 sda1
       8     2     554242 sda2
       8    16     987966 sdb
       8    17     987681 sdb1
       8    32     987966 sdc
       8    33     987681 sdc1
       8    48     987966 sdd
       8    49     987681 sdd1
       8    64     987966 sde
       8    65     987681 sde1
       8    80     987966 sdf
       8    81     987681 sdf1
       8    96     987966 sdg
       8    97     987681 sdg1
       8   112    1004031 sdh
       8   113    1003873 sdh1
       8   128    1004031 sdi
       8   129    1003873 sdi1
       8   144    1004031 sdj
       8   145    1003873 sdj1
       8   160    1004031 sdk
       8   161    1003873 sdk1
       8   176    1004031 sdl
       8   177    1003873 sdl1
       8   192    1004031 sdm
       8   193    1003873 sdm1

    Above, though perhaps not entirely evident, the kernel has assigned two device files per multipathed device i.e. devices /dev/sdb and /dev/sdc both refer to the same device/LUN on shared storage, as do /dev/sdd and  /dev/sde and so on.

    Note, at this point, each cluster node may refer to the would-be Clusterware devices by different device file names - this is expected.

    1c. Obtain Clusterware device unique SCSI identifiers

    Run the scsi_id(8) command against the Clusterware devices from one cluster node to obtain their unique device identifiers. When running scsi_id(8) with the -s argument, the device path and name passed should be relative to the sysfs directory /sys, i.e. /block/sdb when referring to /sys/block/sdb. Record the unique SCSI identifiers of the Clusterware devices - these are required later (Step 2a.), for example:

    # for i in `cat /proc/partitions | awk {'print $4'} |grep sd`; do echo "### $i: `scsi_id -g -u -s /block/$i`"; done
    ...
    ### sdb: 1494554000000000000000000010000005c3900000d000000
    ### sdb1:
    ### sdc: 1494554000000000000000000010000005c3900000d000000
    ### sdc1:
    ### sdd: 149455400000000000000000001000000843900000d000000
    ### sdd1:
    ### sde: 149455400000000000000000001000000843900000d000000
    ### sde1:
    ### sdf: 149455400000000000000000001000000ae3900000d000000
    ### sdf1:
    ### sdg: 149455400000000000000000001000000ae3900000d000000
    ### sdg1:
    ### sdh: 149455400000000000000000001000000d03900000d000000
    ### sdh1:
    ### sdi: 149455400000000000000000001000000d03900000d000000
    ### sdi1:
    ### sdj: 149455400000000000000000001000000e63900000d000000
    ### sdj1:
    ### sdk: 149455400000000000000000001000000e63900000d000000
    ### sdk1:
    ### sdl: 149455400000000000000000001000000083a00000d000000
    ### sdl1:
    ### sdm: 149455400000000000000000001000000083a00000d000000
    ### sdm1:

    From the output above, note that multiple devices share common SCSI identifiers. It should now be evident that devices such as /dev/sdb and /dev/sdc refer to the same shared storage device (LUN).

    Note: Irrespective of which cluster node the scsi_id(8) command is run from, the value returned for a given device (LUN) should always be the same.
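
    This can be confirmed by running the command from each node and comparing the output; a sketch assuming two hypothetical node names, oel5a and oel5b:

    # for node in oel5a oel5b; do echo "### $node: `ssh $node scsi_id -g -u -s /block/sdb`"; done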

    2. Configure Multipath for Persistent Naming of Clusterware Devices

    The purpose of this step is to provide persistent and meaningful, user-defined Clusterware multipath device names. This step is provided to ensure correct use of the intended Clusterware multipath devices that could otherwise be confused if solely relying on default multipath-assigned names (mpathn/mpathnpn), especially when many devices are involved.

    2a. Configure Multipathing

    Configure multipathing by modifying multipath configuration file /etc/multipath.conf. Comment and uncomment various stanzas accordingly to include (whitelist) or exclude (blacklist) specific devices/types as candidates for multipathing. Specific devices, such as our intended Clusterware devices, should be explicitly whitelisted as multipathing candidates. This can be accomplished by creating dedicated multipath stanzas for each device. Ideally, at a minimum, each device stanza should include the device wwid and an alias, for example:

    # cat /etc/multipath.conf
    ...
            multipath {
                    wwid    1494554000000000000000000010000005c3900000d000000
                    alias   voting1
            }
    ...

    Following is a sample multipath.conf file. Modify your configuration according to your own environment and preferences, but be sure to include the Clusterware device-specific multipath stanzas - substitute the wwid values with your own, i.e. those returned when running Step 1c. above.

    # grep -v ^# /etc/multipath.conf
    defaults {
            user_friendly_names yes
    }
    defaults {
            udev_dir                /dev
            polling_interval        10
            selector                "round-robin 0"
            path_grouping_policy    failover
            getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
            prio_callout            /bin/true
            path_checker            readsector0
            rr_min_io               100
            rr_weight               priorities
            failback                immediate
            #no_path_retry          fail
            user_friendly_names     yes
    }
    devnode_blacklist {
            devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
            devnode "^hd[a-z]"
            devnode "^sda"
            devnode "^cciss!c[0-9]d[0-9]*"
    }
    multipaths {
            multipath {
                    wwid    1494554000000000000000000010000005c3900000d000000
                    alias   voting1
            }
            multipath {
                    wwid    149455400000000000000000001000000843900000d000000
                    alias   voting2
            }
            multipath {
                    wwid    149455400000000000000000001000000ae3900000d000000
                    alias   voting3
            }
            multipath {
                    wwid    149455400000000000000000001000000d03900000d000000
                    alias   ocr1
            }
            multipath {
                    wwid    149455400000000000000000001000000e63900000d000000
                    alias   ocr2
            }
            multipath {
                    wwid    149455400000000000000000001000000083a00000d000000
                    alias   ocr3
            }
    }

    In the example above, devices with a specific wwid (per scsi_id(8)) are assigned persistent, user-defined names (aliases) i.e. voting1, voting2, voting3, ocr1, ocr2 and ocr3.
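
    After editing /etc/multipath.conf, the new aliases can be applied by flushing and rebuilding the multipath maps, or by restarting the multipathd service; a minimal sketch:

    # multipath -F          # flush the existing (unused) multipath maps
    # multipath -v2         # re-create the maps using the updated configuration
    # service multipathd restart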

    2b. Verify Multipath Devices

    Once multipathing has been configured and the multipathd service started, the multipathed Clusterware devices should be referable by their user-defined names, for example:

    # multipath -ll
    ocr3 (149455400000000000000000001000000083a00000d000000) dm-9 IET,VIRTUAL-DISK
    [size=980M][features=0][hwhandler=0]
    \_ round-robin 0 [prio=0][active]
     \_ 2:0:0:10 sdl 8:176 [active][ready]
    \_ round-robin 0 [prio=0][enabled]
     \_ 2:0:0:11 sdm 8:192 [active][ready]
    ocr2 (149455400000000000000000001000000e63900000d000000) dm-3 IET,VIRTUAL-DISK
    [size=980M][features=0][hwhandler=0]
    \_ round-robin 0 [prio=0][active]
     \_ 2:0:0:8  sdj 8:144 [active][ready]
    \_ round-robin 0 [prio=0][enabled]
     \_ 2:0:0:9  sdk 8:160 [active][ready]
    ocr1 (149455400000000000000000001000000d03900000d000000) dm-6 IET,VIRTUAL-DISK
    [size=980M][features=0][hwhandler=0]
    \_ round-robin 0 [prio=0][active]
     \_ 2:0:0:6  sdh 8:112 [active][ready]
    \_ round-robin 0 [prio=0][enabled]
     \_ 2:0:0:7  sdi 8:128 [active][ready]
    voting3 (149455400000000000000000001000000ae3900000d000000) dm-2 IET,VIRTUAL-DISK
    [size=965M][features=0][hwhandler=0]
    \_ round-robin 0 [prio=0][active]
     \_ 2:0:0:4  sdf 8:80  [active][ready]
    \_ round-robin 0 [prio=0][enabled]
     \_ 2:0:0:5  sdg 8:96  [active][ready]
    voting2 (149455400000000000000000001000000843900000d000000) dm-1 IET,VIRTUAL-DISK
    [size=965M][features=0][hwhandler=0]
    \_ round-robin 0 [prio=0][active]
     \_ 2:0:0:2  sdd 8:48  [active][ready]
    \_ round-robin 0 [prio=0][enabled]
     \_ 2:0:0:3  sde 8:64  [active][ready]
    voting1 (1494554000000000000000000010000005c3900000d000000) dm-0 IET,VIRTUAL-DISK
    [size=965M][features=0][hwhandler=0]
    \_ round-robin 0 [prio=0][active]
     \_ 2:0:0:0  sdb 8:16  [active][ready]
    \_ round-robin 0 [prio=0][enabled]
     \_ 2:0:0:1  sdc 8:32  [active][ready]

    In fact, various device names are created and used to refer to multipathed devices i.e.:
    # dmsetup ls | sort
    ocr1    (253, 6)
    ocr1p1  (253, 11)
    ocr2    (253, 3)
    ocr2p1  (253, 8)
    ocr3    (253, 9)
    ocr3p1  (253, 10)
    voting1 (253, 0)
    voting1p1       (253, 5)
    voting2 (253, 1)
    voting2p1       (253, 4)
    voting3 (253, 2)
    voting3p1       (253, 7)


    # ll /dev/disk/by-id/
    lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-149455400000000000000000001000000083a00000d000000 -> ../../sdm
    lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-149455400000000000000000001000000083a00000d000000-part1 -> ../../sdm1
    lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-1494554000000000000000000010000005c3900000d000000 -> ../../sdc
    lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-1494554000000000000000000010000005c3900000d000000-part1 -> ../../sdc1
    lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-149455400000000000000000001000000843900000d000000 -> ../../sde
    lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-149455400000000000000000001000000843900000d000000-part1 -> ../../sde1
    lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-149455400000000000000000001000000ae3900000d000000 -> ../../sdg
    lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-149455400000000000000000001000000ae3900000d000000-part1 -> ../../sdg1
    lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-149455400000000000000000001000000d03900000d000000 -> ../../sdi
    lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-149455400000000000000000001000000d03900000d000000-part1 -> ../../sdi1
    lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-149455400000000000000000001000000e63900000d000000 -> ../../sdk
    lrwxrwxrwx 1 root root 9 Apr 23 11:05 scsi-149455400000000000000000001000000e63900000d000000-part1 -> ../../sdk1

    # ls -l /dev/dm-*
    brw-rw---- 1 root root 253,  0 Apr 23 11:15 /dev/dm-0
    brw-rw---- 1 root root 253,  1 Apr 23 11:15 /dev/dm-1
    brw-rw---- 1 root root 253, 10 Apr 23 11:15 /dev/dm-10
    brw-rw---- 1 root root 253, 11 Apr 23 11:15 /dev/dm-11
    brw-rw---- 1 root root 253,  2 Apr 23 11:15 /dev/dm-2
    brw-rw---- 1 root root 253,  3 Apr 23 11:15 /dev/dm-3
    brw-rw---- 1 root root 253,  4 Apr 23 11:15 /dev/dm-4
    brw-rw---- 1 root root 253,  5 Apr 23 11:15 /dev/dm-5
    brw-rw---- 1 root root 253,  6 Apr 23 11:15 /dev/dm-6
    brw-rw---- 1 root root 253,  7 Apr 23 11:15 /dev/dm-7
    brw-rw---- 1 root root 253,  8 Apr 23 11:15 /dev/dm-8
    brw-rw---- 1 root root 253,  9 Apr 23 11:15 /dev/dm-9

    # ll /dev/mpath/
    lrwxrwxrwx 1 root root 7 Apr 23 11:15 ocr1 -> ../dm-6
    lrwxrwxrwx 1 root root 7 Apr 23 11:15 ocr1p1 -> ../dm-11
    lrwxrwxrwx 1 root root 7 Apr 23 11:15 ocr2 -> ../dm-3
    lrwxrwxrwx 1 root root 7 Apr 23 11:15 ocr2p1 -> ../dm-8
    lrwxrwxrwx 1 root root 7 Apr 23 11:15 ocr3 -> ../dm-9
    lrwxrwxrwx 1 root root 7 Apr 23 11:15 ocr3p1 -> ../dm-10
    lrwxrwxrwx 1 root root 7 Apr 23 11:15 voting1 -> ../dm-0
    lrwxrwxrwx 1 root root 7 Apr 23 11:15 voting1p1 -> ../dm-5
    lrwxrwxrwx 1 root root 7 Apr 23 11:15 voting2 -> ../dm-1
    lrwxrwxrwx 1 root root 7 Apr 23 11:15 voting2p1 -> ../dm-4
    lrwxrwxrwx 1 root root 7 Apr 23 11:15 voting3 -> ../dm-2
    lrwxrwxrwx 1 root root 7 Apr 23 11:15 voting3p1 -> ../dm-7

    # ll /dev/mapper/
    brw-rw---- 1 root disk 253,   6 Apr 23 11:15 ocr1
    brw-rw---- 1 root disk 253,  11 Apr 23 11:15 ocr1p1
    brw-rw---- 1 root disk 253,   3 Apr 23 11:15 ocr2
    brw-rw---- 1 root disk 253,   8 Apr 23 11:15 ocr2p1
    brw-rw---- 1 root disk 253,   9 Apr 23 11:15 ocr3
    brw-rw---- 1 root disk 253,  10 Apr 23 11:15 ocr3p1
    brw-rw---- 1 root disk 253,   0 Apr 23 11:15 voting1
    brw-rw---- 1 root disk 253,   5 Apr 23 11:15 voting1p1
    brw-rw---- 1 root disk 253,   1 Apr 23 11:15 voting2
    brw-rw---- 1 root disk 253,   4 Apr 23 11:15 voting2p1
    brw-rw---- 1 root disk 253,   2 Apr 23 11:15 voting3
    brw-rw---- 1 root disk 253,   7 Apr 23 11:15 voting3p1

    The /dev/dm-N devices are used internally by device-mapper-multipath and are not persistent across reboots, so they should not be used. The /dev/mpath/ devices are created so that multipath devices are visible together; however, they may not be available during the early stages of boot, so, again, they should not be used. The /dev/mapper/ devices, however, are persistent and are created sufficiently early during boot - use only these devices to access and interact with multipathed devices.

    3. Create Raw Devices

    During the installation of Oracle Clusterware 10g Release 2 (10.2.0), the Universal Installer (OUI) is unable to verify the sharedness of block devices and therefore requires raw devices (bound to either singlepath or multipath devices) to be specified for the OCR and voting disks. As mentioned earlier, this is no longer the case from Oracle 11g Release 1 (11.1.0), which can use multipathed block devices directly.

    Manually create raw devices to bind against multipathed device partitions (/dev/mapper/*pN). Disregard device permissions for now - this will be addressed later. For example:

    # raw /dev/raw/raw1 /dev/mapper/ocr1p1
    /dev/raw/raw1:  bound to major 253, minor 11
    # raw /dev/raw/raw2 /dev/mapper/ocr2p1
    /dev/raw/raw2:  bound to major 253, minor 8
    # raw /dev/raw/raw3 /dev/mapper/ocr3p1
    /dev/raw/raw3:  bound to major 253, minor 10
    # raw /dev/raw/raw4 /dev/mapper/voting1p1
    /dev/raw/raw4:  bound to major 253, minor 5
    # raw /dev/raw/raw5 /dev/mapper/voting2p1
    /dev/raw/raw5:  bound to major 253, minor 4
    # raw /dev/raw/raw6 /dev/mapper/voting3p1
    /dev/raw/raw6:  bound to major 253, minor 7

    # raw -qa
    /dev/raw/raw1:  bound to major 253, minor 11
    /dev/raw/raw2:  bound to major 253, minor 8
    /dev/raw/raw3:  bound to major 253, minor 10
    /dev/raw/raw4:  bound to major 253, minor 5
    /dev/raw/raw5:  bound to major 253, minor 4
    /dev/raw/raw6:  bound to major 253, minor 7

    # ls -l /dev/raw/
    crw------- 1 root root 162, 1 Apr 23 11:52 raw1
    crw------- 1 root root 162, 2 Apr 23 11:52 raw2
    crw------- 1 root root 162, 3 Apr 23 11:52 raw3
    crw------- 1 root root 162, 4 Apr 23 11:52 raw4
    crw------- 1 root root 162, 5 Apr 23 11:52 raw5
    crw------- 1 root root 162, 6 Apr 23 11:52 raw6

    At this point, you should have raw devices bound to multipathed device partitions via user-defined names.
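
    An individual binding can be cross-checked against its device-mapper device by comparing major/minor numbers; a sketch for raw1 and ocr1p1:

    # raw -q /dev/raw/raw1                  # reports the major/minor the raw device is bound to (253, 11)
    # dmsetup info -c /dev/mapper/ocr1p1    # reports the same major/minor for the mapper device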

    4. Test Raw Device Accessibility

    Test read/write accessibility to and from raw devices from and between cluster nodes, for example:
    # dd if=/dev/zero of=/dev/raw/raw1 bs=1024 count=100
    100+0 records in
    100+0 records out
    102400 bytes (102 kB) copied, 0.762352 seconds, 134 kB/s

    # su - oracle
    $ dd if=/dev/zero of=/dev/raw/raw1 bs=1024 count=100
    dd: opening `/dev/raw/raw1': Permission denied

    # dd if=/dev/zero of=/dev/mapper/ocr1p1 bs=1024 count=100
    100+0 records in
    100+0 records out
    102400 bytes (102 kB) copied, 0.0468961 seconds, 2.2 MB/s

    ...

    Once testing is complete, unbind all raw devices, for example:
    # raw /dev/raw/raw1 0 0
    /dev/raw/raw1:  bound to major 0, minor 0
    # raw /dev/raw/raw2 0 0
    /dev/raw/raw2:  bound to major 0, minor 0
    # raw /dev/raw/raw3 0 0
    /dev/raw/raw3:  bound to major 0, minor 0
    # raw /dev/raw/raw4 0 0
    /dev/raw/raw4:  bound to major 0, minor 0
    # raw /dev/raw/raw5 0 0
    /dev/raw/raw5:  bound to major 0, minor 0
    # raw /dev/raw/raw6 0 0
    /dev/raw/raw6:  bound to major 0, minor 0

    5. Script Creation of Raw Bindings and Permissions

    Once raw devices have been created and their accessibility and usability established, configure the raw device bindings and permissions. Because support for raw devices was undeprecated from EL5 Update 4 (initscripts-8.45.30-2), configure raw devices according to your version.

    For >= EL5U4, configure raw devices via /etc/sysconfig/rawdevices in conjunction with the rawdevices service.
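
    A minimal sketch of /etc/sysconfig/rawdevices entries matching the bindings used above (format: <raw device> <block device>); note that this only restores the bindings - ownership and permissions must still be set separately (e.g. as in the rc.local example below):

    # raw device bindings, applied by: service rawdevices restart
    /dev/raw/raw1 /dev/mapper/ocr1p1
    /dev/raw/raw2 /dev/mapper/ocr2p1
    /dev/raw/raw3 /dev/mapper/ocr3p1
    /dev/raw/raw4 /dev/mapper/voting1p1
    /dev/raw/raw5 /dev/mapper/voting2p1
    /dev/raw/raw6 /dev/mapper/voting3p1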

    For < EL5U4, use a custom or existing script such as /etc/rc.local to configure raw devices, for example:

    # cat /etc/rc.local
    #!/bin/sh
    #
    # This script will be executed *after* all the other init scripts.
    # You can put your own initialization stuff in here if you don't
    # want to do the full Sys V style init stuff.

    touch /var/lock/subsys/local

    #####
    # Oracle Cluster Registry (OCR) devices
    #####
    chown root:oinstall /dev/mapper/ocr*
    chmod 660 /dev/mapper/ocr*
    raw /dev/raw/raw1 /dev/mapper/ocr1p1
    raw /dev/raw/raw2 /dev/mapper/ocr2p1
    raw /dev/raw/raw3 /dev/mapper/ocr3p1
    sleep 2
    chown root:oinstall /dev/raw/raw1
    chown root:oinstall /dev/raw/raw2
    chown root:oinstall /dev/raw/raw3
    chmod 660 /dev/raw/raw1
    chmod 660 /dev/raw/raw2
    chmod 660 /dev/raw/raw3
    #####
    # Oracle Cluster Voting disks
    #####
    chown oracle:oinstall /dev/mapper/voting*
    chmod 660 /dev/mapper/voting*
    raw /dev/raw/raw4 /dev/mapper/voting1p1
    raw /dev/raw/raw5 /dev/mapper/voting2p1
    raw /dev/raw/raw6 /dev/mapper/voting3p1
    sleep 2
    chown oracle:oinstall /dev/raw/raw4
    chown oracle:oinstall /dev/raw/raw5
    chown oracle:oinstall /dev/raw/raw6
    chmod 660 /dev/raw/raw4
    chmod 660 /dev/raw/raw5
    chmod 660 /dev/raw/raw6

    Note: depending on the type and speed of the underlying storage, a sleep(1) of one or two seconds may be necessary between raw device creation and ownership/permission setting.

    6. Test the Raw Device Script

    Restart the rawdevices service and/or execute the /etc/rc.local script to test the proper creation and permission setting of both raw and multipath devices. Additionally, reboot the server(s) to further verify proper boot-time creation of devices, for example:

    # /etc/rc.local
    /dev/raw/raw1: bound to major 253, minor 11
    /dev/raw/raw2: bound to major 253, minor 8
    /dev/raw/raw3: bound to major 253, minor 10
    /dev/raw/raw4: bound to major 253, minor 5
    /dev/raw/raw5: bound to major 253, minor 4
    /dev/raw/raw6: bound to major 253, minor 7

    # ll /dev/mapper/
    brw-rw---- 1 root disk 253,   6 Apr 23 11:15 ocr1
    brw-rw---- 1 root disk 253,  11 Apr 23 11:15 ocr1p1
    brw-rw---- 1 root disk 253,   3 Apr 23 11:15 ocr2
    brw-rw---- 1 root disk 253,   8 Apr 23 11:15 ocr2p1
    brw-rw---- 1 root disk 253,   9 Apr 23 11:15 ocr3
    brw-rw---- 1 root disk 253,  10 Apr 23 11:15 ocr3p1
    brw-rw---- 1 root disk 253,   0 Apr 23 11:15 voting1
    brw-rw---- 1 root disk 253,   5 Apr 23 11:15 voting1p1
    brw-rw---- 1 root disk 253,   1 Apr 23 11:15 voting2
    brw-rw---- 1 root disk 253,   4 Apr 23 11:15 voting2p1
    brw-rw---- 1 root disk 253,   2 Apr 23 11:15 voting3
    brw-rw---- 1 root disk 253,   7 Apr 23 11:15 voting3p1

    # ls -l /dev/raw/
    crw-rw---- 1 root   oinstall 162, 1 Apr 23 11:57 raw1
    crw-rw---- 1 root   oinstall 162, 2 Apr 23 11:57 raw2
    crw-rw---- 1 root   oinstall 162, 3 Apr 23 11:57 raw3
    crw-rw---- 1 oracle oinstall 162, 4 Apr 23 11:57 raw4
    crw-rw---- 1 oracle oinstall 162, 5 Apr 23 11:57 raw5
    crw-rw---- 1 oracle oinstall 162, 6 Apr 23 11:57 raw6

    7. Install Oracle 10gR2 Clusterware

    Proceed to install Oracle Clusterware 10g Release 2 (10.2.0), ensuring that the appropriate raw devices (/dev/raw/rawN) are specified for the OCR and voting disks. OCR devices are initialised (formatted) as part of running the root.sh script. Before running root.sh, be aware that several known issues exist that will cause the Clusterware installation to fail, namely:

    • FAILED TO FORMAT OCR DISK USING CLSFMT
    • 10gR2 RAC Install issues on Oracle EL5 or RHEL5 or SLES10 (VIPCA Failures)

    Due to the first issue listed above, initialisation of multipathed OCR devices will fail. Therefore, before running root.sh, download and apply the corresponding patch. If root.sh was already run without the patch having first been applied, remove (null) the failed, partially initialised OCR structures from all OCR devices, for example:

    # dd if=/dev/zero of=/dev/raw/raw1 bs=1M count=25
    25+0 records in
    25+0 records out

    # dd if=/dev/zero of=/dev/raw/raw2 bs=1M count=25
    25+0 records in
    25+0 records out

    Before re-running root.sh, review the VIPCA issues Note listed above to proactively address several known (vipca) issues that would otherwise need to be resolved separately later. With the above complete, running (or re-running) root.sh should result in proper initialisation of the multipathed OCR/voting devices and successful completion of the Oracle Clusterware installation, i.e.:

    [oracle@oel5a crs]$ sudo ./root.sh
    WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
    WARNING: directory '/u01/app/oracle/product' is not owned by root
    WARNING: directory '/u01/app/oracle' is not owned by root
    WARNING: directory '/u01/app' is not owned by root
    Checking to see if Oracle CRS stack is already configured

    Setting the permissions on OCR backup directory
    Setting up NS directories
    Oracle Cluster Registry configuration upgraded successfully
    WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
    WARNING: directory '/u01/app/oracle/product' is not owned by root
    WARNING: directory '/u01/app/oracle' is not owned by root
    WARNING: directory '/u01/app' is not owned by root
    assigning default hostname oel5a for node 1.
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node :
    node 1: oel5a oel5a-int oel5a
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    Now formatting voting device: /dev/raw/raw4
    Now formatting voting device: /dev/raw/raw5
    Now formatting voting device: /dev/raw/raw6
    Format of 3 voting devices complete.
    Startup will be queued to init within 90 seconds.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    CSS is active on these nodes.
    oel5a
    CSS is active on all nodes.
    Waiting for the Oracle CRSD and EVMD to start
    Waiting for the Oracle CRSD and EVMD to start
    Oracle CRS stack installed and running under init(1M)
    Running vipca(silent) for configuring nodeapps
    ...

    Upon completion of Oracle 10gR2 Clusterware installation, the Clusterware should be up and running, making use of raw devices bound to multipathed devices i.e.:

    [oracle@oel5a crs]$ ocrcheck
    Status of Oracle Cluster Registry is as follows :
    Version : 2
    Total space (kbytes) : 262144
    Used space (kbytes) : 1164
    Available space (kbytes) : 260980
    ID : 1749049955
    Device/File Name : /dev/raw/raw1
    Device/File integrity check succeeded
    Device/File Name : /dev/raw/raw2
    Device/File integrity check succeeded

    Cluster registry integrity check succeeded

    [root@orl5a /]# crsctl query css votedisk
    0.     0    /dev/raw/raw4
    1.     0    /dev/raw/raw5
    2.     0    /dev/raw/raw6

    located 3 votedisk(s).

    [root@oel5a /]# crsctl check crs
    CSS appears healthy
    CRS appears healthy
    EVM appears healthy

    Refer to the relevant Note for any issues, such as -16 EBUSY [Device or resource busy], arising from the continued use of raw devices bound to multipathed devices.

    The requirement to use raw devices for OCR and voting devices applies solely to the initial installation of Oracle 10gR2 Clusterware. Once the installation is complete, the OCR and voting devices can be switched to use multipath devices directly - refer to the References for further details.

    References

    - Install Of CRS Gets "Specified Partition May Not Have Correct Permission"
    - How to install Oracle Clusterware with shared storage on block devices
    - 10gR2 RAC Install issues on Oracle EL5 or RHEL5 or SLES10 (VIPCA / SRVCTL / OUI Failures)
    - Configuring raw devices (singlepath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5
    - Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0) on RHEL5/OEL5


