Category: LINUX

2015-11-11 14:22:02

Continuing from the previous post's configuration.
For DRBD installation and configuration, see:
http://blog.chinaunix.net/uid-30212356-id-5378086.html
Other installation guides:
heartbeat:
http://blog.chinaunix.net/uid-30212356-id-5333727.html
pacemaker+corosync+crmsh:
http://blog.chinaunix.net/uid-30212356-id-5349960.html

Environment:
node1/mysql1: 192.168.85.144
node2/mysql2: 192.168.85.145
VIP: 192.168.85.128
Test host: 192.168.85.143

I. Configuring pacemaker+corosync+DRBD
1. Switch both nodes to Secondary, stop the DRBD service, and disable DRBD autostart at boot
[root@node1 ~]# drbd-overview
  0:mydrbd  Connected Secondary/Secondary UpToDate/UpToDate C r----- 
  
[root@node2 ~]# drbd-overview 
  0:mydrbd  Connected Secondary/Secondary UpToDate/UpToDate C r----- 

[root@node1 ~]# service drbd stop
Stopping all DRBD resources: .

[root@node2 ~]# service drbd stop
Stopping all DRBD resources: .

[root@node1 ~]# chkconfig drbd off
[root@node2 ~]# chkconfig drbd off
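To double-check that neither node will bring DRBD up on its own after a reboot, a small helper like the following can be used (a sketch only; the function just inspects one line of `chkconfig --list drbd` output):

```shell
#!/bin/sh
# Return success (0) if the chkconfig line shows DRBD enabled in any runlevel.
drbd_autostart_enabled() {
    echo "$1" | grep -q ':on'
}

# In a real run the line would come from: chkconfig --list drbd
line="drbd            0:off   1:off   2:off   3:off   4:off   5:off   6:off"
if drbd_autostart_enabled "$line"; then
    echo "WARNING: DRBD is still set to autostart"
else
    echo "OK: DRBD autostart disabled"
fi
```

Run it on both nodes; the cluster manager, not init, must control DRBD from here on.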

2. Install heartbeat, corosync, pacemaker, and crmsh
All of these were installed earlier, so the step is skipped here; the heartbeat install was:
yum install heartbeat heartbeat-libs cluster-glue cluster-glue-libs resource-agents
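For reference, on a CentOS 6 system the rest of the stack could be pulled in with something like the following. The package names here are assumptions and depend on your repositories; crmsh in particular usually comes from a separate repository (for example the openSUSE network:ha-clustering repo), not from base CentOS.

```shell
# Hypothetical package list - adjust to what your repositories provide.
yum install corosync corosynclib pacemaker pacemaker-libs
yum install crmsh pssh
```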

3. Edit the corosync configuration file as follows
[root@node1 corosync]# cat corosync.conf
compatibility: whitetank
totem {
        version: 2
        secauth: on
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.85.0
                mcastaddr: 239.245.3.1
                mcastport: 5405
                ttl: 1
        }
}
logging {
        fileline: off
        to_stderr: no
        to_logfile: yes 
        logfile: /var/log/cluster/corosync.log
        to_syslog: no
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}
service {
                name: pacemaker
                ver: 1
}

4. Generate the authentication key file (corosync-keygen reads from /dev/random, so on an idle machine it may pause waiting for entropy; typing on the console helps)
[root@node1 corosync]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Writing corosync key to /etc/corosync/authkey.

5. Copy the key file and the configuration file to node2
[root@node1 corosync]# scp -p authkey corosync.conf node2:/etc/corosync/
authkey                                               100%  128     0.1KB/s   00:00    
corosync.conf                                         100%  609     0.6KB/s   00:00   

6. Start the corosync and pacemaker services and check the cluster status (ver is set to 1 in the corosync configuration, so pacemaker must be started manually)
[root@node1 ~]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync): [  OK  ]
[root@node1 ~]# ssh node2 '/etc/init.d/corosync start'
Starting Corosync Cluster Engine (corosync): [  OK  ]
[root@node1 ~]# /etc/init.d/pacemaker  start
Starting Pacemaker Cluster Manager[  OK  ]
[root@node1 ~]# ssh node2 '/etc/init.d/pacemaker start'
Starting Pacemaker Cluster Manager[  OK  ]

[root@node1 ~]# crm status # stonith has already been disabled, and the no-quorum policy is set to ignore when votes drop below half
Last updated: Mon Nov  9 20:59:29 2015
Last change: Mon Nov  9 20:59:20 2015
Stack: classic openais (with plugin)
Current DC: node1.a.com - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
0 Resources configured
Online: [ node1.a.com node2.a.com ] # both nodes are online
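The two properties mentioned in the comment above were set when pacemaker was first brought up (see the pacemaker+corosync+crmsh post linked earlier); for reference, they correspond to roughly the following crmsh commands:

```shell
# Disable fencing (no stonith device in this test setup) and keep resources
# running even when the partition loses quorum (2-node cluster).
crm configure property stonith-enabled=false
crm configure property no-quorum-policy=ignore
```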

7. Inspect the DRBD resource agent and its meta information
crm(live)ra# providers drbd
linbit

crm(live)ra# meta ocf:linbit:drbd
Manages a DRBD device as a Master/Slave resource (ocf:linbit:drbd)
This resource agent manages a DRBD resource as a master/slave resource.
DRBD is a shared-nothing replicated storage device.
Note that you should configure resource level fencing in DRBD,
this cannot be done from this resource agent.
See the DRBD User's Guide for more information.

Parameters (*: required, []: default):
drbd_resource* (string): drbd resource name
    The name of the drbd resource from the drbd.conf file.
drbdconf (string, [/etc/drbd.conf]): Path to drbd.conf
    Full path to the drbd.conf file.
stop_outdates_secondary (boolean, [false]): outdate a secondary on stop
    Recommended setting: until pacemaker is fixed, leave at default (disabled).
    Note that this feature depends on the passed in information in
    OCF_RESKEY_CRM_meta_notify_master_uname to be correct, which unfortunately is
    not reliable for pacemaker versions up to at least 1.0.10 / 1.1.4.   
    If a Secondary is stopped (unconfigured), it may be marked as outdated in the
    drbd meta data, if we know there is still a Primary running in the cluster.
    Note that this does not affect fencing policies set in drbd config,
    but is an additional safety feature of this resource agent only.
    You can enable this behaviour by setting the parameter to true.  
    If this feature seems to not do what you expect, make sure you have defined
    fencing policies in the drbd configuration as well.
Operations' defaults (advisory minimum):
    start         timeout=240
    promote       timeout=90
    demote        timeout=90
    notify        timeout=90
    stop          timeout=100
    monitor_Slave timeout=20 interval=20
    monitor_Master timeout=20 interval=10

8. Configure DRBD as a cluster resource
DRBD must run on both nodes at the same time, but in the primary/secondary model only one node may be Master. It is therefore a special kind of cluster resource: a multi-state clone, whose instances are split into Master and Slave roles, and which requires both nodes to be in the Slave state when the service first starts.
8.1 Create the master/slave resource
crm(live)# configure
crm(live)configure# primitive mysqldrbd ocf:linbit:drbd params drbd_resource=mydrbd op start timeout=240s op stop timeout=100s op monitor role=Master interval=20s timeout=30s op monitor role=Slave interval=30s timeout=30s

crm(live)configure# ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=yes
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Last updated: Mon Nov  9 21:27:05 2015
Last change: Mon Nov  9 21:27:01 2015
Stack: classic openais (with plugin)
Current DC: node1.a.com - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
2 Resources configured

Online: [ node1.a.com node2.a.com ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ node1.a.com ]
     Slaves: [ node2.a.com ]

[root@node1 ~]# drbd-overview 
  0:mydrbd  Connected Primary/Secondary UpToDate/UpToDate C r----- 

[root@node2 ~]# drbd-overview 
  0:mydrbd  Connected Secondary/Primary UpToDate/UpToDate C r----- 
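When scripting checks around failover, the local role can be pulled straight out of the `drbd-overview` line; a minimal sketch, assuming the field layout shown in the output above:

```shell
#!/bin/sh
# Print the local DRBD role: the part before the slash in the third field
# of a drbd-overview line (e.g. "Primary" from "Primary/Secondary").
role_of() {
    echo "$1" | awk '{ split($3, r, "/"); print r[1] }'
}

role_of "0:mydrbd  Connected Primary/Secondary UpToDate/UpToDate C r-----"
# prints: Primary
```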
  
9. Configure the store resource
So far only master/slave switching works; the filesystem is not mounted and unmounted automatically, so a Filesystem resource is still needed.
crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/mysqldata fstype=ext3 op start timeout=60s op stop timeout=60s
crm(live)configure# verify
crm(live)configure# do not commit yet: this resource may only run on the Master node, so a colocation constraint is needed first to keep the Filesystem together with the Master
crm(live)configure# colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master
crm(live)configure# then define an order constraint: promote the Master first, then mount
crm(live)configure# order ms_mysqldrbd_before_mystore mandatory: ms_mysqldrbd:promote mystore:start
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Last updated: Mon Nov  9 21:44:29 2015
Last change: Mon Nov  9 21:44:24 2015
Stack: classic openais (with plugin)
Current DC: node1.a.com - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
3 Resources configured

Online: [ node1.a.com node2.a.com ]
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ node1.a.com ]
     Slaves: [ node2.a.com ]
 mystore        (ocf::heartbeat:Filesystem):    Started node1.a.com # mystore runs on the Master node, node1

[root@node1 ~]# ll /mysqldata/
-rw-r--r-- 1 root root     6 Nov  9 19:40 a.txt
drwx------ 2 root root 16384 Nov  9 19:38 lost+found

Now take node1 offline: the Master role moves to node2. After node1 comes back online it stays a Slave, and node2 remains the Master.
[root@node2 ~]# ll /mysqldata/
total 20
-rw-r--r-- 1 root root     6 Nov  9 19:41 a.txt
drwx------ 2 root root 16384 Nov  9 19:39 lost+found
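After a standby/online cycle like this, the node currently holding the Master role can be extracted from `crm status` output for use in scripts; a small sketch, assuming the "Masters: [ ... ]" line format shown in the transcripts above:

```shell
#!/bin/sh
# Print the node name(s) inside the "Masters: [ ... ]" line of crm status output.
master_node() {
    echo "$1" | sed -n 's/.*Masters: \[ \(.*\) \].*/\1/p'
}

status="Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ node2.a.com ]
     Slaves: [ node1.a.com ]"
master_node "$status"
# prints: node2.a.com
```

In a real run the status text would come from `crm status` itself, e.g. `master_node "$(crm status)"`.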

II. Configuring MySQL
The resource state at this point:
crm(live)# status
Last updated: Tue Nov 10 19:30:37 2015
Last change: Tue Nov 10 19:28:52 2015
Stack: classic openais (with plugin)
Current DC: node1.a.com - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
3 Resources configured

Online: [ node1.a.com node2.a.com ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ node1.a.com ]
     Slaves: [ node2.a.com ]
 mystore        (ocf::heartbeat:Filesystem):    Started node1.a.com
On node1:
1. Extract MySQL into /usr/local/ and create a symlink named mysql
[root@node1 ~]# tar xf mysql-5.5.45-linux2.6-i686.tar.gz -C /usr/local/

[root@node1 local]# ln -s mysql-5.5.45-linux2.6-i686 mysql

2. Create the mysql user and mysql group
[root@node1 mysql]# groupadd -r -g 306 mysql
[root@node1 mysql]# useradd -g 306 -r -u 306 -M -s /sbin/nologin mysql
[root@node1 mysql]# id mysql
uid=306(mysql) gid=306(mysql) groups=306(mysql)

3. Create the data directory and change its owner and group to mysql
[root@node1 mysql]# mkdir /mysqldata/data
[root@node1 mysql]# chown -R mysql.mysql /mysqldata/data
[root@node1 mysql]# chown -R root.mysql ./*

4. Initialize MySQL
[root@node1 mysql]# scripts/mysql_install_db --user=mysql --datadir=/mysqldata/data/

5. Edit the configuration file
[root@node1 mysql]# cp support-files/my-huge.cnf /etc/my.cnf
[root@node1 mysql]# cp support-files/mysql.server /etc/init.d/mysqld
[root@node1 mysql]# vim /etc/my.cnf # add to the [mysqld] section
datadir = /mysqldata/data

6. Start the MySQL service and create a test database
[root@node1 mysql]# service mysqld start
Starting MySQL...... SUCCESS! 

[root@node1 mysql]# /usr/local/mysql/bin/mysql
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+

7. Stop the MySQL service and disable MySQL autostart at boot
[root@node1 mysql]# service mysqld stop
Shutting down MySQL. SUCCESS! 
[root@node1 mysql]# chkconfig mysqld off

On node2:
1. Make node2 the Master node, then configure MySQL
[root@node1 ~]# crm node standby
[root@node1 ~]# crm node online
[root@node1 ~]# crm status
Last updated: Tue Nov 10 19:42:35 2015
Last change: Tue Nov 10 19:42:32 2015
Stack: classic openais (with plugin)
Current DC: node1.a.com - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
3 Resources configured

Online: [ node1.a.com node2.a.com ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ node2.a.com ]
     Slaves: [ node1.a.com ]
 mystore        (ocf::heartbeat:Filesystem):    Started node2.a.com

2. Create the mysql user and mysql group (identical to the ones created on node1)
[root@node2 ~]# groupadd -r -g 306 mysql
[root@node2 ~]# useradd -g 306 -r -u 306 -M -s /sbin/nologin mysql
[root@node2 ~]# id mysql
uid=306(mysql) gid=306(mysql) groups=306(mysql)

3. Extract MySQL into /usr/local/ and create a symlink named mysql
[root@node2 ~]# tar xf mysql-5.5.45-linux2.6-i686.tar.gz -C /usr/local/
[root@node2 local]# ln -sv mysql-5.5.45-linux2.6-i686 mysql
`mysql' -> `mysql-5.5.45-linux2.6-i686'
[root@node2 mysql]# chown -R root.mysql ./*

4. node2 needs no initialization; just set up the main configuration file (note that node1 used my-huge.cnf above; both nodes should really be configured from the same template so their settings match)
[root@node2 mysql]# cp support-files/my-large.cnf /etc/my.cnf
[root@node2 mysql]# cp support-files/mysql.server /etc/init.d/mysqld
[root@node2 mysql]# vim /etc/my.cnf # add the following line to the [mysqld] section
datadir = /mysqldata/data

5. Start the MySQL service directly to test
[root@node2 mysql]# /etc/init.d/mysqld start
Starting MySQL.... SUCCESS! 

[root@node2 ~]# /usr/local/mysql/bin/mysql
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+

6. Stop the MySQL service and disable MySQL autostart at boot
[root@node2 ~]# service mysqld stop
Shutting down MySQL. SUCCESS! 
[root@node2 ~]# chkconfig mysqld off

III. Configuring pacemaker+corosync+DRBD+MySQL for MySQL high availability
1. Configure the mysql resource
crm(live)configure# primitive mysqld lsb:mysqld 
crm(live)configure# verify
crm(live)configure# next, define a colocation constraint to keep mysqld together with the mystore resource
crm(live)configure# colocation mysqld_with_mystore inf: mysqld mystore
crm(live)configure# then define an order constraint so the start order is mystore, then mysqld (the order between mysqldrbd and mystore was defined earlier)
crm(live)configure# order mystore_before_mysqld mandatory: mystore mysqld
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Last updated: Tue Nov 10 20:03:54 2015
Last change: Tue Nov 10 20:03:50 2015
Stack: classic openais (with plugin)
Current DC: node1.a.com - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
4 Resources configured

Online: [ node1.a.com node2.a.com ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ node2.a.com ]
     Slaves: [ node1.a.com ]
 mystore        (ocf::heartbeat:Filesystem):    Started node2.a.com

2. Test that MySQL works on node2
2.1 Create a test database on node2
[root@node2 ~]# /usr/local/mysql/bin/mysql
mysql> create database testdb;
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
| mysql              |
| performance_schema |
| test               |
| testdb             |
+--------------------+

2.2 Switch the Master role to node1
crm(live)node# standby node2.a.com
crm(live)# status
Last updated: Tue Nov 10 20:08:11 2015
Last change: Tue Nov 10 20:08:07 2015
Stack: classic openais (with plugin)
Current DC: node1.a.com - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
4 Resources configured

Node node2.a.com: standby
Online: [ node1.a.com ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Slaves: [ node1.a.com node2.a.com ]

crm(live)node# online node2.a.com
crm(live)# status
Last updated: Tue Nov 10 20:09:03 2015
Last change: Tue Nov 10 20:09:00 2015
Stack: classic openais (with plugin)
Current DC: node1.a.com - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
4 Resources configured

Online: [ node1.a.com node2.a.com ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ node1.a.com ]
     Slaves: [ node2.a.com ]
 mystore        (ocf::heartbeat:Filesystem):    Started node1.a.com 
 mysqld (lsb:mysqld):   Started node1.a.com

2.3 Test on node1
[root@node1 ~]# /usr/local/mysql/bin/mysql
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
| mysql              |
| performance_schema |
| test               |
| testdb             |
+--------------------+

3. Configure the VIP resource
crm(live)configure# primitive mysqlip ocf:heartbeat:IPaddr params ip=192.168.85.128
crm(live)configure# verify
crm(live)configure# next, a colocation constraint ties the VIP to the DRBD Master, i.e. the node running mysqld
crm(live)configure# colocation mysqld_with_mysqlip inf: ms_mysqldrbd:Master  mysqlip 
crm(live)configure# verify
crm(live)configure# commit
crm(live)# status
Last updated: Tue Nov 10 20:18:48 2015
Last change: Tue Nov 10 20:18:45 2015
Stack: classic openais (with plugin)
Current DC: node1.a.com - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
5 Resources configured

Online: [ node1.a.com node2.a.com ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ node1.a.com ]
     Slaves: [ node2.a.com ]
 mystore        (ocf::heartbeat:Filesystem):    Started node1.a.com 
 mysqld (lsb:mysqld):   Started node1.a.com 
 mysqlip        (ocf::heartbeat:IPaddr):        Started node1.a.com
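To confirm on the Master that the IPaddr resource actually brought the address up, the VIP can be checked in the `ip` output; a sketch (the interface name and exact line layout here are assumptions):

```shell
#!/bin/sh
# Return success if an `ip -o -f inet addr show` line carries the given VIP.
has_vip() {
    echo "$1" | grep -q "inet $2/"
}

# In a real run: line="$(ip -o -f inet addr show | grep 192.168.85.128)"
line="2: eth0    inet 192.168.85.128/24 brd 192.168.85.255 scope global secondary eth0"
if has_vip "$line" "192.168.85.128"; then
    echo "VIP present"
else
    echo "VIP absent"
fi
```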

4. Add a MySQL user on the Master node for testing
[root@node1 ~]# /usr/local/mysql/bin/mysql
mysql> grant all on *.* to 'root'@'%' IDENTIFIED BY 'redhat';

5. Test from another host
[root@nfs ~]# ip -o -f inet addr show
1: lo    inet 127.0.0.1/8 scope host lo
2: eth1    inet 192.168.85.143/24 brd 192.168.85.255 scope global eth1

[root@nfs ~]# mysql -u root -h 192.168.85.128 -p
Enter password: 
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
| mysql              |
| performance_schema |
| test               |
| testdb             |
+--------------------+

Take node1 offline and back online (making it a Slave); node2 is now the Master
[root@node1 ~]# crm node standby
[root@node1 ~]# crm node online
crm(live)# status
Last updated: Tue Nov 10 20:18:48 2015
Last change: Tue Nov 10 20:18:45 2015
Stack: classic openais (with plugin)
Current DC: node1.a.com - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
5 Resources configured

Online: [ node1.a.com node2.a.com ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ node2.a.com ]
     Slaves: [ node1.a.com ]
 mystore        (ocf::heartbeat:Filesystem):    Started node2.a.com 
 mysqld (lsb:mysqld):   Started node2.a.com 
 mysqlip        (ocf::heartbeat:IPaddr):        Started node2.a.com

Test again:
[root@nfs ~]# mysql -uroot -h 192.168.85.128 -p
Enter password: 
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
| mysql              |
| performance_schema |
| test               |
| testdb             |
+--------------------+
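A quick way to verify that the failover preserved the data is to compare the database list seen through the VIP before and after the switch; a minimal sketch (in a real run the two lists would come from `mysql -h 192.168.85.128 -e 'show databases'`):

```shell
#!/bin/sh
# Compare two `show databases` listings order-insensitively, to confirm the
# same databases are visible before and after a failover.
same_databases() {
    [ "$(echo "$1" | sort)" = "$(echo "$2" | sort)" ]
}

before="information_schema
mydb
mysql
test
testdb"
after="testdb
mysql
mydb
information_schema
test"
if same_databases "$before" "$after"; then
    echo "databases match"
else
    echo "databases differ"
fi
```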
This completes the highly available MySQL setup based on pacemaker+corosync+DRBD+mysql.
