
Category: MySQL/PostgreSQL

2011-04-14 12:39:21

MMM (Master-Master Replication Manager for MySQL) is a set of flexible scripts to perform monitoring/failover and management of MySQL master-master replication configurations (with only one node writable at any time). The toolset can also read-balance standard master/slave configurations with any number of slaves, so you can use it to move virtual IP addresses around a group of servers depending on whether they are behind in replication. In addition, it includes scripts for data backups, resynchronization between nodes, etc.

MMM can monitor, and fail over, two masters or two masters plus one or more slaves.
It relies on virtual IPs (VIPs): when the active master fails, its write VIP and read VIP are migrated to the standby master.
Note: only one node can be the writer (the active master) at any time; all the others are readers.
MMM can also load-balance reads, and the mmm tools can back up the data (the databases must reside on LVM).

If you want to use the MMM tools (mmm backup, mmm restore, mmm clone) you have to use LVM for the partition on which your MySQL databases and logs reside. Note: you'll need free physical extents for the snapshot undo space (see "Estimating Undo Space needed for LVM Snapshot").

        /----master---\
app ----      ||       -----monitor
        \----master---/


        /----master---\-----------\
app ----      ||       slave-------monitor
        \----master---/-----------/

Requirements
  n + 1 hosts: one host for each MySQL server; one host for the MMM monitor.
  2 * (n + 1) IPs: one IP for each host (see above); one IP for the writer role; n IPs for one reader role per host.
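The sizing rule above can be sketched in a small helper (illustrative only; the function name is made up):

```python
def mmm_requirements(n):
    """Resources for an MMM setup with n MySQL servers.

    Hosts: one per MySQL server, plus one for the monitor.
    IPs:   one static IP per host, one writer VIP, and n reader VIPs,
           i.e. (n + 1) + 1 + n = 2 * (n + 1).
    """
    hosts = n + 1
    ips = 2 * (n + 1)
    return hosts, ips

# The two-master layout used below (n = 2): 3 hosts, 6 IPs.
print(mmm_requirements(2))
```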

monitor: 192.168.8.1
master1: 192.168.8.2
master2: 192.168.8.3

write vip: 192.168.8.50
read  vip: 192.168.8.51
read  vip: 192.168.8.52

1. install mysql
2. edit /usr/local/mysql/etc/my.cnf; be sure to use a different server id on each host:
  

server_id = 1
log_bin = /var/log/mysql/mysql-bin.log
log_bin_index = /var/log/mysql/mysql-bin.log.index
relay_log = /var/log/mysql/mysql-relay-bin
relay_log_index = /var/log/mysql/mysql-relay-bin.index
expire_logs_days = 10
max_binlog_size = 100M
log_slave_updates = 1
read_only = 1
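Only server_id needs to differ between the two masters; everything else in the block above stays the same. For example, on the second master (an assumed value for db2):

```ini
# my.cnf on master2 (db2); the rest of the replication settings are identical
server_id = 2
```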


3. /etc/init.d/mysql restart
4. Create users (on both masters)

GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.8.%' IDENTIFIED BY 'monitor_password';
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.8.%' IDENTIFIED BY 'agent_password';
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.8.%' IDENTIFIED BY 'replication_password';

FLUSH PRIVILEGES;


5. Create the MMM script owner (on the monitor and all hosts)

useradd --comment "MMM Script owner" --shell /sbin/nologin mmmd
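The original jumps from step 5 to step 9, skipping the replication wiring between the two masters. A hedged sketch using the replication account created in step 4 (run on master2 pointing at master1, then the mirror statement on master1 with host 192.168.8.3; the binlog file and position are placeholders, so read the real values from SHOW MASTER STATUS on the peer first):

```sql
-- On master2 (192.168.8.3), pointing at master1 (192.168.8.2).
-- MASTER_LOG_FILE / MASTER_LOG_POS are placeholders; take the real
-- coordinates from SHOW MASTER STATUS on the peer before running this.
CHANGE MASTER TO
    MASTER_HOST     = '192.168.8.2',
    MASTER_USER     = 'replication',
    MASTER_PASSWORD = 'replication_password',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS  = 4;
START SLAVE;
```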


9. install MMM (on the monitor and all hosts)
  

wget http://mysql-mmm.org/_media/:mmm2:mysql-mmm-2.2.0.tar.gz
tar xvf mysql-mmm-2.2.0.tar.gz -C /usr/src
cd /usr/src/mysql-mmm-2.2.0
make install


10. Configure MMM (this file is identical on all hosts in the system)

shell> cat /etc/mysql-mmm/mmm_common.conf

active_master_role writer

<host default>
        cluster_interface eth0

        pid_path /var/run/mmm_agentd.pid
        bin_path /usr/lib/mysql-mmm/

        replication_user replication
        replication_password replication_password

        agent_user mmm_agent
        agent_password agent_password
</host>

<host db1>
        ip 192.168.8.2
        mode master
        peer db2
</host>

<host db2>
        ip 192.168.8.3
        mode master
        peer db1
</host>

<role writer>
        hosts db1, db2
        ips 192.168.8.50
        mode exclusive
</role>


<role reader>
        hosts db1, db2
        ips 192.168.8.51, 192.168.8.52
        mode balanced
</role>


On the monitor:

shell> cat /etc/mysql-mmm/mmm_mon.conf
include mmm_common.conf

<monitor>
        ip 127.0.0.1
        pid_path /var/run/mmm_mond.pid
        bin_path /usr/lib/mysql-mmm/
        status_path /var/lib/misc/mmm_mond.status
        ping_ips 192.168.8.1, 192.168.8.2, 192.168.8.3
</monitor>

<host default>
        monitor_user mmm_monitor
        monitor_password monitor_password
</host>

debug 0


On the database hosts (change "db1" accordingly on the other hosts):

shell> cat /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this db1


11. Start MMM
    
start agent
      

shell> cat /etc/default/mysql-mmm-agent
ENABLED=1


shell> /etc/init.d/mysql-mmm-agent start

     
start monitor

shell> cat /etc/default/mysql-mmm-monitor
ENABLED=1


shell> /etc/init.d/mysql-mmm-monitor start


12. show status (on the monitor)

shell> mmm_control show
  db1(192.168.8.2) master/AWAITING_RECOVERY. Roles:
  db2(192.168.8.3) master/AWAITING_RECOVERY. Roles:


13. log messages

shell> tail /var/log/mysql-mmm/mmm_mond.log
2010/03/17 14:03:46 FATAL State of host 'db1' changed from HARD_OFFLINE to AWAITING_RECOVERY
2010/03/17 14:03:46 INFO Check 'rep_backlog' on 'db1' is
2010/03/17 14:03:55 INFO Check 'mysql' on 'db2' is
2010/03/17 14:03:56 INFO Check 'rep_backlog' on 'db2' is
2010/03/17 14:03:56 INFO Check 'rep_threads' on 'db2' is
2010/03/17 14:03:56 ERROR Check 'rep_threads' on 'db1' has failed for 10 Message: ERROR: Replication is broken
2010/03/17 14:03:58 FATAL State of host 'db2' changed from HARD_OFFLINE to AWAITING_RECOVERY
2010/03/17 14:04:20 FATAL Agent on host 'db1' is reachable again
2010/03/17 14:04:27 FATAL Agent on host 'db2' is reachable again
2010/03/17 14:04:57 INFO Check 'rep_threads' on 'db1' is


14. set node online

shell> mmm_control set_online db1
OK: State of 'db1' changed to ONLINE. Now you can wait some time and check its new roles!
shell> mmm_control set_online db2
OK: State of 'db2' changed to ONLINE. Now you can wait some time and check its new roles!


15. show status

shell> mmm_control show
  db1(192.168.8.2) master/ONLINE. Roles: reader(192.168.8.51), writer(192.168.8.50)
  db2(192.168.8.3) master/ONLINE. Roles: reader(192.168.8.52)


16. show vips (hosts)

shell> ip add list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 08:00:27:9a:70:dd brd ff:ff:ff:ff:ff:ff
    inet 192.168.8.2/24 brd 192.168.8.255 scope global eth0
    inet 192.168.8.51/32 scope global eth0
    inet 192.168.8.50/32 scope global eth0
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 08:00:27:cf:38:80 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.6/24 brd 192.168.1.255 scope global eth1


17. test

db1:

mysql> grant all on test.test to 'test'@'192.168.8.%' identified by '123';

mysql> flush privileges;
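The insert and selects below assume a test.test table already exists on db1; a minimal sketch matching the two inserted values (the column names are made up):

```sql
-- Hypothetical schema for: insert into test.test values (1, 'test')
CREATE TABLE IF NOT EXISTS test.test (
    id   INT,
    name VARCHAR(32)
);
```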


monitor:

shell> mysql -h 192.168.8.50 -utest -p123 -e "insert into test.test values (1,'test')"

shell> mysql -h 192.168.8.50 -utest -p123 -e "select * from test.test"

shell> mysql -h 192.168.8.51 -utest -p123 -e "select * from test.test"

shell> mysql -h 192.168.8.52 -utest -p123 -e "select * from test.test"


18. failover

shell> watch mmm_control show
Every 2.0s: mmm_control show Wed Mar 17 14:18:25 2010

  db1(192.168.8.2) master/ONLINE. Roles: reader(192.168.8.51), writer(192.168.8.50)
  db2(192.168.8.3) master/ONLINE. Roles: reader(192.168.8.52)


reboot db1

shell> init 6

Every 2.0s: mmm_control show Wed Mar 17 14:20:14 2010

# Warning: agent on host db1 is not reachable
  db1(192.168.8.2) master/ONLINE. Roles: reader(192.168.8.51), writer(192.168.8.50)
  db2(192.168.8.3) master/ONLINE. Roles: reader(192.168.8.52)


Every 2.0s: mmm_control show                                                                         Wed Mar 17 14:20:20 2010

# Warning: agent on host db1 is not reachable
  db1(192.168.8.2) master/HARD_OFFLINE. Roles:
  db2(192.168.8.3) master/ONLINE. Roles: reader(192.168.8.51), reader(192.168.8.52), writer(192.168.8.50)


Every 2.0s: mmm_control show                                                                         Wed Mar 17 14:25:45 2010

  db1(192.168.8.2) master/AWAITING_RECOVERY. Roles:
  db2(192.168.8.3) master/ONLINE. Roles: reader(192.168.8.51), reader(192.168.8.52), writer(192.168.8.50)


shell> mmm_control set_online db1


Every 2.0s: mmm_control show                                                                         Wed Mar 17 14:27:16 2010

  db1(192.168.8.2) master/ONLINE. Roles: reader(192.168.8.52)
  db2(192.168.8.3) master/ONLINE. Roles: reader(192.168.8.51), writer(192.168.8.50)


Note: if its downtime was shorter than 60 seconds and it wasn't rebooted, or auto_set_online is > 0, it will be switched back to ONLINE automatically, unless it is flapping.


19. about status
                              
ONLINE:             Host is running without any problems.
ADMIN_OFFLINE:      Host was set offline manually.
HARD_OFFLINE:       Host is offline (check ping and/or mysql failed).
AWAITING_RECOVERY:  Host is awaiting recovery.
REPLICATION_DELAY:  Replication backlog is too big (check rep_backlog failed).
REPLICATION_FAIL:   Replication threads are not running (check rep_threads failed).

* Only hosts with state ONLINE may have roles. When a host switches from ONLINE to any other state, all roles are removed from it.
* A host that was in state REPLICATION_DELAY or REPLICATION_FAIL will be switched back to ONLINE if everything is OK again, unless it is flapping.
* A host that was in state HARD_OFFLINE will be switched to AWAITING_RECOVERY if everything is OK again. If its downtime was shorter than 60 seconds and it wasn't rebooted, or auto_set_online is > 0, it will be switched back to ONLINE automatically, unless it is flapping.
* Replication backlog or failure on the active master isn't considered a problem, so the active master will never be in state REPLICATION_DELAY or REPLICATION_FAIL.
* Replication backlog or failure is ignored on hosts whose peers got ONLINE less than 60 seconds ago (the default value of master-connect-retry).
* If both checks rep_backlog and rep_threads fail, the state changes to REPLICATION_FAIL.
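The rules above can be modeled as a toy state machine. This is only an illustration of the text, not the real mysql-mmm code (the function and argument names are made up, and the 60-second/flapping timing rules are omitted):

```python
# Toy model of the MMM host states described above; illustrative only.
ONLINE            = "ONLINE"
ADMIN_OFFLINE     = "ADMIN_OFFLINE"
HARD_OFFLINE      = "HARD_OFFLINE"
AWAITING_RECOVERY = "AWAITING_RECOVERY"
REPLICATION_DELAY = "REPLICATION_DELAY"
REPLICATION_FAIL  = "REPLICATION_FAIL"

def next_state(state, ping_ok, mysql_ok, rep_backlog_ok, rep_threads_ok,
               active_master=False):
    """Next state of one host given its check results."""
    if not (ping_ok and mysql_ok):
        return HARD_OFFLINE                  # ping and/or mysql check failed
    if state == HARD_OFFLINE:
        return AWAITING_RECOVERY             # came back; wait for set_online
    if state in (ADMIN_OFFLINE, AWAITING_RECOVERY):
        return state                         # only manual set_online moves these
    if active_master or (rep_backlog_ok and rep_threads_ok):
        return ONLINE                        # replication problems on the
                                             # active master are ignored
    if not rep_threads_ok:
        return REPLICATION_FAIL              # wins when both checks fail
    return REPLICATION_DELAY                 # only rep_backlog failed
```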
