Category: Mysql/postgreSQL

2016-04-12 21:00:28


Some time ago my colleague Shen Longxing worked through the code flow of MHA failover and online switchover. With his permission, I am reposting it here. The main text follows.

This article is based on MySQL 5.5, so it does not cover anything GTID-related. MHA's master/slave switchover comes in two flavors, failover and rotate: the former applies when the original master is down, while the latter is used for a planned online switch. Each is described below.
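For orientation, both flows are normally driven by the manager's masterha_master_switch tool. The invocations below are only a sketch from memory of MHA 0.5x (the config path and the second host are made up for illustration); check the options against your own MHA version:

    # failover: the original master is already dead
    masterha_master_switch --master_state=dead --conf=/etc/mha/app1.cnf --dead_master_host=10.27.177.245 --interactive=0

    # rotate: planned online switchover while the original master is still alive
    masterha_master_switch --master_state=alive --conf=/etc/mha/app1.cnf --new_master_host=10.27.177.246 --orig_master_is_new_slave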



Failover processing flow



MHA::MasterFailover::main()
    ->do_master_failover
        Phase 1: Configuration Check Phase
            -> check_settings:
                check_node_version: get the MHA version information of each node
                connect_all_and_read_server_status: confirm that the MySQL instance on each node can be connected to
                get_dead_servers/get_alive_servers/get_alive_slaves: double-check the dead/alive state of each node
                start_sql_threads_if: check whether Slave_SQL_Running is Yes; if not, start the SQL thread

        Phase 2: Dead Master Shutdown Phase: for our purposes, its only real effect is to stop the IO threads
            -> force_shutdown($dead_master)
                stop_io_thread: stop the IO thread on every slave (the master is about to be shut down)
                force_shutdown_internal (in fact just runs the master_ip_failover_script/shutdown_script from the configuration file; skipped if they are not configured)
                    master_ip_failover_script: if a VIP is configured, switch the VIP over first
                    shutdown_script: if a shutdown script is configured, run it

        Phase 3: Master Recovery Phase
            -> Phase 3.1: Getting Latest Slaves Phase (find the latest slave)
                read_slave_status: get each slave's binlog file/position
                    check_slave_status: run "SHOW SLAVE STATUS" to obtain the following information about each slave:
                         Slave_IO_State, Master_Host,
                         Master_Port, Master_User,
                         Slave_IO_Running, Slave_SQL_Running,
                         Master_Log_File, Read_Master_Log_Pos,
                         Relay_Master_Log_File, Last_Errno,
                         Last_Error, Exec_Master_Log_Pos,
                         Relay_Log_File, Relay_Log_Pos,
                         Seconds_Behind_Master, Retrieved_Gtid_Set,
                         Executed_Gtid_Set, Auto_Position,
                         Replicate_Do_DB, Replicate_Ignore_DB, Replicate_Do_Table,
                         Replicate_Ignore_Table, Replicate_Wild_Do_Table,
                         Replicate_Wild_Ignore_Table
                identify_latest_slaves:
                    compare Master_Log_File/Read_Master_Log_Pos across the slaves to find the latest slave (see the sketch after this outline)
                identify_oldest_slaves:
                    compare Master_Log_File/Read_Master_Log_Pos across the slaves to find the oldest slave

            -> Phase 3.2: Saving Dead Master's Binlog Phase:
                save_master_binlog:
                    if the dead master is reachable over SSH, take this branch:
                        save_master_binlog_internal: (use the node-side save_binary_logs script to make the copy on the dead master)
                            save_binary_logs --command=save --start_file=mysql-bin.000281 --start_pos=107 --binlog_dir=/opt/mysql/data/binlog --output_file=/opt/mha/log/saved_master_binlog_from_10.27.177.245_3306_20160108211857.binlog --handle_raw_binlog=1 --disable_log_bin=0 --manager_version=0.55
                                generate_diff_binary_log:
                                    concat_all_binlogs_from:
                                        dump_binlog: dump the binlog file into the target file, simply using a binmode read
                                            dump_binlog_header_fde: read from offset 0 up to position-1
                                            dump_binlog_from_pos: starting from position, dump the binlog file into the target file
                            file_copy:
                                copy the binlog file generated above into the manager_workdir directory on the manager node
                    if the dead master cannot be reached over SSH, any transactions on the master that were not yet replicated to a slave are lost

            -> Phase 3.3: Determining New Master Phase
                find_latest_base_slave:
                    find_latest_base_slave_internal:
                        pos_cmp( $oldest_mlf, $oldest_mlp, $latest_mlf, $latest_mlp )
                            check whether the binlog positions of the latest and oldest slaves are the same; if so, no relay-log sync is needed
                        apply_diff_relay_logs --command=find --latest
                            check whether the latest slave has the relay logs that the oldest slave is missing; if it does, continue, otherwise the failover fails
                            the search itself is simple: read the latest slave's relay log files in reverse order until the wanted file/position is found

                    select_new_master: pick the new master node
                        If preferred node is specified, one of active preferred nodes will be new master.
                        If the latest server behinds too much (i.e. stopping sql thread for online backups),
                        we should not use it as a new master, we should fetch relay log there. Even though preferred
                        master is configured, it does not become a master if it's far behind.
                        get_candidate_masters:
                            the nodes configured with candidate_master>0 in the configuration file
                        get_bad_candidate_masters:
                            # The following servers can not be master:
                            # - dead servers
                            # - Set no_master in conf files (i.e. DR servers)
                            # - log_bin is disabled
                            # - Major version is not the oldest
                            # - too much replication delay (the slave's binlog position is more than 100000000 behind the master's)
                        Searching from candidate_master slaves which have received the latest relay log events
                        if NOT FOUND:
                            Searching from all candidate_master slaves
                                if NOT FOUND:
                                    Searching from all slaves which have received the latest relay log events
                                        if NOT FOUND:
                                            Searching from all slaves

            -> Phase 3.4: New Master Diff Log Generation Phase
                recover_relay_logs:
                    check whether the new master is the latest slave; if not, use the apply_diff_relay_logs command to generate the diff log
                        and send it to the new master
                    recover_master_internal:
                        send the dead master's binlog saved in Phase 3.2 to the new master

            -> Phase 3.5: Master Log Apply Phase
                recover_slave:
                    apply_diff:
                        0. wait_until_relay_log_applied: wait until the new master has finished applying its relay log
                        1. check whether Exec_Master_Log_Pos == Read_Master_Log_Pos;
                        if they are not equal, use save_binary_logs --command=save to generate the diff log
                        2. call the apply_diff_relay_logs command to let the new master recover, where:
                            2.1 the log to recover is made up of three parts:
                                exec_diff: the diff between Exec_Master_Log_Pos and Read_Master_Log_Pos
                                read_diff: the relay-log diff between the new master and the latest slave
                                binlog_diff: the binlog diff between the latest slave and the dead master
                        in practice apply_diff_relay_logs just calls the mysqlbinlog command to do the recovery (see the sketch after this outline)
                // if a VIP is configured, master_ip_failover_script is called here to fail the VIP over

        Phase 4: Slaves Recovery Phase
            -> Phase 4.1: Starting Parallel Slave Diff Log Generation Phase
                generate the diff logs each slave is missing (relative to the latest slave) and copy them into each slave's working directory

            -> Phase 4.2: Starting Parallel Slave Log Apply Phase
                recover_slave:
                    recover each slave, the same way as in Phase 3.5
                change_master_and_start_slave:
                    point each slave at the new master with a CHANGE MASTER TO statement, then start replication (START SLAVE)

        Phase 5: New master cleanup phase
            reset_slave_on_new_master
                cleaning up the new master simply means resetting its slave info, i.e. discarding its old slave configuration. With that, the whole master failover is complete
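As a rough illustration of what identify_latest_slaves/identify_oldest_slaves compare, the loop below (the slave hostnames are made up, and the parsing is deliberately crude; MHA reads the same fields over its own connections) prints the two fields that matter on every slave. The slave with the largest (Master_Log_File, Read_Master_Log_Pos) pair is the latest slave; the one with the smallest pair is the oldest slave:

    for h in slave1 slave2 slave3; do
        echo "== $h"
        mysql -h "$h" -e "SHOW SLAVE STATUS\G" | grep -E ' Master_Log_File:| Read_Master_Log_Pos:'
    done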
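And since the write-up notes that apply_diff_relay_logs ultimately just replays the diff files through mysqlbinlog, the manual equivalent of the Phase 3.5 apply step is roughly the one-liner below. The exec_diff/read_diff file names are hypothetical stand-ins, and the real script does relay-log handling and error checking that this sketch ignores:

    mysqlbinlog exec_diff.binlog read_diff.binlog saved_master_binlog_from_10.27.177.245_3306_20160108211857.binlog | mysql --host=new_master --user=root -p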


Rotate processing flow


MHA::MasterRotate::main()
    -> do_master_online_switch:
        Phase 1: Configuration Check Phase
        -> identify_orig_master
            connect_all_and_read_server_status:
                connect_check: first run a connect check to make sure the MySQL service on every server is healthy
                connect_and_get_status: get the server_id/mysql_version/log_bin/... of each MySQL instance
                    this step has another important role: finding the current master node. It runs SHOW SLAVE STATUS,
                    and if the output is empty, the node is the master.
                validate_current_master: fetch the master node's information and validate that the configuration is correct
            check whether any server is down; if so, abort the rotate
            check whether the master is alive; if it is dead, abort the rotate
            check_repl_priv:
                check whether the user has replication privileges
            acquire monitor_advisory_lock, to guarantee that no other monitor process is currently running against the master
                executes: SELECT GET_LOCK('MHA_Master_High_Availability_Monitor', ?) AS Value
            acquire failover_advisory_lock, to guarantee that no other failover process is currently running against the slaves
                executes: SELECT GET_LOCK('MHA_Master_High_Availability_Failover', ?) AS Value
            check_replication_health:
                runs SHOW SLAVE STATUS to evaluate: current_slave_position/has_replication_problem
                where has_replication_problem specifically checks: the IO thread, the SQL thread, and Seconds_Behind_Master (1s)
            get_running_update_threads:
                uses SHOW PROCESSLIST to check whether any thread is currently executing an update; if so, abort the switch
        -> identify_new_master
            set_latest_slaves: all of the current slaves are treated as latest slaves
            select_new_master: pick the new master node
                If preferred node is specified, one of active preferred nodes will be new master.
                If the latest server behinds too much (i.e. stopping sql thread for online backups),
                we should not use it as a new master, we should fetch relay log there. Even though preferred
                master is configured, it does not become a master if it's far behind.
                get_candidate_masters:
                    the nodes configured with candidate_master>0 in the configuration file
                get_bad_candidate_masters:
                    # The following servers can not be master:
                    # - dead servers
                    # - Set no_master in conf files (i.e. DR servers)
                    # - log_bin is disabled
                    # - Major version is not the oldest
                    # - too much replication delay (the slave's binlog position is more than 100000000 behind the master's)
                Searching from candidate_master slaves which have received the latest relay log events
                if NOT FOUND:
                    Searching from all candidate_master slaves
                        if NOT FOUND:
                            Searching from all slaves which have received the latest relay log events
                                if NOT FOUND:
                                    Searching from all slaves
     
        Phase 2: Rejecting updates Phase
            reject_update: lock the tables to reject further binlog writes
                if the "master_ip_online_change_script" parameter is set in the MHA configuration file, run that script to disable writes on the current master;
                the script only needs to be configured when a VIP is used
                reconnect: make sure the connection to the master is still healthy
                lock_all_tables: run FLUSH TABLES WITH READ LOCK to lock the tables
                check_binlog_stop: run SHOW MASTER STATUS twice in a row to verify that binlog writes have stopped
                     
        read_slave_status:
            get_alive_slaves:
            check_slave_status: run "SHOW SLAVE STATUS" to obtain the following information about each slave:
                         Slave_IO_State,        Master_Host,
                         Master_Port,           Master_User,
                         Slave_IO_Running,      Slave_SQL_Running,
                         Master_Log_File,       Read_Master_Log_Pos,
                         Relay_Master_Log_File, Last_Errno,
                         Last_Error,            Exec_Master_Log_Pos,
                         Relay_Log_File,        Relay_Log_Pos,
                         Seconds_Behind_Master, Retrieved_Gtid_Set,
                         Executed_Gtid_Set,     Auto_Position,
                         Replicate_Do_DB, Replicate_Ignore_DB, Replicate_Do_Table,
                         Replicate_Ignore_Table, Replicate_Wild_Do_Table,
                         Replicate_Wild_Ignore_Table
        switch_master: (the SQL essence of this step and the following ones is sketched after this outline)
            switch_master_internal:
                master_pos_wait: call the SELECT MASTER_POS_WAIT() function and wait until replication has caught up
                get_new_master_binlog_position: run 'show master status'
            Allow write access on the new master:
                call master_ip_online_change_script --command=start ..., which points the VIP at the new master
            disable_read_only:
                on the new master, run: SET GLOBAL read_only=0
        switch_slaves:
            switch_slaves_internal:
                change_master_and_start_slave
                    change_master:
                    start_slave:
            unlock_tables: run UNLOCK TABLES on the orig master
        Phase 5: New master cleanup phase
            reset_slave_on_new_master
            release_failover_advisory_lock
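Putting the rotate steps together, the switch boils down to roughly the following sequence of statements. The hostnames, binlog file name and position are placeholders; MHA issues the equivalent statements over its own connections, with master_ip_online_change_script moving the VIP in between:

    # original master: block new writes (reject_update / lock_all_tables)
    mysql -h orig_master -e "FLUSH TABLES WITH READ LOCK"

    # new master: wait until it has applied everything up to the original master's frozen
    # position, record its own binlog position, then open it for writes
    mysql -h new_master -e "SELECT MASTER_POS_WAIT('mysql-bin.000281', 107)"
    mysql -h new_master -e "SHOW MASTER STATUS"
    mysql -h new_master -e "SET GLOBAL read_only = 0"

    # every other slave: repoint replication at the new master (change_master_and_start_slave)
    mysql -h slave1 -e "CHANGE MASTER TO MASTER_HOST='new_master', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4, MASTER_USER='repl', MASTER_PASSWORD='repl_pass'; START SLAVE"

    # original master: release the lock (unlock_tables)
    mysql -h orig_master -e "UNLOCK TABLES"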
Comment from skykiker (2016-07-19 23:25:41):

For the GTID-based MHA processing logic, see http://www.68idc.cn/help/mysqldata/mysql/20150116174551.html