State synchronization between the primary and backup LDs:
One of our company's services uses a long-lived TCP connection protocol (even when no business data is being exchanged, a keepalive packet passes between client and server every n seconds). We use LVS as the load balancer at the access layer, with the LDs (Linux Directors) running as an active/standby pair under heartbeat. Since failover only involves the public IP floating between the two LDs, we did not use ldirectord to rebuild the IPVS table; instead, a script we wrote ourselves rebuilds the IPVS table on the backup LD. The haresources configuration is therefore very simple: master_node_name public_ip ipvs_shell.
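The ipvs_shell resource script itself is not shown in the post; below is a hedged sketch of what such an IPVS-rebuild script might look like. The VIP, real-server addresses, port, and round-robin scheduler are illustrative assumptions, not our actual values:

```shell
#!/bin/sh
# Hypothetical ipvs_shell resource script for heartbeat (haresources
# resources are init-style scripts taking start|stop). Rebuilds the
# IPVS table on whichever node takes over the VIP.
# VIP, real servers, port, and scheduler below are assumptions.
VIP=192.0.2.10
RS1=10.0.0.11
RS2=10.0.0.12
PORT=8000

case "$1" in
  start)
    ipvsadm -C                                  # flush any stale rules
    ipvsadm -A -t $VIP:$PORT -s rr              # virtual service, round-robin
    ipvsadm -a -t $VIP:$PORT -r $RS1:$PORT -g   # real server 1, direct routing
    ipvsadm -a -t $VIP:$PORT -r $RS2:$PORT -g   # real server 2, direct routing
    ;;
  stop)
    ipvsadm -C                                  # clear the IPVS table
    ;;
esac
```

heartbeat invokes the script with "start" on takeover and "stop" on release, which is why the whole body is a start/stop case statement.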
In actual operation, because this is a long-connection service and we wanted a better user experience, we decided to also add state synchronization between the primary and backup LDs. By state synchronization we mean that the LD maintains a connection tracking table holding the real-time state of every connection. To keep this table in sync between the primary and backup LDs, a kernel sync daemon thread must be started on each of them; they exchange data via multicast.
On the primary LD:
ipvsadm --start-daemon=master --mcast-interface=eth1
On the backup LD:
ipvsadm --start-daemon=backup --mcast-interface=eth1
After running these two commands on the primary and backup LDs respectively, you will find a kernel daemon thread on the backup LD listening on 224.0.0.81:8848, which receives the multicast packets sent by the primary LD. So remember to open UDP port 8848 in the firewall, otherwise synchronization will not work. You can run ipvsadm -Lnc on the backup LD to check whether connections are being synced over.
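With iptables, the firewall rule could look like the following. The interface name eth1 matches the --mcast-interface above; 224.0.0.81 and UDP 8848 are the kernel defaults for the IPVS sync daemon:

```shell
# Allow the IPVS sync daemon's multicast traffic in on the backup LD.
# 224.0.0.81 / UDP 8848 are the sync daemon's default address and port.
iptables -A INPUT -i eth1 -d 224.0.0.81 -p udp --dport 8848 -j ACCEPT
```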
Something else we worked out in operation: what exactly triggers this synchronization between the primary and backup LDs?
1. A new connection is always synced to the backup, no matter what.
2. An existing connection is re-synced only if enough data is being transferred over it: when the entry's expire time on the backup LD is about to run out (or has already run out), the primary LD syncs the entry to the backup again.
In other words, for an old, long-lived connection whose expire time on the backup has run out (each LD maintains its own expire times), re-syncing depends on the amount of data transferred over the connection: a certain threshold must be reached before a sync is triggered. If there is no business data between client and server, only the keepalive packet every n seconds (i.e., the threshold is never reached), you will find that the entry has disappeared from the backup LD's connection tracking table.
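One way to observe and mitigate this idle-connection expiry is via ipvsadm's connection listing and timeout settings. A hedged sketch; the 1800-second value is an illustrative assumption, not a recommendation:

```shell
# On the backup LD: check whether the idle connection is still tracked
# (the last column of -Lnc output shows the remaining expire time).
ipvsadm -Lnc | grep ESTABLISHED

# Optionally raise the IPVS session timeouts (tcp tcpfin udp, in seconds)
# so idle entries survive longer between syncs; 1800 is an example value.
ipvsadm --set 1800 120 300
```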
With normal business traffic:
When idle (only keepalive packets):
URL of the LVS site's page on state synchronization:
Excerpt on state synchronization from "Linux Enterprise Cluster" (.chm):
Stateful Failover of the IPVS Table
As we've just seen, when the primary Director crashes and ldirectord needs to rebuild the IPVS table on the backup Director it can do so because you have placed all of the ipvsadm configuration rules into an ldirectord configuration file. However, the active client connections (the connection tracking records) are not re-created by ldirectord on the backup Director at failover time. All client computer connections to the cluster nodes are lost during a failover using the recipe we have just built.
These connections (entries in the connection tracking table) change rapidly on a heavily loaded cluster as client connections come and go, so we need a method of sending connection tracking records from a primary Director to a backup Director as they change. The LVS programmers developed a technique to do this (replicate the connection tracking table to the backup Director) using multicast packets. This technique was originally called the server sync state daemon, and even though it was implemented inside the kernel (the server sync state daemon does not run in userland) the name stuck. To turn on the server sync state daemon, as it is called, inside the kernel run the following command on the primary Director:
/sbin/ipvsadm --start-daemon master
Then, on the backup Director, run the command:
/sbin/ipvsadm --start-daemon backup
The primary and backup Directors must be able to communicate with each other using multicast packets on multicast address 224.0.0.81 for the master server sync state daemon to announce changes to the connection tracking records, and for the backup server sync state daemon to hear about these changes and insert them into its idle connection tracking table. To find out if your cluster nodes support multicast, see the output of the ifconfig command and look for the word MULTICAST. It should be present for each interface that will be using multicast; if it isn't you'll need to recompile your kernel and provide support for multicast (see Chapter 3).[23]
Note To stop the sync state daemon (either the master or the backup) you can issue the command ipvsadm --stop-daemon.
Once you have issued these commands on the primary and backup Director (you'll need to add these commands to an init script so the system will issue the command each time it boots—see Chapter 1), your primary Director can crash, and then when ldirectord rebuilds the IPVS table on the backup Director all active connections[24] to cluster nodes will survive the failure of the primary Director.
Note The method just described will failover active connections from the primary Director to the backup Director, but it does not failback the active connections when the primary Director has been restored to normal operation. To failover and failback active IPVS connections you will need (as of this writing) to apply a patch to the kernel. See the LVS HOWTO () for details.