
Building a PostgreSQL One-Master-Multi-Slave Read Load Balancing Cluster with Pacemaker

Introduction

There are many HA solutions for PostgreSQL. This article demonstrates how to build a PostgreSQL one-master-multi-slave read load balancing cluster based on Pacemaker. The setup does not use the original Pacemaker pgsql RA script directly; instead it uses pha4pgsql, a set of scripts I modified and wrapped around it.

Target cluster features

  1. Automatic failover within seconds
  2. Zero data loss on failover (split-brain prevention)
  3. Online master/slave switchover
  4. Read/write separation
  5. Read load balancing
  6. Dynamic addition and removal of read-only nodes

Environment

  • OS: CentOS 7.3
  • Node 1: node1 (192.168.0.231)
  • Node 2: node2 (192.168.0.232)
  • Node 3: node3 (192.168.0.233)
  • writer_vip: 192.168.0.236
  • reader_vip: 192.168.0.237

Required software

  • pacemaker
  • corosync
  • pcs
  • ipvsadm

Installation and configuration

Prepare the environment

  1. Synchronize clocks on all nodes

    cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
    ntpdate time.windows.com && hwclock -w 
  2. Set a unique hostname on each node (node1, node2, node3)

    hostnamectl set-hostname node1 
  3. Configure name resolution for all nodes

    $ vi /etc/hosts
    ...
    192.168.0.231 node1
    192.168.0.232 node2
    192.168.0.233 node3 
  4. Disable SELinux on all nodes

    $ setenforce 0
    $ vi /etc/selinux/config
    ...
    SELINUX=disabled 
  5. Disable the firewall on all nodes

    systemctl disable firewalld.service
    systemctl stop firewalld.service 

    If the firewall is kept enabled instead, the ports used by postgres, pcsd and corosync must be opened (a CentOS 7 firewall configuration sketch is given after the port list below):

    • postgres:5432/tcp
    • pcsd:2224/tcp
    • corosync:5405/udp
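
    With firewalld on CentOS 7, opening these ports could look roughly like the following (a sketch; adapt it to your own firewall policy):

    firewall-cmd --permanent --add-port=5432/tcp
    firewall-cmd --permanent --add-port=2224/tcp
    firewall-cmd --permanent --add-port=5405/udp
    firewall-cmd --reload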

Install and configure the Pacemaker+Corosync cluster software

Install Pacemaker, Corosync and related packages

Run on all nodes:

yum install -y pacemaker corosync pcs ipvsadm 

Enable the pcsd service

Run on all nodes:

systemctl start pcsd.service
systemctl enable pcsd.service 

Set the password for the hacluster user

Run on all nodes:

echo hacluster | passwd hacluster --stdin 

Authenticate the cluster nodes

Run on any one node:

pcs cluster auth -u hacluster -p hacluster node1 node2 node3 

Set up the cluster and synchronize the configuration

Run on any one node:

pcs cluster setup --last_man_standing=1 --name pgcluster node1 node2 node3 

Start the cluster

Run on any one node:

pcs cluster start --all 

Install and configure PostgreSQL

Install PostgreSQL

Install PostgreSQL 9.2 or later. This article installs PostgreSQL 9.6 for CentOS 7.3 from the official PostgreSQL yum repository.

Run on all nodes:

yum install -y https://yum.postgresql.org/9.6/redhat/rhel-7.3-x86_64/pgdg-centos96-9.6-3.noarch.rpm

yum install -y postgresql96 postgresql96-contrib postgresql96-libs postgresql96-server postgresql96-devel

ln -sf /usr/pgsql-9.6 /usr/pgsql
echo 'export PATH=/usr/pgsql/bin:$PATH' >>~postgres/.bash_profile 

Create the master database

Run on node1:

  1. Create the data directory

    mkdir -p /pgsql/data
    chown -R postgres:postgres /pgsql/
    chmod 0700 /pgsql/data 
  2. Initialize the database

    su - postgres
    initdb -D /pgsql/data/ 
  3. Edit postgresql.conf

    listen_addresses = '*'
    wal_level = hot_standby
    wal_log_hints = on
    synchronous_commit = on
    max_wal_senders=5
    wal_keep_segments = 32
    hot_standby = on
    wal_sender_timeout = 5000
    wal_receiver_status_interval = 2
    max_standby_streaming_delay = -1
    max_standby_archive_delay = -1
    restart_after_crash = off
    hot_standby_feedback = on 

    Note: setting "wal_log_hints = on" makes it possible to repair the old master with pg_rewind.

  4. Edit pg_hba.conf

    local   all                 all                              trust
    host    all                 all     192.168.0.0/24           md5
    host    replication         all     192.168.0.0/24           md5 
  5. Start postgres

    pg_ctl -D /pgsql/data/ start 
  6. Create the replication user

    createuser --login --replication replication -P -s 

    Note: the "-s" (superuser) option is needed so that pg_rewind can be used.

Create the slave databases

Run on node2 and node3:

  1. Create the data directory

    mkdir -p /pgsql/data
    chown -R postgres:postgres /pgsql/
    chmod 0700 /pgsql/data 
  2. Take a base backup

    su - postgres
    pg_basebackup -h node1 -U replication -D /pgsql/data/ -X stream -P 

Stop the PostgreSQL service

Run on node1: pg_ctl -D /pgsql/data/ stop (from this point on PostgreSQL will be started and stopped by Pacemaker).

Install and configure pha4pgsql

Run on any one node:

  1. Download pha4pgsql

    cd /opt
    git clone git://github.com/Chenhuajun/pha4pgsql.git 
  2. Copy config.ini

    cd /opt/pha4pgsql
    cp template/config_muti_with_lvs.ini.sample  config.ini 

Note: if you do not need LVS-based load balancing, use the template config_muti.ini.sample instead.

  3. Edit config.ini

    pcs_template=muti_with_lvs.pcs.template
    OCF_ROOT=/usr/lib/ocf
    RESOURCE_LIST="msPostgresql vip-master vip-slave"
    pha4pgsql_dir=/opt/pha4pgsql
    writer_vip=192.168.0.236
    reader_vip=192.168.0.237
    node1=node1
    node2=node2
    node3=node3
    othernodes=""
    vip_nic=ens37
    vip_cidr_netmask=24
    pgsql_pgctl=/usr/pgsql/bin/pg_ctl
    pgsql_psql=/usr/pgsql/bin/psql
    pgsql_pgdata=/pgsql/data
    pgsql_pgport=5432
    pgsql_restore_command=""
    pgsql_rep_mode=sync
    pgsql_repuser=replication
    pgsql_reppassord=replication 
  4. Install pha4pgsql

    sh install.sh
    ./setup.sh 

    install.sh copies files with scp, so you will be prompted several times for the root password of the other nodes. Running install.sh generates the Pacemaker configuration script /opt/pha4pgsql/config.pcs; you can tune its parameters as appropriate before running setup.sh.

  5. Set environment variables

    export PATH=/opt/pha4pgsql/bin:$PATH
    echo 'export PATH=/opt/pha4pgsql/bin:$PATH' >>/root/.bash_profile 
  6. Start the cluster

    cls_start 
  7. Check the cluster status

    cls_status 

    The output of cls_status looks like this:

    [root@node1 pha4pgsql]# cls_status
    Stack: corosync
    Current DC: node1 (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
    Last updated: Wed Jan 11 00:53:58 2017      Last change: Wed Jan 11 00:45:54 2017 by root via crm_attribute on node1
    
    3 nodes and 9 resources configured
    
    Online: [ node1 node2 node3 ]
    
    Full list of resources:
    
     vip-master (ocf::heartbeat:IPaddr2):   Started node1
     vip-slave  (ocf::heartbeat:IPaddr2):   Started node2
     Master/Slave Set: msPostgresql [pgsql]
         Masters: [ node1 ]
         Slaves: [ node2 node3 ]
     lvsdr  (ocf::heartbeat:lvsdr): Started node2
     Clone Set: lvsdr-realsvr-clone [lvsdr-realsvr]
         Started: [ node2 node3 ]
         Stopped: [ node1 ]
    
    Node Attributes:
    * Node node1:
        + master-pgsql                      : 1000      
        + pgsql-data-status                 : LATEST    
        + pgsql-master-baseline             : 00000000050001B0
        + pgsql-status                      : PRI       
    * Node node2:
        + master-pgsql                      : 100       
        + pgsql-data-status                 : STREAMING|SYNC
        + pgsql-status                      : HS:sync   
    * Node node3:
        + master-pgsql                      : -INFINITY 
        + pgsql-data-status                 : STREAMING|ASYNC
        + pgsql-status                      : HS:async  
    
    Migration Summary:
    * Node node2:
    * Node node3:
    * Node node1:
    
    pgsql_REPL_INFO:node1|1|00000000050001B0 

    Check the health of the cluster. A fully healthy cluster must satisfy the following conditions:

    1. msPostgresql is started on every node
    2. msPostgresql is in the Master state on exactly one node and in the Slave state on the others
    3. The data-status of every Slave node is one of the following
      • STREAMING|SYNC 
        synchronous replication slave
      • STREAMING|POTENTIAL 
        candidate synchronous replication slave
      • STREAMING|ASYNC 
        asynchronous replication slave

    The three fields of pgsql_REPL_INFO are the current master, and the timeline and xlog position before the last promotion.

    pgsql_REPL_INFO:node1|1|00000000050001B0 

    LVS is configured on node2:

    [root@node2 ~]#  ipvsadm -L
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  node2:postgres rr
      -> node2:postgres               Route   1      0          0         
      -> node3:postgres               Route   1      0          0 
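
    The cls_status output above is a snapshot of the Pacemaker cluster state plus the node attributes maintained by the pgsql RA. If you prefer a live, continuously refreshing view, the standard Pacemaker tool crm_mon shows much the same information (plain Pacemaker tooling, independent of pha4pgsql):

    # one-shot status including node attributes (-A), fail counts (-f) and inactive resources (-r)
    crm_mon -Afr -1
    # or leave it running for a continuously updated view
    crm_mon -Afr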

Failure testing

Master failure

  1. Bring down the network interface on the master to simulate a failure

    [root@node1 pha4pgsql]# ifconfig ens37 down 
  2. Check the cluster status

    Pacemaker has switched the master role and the writer VIP to node2.

    [root@node2 ~]# cls_status
    resource msPostgresql is NOT running
    Stack: corosync
    Current DC: node2 (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
    Last updated: Wed Jan 11 01:25:08 2017      Last change: Wed Jan 11 01:21:26 2017 by root via crm_attribute on node2
    
    3 nodes and 9 resources configured
    
    Online: [ node2 node3 ]
    OFFLINE: [ node1 ]
    
    Full list of resources:
    
     vip-master (ocf::heartbeat:IPaddr2):   Started node2
     vip-slave  (ocf::heartbeat:IPaddr2):   Started node3
     Master/Slave Set: msPostgresql [pgsql]
         Masters: [ node2 ]
         Slaves: [ node3 ]
         Stopped: [ node1 ]
     lvsdr  (ocf::heartbeat:lvsdr): Started node3
     Clone Set: lvsdr-realsvr-clone [lvsdr-realsvr]
         Started: [ node3 ]
         Stopped: [ node1 node2 ]
    
    Node Attributes:
    * Node node2:
        + master-pgsql                      : 1000      
        + pgsql-data-status                 : LATEST    
        + pgsql-master-baseline             : 00000000050008E0
        + pgsql-status                      : PRI       
    * Node node3:
        + master-pgsql                      : 100       
        + pgsql-data-status                 : STREAMING|SYNC
        + pgsql-status                      : HS:sync   
    
    Migration Summary:
    * Node node2:
    * Node node3:
    
    pgsql_REPL_INFO:node2|2|00000000050008E0 

    LVS and the reader VIP have been moved to node3.

    [root@node3 ~]# ipvsadm -L
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  node3:postgres rr
      -> node3:postgres               Route   1      0          0 
  3. Repair the old master's network interface

    On the old master node1, the postgres processes are still running (see note 1 below). However, because synchronous replication is configured, no data can be written there, so this does not lead to split brain.

    [root@node1 pha4pgsql]# ps -ef|grep postgres
    root      20295   2269  0 01:35 pts/0    00:00:00 grep --color=auto postgres
    postgres  20556      1  0 00:45 ?        00:00:01 /usr/pgsql-9.6/bin/postgres -D /pgsql/data -c config_file=/pgsql/data/postgresql.conf
    postgres  20566  20556  0 00:45 ?        00:00:00 postgres: logger process   
    postgres  20574  20556  0 00:45 ?        00:00:00 postgres: checkpointer process   
    postgres  20575  20556  0 00:45 ?        00:00:00 postgres: writer process   
    postgres  20576  20556  0 00:45 ?        00:00:00 postgres: stats collector process   
    postgres  22390  20556  0 00:45 ?        00:00:00 postgres: wal writer process   
    postgres  22391  20556  0 00:45 ?        00:00:00 postgres: autovacuum launcher process 

    After the network interface is brought back up, the postgres processes are stopped:

    [root@node1 pha4pgsql]# ifconfig ens37 up
    [root@node1 pha4pgsql]# ps -ef|grep postgres
    root      21360   2269  0 01:36 pts/0    00:00:00 grep --color=auto postgres
    [root@node1 pha4pgsql]# cls_status
    resource msPostgresql is NOT running
    Stack: corosync
    Current DC: node2 (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
    Last updated: Wed Jan 11 01:36:20 2017      Last change: Wed Jan 11 01:36:00 2017 by hacluster via crmd on node2
    
    3 nodes and 9 resources configured
    
    Online: [ node1 node2 node3 ]
    
    Full list of resources:
    
     vip-master (ocf::heartbeat:IPaddr2):   Started node2
     vip-slave  (ocf::heartbeat:IPaddr2):   Started node3
     Master/Slave Set: msPostgresql [pgsql]
         Masters: [ node2 ]
         Slaves: [ node3 ]
         Stopped: [ node1 ]
     lvsdr  (ocf::heartbeat:lvsdr): Started node3
     Clone Set: lvsdr-realsvr-clone [lvsdr-realsvr]
         Started: [ node3 ]
         Stopped: [ node1 node2 ]
    
    Node Attributes:
    * Node node1:
        + master-pgsql                      : -INFINITY 
        + pgsql-data-status                 : DISCONNECT
        + pgsql-status                      : STOP      
    * Node node2:
        + master-pgsql                      : 1000      
        + pgsql-data-status                 : LATEST    
        + pgsql-master-baseline             : 00000000050008E0
        + pgsql-status                      : PRI       
    * Node node3:
        + master-pgsql                      : 100       
        + pgsql-data-status                 : STREAMING|SYNC
        + pgsql-status                      : HS:sync   
    
    Migration Summary:
    * Node node2:
    * Node node3:
    * Node node1:
       pgsql: migration-threshold=3 fail-count=1000000 last-failure='Wed Jan 11 01:36:08 2017'
    
    Failed Actions:
    * pgsql_start_0 on node1 'unknown error' (1): call=278, status=complete, exitreason='The master's timeline forked off current database system timeline 2 before latest checkpoint location 0000000005000B80, REPL_INF',
        last-rc-change='Wed Jan 11 01:36:07 2017', queued=0ms, exec=745ms
    
    
    pgsql_REPL_INFO:node2|2|00000000050008E0 

    Note 1: this is a peculiarity of simulating the failure by taking the interface down with ifconfig ens37 down (arguably a corosync bug); Pacemaker keeps writing the warning below to its log. In a real machine crash or NIC failure, the failed node loses quorum and its postgres processes are actively stopped by Pacemaker.

    [43260] node3 corosyncwarning [MAIN  ] Totem is unable to form a cluster because of an operating system or network fault. The most common cause of this message is that the local firewall is configured improperly. 
  4. Repair the old master (node1) and rejoin it to the cluster as a slave

    Repair the old master with pg_rewind:

    [root@node1 pha4pgsql]# cls_repair_by_pg_rewind 
    resource msPostgresql is NOT running
    resource msPostgresql is NOT running
    resource msPostgresql is NOT running
    connected to server
    servers diverged at WAL position 0/50008E0 on timeline 2
    rewinding from last common checkpoint at 0/5000838 on timeline 2
    reading source file list
    reading target file list
    reading WAL in target
    need to copy 99 MB (total source directory size is 117 MB)
    102359/102359 kB (100%) copied
    creating backup label and updating control file
    syncing target data directory
    Done!
    pg_rewind complete!
    resource msPostgresql is NOT running
    resource msPostgresql is NOT running
    Waiting for 1 replies from the CRMd. OK
    wait for recovery complete
    .....
    slave recovery of node1 successed 

    Check the cluster status:

    [root@node1 pha4pgsql]# cls_status
    Stack: corosync
    Current DC: node2 (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
    Last updated: Wed Jan 11 01:39:30 2017      Last change: Wed Jan 11 01:37:35 2017 by root via crm_attribute on node2
    
    3 nodes and 9 resources configured
    
    Online: [ node1 node2 node3 ]
    
    Full list of resources:
    
     vip-master (ocf::heartbeat:IPaddr2):   Started node2
     vip-slave  (ocf::heartbeat:IPaddr2):   Started node3
     Master/Slave Set: msPostgresql [pgsql]
         Masters: [ node2 ]
         Slaves: [ node1 node3 ]
     lvsdr  (ocf::heartbeat:lvsdr): Started node3
     Clone Set: lvsdr-realsvr-clone [lvsdr-realsvr]
         Started: [ node1 node3 ]
         Stopped: [ node2 ]
    
    Node Attributes:
    * Node node1:
        + master-pgsql                      : -INFINITY 
        + pgsql-data-status                 : STREAMING|ASYNC
        + pgsql-status                      : HS:async  
    * Node node2:
        + master-pgsql                      : 1000      
        + pgsql-data-status                 : LATEST    
        + pgsql-master-baseline             : 00000000050008E0
        + pgsql-status                      : PRI       
    * Node node3:
        + master-pgsql                      : 100       
        + pgsql-data-status                 : STREAMING|SYNC
        + pgsql-status                      : HS:sync   
        + pgsql-xlog-loc                    : 000000000501F118
    
    Migration Summary:
    * Node node2:
    * Node node3:
    * Node node1:
    
    pgsql_REPL_INFO:node2|2|00000000050008E0 
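
    For reference, cls_repair_by_pg_rewind automates the repair shown above. Doing the same thing by hand would amount to running pg_rewind on node1 against the current master (a rough sketch only, using this article's connection settings; the script also takes care of restarting the resource under Pacemaker):

    su - postgres
    pg_rewind --target-pgdata=/pgsql/data \
              --source-server='host=192.168.0.236 port=5432 user=replication password=replication dbname=postgres' -P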

Slave failure

LVS is configured on node3, with 2 real servers:

[root@node3 ~]# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  node3:postgres rr
  -> node1:postgres               Route   1      0          0         
  -> node3:postgres               Route   1      0          0 

Bring down the network interface on one of the slaves (node1):

[root@node1 pha4pgsql]# ifconfig ens37 down 

Pacemaker has automatically updated the LVS real server configuration:

[root@node3 ~]# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  node3:postgres rr
  -> node3:postgres               Route   1      0          0 

Adding slaves to scale out read load balancing

The cluster configured so far is one master with two slaves, and the 2 slaves provide read load balancing through the reader VIP plus LVS. If the read load is high, additional slaves can be added to scale out read performance. Adding more slaves directly to the Pacemaker cluster would achieve this, but a larger node count increases the complexity and communication overhead of the Pacemaker+Corosync cluster (Corosync communication runs around a ring, so the more nodes there are, the higher the latency). Therefore the additional slaves are not added to the Pacemaker cluster; they are only added to the LVS real servers, and lvsdr monitors their health and dynamically updates the LVS real server list. The procedure is as follows.

Create the additional slave database

Prepare a fourth machine, node4 (192.168.0.234), and run the following commands on it to create the new slave.

  1. Disable SELinux

    $ setenforce 0
    $ vi /etc/selinux/config
    ...
    SELINUX=disabled 
  2. Disable the firewall

    systemctl disable firewalld.service
    systemctl stop firewalld.service 
  3. Install PostgreSQL

    yum install -y https://yum.postgresql.org/9.6/redhat/rhel-7.3-x86_64/pgdg-centos96-9.6-3.noarch.rpm
    
    yum install -y postgresql96 postgresql96-contrib postgresql96-libs postgresql96-server postgresql96-devel
    
    ln -sf /usr/pgsql-9.6 /usr/pgsql
    echo 'export PATH=/usr/pgsql/bin:$PATH' >>~postgres/.bash_profile 
  4. Create the data directory

    mkdir -p /pgsql/data
    chown -R postgres:postgres /pgsql/
    chmod 0700 /pgsql/data 
  5. Take a base backup for the slave

    Pull a base backup from the current master (i.e. through the writer VIP 192.168.0.236) to create the slave.

    su - postgres
    pg_basebackup -h 192.168.0.236 -U replication -D /pgsql/data/ -X stream -P 
  6. Edit postgresql.conf

    Remove the following line from postgresql.conf:

    $ vi /pgsql/data/postgresql.conf
    ...
    #include '/var/lib/pgsql/tmp/rep_mode.conf' # added by pgsql RA 
  7. Edit recovery.conf

    $vi /pgsql/data/recovery.conf
    standby_mode = 'on'
    primary_conninfo = 'host=192.168.0.236 port=5432 application_name=192.168.0.234 user=replication password=replication keepalives_idle=60 keepalives_interval=5 keepalives_count=5'
    restore_command = ''
    recovery_target_timeline = 'latest' 

    The application_name above is set to this node's own IP address, 192.168.0.234.

  8. Start the slave

    pg_ctl -D /pgsql/data/ start 

    Check the wal sender processes on the master: the new slave (192.168.0.234) has established streaming replication with the master.

    [root@node1 pha4pgsql]# ps -ef|grep '[w]al sender'
    postgres  32387 111175  0 12:15 ?        00:00:00 postgres: wal sender process replication 192.168.0.234(47894) streaming 0/7000220
    postgres 116675 111175  0 12:01 ?        00:00:00 postgres: wal sender process replication 192.168.0.233(33652) streaming 0/7000220
    postgres 117079 111175  0 12:01 ?        00:00:00 postgres: wal sender process replication 192.168.0.232(40088) streaming 0/7000220 

Configure the LVS real server

  1. Set kernel parameters

    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce 
  2. Add the reader VIP to the lo interface

    ip a add 192.168.0.237/32 dev lo:0 
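
    Note that neither the arp settings above nor the VIP on lo survives a reboot. One way to make them persistent on node4 (a sketch; node4 is not part of the Pacemaker cluster, so nothing manages this for you):

    echo 'net.ipv4.conf.lo.arp_ignore = 1'    >> /etc/sysctl.conf
    echo 'net.ipv4.conf.lo.arp_announce = 2'  >> /etc/sysctl.conf
    echo 'net.ipv4.conf.all.arp_ignore = 1'   >> /etc/sysctl.conf
    echo 'net.ipv4.conf.all.arp_announce = 2' >> /etc/sysctl.conf
    sysctl -p
    echo 'ip a add 192.168.0.237/32 dev lo:0' >> /etc/rc.d/rc.local
    chmod +x /etc/rc.d/rc.local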

Add the new slave to LVS

At this point the new slave has not yet been added to the LVS configuration as a real server:

[root@node3 ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  node3:postgres rr
  -> node2:postgres               Route   1      0          0         
  -> node3:postgres               Route   1      0          0 

On any node of the Pacemaker cluster (node1, node2 or node3), modify the lvsdr RA configuration to add the new real server:

[root@node2 ~]# pcs resource update lvsdr realserver_get_real_servers_script="/opt/pha4pgsql/tools/get_active_slaves /usr/pgsql/bin/psql \"host=192.168.0.236 port=5432 dbname=postgres user=replication password=replication connect_timeout=5\"" 

Once the realserver_get_real_servers_script parameter is set, lvsdr obtains the LVS real server list by running that script. Here get_active_slaves connects to the master through the writer VIP and returns the application_name of every slave currently connected to the master as the real server list. After this change, the new slave 192.168.0.234 has been added to the real server list:

[root@node2 ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  node2:postgres rr
  -> node2:postgres               Route   1      0          0         
  -> node3:postgres               Route   1      0          0         
  -> 192.168.0.234:postgres       Route   1      0          0 
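
The list that get_active_slaves returns is derived from the streaming replication state on the master; conceptually it corresponds to something like the following query run through the writer VIP (a sketch of the idea, not the script's exact SQL):

psql "host=192.168.0.236 port=5432 dbname=postgres user=replication password=replication" \
     -tAc "select application_name from pg_stat_replication where state = 'streaming'"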

Testing read load balancing

Access postgres through the reader VIP from the current master node (node1): psql is connected to the 3 different slaves in round-robin fashion.

[root@node1 pha4pgsql]# psql "host=192.168.0.237 port=5432 dbname=postgres user=replication password=replication" -tAc "select pg_postmaster_start_time()"
2017-01-14 12:01:48.068455+08
[root@node1 pha4pgsql]# psql "host=192.168.0.237 port=5432 dbname=postgres user=replication password=replication" -tAc "select pg_postmaster_start_time()"
2017-01-14 12:01:12.222412+08
[root@node1 pha4pgsql]# psql "host=192.168.0.237 port=5432 dbname=postgres user=replication password=replication" -tAc "select pg_postmaster_start_time()"
2017-01-14 12:15:19.614782+08
[root@node1 pha4pgsql]# psql "host=192.168.0.237 port=5432 dbname=postgres user=replication password=replication" -tAc "select pg_postmaster_start_time()"
2017-01-14 12:01:48.068455+08
[root@node1 pha4pgsql]# psql "host=192.168.0.237 port=5432 dbname=postgres user=replication password=replication" -tAc "select pg_postmaster_start_time()"
2017-01-14 12:01:12.222412+08
[root@node1 pha4pgsql]# psql "host=192.168.0.237 port=5432 dbname=postgres user=replication password=replication" -tAc "select pg_postmaster_start_time()"
2017-01-14 12:15:19.614782+08 

Next, test what happens when a slave node fails. First connect to one of the slaves:

[root@node1 pha4pgsql]# psql "host=192.168.0.237 port=5432 dbname=postgres user=replication password=replication"
psql (9.6.1)
Type "help" for help. 

The current connection is served by node4:

[root@node4 ~]# ps -ef|grep postgres
postgres  11911      1  0 12:15 pts/0    00:00:00 /usr/pgsql-9.6/bin/postgres -D /pgsql/data
postgres  11912  11911  0 12:15 ?        00:00:00 postgres: logger process   
postgres  11913  11911  0 12:15 ?        00:00:00 postgres: startup process   recovering 000000090000000000000007
postgres  11917  11911  0 12:15 ?        00:00:00 postgres: checkpointer process   
postgres  11918  11911  0 12:15 ?        00:00:00 postgres: writer process   
postgres  11920  11911  0 12:15 ?        00:00:00 postgres: stats collector process   
postgres  11921  11911  0 12:15 ?        00:00:04 postgres: wal receiver process   streaming 0/7000CA0
postgres  12004  11911  0 13:19 ?        00:00:00 postgres: replication postgres 192.168.0.231(42116) idle
root      12006   2230  0 13:19 pts/0    00:00:00 grep --color=auto postgres 

Forcefully kill the postgres processes on node4:

[root@node4 ~]# killall postgres 

After lvsdr detects that node4 is down, it automatically removes it from the real server list:

[root@node2 ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  node2:postgres rr
  -> node2:postgres               Route   1      0          0         
  -> node3:postgres               Route   1      0          0 

When psql executes its next SQL statement, it automatically reconnects to another slave.

postgres=# select pg_postmaster_start_time();
FATAL:  terminating connection due to administrator command
server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
postgres=# select pg_postmaster_start_time();
   pg_postmaster_start_time    
-------------------------------
 2017-01-14 12:01:48.068455+08
(1 row) 

Specifying a static real server list

Sometimes you do not want every slave connected to the master to be added to the LVS real servers; for example, one of the "slaves" may actually be pg_receivexlog. In that case you can specify a static real server list on lvsdr as a whitelist.

Method 1:

Use default_weight and weight_of_realservers to assign a weight to each real server, and set the weight of any slave that should not take part in load balancing to 0. Slave health is still monitored by querying the slave list on the master.

On any node of the Pacemaker cluster (node1, node2 or node3), modify the lvsdr RA configuration to set the effective real server list to node1, node2 and node3:

pcs resource update lvsdr default_weight="0"
pcs resource update lvsdr weight_of_realservers="node1,1 node2,1 node3,1"
pcs resource update lvsdr realserver_get_real_servers_script="/opt/pha4pgsql/tools/get_active_slaves /usr/pgsql/bin/psql \"host=192.168.0.236 port=5432 dbname=postgres user=replication password=replication connect_timeout=5\"" 

Check the LVS status on the node where lvsdr is running: node4 (192.168.0.234) now has a weight of 0, so LVS will not forward requests to it.

[root@node2 ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

TCP  node2:postgres rr
  -> node2:postgres               Route   1      0          0         
  -> node3:postgres               Route   1      0          0         
  -> 192.168.0.234:postgres       Route   0      0          0 

Method 2:

Use default_weight and weight_of_realservers to specify the real server list, and have lvsdr call the check_active_slave script, which connects to each real server in turn and checks that it is reachable and is a slave.

pcs resource update lvsdr default_weight="1"
pcs resource update lvsdr weight_of_realservers="node1 node2 node3 192.168.0.234"
pcs resource update lvsdr realserver_dependent_resource=""
pcs resource update lvsdr realserver_get_real_servers_script=""
pcs resource update lvsdr realserver_check_active_real_server_script="/opt/pha4pgsql/tools/check_active_slave /usr/pgsql/bin/psql \"port=5432 dbname=postgres user=replication password=replication connect_timeout=5\" -h" 

Method 1 is recommended, because each health check needs only a single connection.

Comments

skykiker 2017-03-12 11:43:12

Of the HA solutions I am familiar with so far, Pacemaker+Corosync is the most feature-complete and also a very reliable one. But problems caused by misconfiguration or insufficient testing should not be blamed on the HA solution itself.
The biggest problems with Pacemaker+Corosync are the steep learning curve and the complexity of administration (its other issues, such as sensitivity to high load or an unstable network, can be avoided with appropriate measures). The purpose of my pha4pgsql wrapper scripts is precisely to make Pacemaker+Corosync+pgsql easier to learn and maintain (test it yourself before production use, at your own risk). Finally, until you have fully mastered the relevant technology, streaming replication with manual switchover is actually the more suitable choice; it is unwise for newcomers to rush into automatic-failover HA, which is really something for experienced operators (reality seems to be exactly the opposite). Some production deployments of automatic-failover HA appear to work very well, until the first split-brain incident shows their true colors! This applies not only to PG but to other databases as well: even MHA, the widely used and very reliable MySQL HA solution, has plenty of production split-brain cases.

skykiker 2017-03-01 00:36:21

babyyellow: Has this gone into production? (the full comment is quoted below)

Pacemaker + corosync is indeed hard to look after, and troubleshooting problems takes time. I have tested it carefully and also deployed it on a small scale; high system load or an unstable network can cause cluster failures, but those are fairly extreme situations and rarely happen. Pacemaker itself really is too complex; I plan to work on a simpler solution later.


babyyellow 2017-02-28 15:40:00

Has this gone into production?

I ran pgsql + pacemaker + corosync in production for a year and gave up in the end.

All kinds of pitfalls. I think the main problem is that the heartbeat places very high demands on the network; in the end I had no choice but to build a dedicated heartbeat network, and the cluster still crashed because of heartbeat problems.

Here are some of my notes, which the author may find useful: http://blog.itpub.net/133735/cid-90337-list-1/

Test thoroughly before considering production. I tested it for a year, ran it in production for a year, and finally took it offline. Could not be bothered with it any more, and ended up developing my own solution.
