Category: System Operations and Maintenance
2015-08-04 21:24:47
Notes:
Operating system: CentOS 5.x, 64-bit
Web servers: 192.168.21.127, 192.168.21.128
Sites: bbs.osyunwei.com and sns.osyunwei.com are deployed on both web servers
Goal:
Add two servers (active-active mode) and use Nginx + Keepalived to load-balance the web servers
Architecture plan:
Load balancers: 192.168.21.129, 192.168.21.130
Virtual IPs (VIPs): 192.168.21.252, 192.168.21.253
After deployment:
1. VIP 192.168.21.253 points to 192.168.21.129; VIP 192.168.21.252 points to 192.168.21.130;
2. When 192.168.21.129 goes down, VIP 192.168.21.253 fails over to 192.168.21.130;
3. When 192.168.21.130 goes down, VIP 192.168.21.252 fails over to 192.168.21.129;
The benefit of this active-active setup is that both servers handle traffic while also acting as each other's backup.
Steps:
Part 1: Perform the following steps on both Nginx servers
I. Disable SELinux and configure the firewall
1. vi /etc/selinux/config
#SELINUX=enforcing #comment out
#SELINUXTYPE=targeted #comment out
SELINUX=disabled #add
:wq! #save and exit
setenforce 0 #apply immediately
2. vi /etc/sysconfig/iptables #edit
-A RH-Firewall-1-INPUT -d 224.0.0.18 -j ACCEPT #allow traffic to the VRRP multicast address
-A RH-Firewall-1-INPUT -p vrrp -j ACCEPT #allow VRRP (Virtual Router Redundancy Protocol) traffic
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT #allow port 80 through the firewall
:wq! #save and exit
/etc/init.d/iptables restart #restart the firewall to apply the rules
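A quick optional check after the restart is to confirm that the new rules are actually loaded and that SELinux is no longer enforcing:
iptables -nL RH-Firewall-1-INPUT #the output should contain an ACCEPT rule for 224.0.0.18 and one for protocol vrrp (protocol number 112)
getenforce #should print Permissive now, and Disabled after the next reboot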
II. Install Nginx
1. Install the build tools (with yum; quite a few packages are pulled in, which makes it easier to set up an LNMP environment later)
yum install -y make apr* autoconf automake curl curl-devel gcc gcc-c++ gtk+-devel zlib-devel openssl openssl-devel pcre-devel gd kernel keyutils patch perl kernel-headers compat* cpp glibc libgomp libstdc++-devel keyutils-libs-devel libsepol-devel libselinux-devel krb5-devel libXpm* freetype freetype-devel freetype* fontconfig fontconfig-devel libjpeg* libpng* php-common php-gd gettext gettext-devel ncurses* libtool* libxml2 libxml2-devel patch policycoreutils bison
2. Download the packages
(1) #download Nginx
(2) ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.34.tar.gz #download pcre (regex support, needed for nginx rewrite rules)
(3) #download ngx_cache_purge (handy if you later extend nginx into a caching server)
Upload the packages above to the /usr/local/src directory
3. Install pcre
cd /usr/local/src
mkdir /usr/local/pcre #create the install directory
tar zxvf pcre-8.34.tar.gz
cd pcre-8.34
./configure --prefix=/usr/local/pcre #configure
make
make install
4. Install Nginx
cd /usr/local/src
groupadd www #add the www group
useradd -g www www -s /bin/false #create the nginx run account www in the www group; www is not allowed to log in to the system
cd /usr/local/src #enter the source directory
tar zxvf ngx_cache_purge-2.1.tar.gz #extract
tar zxvf nginx-1.4.7.tar.gz #extract
cd nginx-1.4.7
./configure --prefix=/usr/local/nginx --without-http_memcached_module --user=www --group=www --with-http_stub_status_module --with-openssl=/usr/ --with-pcre=/usr/local/src/pcre-8.34 --add-module=../ngx_cache_purge-2.1 #configure
Note: --with-pcre=/usr/local/src/pcre-8.34 must point to the extracted pcre source directory, not to the pcre install prefix, otherwise the build fails
make #compile
make install #install
/usr/local/nginx/sbin/nginx #start nginx
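Before writing the init script, it is worth a quick sanity check that the build contains the expected modules and that nginx is actually up; these are standard nginx options:
/usr/local/nginx/sbin/nginx -V #prints the version plus the configure arguments (stub_status, ngx_cache_purge and the pcre path should all appear)
/usr/local/nginx/sbin/nginx -t #tests the configuration file syntax
netstat -anpt | grep nginx #nginx should be listening on port 80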
Set nginx to start on boot
vi /etc/rc.d/init.d/nginx #create the init script and add the following content
=======================================================
#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig: - 85 15
# description: Nginx is an HTTP(S) server, HTTP(S) reverse \
# proxy and IMAP/POP3 proxy server
# processname: nginx
# config: /etc/nginx/nginx.conf
# config: /usr/local/nginx/conf/nginx.conf
# pidfile: /usr/local/nginx/logs/nginx.pid
# Source function library.
. /etc/rc.d/init.d/functions
# Source networking configuration.
. /etc/sysconfig/network
# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0
nginx="/usr/local/nginx/sbin/nginx"
prog=$(basename $nginx)
NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"
[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx
lockfile=/var/lock/subsys/nginx
make_dirs() {
# make required directories
user=`$nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
if [ -z "`grep $user /etc/passwd`" ]; then
useradd -M -s /bin/nologin $user
fi
options=`$nginx -V 2>&1 | grep 'configure arguments:'`
for opt in $options; do
if [ `echo $opt | grep '.*-temp-path'` ]; then
value=`echo $opt | cut -d "=" -f 2`
if [ ! -d "$value" ]; then
# echo "creating" $value
mkdir -p $value && chown -R $user $value
fi
fi
done
}
start() {
[ -x $nginx ] || exit 5
[ -f $NGINX_CONF_FILE ] || exit 6
make_dirs
echo -n $"Starting $prog: "
daemon $nginx -c $NGINX_CONF_FILE
retval=$?
echo
[ $retval -eq 0 ] && touch $lockfile
return $retval
}
stop() {
echo -n $"Stopping $prog: "
killproc $prog -QUIT
retval=$?
echo
[ $retval -eq 0 ] && rm -f $lockfile
return $retval
}
restart() {
#configtest || return $?
stop
sleep 1
start
}
reload() {
#configtest || return $?
echo -n $"Reloading $prog: "
killproc $nginx -HUP
RETVAL=$?
echo
}
force_reload() {
restart
}
configtest() {
$nginx -t -c $NGINX_CONF_FILE
}
rh_status() {
status $prog
}
rh_status_q() {
rh_status >/dev/null 2>&1
}
case "$1" in
start)
rh_status_q && exit 0
$1
;;
stop)
rh_status_q || exit 0
$1
;;
restart|configtest)
$1
;;
reload)
rh_status_q || exit 7
$1
;;
force-reload)
force_reload
;;
status)
rh_status
;;
condrestart|try-restart)
rh_status_q || exit 0
;;
*)
echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
exit 2
esac
=======================================================
:wq! #save and exit
chmod 775 /etc/rc.d/init.d/nginx #make the script executable
chkconfig nginx on #start on boot
/etc/rc.d/init.d/nginx restart #restart Nginx
service nginx restart #equivalent, via the service command
=======================================================
III. Configure Nginx
cp /usr/local/nginx/conf/nginx.conf /usr/local/nginx/conf/nginx.confbak #back up the nginx configuration file
1. Set the nginx run account
vi /usr/local/nginx/conf/nginx.conf #edit
Find user nobody; and change it to
user www www; #on the first line
:wq! #save and exit
2. Reject requests with an empty or unmatched Host header
vi /usr/local/nginx/conf/nginx.conf #edit
Find the first server block and add the following above it:
##############################
server {
listen 80 default;
server_name _;
location / {
root html;
return 404;
}
location ~ /\.ht {
deny all;
}
}
##############################
:wq! #save and exit
/etc/rc.d/init.d/nginx restart #restart nginx
With this in place, requests whose Host header matches no server_name (including an empty Host header) fall through to this default server and get nginx's 404 error page.
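A simple way to test this from another machine, assuming curl is available, is to request the load balancer by IP only, so that no server_name matches and the catch-all default server answers:
curl -I http://192.168.21.129/ #expect HTTP/1.1 404 Not Found from the default server block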
3. Include the virtual host configuration files
cd /usr/local/nginx/conf/ #enter the nginx conf directory
mkdir vhost #create the vhost directory
vi /usr/local/nginx/conf/nginx.conf #edit
Find the block added in the previous step and append the following line after it:
include vhost/*.conf;
:wq! #save and exit
For example:
##############################
server {
listen 80 default;
server_name _;
location / {
root html;
return 404;
}
location ~ /\.ht {
deny all;
}
}
include vhost/*.conf;
##############################
4. Add the backend server list file
cd /usr/local/nginx/conf/ #enter the directory
touch mysvrhost.conf #create the file
vi /usr/local/nginx/conf/nginx.conf #edit
Below the line added in the previous step, add:
include mysvrhost.conf;
:wq! #save and exit
5. Tune nginx global parameters
vi /usr/local/nginx/conf/nginx.conf #edit
worker_processes 2; #number of worker processes: the number of CPU cores, or twice that
events
{
use epoll; #add
worker_connections 65535; #change to 65535, the maximum number of connections
}
#############add or modify the following inside the http { } block##############
server_names_hash_bucket_size 128; #add
client_header_buffer_size 32k; #add
large_client_header_buffers 4 32k; #add
client_max_body_size 300m; #add
tcp_nopush on; #change to on
keepalive_timeout 60; #change to 60
tcp_nodelay on; #add
server_tokens off; #add, hide the nginx version
gzip on; #change to on
gzip_min_length 1k; #add
gzip_buffers 4 16k; #add
gzip_http_version 1.1; #add
gzip_comp_level 2; #add
gzip_types text/plain application/x-javascript text/css application/xml; #add
gzip_vary on; #add
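Once the virtual host in step 7 below is in place, gzip can be verified with curl. Compression only applies to responses of a gzip_types type that are larger than gzip_min_length, so small or non-text responses come back uncompressed:
curl -s -D - -o /dev/null -H "Host: bbs.osyunwei.com" -H "Accept-Encoding: gzip" http://192.168.21.129/ #a compressed response carries Content-Encoding: gzip; the Vary: Accept-Encoding header comes from gzip_vary on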
6. Define the backend server list
cd /usr/local/nginx/conf/ #enter the directory
vi mysvrhost.conf #edit and add the following
upstream osyunweihost {
server 192.168.21.127:80 weight=1 max_fails=2 fail_timeout=30s;
server 192.168.21.128:80 weight=1 max_fails=2 fail_timeout=30s;
ip_hash;
}
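Note that ip_hash pins each client IP to one backend, so the weight values mainly matter across many different clients, and max_fails=2 fail_timeout=30s marks a backend as unavailable for 30 seconds after two failed attempts. After editing the server list, test and reload nginx as usual:
/usr/local/nginx/sbin/nginx -t #check that mysvrhost.conf and the other includes parse cleanly
service nginx restart #load the new upstream definition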
7. Create the virtual host configuration file
cd /usr/local/nginx/conf/vhost #enter the vhost directory
touch osyunwei.conf #create the virtual host configuration file
vi osyunwei.conf #edit
log_format access '$remote_addr - $remote_user [$time_local] $request '
'"$status" $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
server
{
listen 80;
server_name bbs.osyunwei.com sns.osyunwei.com;
location /
{
proxy_next_upstream http_502 http_504 error timeout invalid_header;
proxy_pass http://osyunweihost; #proxy to the upstream defined in mysvrhost.conf
#proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
}
location /NginxStatus {
stub_status on;
access_log on;
auth_basic "NginxStatus";
#auth_basic_user_file pwd;
}
access_log /usr/local/nginx/logs/access.log access;
}
:wq! #save and exit
service nginx restart #restart nginx
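With the virtual host loaded, the proxying can be checked against the load balancer's own IP by supplying a Host header from any test machine that has curl:
curl -s -D - -o /dev/null -H "Host: bbs.osyunwei.com" http://192.168.21.129/ #the response should come from one of the backends (192.168.21.127/128)
Because of ip_hash, repeated requests from the same test machine will keep landing on the same backend. The /NginxStatus location exposes the stub_status counters; since auth_basic_user_file is commented out above, set up a password file before relying on it for protection.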
IV. Install Keepalived
Download keepalived:
Upload keepalived-1.2.12.tar.gz to the /usr/local/src directory
cd /usr/local/src
tar zxvf keepalived-1.2.12.tar.gz
cd keepalived-1.2.12
./configure --prefix=/usr/local/keepalived #configure; you must see the following output, which confirms the configuration is correct, before continuing
Use IPVS Framework : Yes
IPVS sync daemon support : Yes
Use VRRP Framework : Yes
make #compile
make install #install
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/keepalived
mkdir /etc/keepalived
ln -s /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/
chmod +x /etc/rc.d/init.d/keepalived #make the script executable
chkconfig keepalived on #start on boot
service keepalived start #start
service keepalived stop #stop
service keepalived restart #restart
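Once keepalived is running, two standard checks help confirm it is healthy (keepalived logs through syslog on CentOS):
ps -C keepalived #normally shows a parent process plus VRRP and healthcheck child processes
tail -n 20 /var/log/messages #look for lines such as VRRP_Instance(VI_1) Entering MASTER STATE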
V. Configure Keepalived
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /usr/local/keepalived/etc/keepalived/keepalived.conf-bak
vi /usr/local/keepalived/etc/keepalived/keepalived.conf #edit and replace with the following
#########################################################
#configuration for the 192.168.21.129 server
! Configuration File for keepalived
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_script chk_nginx {
script "/etc/keepalived/check_nginx.sh" #Nginx服务监控脚本
interval 2
weight 2
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
track_script {
chk_nginx #track the nginx process state
}
virtual_ipaddress {
192.168.21.253
}
notify_master "/etc/keepalived/clean_arp.sh 192.168.21.253" #更新虚拟服务器(VIP)地址的arp记录到网关
}
vrrp_instance VI_2 {
state BACKUP
interface eth0
virtual_router_id 52
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.21.252
}
notify_master "/etc/keepalived/clean_arp.sh 192.168.21.252" #更新虚拟服务器(VIP)地址的arp记录到网关
}
#########################################################
:wq! #save and exit
#########################################################
#configuration for the 192.168.21.130 server
! Configuration File for keepalived
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_script chk_nginx {
script "/etc/keepalived/check_nginx.sh" #Nginx服务监控脚本
interval 2
weight 2
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
track_script {
chk_nginx #track the nginx process state
}
virtual_ipaddress {
192.168.21.253
}
notify_master "/etc/keepalived/clean_arp.sh 192.168.21.253" #更新虚拟服务器(VIP)地址的arp记录到网关
}
vrrp_instance VI_2 {
state MASTER
interface eth0
virtual_router_id 52
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.21.252
}
notify_master "/etc/keepalived/clean_arp.sh 192.168.21.252" #更新虚拟服务器(VIP)地址的arp记录到网关
}
#########################################################
:wq! #save and exit
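After keepalived has been started on both machines, confirm that each VIP sits where expected and that the two nodes can see each other's VRRP advertisements; the tcpdump filter below is a standard one (adjust the interface name if yours is not eth0):
ip addr show eth0 #192.168.21.129 should list 192.168.21.253 and 192.168.21.130 should list 192.168.21.252
tcpdump -i eth0 -n host 224.0.0.18 #shows the periodic VRRP advertisements for virtual router IDs 51 and 52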
VI. Set up the nginx monitoring script
touch /usr/local/keepalived/check_nginx.sh
ln -s /usr/local/keepalived/check_nginx.sh /etc/keepalived/check_nginx.sh
vi /etc/keepalived/check_nginx.sh #edit and add the following
#########################################################
#!/bin/sh
if [ $(ps -C nginx --no-header | wc -l) -eq 0 ]; then
/etc/rc.d/init.d/nginx start #nginx is not running: try to start it
fi
sleep 2
if [ $(ps -C nginx --no-header | wc -l) -eq 0 ]; then
/etc/rc.d/init.d/keepalived stop #nginx still is not running: stop keepalived so any VIPs held by this node fail over to the other node
fi
#########################################################
:wq! #save and exit
chmod +x /usr/local/keepalived/check_nginx.sh #make the script executable
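A simple way to exercise the script once keepalived is running is to kill nginx and watch the check bring it back; if nginx could not be started, keepalived itself would stop and the VIP would fail over instead:
killall nginx #simulate an nginx crash
sleep 5
ps -C nginx --no-header | wc -l #should be non-zero again because check_nginx.sh restarted nginx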
VII. Set up the script that refreshes the gateway's ARP entry for the VIP
touch /usr/local/keepalived/clean_arp.sh
ln -s /usr/local/keepalived/clean_arp.sh /etc/keepalived/clean_arp.sh
vi /etc/keepalived/clean_arp.sh #edit and add the following
#!/bin/sh
VIP=$1
GATEWAY=192.168.21.2 #gateway address
/sbin/arping -I eth0 -c 5 -s $VIP $GATEWAY &>/dev/null
:wq! #save and exit
chmod +x /usr/local/keepalived/clean_arp.sh #make the script executable
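The script is normally run by keepalived via notify_master with the VIP as its first argument, but it can also be invoked by hand after a manual switchover:
/etc/keepalived/clean_arp.sh 192.168.21.253 #sends 5 ARP packets sourced from the VIP towards the gateway 192.168.21.2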
service nginx restart #restart nginx
service keepalived restart #restart keepalived
Part 2: Verify that Nginx + Keepalived works
I. Point bbs.osyunwei.com to 192.168.21.253 and sns.osyunwei.com to 192.168.21.252 in DNS;
On both Nginx servers (192.168.21.129 and 192.168.21.130), run: ip addr
(see screenshot)
You can see that VIP 192.168.21.253 currently points to 192.168.21.129 and VIP 192.168.21.252 points to 192.168.21.130;
Open bbs.osyunwei.com and sns.osyunwei.com in a browser
(see screenshot)
At this point both bbs and sns are balanced to 192.168.21.127
II. Stop the nginx service on 192.168.21.127
service nginx stop
Open the two sites again (see screenshot):
Now both bbs and sns are balanced to 192.168.21.128 (nginx on 192.168.21.127 was stopped, so failover worked)
III. Stop the Keepalived service on 192.168.21.129
service keepalived stop
Now run ip addr on both Keepalived servers, 192.168.21.129 and 192.168.21.130
(see screenshot)
You can see that both VIPs, 192.168.21.253 and 192.168.21.252, now point to 192.168.21.130;
Open the two sites again (see screenshot):
They are still reachable
IV. Restore the keepalived service on 192.168.21.129, restore the nginx service on 192.168.21.127, and stop the Keepalived service on 192.168.21.130
service keepalived stop
Run ip addr on both Keepalived servers, 192.168.21.129 and 192.168.21.130
(see screenshot)
You can see that both VIPs, 192.168.21.253 and 192.168.21.252, now point to 192.168.21.129;
Open the two sites again (see screenshot):
They are still reachable
At this point the Nginx + Keepalived web load-balancing setup is complete.
Keepalived + Nginx for a Highly Available Web Load Balancer
I. Scenario requirements:
II. A brief introduction to Keepalived
Keepalived is a high-performance server high-availability / hot-standby solution. It can be used to prevent single points of failure, and combined with Nginx it provides high availability for the web front end.
Keepalived is built on the VRRP protocol and uses VRRP to achieve high availability (HA). VRRP (Virtual Router Redundancy Protocol) is a protocol for router redundancy: it bundles two or more routers into one virtual device that presents one or more virtual router IPs to the outside. Within the group, the router that actually holds the external IP (as long as it works normally, or as chosen by election) is the MASTER; the MASTER performs all network functions for the virtual IP, such as answering ARP requests, handling ICMP, and forwarding traffic. The other devices do not hold the virtual IP; they are in BACKUP state and, apart from receiving the MASTER's VRRP advertisements, perform no external network functions. When the MASTER fails, a BACKUP takes over its network functions.
VRRP uses multicast to carry its protocol packets, and per the protocol these packets are sent from a special virtual source MAC address rather than the NIC's own MAC address.
While VRRP is running, only the MASTER periodically sends advertisements, announcing that it is alive and which virtual IP(s) it owns. The BACKUPs only receive VRRP packets and do not send any; if no advertisement arrives from the MASTER within a certain time, each BACKUP declares itself MASTER, starts sending advertisements, and a new MASTER election takes place.
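These advertisements are easy to observe on the wire. As an illustration (the interface name is an example), VRRP packets are IP protocol 112 datagrams sent to the multicast address 224.0.0.18, and the protocol defines a virtual MAC of the form 00:00:5e:00:01:{VRID}:
tcpdump -i eth0 -e -n host 224.0.0.18 #-e also prints the link-layer header; note that keepalived only uses the virtual MAC when use_vmac is configured, otherwise the advertisements carry the NIC's own MAC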
III. Plan
Approach:
1. Set up mutual SSH trust between the two hosts (optional)
2. Add host name resolution
3. Set up time synchronization
4. Make the hosts highly available
5. Make the web service highly available
6. Test
Architecture:
Master1: 172.16.16.16 node2.ja.com software: keepalived + nginx NIC: Vmnet2
Master2: 172.16.16.17 node3.ja.com software: keepalived + nginx NIC: Vmnet2
Host machine: used only as the test client
I. Preparation
1) Edit /etc/hosts on node2 and node3 to add host name resolution, with the following entries:
172.16.16.16 node2.ja.com
172.16.16.17 node3.ja.com
2) Set up mutual SSH trust for key-based, passwordless logins (convenient for later administration such as software installation and file distribution)
node2:
ssh-keygen -t rsa -P ''
ssh-copy-id -i .ssh/id_rsa.pub node3.ja.com
node3:
ssh-keygen -t rsa -P ''
ssh-copy-id -i .ssh/id_rsa.pub node2.ja.com
3) Time synchronization
One-off sync from the command line, takes effect immediately:
ssh node2.ja.com 'ntpdate 172.16.0.1';ntpdate 172.16.0.1
Set up a cron job so the clocks stay in sync:
echo '*/5 * * * * /usr/sbin/ntpdate 172.16.0.1 &>/dev/null;/sbin/hwclock -w' >>/var/spool/cron/root
4) Install keepalived and nginx on both node2 and node3
yum -y install keepalived nginx ipvsadm
II. Write the external scripts
1) Host maintenance check (this goes into keepalived.conf)
vrrp_script chk_maintaince { #chk_maintaince is just the name of the check; it can be anything
script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0" #the check can be a command or the path to a script of your own; here it means: if the file /etc/keepalived/down exists, exit non-zero, signalling that this node should step down to backup
interval 1 #run the check every second
weight -2 #when the check fails, lower the priority by 2
}
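With this check in place, taking a node out of service for maintenance is simply a matter of creating or removing the flag file the script looks for:
touch /etc/keepalived/down #the check now exits 1, the priority drops by 2, and the VIP moves to the peer node
rm -f /etc/keepalived/down #the check succeeds again and, since preemption is on by default, this node takes its VIP back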
2) Keepalived state-transition notification script (placed in the /etc/keepalived/ directory)
Both nodes need this script
[root@node2 keepalived]# cat notify.sh
#!/bin/bash
# Author: liuyuan <
# description: An example of notify script
#
vip=172.16.16.10
contact='root@localhost'
notify() {
mailsubject="`hostname` to be $1: $vip floating"
mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
echo $mailbody | mail -s "$mailsubject" $contact
}
case "$1" in
master)
notify master
/etc/rc.d/init.d/nginx start
exit 0
;;
backup)
notify backup
/etc/rc.d/init.d/nginx stop
exit 0
;;
fault)
notify fault
/etc/rc.d/init.d/nginx stop
exit 0
;;
*)
echo "Usage: $(basename $0) {master|backup|fault}"
exit 1
;;
esac
Make it executable
chmod +x notify.sh
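The script can be tested by hand before keepalived calls it, assuming the mail command and a local MTA are available (it mails root@localhost):
/etc/keepalived/notify.sh master #should start nginx and send a 'to be master' notification
/etc/keepalived/notify.sh backup #should stop nginx and send a 'to be backup' notification
mail #check root's local mailbox for the messages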
3) Nginx health-check and mail-notification script (placed in the /etc/keepalived/ directory)
Both nodes need this script
[root@node2 keepalived]# cat monitor_nginx.sh
#!/bin/bash
#Author: liuyuan
Contact='root@localhost'
Subject="Web server is bad"
Mailbody="Date: `date +"%F %T"` Event: 'nginx is down' Host: `uname -n`"
while true;do
killall -0 nginx &>/dev/null #exits non-zero when no nginx process exists
if [ $? -ne 0 ];then
echo "$Mailbody" | mail -s "$Subject" "$Contact" #quote the variables so the multi-word subject reaches mail as a single argument
/etc/init.d/nginx start &> /dev/null
fi
sleep 5
done
Make it executable
chmod +x monitor_nginx.sh
III. Configure keepalived
The full keepalived configuration on node2 is as follows:
[root@node2 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_script chk_maintaince {
script "[[ -e /etc/keepalived/down ]] && exit 1 || exit 0"
interval 1
weight -2
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 2222
}
virtual_ipaddress {
172.16.16.10
}
track_script {
chk_maintaince
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
vrrp_instance VI_2 {
state BACKUP
interface eth0
virtual_router_id 52
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass 3333
}
virtual_ipaddress {
172.16.16.11
}
track_script {
chk_maintaince
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
real_server 172.16.16.16 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 172.16.16.17 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
}
The keepalived configuration on node3 is mostly the same; only the following changes are needed:
In vrrp_instance VI_1:
state BACKUP
priority 99
In vrrp_instance VI_2:
state MASTER
priority 100
Note: peers in the same VRRP instance must use the same authentication password, and each instance must have its own VIP address;
after defining a check script you must also reference it from the instance with track_script, otherwise the check never takes effect;
the priority and weight values of the master and backup nodes must be chosen so that a failed check actually drops the master below the backup, otherwise the VIP will not move;
Run the nginx watchdog script in the background
[root@node2 keepalived]# nohup sh monitor_nginx.sh &
[1] 32071
[root@node3 keepalived]# nohup sh monitor_nginx.sh &
[1] 30106
Check that the watchdog is running in the background
[root@node2 keepalived]# ps -elf|grep "sh monitor_nginx.sh"|grep -v grep
0 S root 32071 30943 0 80 0 - 26523 wait 13:29 pts/4 00:00:00 sh monitor_nginx.sh
[root@node3 keepalived]# ps -elf|grep "sh monitor_nginx.sh"|grep -v grep
0 S root 30106 27606 0 80 0 - 26523 wait 02:13 pts/0 00:00:00 sh monitor_nginx.sh
Check whether nginx is listening on port 80
[root@node2 keepalived]# lsof -i:80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 31880 root 6u IPv4 160257 0t0 TCP *:http (LISTEN)
nginx 31882 nginx 6u IPv4 160257 0t0 TCP *:http (LISTEN)
[root@node3 keepalived]# lsof -i:80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 29815 root 6u IPv4 310255 0t0 TCP *:http (LISTEN)
nginx 29817 nginx 6u IPv4 310255 0t0 TCP *:http (LISTEN)
IV. Test host and service high availability
Before testing dual-master failover, make sure both web servers can serve content normally
Enter the two VIP addresses in a browser and check that each shows its corresponding web page
Master node 1 VIP (172.16.16.10): see screenshot "dual-master node2"
Master node 2 VIP (172.16.16.11): see screenshot "dual-master node3"
Test host (node2, node3) high availability
Approach:
(1) Stop the keepalived service on node2, or simply suspend the node2 VM, to simulate a keepalived or server failure
Verify that node2's VIP has successfully moved to node3
(2) Access the web resources in a browser via both VIPs (172.16.16.10 and 172.16.16.11) and verify that the pages are now served by node3
(3) When keepalived on node2 starts again, it preempts and takes its original VIP back, and the web page is again served by node2
Stop the keepalived service on node2
[root@node2 keepalived]# service keepalived stop
Check whether the VIP has left node2
[root@node2 keepalived]# ip addr show|grep 'eth0'
2: eth0:
inet 172.16.16.16/16 brd 172.16.255.255 scope global eth0
Check whether node3 has taken over the VIP from node2
[root@node3 keepalived]# ip addr show|grep 'eth0'
2: eth0:
inet 172.16.16.17/16 brd 172.16.255.255 scope global eth0
inet 172.16.16.11/32 scope global eth0
inet 172.16.16.10/32 scope global eth0
As shown above, the VIP has been transferred successfully.
When keepalived on node2 is started again, it takes its original VIP back and serves the web page itself
[root@node2 keepalived]# service keepalived start
[root@node2 keepalived]# ip addr show|grep 'eth0'
2: eth0:
inet 172.16.16.16/16 brd 172.16.255.255 scope global eth0
inet 172.16.16.10/32 scope global eth0
Access the VIP again and check that the page is the one served by node2 itself
See screenshot 12
Run the same steps on node3: node3's VIP will move to node2, and when you access the web you will see node2's page. That exercise is left to you.
So far, by simulating a keepalived failure, host-level high availability has been demonstrated
Next we simulate high availability of the web service provided by nginx
Approach:
Since the script written earlier runs as a watchdog loop, whenever the nginx web service on a server stops, the watchdog immediately tries to start nginx again.
This covers nginx dying unexpectedly; it only fails if the watchdog script itself has been stopped
[root@node2 ~]# jobs -l
[1]+ 32071 Running nohup sh monitor_nginx.sh &
[root@node3 ~]# jobs -l
[1]+ 30106 Running nohup sh monitor_nginx.sh &
Stop the nginx service on node2
[root@node2 ~]# service nginx stop
Stopping nginx: [ OK ]
[root@node2 ~]# service nginx stop
[root@node2 ~]# lsof -i:80
[root@node2 ~]# lsof -i:80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 7598 root 6u IPv4 173882 0t0 TCP *:http (LISTEN)
nginx 7600 nginx 6u IPv4 173882 0t0 TCP *:http (LISTEN)
To catch the moment the watchdog tries to restart nginx, run the port check immediately after stopping nginx to see whether the web service comes back online
At this point, high availability of both the nginx web service and the hosts has been achieved.
###################################################################################################