
Category: Systems Operations

2017-03-21 22:15:25

1. state.highstate reads the top.sls of every environment (including base) and applies the sls files defined inside it; sls files not recorded in top.sls are not applied.


2. state.sls reads the base environment by default, and it does not read top.sls. You can point state.sls at any sls file, as long as that file exists in the base environment.


3. state.sls can also be told which environment to read: state.sls xxxx saltenv='prod' (the state name is given without the .sls suffix; older releases use env= instead of saltenv=). That xxxx.sls need not be recorded in top.sls.
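For example (apache here is the module defined just below; whether the keyword is saltenv= or env= depends on your Salt release):

salt '*' state.highstate                  # reads every environment's top.sls
salt '*' state.sls apache                 # applies apache.sls from base, ignoring top.sls
salt '*' state.sls apache saltenv='prod'  # applies apache.sls from the prod environment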




Installing httpd via yum with SaltStack

vim /etc/salt/master


# Uncomment the following three lines (416-418 are their positions within the file)

416 file_roots:

417   base:

418     - /srv/salt


Save and exit.


mkdir /srv/salt


Restart the service:


/etc/init.d/salt-master restart


cd /srv/salt


vim apache.sls

apache-install:

  pkg.installed:

    - names:

      - httpd

      - httpd-devel


apache-service:

  service.running:

    - name: httpd

    - enable: True

    - reload: True


Save and exit.


Run:

    salt '*' state.sls apache


Or apply it through the highstate:

Reading starts from the entry file; top.sls must sit in the base environment.

This is the recommended approach when, for example, every machine needs apache or nginx:

vim top.sls

base:

  '*':

    - apache


Save and exit.


salt '*' state.highstate
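Either form can be dry-run first — test=True reports what would change without changing anything:

salt '*' state.highstate test=True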




1. Installing SaltStack

Preparation:

Prepare two machines and set their hostnames:

172.7.15.106  server.test.com   

172.7.15.111    client.test.com


On the server:

yum install -y epel-release

yum install -y salt-master  salt-minion


On the client:

yum install -y epel-release

yum install -y salt-minion


Start the services

On the server:

/etc/init.d/salt-master start

/etc/init.d/salt-minion start


On the client:

vim  /etc/salt/minion   // point it at the server's IP

Around line 16, modify or add:

master: 172.7.15.106

id: client

Note: the id need not be defined. Without it, the master displays the client under its hostname; once defined, the client is displayed under the id. The id does not have to match the hostname, but when you define one you should also add a matching record to /etc/hosts.
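For example, with the addresses used above, the record can be added like this:

echo "172.7.15.111 client" >> /etc/hosts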


/etc/init.d/salt-minion start


2. Key acceptance

On the server:

salt-key -a  client.test.com 


At this point a minion_master.pub file appears under /etc/salt/pki/minion on the client.


The clients whose keys have been accepted (signed) can be listed with the salt-key command.


salt-key -A signs (accepts) all pending host keys; salt-key -d deletes the key of a specific host.
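A sketch of the usual key workflow (salt-key with no arguments lists all keys):

salt-key                      # list accepted, pending and rejected keys
salt-key -A                   # accept every pending key
salt-key -d client.test.com   # delete this host's key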


3. Remote execution

Example 1: salt '*' test.ping    — here * means all accepted clients; a single one can also be targeted.

Example 2: salt '*' cmd.run 'df -h'


Note 1: * must be a client already accepted on the master (check with salt-key), normally the id value we set. Targeting supports globbing, lists, and regular expressions. With two clients web10 and web11 you can write salt 'web*', salt 'web1[01]', salt -L 'web10,web11', or salt -E 'web(10|11)'. A list means multiple machines separated by commas and requires -L; a regular expression requires -E. Grains are also supported, with -G, covered below.


3. Configuration management

On the server:

vim  /etc/salt/master   // search for file_roots

Uncomment the following:

file_roots:

  base:

    - /srv/salt


mkdir  /srv/salt

cd /srv/salt

vim /srv/salt/top.sls  // add the following:

base:

  '*':

    - apache

This means: apply the apache module on all clients.


vim  /srv/salt/apache.sls  // add the following — this is the apache module itself:

apache-service:

  pkg.installed:

    - names:

      - httpd

      - httpd-devel

  service.running:

    - name: httpd

    - enable: True


Note: this module calls the pkg.installed function, with the packages to install named beneath it. service.running is likewise a function that keeps the named service running; enable means start at boot.

Run: salt 'client.test.com' state.highstate


4. grains

Grains are pieces of information collected when the minion starts, such as the OS type and NIC IPs. Use these commands:

salt 'client.test.com' grains.ls      # list the names of all grains

salt 'client.test.com' grains.items   # list all grains together with their values

Grains are not dynamic — they do not change on the fly; they are collected only when the minion starts.

Grains can be used for configuration management.
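A single grain can also be fetched with grains.get (os is a built-in grain):

salt 'client.test.com' grains.get os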


Custom grains

vim   /etc/salt/minion

Add or change:

grains:

  role:

    - nginx

  env:

    - test

Or:

vim /etc/salt/grains

Add:

role: nginx

env: test


Restart the minion service.

Fetch the grains:

salt '*' grains.item role env

salt 'client.test.com'  grains.get  role


Grains are very handy for remote execution: you can operate on hosts by grains values. For instance, set the role grain of all web servers to nginx, and you can then batch-operate on the nginx servers:

salt -G role:nginx cmd.run 'hostname'

salt -G os:CentOS cmd.run 'hostname'


5. pillar

Pillar is unlike grains: it is defined on the master, and defined per minion. Important data such as passwords can be kept in pillar, and variables can be defined there too.


View a given minion's pillar values (empty on the version I tested):

salt 'client.test.com' pillar.items


Configuring custom pillar

vim  /etc/salt/master

Find the following configuration:

pillar_roots:

  base:

    - /srv/pillar

and remove the leading # signs.

mkdir /srv/pillar

vi /srv/pillar/test.sls  // contents:

conf: /etc/123.conf


vi /srv/pillar/top.sls  // contents:

base:

  'client.test.com': 

    - test


Restart the master:

/etc/init.d/salt-master restart


After changing the pillar files, refresh pillar so the minions pick up the new values:

salt '*' saltutil.refresh_pillar


Verify:

salt '*' pillar.item conf


Pillar can likewise be used as a matching target for salt, e.g.:

salt  -I 'conf:/etc/123.conf'  test.ping 
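Pillar values can also be consumed inside a state file through jinja. A minimal sketch using the conf key defined above (the conf-file ID and the salt://files/123.conf source path are made up for illustration):

conf-file:
  file.managed:
    - name: {{ pillar['conf'] }}
    - source: salt://files/123.conf
    - user: root
    - group: root
    - mode: 644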


6. Digging deeper into salt configuration

Environments: base, dev (development), test (testing), prod (production)

vim  /etc/salt/master

file_roots:

  base:

    - /srv/salt/

  dev:

    - /srv/salt/dev

  test:

    - /srv/salt/test

  prod:

    - /srv/salt/prod


mkdir  /srv/salt/{dev,test,prod}


Case 1: initialization configuration

vim /srv/salt/top.sls  // contents:

base:

  '*':

    - init.dns

When a name is written with a dot, the part before the dot is a directory name and the part after it is the sls file name — see the example below.
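For example, under /srv/salt (the base root):

init/dns.sls   ->  referenced in top.sls as init.dns
init/init.sls  ->  referenced as init.init (a directory's init.sls can also be referenced by the directory name alone)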

mkdir  init 

cd init

vim dns.sls // contents:

/etc/resolv.conf:

  file.managed:

    - source: salt://init/files/resolv.conf  // salt:// points at the root of the current environment (base), i.e. /srv/salt/

    - user: root

    - group: root

    - mode: 644

mkdir /srv/salt/init/files   // put resolv.conf here as the template file


Run: salt '*' state.highstate


Case 2: automated LAMP deployment


Three modules:

pkg — installs packages

file — manages config files

service — manages services


Approach: the packages LAMP needs are httpd, php, mysql, mysql-server, php-mysql, php-pdo.

Premise: we put this LAMP install project under the dev environment.


cd  /srv/salt/dev/

mkdir /srv/salt/dev/files/


vim lamp.sls  // add the following:

lamp-pkg-install:

  pkg.installed:

    - names:

      - php

      - mysql

      - php-cli

      - php-common

      - php-mysql

      - mysql-server

      - php-pdo


apache-service:

  pkg.installed:

    - name: httpd

  file.managed:

    - name: /etc/httpd/conf/httpd.conf

    - source: salt://files/httpd.conf

    - user: root

    - group: root

    - mode: 644

    - require:

      - pkg: apache-service

  service.running:

    - name: httpd

    - enable: True

    - reload: True

    - watch:

      - file: apache-service


mysql-service:

  file.managed:

    - name: /etc/my.cnf

    - source: salt://files/my.cnf

    - user: root

    - group: root

    - mode: 644

  service.running:

    - name: mysqld

    - enable: True


vim  /srv/salt/top.sls   // add the following:

dev:

  'client.test.com':

    - lamp


Run: salt '*' state.highstate


Compiling and installing nginx with salt: http://blog.cunss.com/?p=272


7. Directory management


file_dir:

  file.recurse:   // for a single file, use file.managed

    - name: /tmp/123

    - source: salt://test/123 

    - user: root

    - group: root

    - file_mode: 644

    - dir_mode: 755

    - mkdir: True

    - include_empty: True


8. Remote command management

cat /srv/salt/ex.sls

cmd_test:

  cmd.run:

    - names:

      - touch /tmp/111.txt

      - mkdir /tmp/1233

    - user: root


cat /srv/salt/top.sls

base:

  '*':

    - ex


Or write all the commands into a single file on the master and execute them in sequence:


cat /srv/salt/test/exe.sls

cmd_test:

  cmd.script:

    - source: salt://test/1.sh

    - user: root


cat /srv/salt/test/1.sh

#!/bin/bash

touch /tmp/111.txt

if [ -d /tmp/1233 ]

then

rm -rf /tmp/1233

fi


cat /srv/salt/top.sls

base:

  '*':

    - test.exe


Commands can be made conditional with onlyif or unless; the two are exact opposites:

cmd_test:

  cmd.run:

    - unless: test -d /tmp/1233

    - name: mkdir /tmp/1233

    - user: root


Or:

cmd_test:

  cmd.run:

    - name: touch /tmp/111.txt

    - onlyif: test -f /tmp/111.txt


9. Scheduled tasks

cron_test:

  cron.present:

    - name: /bin/touch /tmp/111.txt

    - user: root

    - minute: '*'

    - hour: 20

    - daymonth: '*'

    - month: '*'

    - dayweek: '*'


Note: * must be wrapped in single quotes.

We can also manage cron with the file.managed module, since the system's cron jobs exist as plain config files — a sketch of that follows below.
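A minimal sketch of that approach (the /etc/cron.d file name and the salt:// source path are made-up examples):

cron-file:
  file.managed:
    - name: /etc/cron.d/touch_test
    - source: salt://files/touch_test.cron
    - user: root
    - group: root
    - mode: 644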

To delete that cron entry:

cron_test:

  cron.absent:

    - name: /bin/touch /tmp/111.txt


The two cannot coexist: to delete a cron entry, the earlier cron.present state must be removed first.


10. Assorted commands you may need

cp.get_file — copy a file on the master to the clients

salt  '*' cp.get_file salt://test/1.txt   /tmp/123.txt

cp.get_dir — copy a directory

salt '*' cp.get_dir salt://test/conf  /tmp/    // the conf directory is created on the client automatically, so do not append conf; writing /tmp/conf/ would create another conf under /tmp/conf/

salt-run manage.up — show the minions that are alive

salt '*' cmd.script salt://test/1.sh — run a shell script stored on the master from the command line



5. Common modules

(1) cp module (remote copying of files and directories, and downloading URL content)

## Copy a directory under the master's file_roots to the managed hosts

# salt '*' cp.get_dir salt://hellotest /data


## Copy a file under the master's file_roots to the managed hosts

# salt '*' cp.get_file salt://hellotest/rocketzhang /root/rocketzhang


## Download the content at a URL to a given path on the managed hosts (<URL> is a placeholder; the source URL was missing in the original)

# salt '*' cp.get_url <URL> /root/files.tgz


(2) cmd module (remote command-line execution)

# salt '*' cmd.run 'netstat -ntlp'


(3) cron module (crontab operations on the managed hosts)

## Add a crontab entry for the root user on the targeted hosts

# salt '*' cron.set_job root '*/5' '*' '*' '*' '*' 'date >/dev/null 2>&1'

# salt '*' cron.raw_cron root


## Remove the root user's crontab entry on the targeted hosts

# salt '*' cron.rm_job root 'date >/dev/null 2>&1'

# salt '*' cron.raw_cron root


(4) dnsutil module (generic DNS operations on the managed hosts)

## Append a hosts entry on the managed hosts

# salt '*' dnsutil.hosts_append /etc/hosts 127.0.0.1 rocketzhang.qq.com


(5) file module (common file operations on the managed hosts: read/write, permissions, search, checksums, and more)

# salt '*' file.get_sum /etc/resolv.conf md5

# salt '*' file.stats /etc/resolv.conf

More functions in the docs ^_^


(6) network module (returns network information from the managed hosts)

# salt '*' network.ip_addrs

# salt '*' network.interfaces

More functions in the docs ^_^


(7) pkg module (package management on the managed hosts, e.g. yum, apt-get)

# salt '*' pkg.install nmap

# salt '*' pkg.file_list nmap


(8) service module (service management on the managed hosts)

# salt '*' service.enable crond

# salt '*' service.disable crond

# salt '*' service.status crond

# salt '*' service.stop crond

# salt '*' service.start crond

# salt '*' service.restart crond

# salt '*' service.reload crond



Batch-deploying the Tomcat service with SaltStack:


[root@zabbix-server state]# salt -E '(jenkins|gitlab).saltstack.me' test.ping

jenkins.saltstack.me:

    True

gitlab.saltstack.me:

    True


[root@zabbix-server state]# cat /etc/salt/master.d/file_roots.conf 

file_roots:

  base:

    - /etc/salt/state

    ......


[root@zabbix-server state]# tree  /etc/salt/state/

/etc/salt/state/

├── jdk

│   ├── files

│   │   └── jdk-8u112-linux-x64.tar.gz

│   └── install.sls

├── tomcat

│   ├── files

│   │   └── apache-tomcat-7.0.64-1.tar.gz

│   └── install.sls

└── top.sls


[root@zabbix-server jdk]# cat  install.sls 

jdk-install:

  file.managed:

    - name: /usr/local/src/jdk-8u112-linux-x64.tar.gz

    - source: salt://jdk/files/jdk-8u112-linux-x64.tar.gz

    - user: root

    - group: root

    - mode: 755

  cmd.run:

    - name: cd /usr/local/src && tar xf jdk-8u112-linux-x64.tar.gz && mv jdk1.8.0_112 /usr/local/jdk && chown -R root:root /usr/local/jdk

    - unless: test -d /usr/local/jdk

    - require:

      - file: jdk-install


jdk-config:

  file.append:

    - name: /etc/profile

    - text:

      - export JAVA_HOME=/usr/local/jdk

      - export CLASSPATH=.$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$JAVA_HOME/lib/tools.jar

      - export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH


[root@zabbix-server tomcat]# cat install.sls 

include:

  - jdk.install


tomcat-install:

  file.managed:

    - name: /usr/local/src/apache-tomcat-7.0.64-1.tar.gz

    - source: salt://tomcat/files/apache-tomcat-7.0.64-1.tar.gz

    - user: root

    - group: root

    - mode: 755

  cmd.run:

    - name: cd /usr/local/src &&  tar xf apache-tomcat-7.0.64-1.tar.gz && mv apache-tomcat-7.0.64-1 /usr/local/tomcat && chown -R root:root /usr/local/tomcat

    - unless: test -d /usr/local/tomcat

    - require:

      - file: tomcat-install


tomcat-config:

  file.append:

    - name: /etc/profile

    - text: 

      - export TOMCAT_HOME=/usr/local/tomcat


[root@zabbix-server state]# cat  top.sls 

base:

  '(jenkins|gitlab).saltstack.me':

    - match: pcre

    - tomcat.install


Run the deployment: 

[root@zabbix-server state]# salt -E '(jenkins|gitlab).saltstack.me' state.highstate 



[root@zabbix-server state]# salt -E '(jenkins|gitlab).saltstack.me' saltutil.running



Installing Tomcat 8 with SaltStack

tomcat8 sls file: cat /srv/salt/init/tomcat8.sls

tomcat-install:

  file.managed:

    - name: /tmp/apache-tomcat-8.5.4.tar.gz

    - source: salt://init/files/apache-tomcat-8.5.4.tar.gz

    - user: root

    - group: root

    - mode: 755

  cmd.run:

    - name: tar -zxf /tmp/apache-tomcat-8.5.4.tar.gz && mv apache-tomcat-8.5.4 /usr/local/tomcat && chown -R root:root /usr/local/tomcat

    - unless: test -d /usr/local/tomcat

    - require:

      - file: tomcat-install

tomcat-config:

  file.append:

    - name: /etc/profile

    - text:

      - export TOMCAT_HOME=/usr/local/tomcat

Install jdk8: salt-ssh '*' state.sls init.jdk8

tomcat8 sls file: cat /srv/salt/init/jdk8.sls

tomcat-install:

  file.managed:

    - name: /tmp/apache-tomcat-8.5.4.tar.gz

    - source: salt://init/files/apache-tomcat-8.5.4.tar.gz

    - user: root

    - group: root

    - mode: 755

  cmd.run:

    - name: useradd -u 800 tomcat && tar -zxf /tmp/apache-tomcat-8.5.4.tar.gz && mv apache-tomcat-8.5.4 /usr/local/tomcat && chown -R tomcat:tomcat /usr/local/tomcat

    - unless: test -d /usr/local/tomcat

    - require:

      - file: tomcat-install

tomcat-config:

  file.append:

    - name: /etc/profile

    - text:

      - export TOMCAT_HOME=/usr/local/tomcat

start-config:

  cmd.run:

    - name: source /etc/profile && su - tomcat -c "/usr/local/tomcat/bin/startup.sh"

  file.append:

    - name: /etc/rc.local

    - text:

      - su - tomcat -c "/usr/local/tomcat/bin/startup.sh"

Install tomcat8: salt-ssh '*' state.sls init.tomcat8

Configure the Tomcat 8 manager user (Tomcat 7 needs only step 1)

1. Edit /usr/local/tomcat/conf/tomcat-users.xml and add the manager role and user entries (the XML snippet did not survive in the original post).

2. Create /usr/local/tomcat/conf/Catalina/localhost/manager.xml with a Context entry whose docBase points at ${catalina.home}/webapps/manager (only this fragment of the XML survived).

Tomcat security hardening

1. Change the shutdown (telnet) management port 8005

2. Protect the AJP connector port 8009

3. Disable the Tomcat manager

4. Always start Tomcat as a non-root account

Deploying jenkins.war (2.32)

1. Upload jenkins.war to /usr/local/tomcat/webapps

2. Restart Tomcat: /usr/local/tomcat/bin/shutdown.sh && /usr/local/tomcat/bin/startup.sh

3. Open Jenkins in a browser; initial password: cat /root/.jenkins/secrets/initialAdminPassword

4. Create an administrator account jenkins with password jenkins

5. Change the admin user's password to jenkins

Common commands

List Java processes: jps -lvm

Example: handling high JVM CPU usage

1. Get the process PID with jps -lvm

1250 org.apache.catalina.startup.Bootstrap start -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djdk.tls.ephemeralDHKeySize=2048 -Dcatalina.base=/usr/local/tomcat -Dcatalina.home=/usr/local/tomcat -Djava.io.tmpdir=/usr/local/tomcat/temp

2534 sun.tools.jps.Jps -lvm -Denv.class.path=.:/usr/local/jdk/lib:/usr/local/jdk/jre/lib:/usr/local/jdk/lib/tools.jar -Dapplication.home=/usr/local/jdk -Xms8m

2. Dump the busy process's thread stacks: jstack 1250 > 17167.txt

3. See which thread is burning CPU: top -H -p 1250

4. Convert the thread PID to hex: echo "obase=16;1252" | bc

5. Look the hex PID up in 17167.txt to find the matching thread (the hex letters there are lowercase)
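Putting steps 2-5 together (1250 and 1252 are the sample PIDs from above; printf prints lowercase hex directly, while bc's obase=16 prints uppercase):

jstack 1250 > 17167.txt
top -H -p 1250
printf '%x\n' 1252               # -> 4e4
grep -i 'nid=0x4e4' 17167.txt    # thread stacks carry an nid=0x<hex> field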

Monitoring the JVM

1. Enable remote JMX connections on the JVM:

CATALINA_OPTS="$CATALINA_OPTS

-Dcom.sun.management.jmxremote

-Dcom.sun.management.jmxremote.port=12345

-Dcom.sun.management.jmxremote.authenticate=false

-Dcom.sun.management.jmxremote.ssl=false

-Djava.rmi.server.hostname=192.168.8.21"   (set this to the host's own IP address)

2. Connect remotely with /usr/local/jdk/bin/jconsole to inspect performance

3. Connect remotely with /usr/local/jdk/bin/jvisualvm to inspect performance



SaltStack in practice: haproxy + keepalived

172.16.10.199 fonsview     — the minion

172.16.10.128 controller   — the master


[root@controller cluster]# vim /etc/salt/master

file_roots:

  base:

    - /srv/salt/base

  prod:

    - /srv/salt/prod


pillar_roots:

  base:

    - /srv/pillar/base

  prod:

    - /srv/pillar/prod


[root@controller cluster]# cd /srv/salt/

[root@controller salt]# ll

total 8

drwxr-xr-x 3 root root 4096 Mar  5 14:41 base

drwxr-xr-x 4 root root 4096 Mar  5 10:43 prod


[root@controller salt]# cat base/top.sls 

base:

  '*':

    - init.init


prod:

  '*':

#    - cluster.haproxy-outside

    - cluster.haproxy-outside-keepalived


[root@controller salt]# tree base/   # these are all initialization definitions

base/

├── init

│   ├── audit.sls

│   ├── dns.sls

│   ├── epel.sls

│   ├── files

│   │   ├── resolv.conf

│   │   └── zabbix_agentd.conf

│   ├── history.sls

│   ├── init.sls

│   ├── sysctl.sls

│   └── zabbix-agent.sls

└── top.sls


2 directories, 10 files


[root@controller salt]# cat base/init/init.sls 

include:

  - init.dns

  - init.history

  - init.audit

  - init.sysctl

#  - init.epel

  - init.zabbix-agent


[root@controller salt]# cd prod/

[root@controller prod]# ll

total 8

drwxr-xr-x 3 root root 4096 Mar  5 12:05 cluster

drwxr-xr-x 8 root root 4096 Mar  5 10:43 modules

[root@controller prod]# tree

.

├── cluster

│   ├── files

│   │   ├── haproxy-outside.cfg

│   │   └── haproxy-outside-keepalived.conf

│   ├── haproxy-outside-keepalived.sls

│   └── haproxy-outside.sls

└── modules

    ├── haproxy

    │   ├── files

    │   │   ├── haproxy-1.6.3.tar.gz

    │   │   └── haproxy.init

    │   └── install.sls

    ├── keepalived

    │   ├── files

    │   │   ├── keepalived-1.2.17.tar.gz

    │   │   ├── keepalived.init

    │   │   └── keepalived.sysconfig

    │   └── install.sls

    ├── memecached

    ├── nginx

    ├── php

    └── pkg

        └── make.sls


11 directories, 12 files

[root@controller prod]# cat modules/pkg/make.sls 

make-pkg:

  pkg.installed:

    - pkgs:

      - make

      - gcc

      - gcc-c++

      - autoconf

      - openssl

      - openssl-devel

      - pcre

      - pcre-devel



[root@controller prod]# cat modules/haproxy/install.sls 

include:

  - modules.pkg.make


haproxy-install:

  file.managed:

    - name: /usr/local/src/haproxy-1.6.3.tar.gz

    - source: salt://modules/haproxy/files/haproxy-1.6.3.tar.gz

    - mode: 755

    - user: root

    - group: root

  cmd.run:

    - name: cd /usr/local/src && tar xf haproxy-1.6.3.tar.gz && cd haproxy-1.6.3 && make TARGET=linux26 PREFIX=/usr/local/haproxy && make install PREFIX=/usr/local/haproxy 

    - unless: test -d /usr/local/haproxy

    - require:

      - pkg: make-pkg

      - file: haproxy-install


haproxy-init:

  file.managed:

    - name: /etc/init.d/haproxy

    - source: salt://modules/haproxy/files/haproxy.init

    - mode: 755

    - user: root

    - group: root

    - require_in:

      - file: haproxy-install

  cmd.run:

    - name: chkconfig --add haproxy

    - unless: chkconfig --list | grep haproxy


net.ipv4.ip_nonlocal_bind:

  sysctl.present:

    - value: 1


/etc/haproxy:

  file.directory:

    - user: root

    - group: root

    - mode: 755



Defining the keepalived installation


[root@controller prod]# cat modules/keepalived/install.sls 

{% set keepalived_tar = 'keepalived-1.2.17.tar.gz' %}

keepalived-install:

  file.managed:

    - name: /usr/local/src/{{ keepalived_tar }}

    - source: salt://modules/keepalived/files/{{ keepalived_tar }}

    - mode: 755

    - user: root

    - group: root

  cmd.run:

    - name: cd /usr/local/src && tar zxf keepalived-1.2.17.tar.gz && cd keepalived-1.2.17 && ./configure --prefix=/usr/local/keepalived --disable-fwmark && make && make install

    - unless: test -d /usr/local/keepalived

    - require:

      - file: keepalived-install


/etc/sysconfig/keepalived:

  file.managed:

    - source: salt://modules/keepalived/files/keepalived.sysconfig

    - mode: 644

    - user: root

    - group: root


/etc/init.d/keepalived:

  file.managed:

    - source: salt://modules/keepalived/files/keepalived.init

    - mode: 755

    - user: root

    - group: root


keepalived-init:

  cmd.run:

    - name: chkconfig --add keepalived

    - unless: chkconfig --list | grep keepalived

    - require:

      - file: /etc/init.d/keepalived


/etc/keepalived:

  file.directory:

    - user: root

    - group: root



Pulling in the configuration file

[root@controller prod]# cat cluster/haproxy-outside-keepalived.sls 

include:

  - modules.keepalived.install

keepalived-server:

  file.managed:

    - name: /etc/keepalived/keepalived.conf

    - source: salt://cluster/files/haproxy-outside-keepalived.conf

    - mode: 644

    - user: root

    - group: root

    - template: jinja

    {% if grains['fqdn'] == 'controller' %}

    - ROUTEID: haproxy_ha

    - STATEID: MASTER

    - PRIORITYID: 150

    {% elif grains['fqdn'] == 'fonsview' %}

    - ROUTEID: haproxy_ha

    - STATEID: BACKUP

    - PRIORITYID: 100

    {% endif %}

  service.running:

    - name: keepalived

    - enable: True

    - watch:

      - file: keepalived-server


Apply the highstate


[root@controller cluster]# salt '*' state.highstate  


Verify the results

[root@controller prod]# salt '*' cmd.run 'ps -ef|grep haproxy'

fonsview:

    nobody     7097      1  0 00:16 ?        00:00:00 /usr/local/haproxy/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid

    root       8462   8461  0 01:10 ?        00:00:00 /bin/sh -c ps -ef|grep haproxy

    root       8464   8462  0 01:10 ?        00:00:00 grep haproxy

controller:

    nobody     3005      1  0 14:12 ?        00:00:01 /usr/local/haproxy/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid

    root       7316 124173 44 15:07 pts/1    00:00:00 /usr/bin/python /usr/bin/salt * cmd.run ps -ef|grep haproxy

    root       7334   7333  0 15:07 ?        00:00:00 /bin/sh -c ps -ef|grep haproxy

    root       7336   7334  0 15:07 ?        00:00:00 grep haproxy

[root@controller prod]# 

[root@controller prod]# salt '*' cmd.run 'ps -ef|grep keepali'

controller:

    root       7339 124173  0 15:07 pts/1    00:00:00 /usr/bin/python /usr/bin/salt * cmd.run ps -ef|grep keepali

    root       7357   7356  0 15:07 ?        00:00:00 /bin/sh -c ps -ef|grep keepali

    root       7359   7357  0 15:07 ?        00:00:00 grep keepali

fonsview:

    root       7560      1  0 00:46 ?        00:00:00 /usr/local/keepalived/sbin/keepalived -D

    root       7562   7560  0 00:46 ?        00:00:00 /usr/local/keepalived/sbin/keepalived -D

    root       7563   7560  0 00:46 ?        00:00:00 /usr/local/keepalived/sbin/keepalived -D

    root       8470   8469  0 01:10 ?        00:00:00 /bin/sh -c ps -ef|grep keepali

    root       8472   8470  0 01:10 ?        00:00:00 /bin/sh -c ps -ef|grep keepali


Managing hosts in bulk with salt-ssh

Install salt-ssh: yum install -y salt-master salt-ssh


Master config file: cat /etc/salt/master

file_roots:

  base:

    - /srv/salt/

  dev:

    - /srv/salt/dev/services

    - /srv/salt/dev/states

  prod:

    - /srv/salt/prod/services

    - /srv/salt/prod/states

top.sls file: cat /srv/salt/top.sls

base:

  'roles:nginx':

    - match: grain

    - init.pkg

    - init.limit

limit file: cat /srv/salt/init/limit.sls

limit-conf-config:

  file.managed:

    - name: /tmp/limits.conf

    - source: salt://init/files/limits.conf

    - user: root

    - group: root

    - mode: 644

df-script-config:

  file.managed:

    - name: /tmp/df.sh

    - source: salt://init/files/df.sh

    - user: root

    - group: root

    - mode: 644

files directory: ls /srv/salt/init/files

df.sh  limits.conf

df.sh file: cat /srv/salt/init/files/df.sh

#!/bin/bash

hostname

roster file: cat /etc/salt/roster

test1.discuz.com:

  host: test1.discuz.com

  user: root

  passwd: redhat

test2.discuz.com:

  host: test2.discuz.com

  user: root

  passwd: redhat

tomcat1.discuz.com:

  host: tomcat1.discuz.com

  user: root

  passwd: centos

tomcat2.discuz.com:

  host: tomcat2.discuz.com

  user: root

  passwd: centos


Push the files to all hosts: salt-ssh '*' state.sls init.limit

Run the script on all hosts: salt-ssh '*' cmd.run 'bash /tmp/df.sh'

Generating the roster with a script

roster.sh file: cat /root/roster.sh

#!/bin/bash

>/etc/salt/roster

IFS=' '

cat /root/hosts | while read line

do

   arr=($line)

   echo ${arr[0]}":">>/etc/salt/roster

   echo "  host: "${arr[0]}>>/etc/salt/roster

   echo "  user: "${arr[1]}>>/etc/salt/roster

   echo "  passwd: "${arr[2]}>>/etc/salt/roster

done

hosts file: cat /root/hosts

test1.discuz.com root redhat

test2.discuz.com root redhat

tomcat1.discuz.com root centos

tomcat2.discuz.com root centos
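With the roster generated, connectivity can be checked over salt-ssh (test.ping works with salt-ssh too):

bash /root/roster.sh
salt-ssh '*' test.ping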


Installing zabbix_agent with SaltStack


1. Create the sls file

install-rpms:

  cmd.run:

    - name: yum install -y autoconf automake imake libxml2-devel expat-devel cmake gcc gcc-c++ libaio libaio-devel bzr bison libtool ncurses5-devel net-snmp\* java-1.7.0-openjdk.x86_64 java-1.7.0-openjdk-devel.x86_64 libxml2 libxml2-devel bzip2 libpng-devel freetype-devel bzip2-devel curl* curl-devel libjpeg\* openjpeg\*

install-zabbix_agent:

  file.managed:

    - name: /tmp/zabbix-3.0.3.tar.gz

    - source: salt://init/files/zabbix-3.0.3.tar.gz

  cmd.run:

    - name: (id zabbix || useradd -u 600 zabbix) && cd /tmp && tar zxf zabbix-3.0.3.tar.gz && cd zabbix-3.0.3 && ./configure --prefix=/usr/local/zabbix --enable-agent --enable-java && make && make install && mkdir /usr/local/zabbix/log && chown zabbix:zabbix /usr/local/zabbix/log

    - unless: test -d /usr/local/zabbix

config-zabbix_agent:

  file.managed:

    - name: /usr/local/zabbix/etc/zabbix_agentd.conf

    - source: salt://init/files/zabbix_agentd.conf

  cmd.run:

    - name: (grep zabbix_agentd /etc/rc.local || echo "/usr/local/zabbix/sbin/zabbix_agentd">>/etc/rc.local) && /usr/local/zabbix/sbin/zabbix_agentd

    - require:

       - file: install-zabbix_agent


2. Install the zabbix agent: salt-ssh '*' state.sls init.zabbix_agent



Installing memcached with SaltStack

{% set memory = salt['pillar.get']('initialization:memory','128') %}

{% set port = salt['pillar.get']('initialization:port', '11211') %}

{% set maxconnect = salt['pillar.get']('initialization:maxconnect', '1024') %}


groupadd:  

  group.present:   

     - name: memcached

     - gid: 1000


useradd:

  user.present:

    - name: memcached

    - fullname: memcached

    - shell: /sbin/nologin

    - uid: 1000

    - gid: 1000


memcached-datadir:

  cmd.run:

    - names:

       - mkdir -p /usr/local/memcached

    - unless: test -d /usr/local/memcached


libevent-datadir:

  cmd.run:

    - names:

       - mkdir -p /usr/local/libevent

    - unless: test -d /usr/local/libevent


libevent-source-install:

  file.managed:

    - name: /usr/local/src/libevent-2.0.22-stable.tar.gz

    - source: salt://memcached/files/libevent-2.0.22-stable.tar.gz

    - user: root

    - group: root

    - mode: 644

  cmd.run:

    - name: cd /usr/local/src && tar -zvxf libevent-2.0.22-stable.tar.gz  && cd libevent-2.0.22-stable && ./configure --prefix=/usr/local/libevent && make && make install


memcached-source-install:

  file.managed:

    - name: /usr/local/src/memcached-1.4.34.tar.gz

    - source: salt://memcached/files/memcached-1.4.34.tar.gz

    - user: root

    - group: root

    - mode: 644

  cmd.run:

    - name: cd /usr/local/src && tar -zvxf memcached-1.4.34.tar.gz && cd memcached-1.4.34 && ./configure --prefix=/usr/local/memcached --enable-64bit --with-libevent=/usr/local/libevent && make && make install


memcached-service:

  cmd.run:

    - name: /usr/local/memcached/bin/memcached -d -m {{ memory }} -p {{ port }} -c {{ maxconnect }} -u memcached

    - unless: netstat -lnpt |grep {{ port }}

    - require: 

      - cmd: memcached-source-install 

      - user: memcached


Example invocation: salt '192.168.1.1' state.sls memcached.memcached-install saltenv="yeronghai-memcached"  pillar='{initialization:{"memory":"1024","port":"11200","maxconnect":"1024"}}'


memcached.memcached-install: the sls file to apply

saltenv="yeronghai-memcached": the environment (branch) name

pillar='{initialization:{"memory":"1024","port":"11200","maxconnect":"1024"}}': custom pillar overrides







Installing PHP 5.4 with SaltStack


This install includes two extensions, memcache and zendopcache, but it does not modify php.ini — just replace that file with your modified copy afterwards (a sketch follows below).
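A sketch of pushing a prepared php.ini afterwards (the phpini-config ID and the salt://php/files/php.ini source path are assumptions; phpinidir is the jinja variable set in the state file below):

phpini-config:
  file.managed:
    - name: {{ phpinidir }}/php.ini
    - source: salt://php/files/php.ini
    - user: root
    - group: root
    - mode: 644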


For testing: the version and install paths can be customized, but the version must be available under softurl, since the install simply fetches it with wget:

sudo salt '192.168.1.1' state.sls php.install saltenv="ot" pillar='{php:{"installdir":"/usr/local/php--test","phpinidir":"/tmp/phpini-test/","apachedir":"/usr/local/apache-test","version":"5.4.45"}}'


{% set softurl =  "" %}

{% set installdir = salt['pillar.get']('php:installdir', '/usr/local/php') %}

{% set phpinidir = salt['pillar.get']('php:phpinidir', '/etc') %}

{% set apachedir = salt['pillar.get']('php:apachedir', '/usr/local/apache2') %}

{% set version = salt['pillar.get']('php:version', '5.4.45') %}


php_inidir:

  cmd.run:

    - names:

      - mkdir {{ phpinidir }}

    - unless: test -e {{ phpinidir }}


php_software:

  cmd.run:

    - cwd: /root/soft

    - name: wget {{ softurl }}/php-{{ version }}.tar.gz

    - unless: test -e /root/soft/php-{{ version }}.tar.gz


php_ln:

  cmd.run:

    - name: ln -s /usr/lib64/libjpeg.so /usr/lib/libjpeg.so && ln -s /usr/lib64/libpng.so /usr/lib/libpng.so && ln -s /usr/lib64/libgd.so /usr/lib/libgd.so

    - unless: test -e /usr/lib/libjpeg.so && test -e /usr/lib/libgd.so && test -e /usr/lib/libpng.so


php_install:

  cmd.run:

    - name: cd /root/soft && tar -zvxf php-{{ version }}.tar.gz &&  cd php-{{ version }} && ./configure --prefix={{ installdir }} --with-mysql=mysqlnd --with-mysqli=mysqlnd --with-pdo-mysql=mysqlnd --with-apxs2={{ apachedir }}/bin/apxs --with-gd --with-png-dir=/usr --with-jpeg-dir=/usr --with-freetype-dir=/usr --with-zlib --with-openssl --enable-sockets --enable-mbstring --with-bz2 --enable-ftp --with-gettext --enable-sysvshm --enable-shmop --enable-gd-native-ttf --enable-gd-jis-conv --with-curl --with-config-file-path={{ phpinidir }} && make -j 4 && make install && cp php.ini-development {{ phpinidir }}

    - unless: test -e {{ installdir }}


memcache_software:

  cmd.run:

    - cwd: /root/soft

    - name: wget {{ softurl }}/memcache-2.2.7.tgz

    - unless: test -e /root/soft/memcache-2.2.7.tgz


memcache_install:

  cmd.run:

    - name: cd /root/soft && tar -zvxf memcache-2.2.7.tgz &&  cd memcache-2.2.7 && {{ installdir }}/bin/phpize && ./configure --with-php-config={{ installdir  }}/bin/php-config && make && make install

    - require:

      - cmd: php_install


zendopcache_software:

  cmd.run:

    - cwd: /root/soft

    - name: wget {{ softurl }}/zendopcache-7.0.4.tgz

    - unless: test -e /root/soft/zendopcache-7.0.4.tgz


zendopcache_install:

  cmd.run:

    - name: cd /root/soft && tar -zvxf zendopcache-7.0.4.tgz &&  cd zendopcache-7.0.4 && {{ installdir }}/bin/phpize  && ./configure --with-php-config={{ installdir  }}/bin/php-config && make && make install

    - require:

      - cmd: php_install


1.4) Distribute the shell scripts and packages, and set permissions:


1.4.1) Common cp module functions (see my other posts for the remaining modules):

cp.get_file   download a file from the master

cp.get_dir    download a directory from the master

cp.get_url    download a file at a given URL from the server


[root@node2 ~]# salt 'node4' cp.get_file salt://mysql-5.6.21-linux-glibc2.5-x86_64.tar.gz /root/mysql-5.6.21-linux-glibc2.5-x86_64.tar.gz

node4:

    /root/mysql-5.6.21-linux-glibc2.5-x86_64.tar.gz

[root@node2 ~]#

[root@node2 ~]# salt 'node4' cp.get_file salt://MySQL_install.sh /root/MySQL_install.sh

node4:

    /root/MySQL_install.sh

[root@node2 ~]# salt 'node4' cp.get_file salt://MySQL_remove.sh /root/MySQL_remove.sh

node4:

    /root/MySQL_remove.sh

[root@node2 ~]#




Batch-installing the zabbix agent with SaltStack


1. Prepare the zabbix agent configuration file


There are no special requirements here, so I install zabbix22-agent via yum:

[root@master init]# yum -y install zabbix22-agent

[root@master zabbix]# cp zabbix_agentd.conf /etc/salt/states/init/files/

2. Create zabbix_agent.sls


[root@master ~]# vim /etc/salt/states/init/zabbix_agent.sls

zabbix_agent:

  pkg.installed:

    - name: zabbix22-agent

  file.managed:

    - name: /etc/zabbix_agentd.conf

    - source: salt://init/files/zabbix_agentd.conf

    - user: root

    - group: root

    - mode: '0644'

  service.running:

    - name: zabbix-agent

    - enable: True

    - reload: True

Notes:

pkg.installed: installs zabbix22-agent

file.managed: manages and pushes the config file

service.running: manages the service's state — the state can also be exercised on its own, as shown below
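Before wiring it into top.sls, the state can be dry-run by itself (test=True is standard Salt):

salt '*' state.sls init.zabbix_agent test=True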

3. Edit the top.sls file


[root@master ~]# cd /etc/salt/states/

[root@master states]# ls

init  prod  top.sls

[root@master states]# cat top.sls 

base:

  '*':

    - init.pkg

    - init.limit

    - init.ntp-crontab

    - init.hosts

    - init.zabbix_agent

The directory layout:

[root@master states]# tree init/

init/

├── files

│   ├── hosts.conf

│   ├── limits.conf

│   ├── ntp-crontab.conf

│   └── zabbix_agentd.conf

├── hosts.sls

├── limit.sls

├── ntp-crontab.sls

├── pkg.sls

└── zabbix_agent.sls

1 directory, 9 files

4. Push and test


[root@master states]# salt '*' state.highstate

Intermediate output omitted:

----------

          ID: zabbix_agent

    Function: service.running

        Name: zabbix-agent

      Result: True

     Comment: Service zabbix-agent has been enabled, and is running

     Started: 14:04:45.625235

    Duration: 410.618 ms

     Changes:   

              ----------

              zabbix-agent:

                  True

Summary

------------

Succeeded: 9 (changed=1)

Failed:    0

------------

Total states run:     9

5. Verify on the clients:


[root@master ~]# salt '*' cmd.run '/etc/init.d/zabbix-agentd status'

node01.saltstack.com:

    zabbix_agentd (pid  6084) is running...

node02.saltstack.com:

    zabbix_agentd (pid  5782) is running...

[root@master ~]# salt '*' cmd.run "egrep -v '^#|^$' /etc/zabbix_agentd.conf|grep -w Server"

node01.saltstack.com:

    Server=10.10.10.140

node02.saltstack.com:

    Server=10.10.10.140

6. Test and verify after changing the zabbix server


Suppose the zabbix server's IP address changes (from 10.10.10.140 to 10.10.10.148):

[root@master ~]# egrep -v '^#|^$' /etc/salt/states/init/files/zabbix_agentd.conf | grep -w Server

Server=10.10.10.148

Push again to update the agents' server IP:

[root@master ~]# salt '*' state.highstate

----------

          ID: zabbix_agent

    Function: file.managed

        Name: /etc/zabbix_agentd.conf

      Result: True

     Comment: File /etc/zabbix_agentd.conf updated

     Started: 14:22:29.306875

    Duration: 16.102 ms

     Changes:   

              ----------

              diff:

                  ---  

                  +++  

                  @@ -79,7 +79,7 @@

                   # Server=


                   #Server=127.0.0.1

                  -Server=10.10.10.140

                  +Server=10.10.10.148


                   ### Option: ListenPort

                   #Agent will listen on this port for connections from the server.

----------

Summary

------------

Succeeded: 9 (changed=1)

Failed:    0

------------

Total states run:     9

Check the clients to confirm the agents' server IP has been updated:

[root@master ~]#  salt '*' cmd.run "egrep -v '^#|^$' /etc/zabbix_agentd.conf|grep -w Server"

node01.saltstack.com:

    Server=10.10.10.148

node02.saltstack.com:

    Server=10.10.10.148



Installing Tomcat with SaltStack



1. Base environment

[root@linux-node1 ~]# cd /srv/salt/prod/modules

[root@linux-node1 modules]# ls

haproxy  keepalived  memcached  pcre  pkg     user

jdk      libevent    nginx      php   tomcat

[root@linux-node1 modules]# mkdir jdk && mkdir tomcat

modules is the directory where I keep function modules; each service gets its own state file wherever possible, so the modules stay generic.

2. Installing jdk-8u45-linux-x64

[root@linux-node1 modules]# cd jdk

[root@linux-node1 jdk]# mkdir files          # holds the install package

[root@linux-node1 jdk]# vim jdk-install.sls  # write the install state file

jdk-install:

  file.managed:

    - name: /server/tools/jdk-8u45-linux-x64.tar.gz

    - source: salt://modules/jdk/files/jdk-8u45-linux-x64.tar.gz

    - user: root

    - group: root

    - mode: 755

  cmd.run:

    - name: cd /server/tools/ && tar zxf jdk-8u45-linux-x64.tar.gz && mv jdk1.8.0_45 /application/jdk && chown -R root:root /application/jdk

    - unless: test -d /application/jdk

    - require:

      - file: jdk-install


jdk-config:

  file.append:

    - name: /etc/profile

    - text:

      - export JAVA_HOME=/application/jdk

      - export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH

      - export CLASSPATH=.$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$JAVA_HOME/lib/tools.jar

That sets up the JDK environment.

3. Installing Tomcat

[root@linux-node1 modules]# cd tomcat

[root@linux-node1 tomcat]# mkdir files      # holds the tomcat package

[root@linux-node1 tomcat]# vim install.sls  # write the install state file

include:

   - modules.jdk.jdk-install

tomcat-install:

  file.managed:

    - name: /server/tools/apache-tomcat-8.0.23.tar.gz

    - source: salt://modules/tomcat/files/apache-tomcat-8.0.23.tar.gz

    - user: root

    - group: root

    - mode: 755

  cmd.run:

    - name: cd /server/tools/ && tar zxf apache-tomcat-8.0.23.tar.gz && mv apache-tomcat-8.0.23 /application/ && ln -s /application/apache-tomcat-8.0.23 /application/tomcat && chown -R root:root /application/tomcat

    - unless: test -d /application/tomcat


tomcat-config:

  file.append:

    - name: /etc/profile

    - text:

      - export TOMCAT_HOME=/application/tomcat





Installing the zabbix_agent client from source with SaltStack



Skipping the installation and environment introduction — straight to the point.

1. First, the directory tree

[root@saltmaster salt]# pwd


/srv/salt

[root@saltmaster salt]# tree


.

├── init

│   └── init.sls

├── top.sls

└── zabbix

    ├── conf.sls

    ├── files

    │   ├── zabbix_agentd

    │   ├── zabbix_agentd.conf

    │   └── zabbix.tar.gz

    ├── init.sls

    └── install.sls

3 directories, 8 files


2. System initialization first

For now this just tells the clients to install vim-enhanced and lrzsz; add dependency packages to suit your situation. The pkg module currently supports apt and yum.

[root@saltmaster salt]# cat init/init.sls 


pkgs:

  pkg.installed:

    - names:            

      - vim-enhanced

      - lrzsz


3. The entry file, top.sls

SLS (for SaLt State) files are the core of the Salt state system. An SLS describes the target state of a system in a simple data format — this is usually called configuration management. top.sls is the entry file for it all; everything starts there, and on the master it sits under /srv/salt/ by default.

There are two entries here: system initialization and the zabbix client install.

[root@saltmaster salt]# cat top.sls 


base:

  '*':

    - init.init

    - zabbix.init


4. init.sls in the zabbix directory

It runs install.sls and then conf.sls from the zabbix directory, in order.

[root@saltmaster salt]# cat zabbix/init.sls 


include:

  - zabbix.install

  - zabbix.conf


5. The installation itself

install.sls does the following:

1. Ships zabbix/files/zabbix.tar.gz to /tmp on the client — my zabbix.tar.gz is a pre-built, packaged zabbix client that is usable straight after unpacking;

2. Extracts /tmp/zabbix.tar.gz into /usr/local;

3. Adds the zabbix user.


[root@saltmaster salt]# cat zabbix/install.sls 


zabbix_source:

  file.managed:

    - name: /tmp/zabbix.tar.gz

    - source: salt://zabbix/files/zabbix.tar.gz

    - user: root

    - group: root

    - mode: 644

extract_zabbix:

  cmd.run:

    - cwd: /tmp

    - names:

      - tar zxvf zabbix.tar.gz -C /usr/local

    - require:

      - file: zabbix_source

zabbix_user:

  user.present:

    - name: zabbix

    - createhome: False

    - gid_from_name: True

    - shell: /sbin/nologin


6. Config file and boot-time startup

1. Push the config to /usr/local/zabbix/etc/zabbix_agentd.conf. Note that zabbix_agentd.conf contains Hostname={{Hostname}}, so each client is filled in with a value based on its own IP.

2. Push the zabbix_agentd init script

3. Add the service to the boot startup list

4. Start the zabbix_agentd service


[root@saltmaster salt]# cat zabbix/conf.sls 


zabbix_conf:

  file.managed:

    - name: /usr/local/zabbix/etc/zabbix_agentd.conf

    - source: salt://zabbix/files/zabbix_agentd.conf

    - template: jinja

    - defaults:

      Hostname: {{ grains['ip_interfaces']['eth1'][0] }}

zabbix_service:

  file.managed:

    - name: /etc/init.d/zabbix_agentd

    - user: root

    - mode: 755

    - source: salt://zabbix/files/zabbix_agentd

  cmd.run:

    - names:

      - /sbin/chkconfig --add zabbix_agentd

      - /sbin/chkconfig zabbix_agentd on

  service.running:

    - name: zabbix_agentd

    - enable: True

    - watch:

         - file: /usr/local/zabbix/etc/zabbix_agentd.conf


7. Testing and verification

1. salt '*' state.highstate test=True  — dry-run both sls trees

2. salt-call state.highstate -l debug  — run locally with debug output

3. salt '*' state.sls init.init  — push individual sls files one at a time

4. The results:

[root@saltmaster salt]# salt '*' state.sls zabbix.init

saltmaster:

----------

          ID: zabbix_source

    Function: file.managed

        Name: /tmp/zabbix.tar.gz

      Result: True

     Comment: File /tmp/zabbix.tar.gz is in the correct state

     Started: 15:24:20.158243

    Duration: 12.659 ms

     Changes:   

----------

          ID: extract_zabbix

    Function: cmd.run

        Name: tar zxvf zabbix.tar.gz -C /usr/local

      Result: True

     Comment: Command "tar zxvf zabbix.tar.gz -C /usr/local" run

     Started: 15:24:20.171608

    Duration: 42.115 ms

     Changes:   

              ----------

              pid:

                  30427

              retcode:

                  0

              stderr:

              stdout:

                  zabbix/

                  zabbix/bin/

                  zabbix/bin/zabbix_sender

                  zabbix/bin/zabbix_get

                  zabbix/lib/

                  zabbix/sbin/

                  zabbix/sbin/zabbix_agent

                  zabbix/sbin/zabbix_agentd

                  zabbix/etc/

                  zabbix/etc/zabbix_agent.conf.d/

                  zabbix/etc/zabbix_agent.conf

                  zabbix/etc/zabbix_agentd.conf.d/

                  zabbix/share/

                  zabbix/share/man/

                  zabbix/share/man/man1/

                  zabbix/share/man/man1/zabbix_get.1

                  zabbix/share/man/man1/zabbix_sender.1

                  zabbix/share/man/man8/

                  zabbix/share/man/man8/zabbix_agentd.8

----------

          ID: zabbix_user

    Function: user.present

        Name: zabbix

      Result: True

     Comment: User zabbix is present and up to date

     Started: 15:24:20.215402

    Duration: 14.994 ms

     Changes:   

----------

          ID: zabbix_conf

    Function: file.managed

        Name: /usr/local/zabbix/etc/zabbix_agentd.conf

      Result: True

     Comment: File /usr/local/zabbix/etc/zabbix_agentd.conf is in the correct state

     Started: 15:24:20.230479

    Duration: 13.879 ms

     Changes:   

----------

          ID: zabbix_service

    Function: file.managed

        Name: /etc/init.d/zabbix_agentd

      Result: True

     Comment: File /etc/init.d/zabbix_agentd is in the correct state

     Started: 15:24:20.244543

    Duration: 3.243 ms

     Changes:   

----------

          ID: zabbix_service

    Function: cmd.run

        Name: /sbin/chkconfig zabbix_agentd on

      Result: True

     Comment: Command "/sbin/chkconfig zabbix_agentd on" run

     Started: 15:24:20.247961

    Duration: 17.828 ms

     Changes:   

              ----------

              pid:

                  30429

              retcode:

                  0

              stderr:

              stdout:

----------

          ID: zabbix_service

    Function: cmd.run

        Name: /sbin/chkconfig --add zabbix_agentd

      Result: True

     Comment: Command "/sbin/chkconfig --add zabbix_agentd" run

     Started: 15:24:20.266112

    Duration: 25.019 ms

     Changes:   

              ----------

              pid:

                  30430

              retcode:

                  0

              stderr:

              stdout:

----------

          ID: zabbix_service

    Function: service.running

        Name: zabbix_agentd

      Result: True

     Comment: Service zabbix_agentd is already enabled, and is in the desired state

     Started: 15:24:20.296152

    Duration: 113.405 ms

     Changes:   


Summary

------------

Succeeded: 8 (changed=3)

Failed:    0

------------

Total states run:     8





Compiling and installing nginx with SaltStack



1. Pre-install analysis

Main points:

    a. dependency packages (installed with yum);

    b. source tarballs (pcre is also built from source, plus the nginx tarball);

    c. the config file and init script (pushed to the targets with the file.managed module);

    d. the sources are compiled and installed with the cmd.run module;

    e. the matching service is started with the service.running module.


2. Installing the dependencies

Build pcre from source


[root@localhost salt]# pwd

/srv/salt

[root@localhost salt]# cat pcre.sls 

pcre_install:

  file.managed:

    - name: /usr/local/src/pcre-8.30.tar.gz         // where the file is distributed to on the target

    - source: salt://pcre-8.30.tar.gz               // the file's source on the master

    - user: root

    - group: root

    - mode: 755

  cmd.run:

    - cwd: /usr/local/src                          // initial working directory for the command

    - name: tar xf pcre-8.30.tar.gz &&cd pcre-8.30 &&./configure &&make &&make install

Install the other dependencies


[root@localhost salt]# cat install.sls 

nginx_yum:

  pkg.installed:                             //yum安装

    - name: openssl

    - name: openssl-devel

    - name: pcre

    - name: pcre-devel

    - name: zlib

    - name: zlib-devel


3. The nginx.sls state file in full


[root@localhost salt]# pwd

/srv/salt

[root@localhost salt]# cat nginx.sls 

include:              // pull in the other state files

  - install

  - pcre

nginx_init:

  file.managed:

    - name: /etc/init.d/nginx   // the nginx init script

    - source: salt://nginx

    - user: root

    - group: root

    - mode: 755

nginx.tar.gz_file:

  file.managed:

    - name: /usr/local/src/nginx-1.8.1.tar.gz    // the nginx tarball

    - source: salt://nginx-1.8.1.tar.gz

    - user: root

    - group: root

    - mode: 755

nginx_install:

  cmd.run:

    - name: cd /usr/local/src && useradd -s /sbin/nologin nginx && tar xf nginx-1.8.1.tar.gz && cd nginx-1.8.1 && ./configure --prefix=/usr/local/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx/nginx.pid --lock-path=/var/lock/nginx.lock --user=nginx --group=nginx --with-http_ssl_module --with-http_flv_module --with-http_stub_status_module --with-http_gzip_static_module --http-client-body-temp-path=/var/tmp/nginx/client/ --http-proxy-temp-path=/var/tmp/nginx/proxy/ --http-fastcgi-temp-path=/var/tmp/nginx/fcgi/ --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi --http-scgi-temp-path=/var/tmp/nginx/scgi --with-pcre && make && make install && ln -s /lib64/libpcre.so.0.0.1 /lib64/libpcre.so.1

    - unless: test -d /usr/local/nginx  // skip when nginx is already installed

  service.running:

     - name: nginx                       

     - enable: True                                      // enable and start the nginx service


4. Extras

When kicking off an install from the master, -v shows the jid; the current job ids can also be listed with:

salt '*'  saltutil.running


[root@localhost salt]# salt '192.168.24.67' state.sls nginx -v

Executing job with jid 20160705132643772244

-------------------------------------------


[root@localhost ~]# salt '*'  saltutil.running

192.168.24.67:

    |_

      ----------

      arg:

          - nginx

      fun:

          state.sls

      jid:

          20160705132432763991

      pid:

          3712

      ret:

      tgt:

          192.168.24.67

      tgt_type:

          glob

      user:

          root

A job can also be killed with:


[root@localhost ~]# salt '*' saltutil.kill_job 20160705132432763991


Installing Nginx with SaltStack




1.1 Planning the base environment

Here is my directory layout.

The "file_roots" setting in the master config file:

[root@linux-node1 ~]# cd /etc/salt/

[root@linux-node1 ~]# vim master

······

file_roots:

  base:

    - /srv/salt/base

  prod:

    - /srv/salt/prod

······

All my salt items live under two project directories: the base environment and the prod (production) environment.

Then create the two directories under /srv/salt/: base and prod

[root@linux-node1 ~]# cd /srv/salt/

[root@linux-node1 salt]# mkdir -pv base prod

[root@linux-node1 salt]# tree

.

├── base

│   ├── init

│   │   ├── audit.sls

│   │   ├── dns.sls

│   │   ├── epel.sls

│   │   ├── files

│   │   │   ├── resolv.conf

│   │   │   └── zabbix_agentd.conf

│   │   ├── history.sls

│   │   ├── init.sls

│   │   ├── sysctl.sls

│   │   └── zabbix-agent.sls

│   └── top.sls

└── prod

    ├── bbs

    │   ├── files

    │   │   └── nginx-bbs.conf

    │   ├── memcached.sls

    │   └── web.sls

    ├── cluster

    │   ├── files

    │   │   ├── haproxy-outside.cfg

    │   │   └── haproxy-outside-keepalived.conf

    │   ├── haproxy-outside-keepalived.sls

    │   └── haproxy-outside.sls

    └── modules

        ├── haproxy

        │   ├── files

        │   │   ├── haproxy-1.6.3.tar.gz

        │   │   └── haproxy.init

        │   └── install.sls

        ├── keepalived

        │   ├── files

        │   │   ├── keepalived-1.2.17.tar.gz

        │   │   ├── keepalived.init

        │   │   └── keepalived.sysconfig

        │   └── install.sls

        ├── libevent

        │   ├── files

        │   │   └── libevent-2.0.22-stable.tar.gz

        │   └── install.sls

        ├── memcached

        │   ├── files

        │   │   └── memcached-1.4.24.tar.gz

        │   └── install.sls

        ├── nginx

        │   ├── files

        │   │   ├── nginx-1.10.1.tar.gz

        │   │   ├── nginx.conf

        │   │   └── nginx-init

        │   ├── install.sls

        │   └── service.sls

        ├── pcre

        │   ├── files

        │   │   └── pcre-8.37.tar.gz

        │   └── install.sls

        ├── php

        │   ├── files

        │   │   ├── init.d.php-fpm

        │   │   ├── memcache-2.2.7.tgz

        │   │   ├── php-5.6.9.tar.gz

        │   │   ├── php-fpm.conf.default

        │   │   ├── php.ini-production

        │   │   └── redis-2.2.7.tgz

        │   ├── install.sls

        │   ├── php-memcache.sls

        │   └── php-redis.sls

        ├── pkg

        │   └── make.sls

        └── user

            ├── test.sls

            └──


25 directories, 47 files

Worth noting: when writing SLS files, give each service its own SLS wherever possible, decoupling the services within a project for easy reference later. When another project needs one of them, a simple include of that SLS is enough!

2. Writing the SLS files that install Nginx

2.1 SLS for the dependency packages

[root@linux-node1 prod]# cd modules/

[root@linux-node1 modules]# cd pkg/

[root@linux-node1 pkg]# vim make.sls

make-pkg:

  pkg.installed:

    - pkgs:

      - gcc

      - gcc-c++

      - glibc

      - make

      - autoconf

      - openssl

      - openssl-devel

      - pcre

      - pcre-devel

2.2 The Nginx install SLS

Download the source tarball in advance:

[root@linux-node1 files]# wget

Note:

It must be placed under /srv/salt/prod/modules/nginx/files/.

Write the SLS file:

[root@linux-node1 ~]# cd /srv/salt/prod/modules/nginx/

[root@linux-node1 ~]# vim install.sls

include:

  - modules.pkg.make       

  - modules.user.www


nginx-source-install:

  file.managed:

    - name: /usr/local/src/nginx-1.10.1.tar.gz

    - source: salt://modules/nginx/files/nginx-1.10.1.tar.gz

    - user: root

    - group: root

    - mode: 755

  cmd.run:

    - name: cd /usr/local/src && tar zxf nginx-1.10.1.tar.gz && cd nginx-1.10.1&& ./configure --prefix=/usr/local/nginx-1.10.1 --user=www --group=www --with-http_ssl_module --with-http_stub_status_module --with-file-aio --with-http_dav_module && make && make install && ln -s /usr/local/nginx-1.10.1 /usr/local/nginx && chown -R www:www /usr/local/nginx

    - unless: test -d /usr/local/nginx

    - require:

      - user: www-user-group

      - file: nginx-source-install

      - pkg: make-pkg

That completes the Nginx install SLS; what comes next is configuration management. The install file and the config/startup file are written as two files here, so both can be reused many times later.

2.3 Config management and startup SLS

[root@linux-node1 ~]# vim service.sls

include:

  - modules.nginx.install


nginx-init:

  file.managed:

    - name: /etc/init.d/nginx

    - source: salt://modules/nginx/files/nginx-init

    - mode: 755

    - user: root

    - group: root

  cmd.run:

    - name: chkconfig --add nginx

    - unless: chkconfig --list|grep nginx

    - require:

      - file: nginx-init


/usr/local/nginx/conf/nginx.conf:

  file.managed:

    - source: salt://modules/nginx/files/nginx.conf

    - user: www

    - group: www

    - mode: 644


#Starting Nginx Server

nginx-service:

  service.running:

    - name: nginx

    - enable: True

    - reload: True

    - watch:

      - file: /usr/local/nginx/conf/nginx.conf

      - file: nginx-online


#The two ID declarations below: one directory holds the hosts serving online, the other the hosts that have been pulled off line.

nginx-online:

  file.directory:

    - name: /usr/local/nginx/conf/vhost_online


nginx-offline:

  file.directory:

    - name: /usr/local/nginx/conf/vhost_offline

Note:

This file creates vhost_online and vhost_offline to hold the configs of in-service hosts and of hosts taken out of service. Why bother? Because it is best not to delete a decommissioned host's config but to archive it — that way the data is still at hand when we need it again.

3. Specify in the top file which hosts perform this install

Note: my top.sls lives under /srv/salt/base/.

[root@linux-node1 base]# vim top.sls

base:

  '*':

    - init.env_init


prod:

  'linux-node*':

    - modules.nginx.install

    - modules.nginx.service

The key addition is the prod section: any Minion whose hostname matches "linux-node*" will carry out the Nginx install.

With that, we have Nginx installed through SaltStack.

Daily sentence

No matter how far you may fly, never forget where you come from.



SaltStack configuration management: adding Zabbix


[root@linux-node1 init]# vim /etc/salt/master

536 pillar_roots:

537   base:

538     - /srv/pillar/base


The top file in the pillar environment:

[root@linux-node1 base]# cat /srv/pillar/base/top.sls 

base:

  '*':

    - zabbix


The install file zabbix.sls in the pillar environment:

[root@linux-node1 base]# cat /srv/pillar/base/zabbix.sls 

zabbix-agent:                                 # matches the zabbix-agent key used by file.managed in the salt base environment

  Zabbix_Server: 10.0.0.7                     # matches the Zabbix_Server value used by file.managed in the salt base environment


The file-management state zabbix_agent.sls in the salt project's base environment:

[root@linux-node1 base]# cat /srv/salt/base/init/zabbix_agent.sls 

zabbix-agent-install:

  pkg.installed:

    - name: zabbix-agent


  file.managed:

    - name: /etc/zabbix/zabbix_agentd.conf

    - source: salt://init/files/zabbix_agentd.conf

    - template: jinja

    - defaults:

      Server: {{ pillar['zabbix-agent']['Zabbix_Server'] }}              # matches the value in zabbix.sls under the pillar base environment

    - require:

      - pkg: zabbix-agent-install


  service.running:

    - enable: True

    - watch:

      - pkg: zabbix-agent-install

      - file: zabbix-agent-install


Apply the highstate:

[root@linux-node1 ~]# salt '*' state.highstate

linux-node1.example.com:

................

Summary

-------------

Succeeded: 32 (changed=1)

Failed:     0

-------------

Total states run:     32

linux-node2.example.com:

................

Summary

-------------

Succeeded: 32 (changed=1)

Failed:     0

-------------

Total states run:     32



Batch-adding and batch-deleting users with SaltStack



Batch-adding users

[root@linux-node1 init]# cat useradds.sls

{% set users = ['name1,name2'] %}

{% for user in users %}

{{ user }}:

user.present:

- shell: /bin/bash

- home: /home/{{ user }}

- password: ‘$1$sbvWg7.V$r/nWDs7g0YynB1CVsfUPA/’

- groups:

- {{ user }}

- require:

- group: {{ user }}

group.present:

- name: {{ user }}

{% endfor %}


password is the hashed password.

Generate the hashed password with openssl passwd -1:

[root@linux-node1 init]# openssl passwd -1

Password:

Verifying – Password:

$1$bWsI2gYH$V.JqN/FE9J3yltwXCo.CQ/


Batch-deleting users

[root@linux-node1 init]# cat userdel.sls

{% set users = ['jerry','tom','sunday'] %}

{% for user in users %}

{{ user }}:

  user.absent:

    - purge: True

    - force: True

{% endfor %}


- purge: True   ## Set purge to True to delete all of the user's files as well as the user. Default is False.

- force: True   ## If the user is currently logged in, the absent state fails; setting force to True removes the user even while logged in.


SaltStack in practice (1): installing and configuring HAProxy


1. Writing the function modules

        1) First, the dependency-install module   

[root@linux-node1 ~]# mkdir -p /srv/salt/prod/pkg /srv/salt/prod/haproxy /srv/salt/prod/haproxy/files

[root@linux-node1 pkg]# vim pkg-init.sls

pkg-init:

 pkg.installed:

   - names:

     - gcc

     - gcc-c++

     - glibc

     - make

     - autoconf

     - openssl

     - openssl-devel


        2) Writing the HAProxy state module


        How do you write a state module? 1. Do the install once by hand and record the steps; 2. Copy the config files, init scripts, etc. into /srv/salt/prod/*/files


            a) Grab the init script and copy it to /srv/salt/prod/haproxy/files/

[root@linux-node1 ~]# mv haproxy-1.6.2.tar.gz  /srv/salt/prod/haproxy/files/


[root@linux-node1 ~]# cd /srv/salt/prod/haproxy/files/

[root@linux-node1 files]# tar zxf haproxy-1.6.2.tar.gz


[root@linux-node1 files]# cd haproxy-1.6.2/examples/

[root@linux-node1 examples]# vim haproxy.init

35 BIN=/usr/local/haproxy/sbin/$BASENAME


[root@linux-node1 examples]# cp haproxy.init  /srv/salt/prod/haproxy/files/


[root@linux-node1 examples]# cd /srv/salt/prod/haproxy/files


[root@linux-node1 files]# rm -rf haproxy-1.6.2


        b) Writing install.sls


        The config file is deliberately not written here, to keep things decoupled: installing and starting are atomic operations needed everywhere, whereas the config file differs from environment to environment.

[root@linux-node1 examples]# cd /srv/salt/prod/haproxy/

[root@linux-node1 haproxy]# vim install.sls    

include:

 - pkg.pkg-init

haproxy-install:    

 file.managed:    

   - name: /usr/local/src/haproxy-1.6.2.tar.gz

   - source: salt://haproxy/files/haproxy-1.6.2.tar.gz

   - user: root

   - group: root

   - mode: 755

 cmd.run:        

   - name: cd /usr/local/src && tar zxf haproxy-1.6.2.tar.gz && cd haproxy-1.6.2 && make TARGET=linux26 PREFIX=/usr/local/haproxy && make install PREFIX=/usr/local/haproxy

   - unless: test -d /usr/local/haproxy   

   - require:    

     - pkg: pkg-init

     - file: haproxy-install  

/etc/init.d/haproxy:  

 file.managed:

   - source: salt://haproxy/files/haproxy.init

   - user: root

   - group: root

   - mode: 755

   - require:

     - cmd: haproxy-install

 cmd.run:

   - name: chkconfig --add haproxy

   - unless: chkconfig --list | grep haproxy

   - require:

     - file: /etc/init.d/haproxy

net.ipv4.ip_nonlocal_bind:

 sysctl.present:

   - value: 1

haproxy-config-dir:

 file.directory:

   - name: /etc/haproxy

   - user: root

   - group: root

   - mode: 755



[root@linux-node1 src]# salt 'linux-node1.*' state.sls haproxy.install env=prod

linux-node1.example.com:

----------

......


Summary

-------------

Succeeded: 13 (changed=3)

Failed:     0

-------------

Total states run:     13



    2. Writing the business layer — the HAProxy config file


[root@linux-node1 files]# mkdir -p /srv/salt/prod/cluster/files

[root@linux-node1 files]# cd /srv/salt/prod/cluster/files/    

[root@linux-node1 files]# vim haproxy-outside.cfg

global

maxconn 100000

chroot /usr/local/haproxy

uid 99  

gid 99

daemon

nbproc 1

pidfile /usr/local/haproxy/logs/haproxy.pid

log 127.0.0.1 local3 info

defaults

option http-keep-alive

maxconn 100000

mode http

timeout connect 5000ms

timeout client  50000ms

timeout server 50000ms

listen stats

mode http

bind 0.0.0.0:8888

stats enable

stats uri     /haproxy-status

stats auth    haproxy:saltstack

frontend frontend_www_example_com

bind 10.0.0.11:80

mode http

option httplog

log global

   default_backend backend_www_example_com

backend backend_www_example_com

option forwardfor header X-REAL-IP

option httpchk HEAD / HTTP/1.0

balance source

server web-node1  10.0.0.7:8080 check inter 2000 rise 30 fall 15

server web-node2  10.0.0.8:8080 check inter 2000 rise 30 fall 15

[root@linux-node1 files]#cd ..

[root@linux-node1 cluster]# vim haproxy-outside.sls

include:

 - haproxy.install

haproxy-service:

 file.managed:

   - name: /etc/haproxy/haproxy.cfg

   - source: salt://cluster/files/haproxy-outside.cfg

   - user: root

   - group: root

   - mode: 644

 service.running:

   - name: haproxy

   - enable: True

   - reload: True

   - require:

     - cmd: /etc/init.d/haproxy

   - watch:

     - file: haproxy-service

[root@linux-node1 ~]# cd /srv/salt/base/

[root@linux-node1 base]# vim top.sls

base:

 '*':

   - init.env_init

prod:

 'linux-node[1-2].example.com':

   - cluster.haproxy-outside

[root@linux-node1 base]# salt '*' state.highstate

linux-node1.example.com:

----------

......


Summary

-------------

Succeeded: 21 (unchanged=2, changed=1)

Failed:     0

-------------

Total states run:     21

linux-node2.example.com:

----------

......  

Summary

-------------

Succeeded: 21 (unchanged=9, changed=3)

Failed:     0

-------------

Total states run:     21



SaltStack in practice (2): installing and configuring Keepalived


1. Writing the function modules


# create the keepalived directories

[root@linux-node1 ~]#mkdir -p /srv/salt/prod/keepalived/files

[root@linux-node1 ~]#cd /srv/salt/prod/keepalived/files


# fetch and unpack keepalived

[root@linux-node1 files]#wget

[root@linux-node1 files]#tar xf keepalived-1.2.19.tar.gz

[root@linux-node1 files]#cd keepalived-1.2.19


# copy the init script and config files from the source tree into the files directory

[root@linux-node1 keepalived-1.2.19]#cp keepalived/etc/init.d/keepalived.init /srv/salt/prod/keepalived/files/

[root@linux-node1 keepalived-1.2.19]#cp keepalived/etc/init.d/keepalived.sysconfig  /srv/salt/prod/keepalived/files/

[root@linux-node1 keepalived-1.2.19]#cp keepalived/etc/keepalived/keepalived.conf /srv/salt/prod/keepalived/files/


# write install.sls

[root@linux-node1 keepalived-1.2.19]# cd /srv/salt/prod/keepalived/

[root@linux-node1 keepalived]# vim install.sls

include:

 - pkg.pkg-init

keepalived-install:

 file.managed:

   - name: /usr/local/src/keepalived-1.2.19.tar.gz

   - source: salt://keepalived/files/keepalived-1.2.19.tar.gz

   - user: root

   - group: root

   - mode: 755

 cmd.run:

   - name: cd /usr/local/src && tar xf keepalived-1.2.19.tar.gz && cd keepalived-1.2.19 && ./configure --prefix=/usr/local/keepalived --disable-fwmark && make && make install

   - unless: test -d /usr/local/keepalived

   - require:

     - pkg: pkg-init

     - file: keepalived-install

keepalived-init:

 file.managed:

   - name: /etc/init.d/keepalived

   - source: salt://keepalived/files/keepalived.init

   - user: root

   - group: root

   - mode: 755

 cmd.run:

   - name: chkconfig --add keepalived

   - unless: chkconfig --list |grep keepalived

   - require:

     - file: keepalived-init

/etc/sysconfig/keepalived:

 file.managed:

   - source: salt://keepalived/files/keepalived.sysconfig

   - user: root

   - group: root

   - mode: 644

/etc/keepalived:

 file.directory:

   - user: root

   - group: root

   - mode: 755



# test

[root@linux-node1 keepalived]# salt '*' state.sls keepalived.install env=prod test=True 

linux-node2.example.com:

----------

....

Summary

-------------

Succeeded: 13 (changed=5)

Failed:     0

-------------

Total states run:     13

linux-node2.example.com:

----------

.....

-------------

Succeeded: 13 (changed=6)

Failed:     0

-------------

Total states run:     13


    2. Writing the business module


[root@linux-node1 keepalived]# cd ../cluster/



# write the keepalived config file

[root@linux-node1 cluster]# cd files/

[root@linux-node1 files]# vim haproxy-outside-keepalived.conf  

! Configuration File for keepalived

global_defs {

  notification_email {

    saltstack@example.com

  }

  notification_email_from keepalived@example.com

  smtp_server 127.0.0.1

  smtp_connect_timeout 30

   router_id {{ROUTEID}}   # jinja template variable

}

vrrp_instance haproxy_ha {

state {{STATEID}}   # jinja template variable

interface eth0

   virtual_router_id 36

priority {{PRIORITYID}}  # jinja template variable

   advert_int 1

authentication {

auth_type PASS

       auth_pass 1111

   }

   virtual_ipaddress {

      10.0.0.11

   }

}



# write the SLS that manages the keepalived config file

[root@linux-node1 files]#cd ..

[root@linux-node1 cluster]# vim haproxy-outside-keepalived.sls

include:

 - keepalived.install

keepalived-service:

 file.managed:

   - name: /etc/keepalived/keepalived.conf

   - source: salt://cluster/files/haproxy-outside-keepalived.conf

   - user: root

   - group: root

   - mode: 644

   - template: jinja

   {% if grains['fqdn'] == 'linux-node1.example.com' %}

   - ROUTEID: haproxy_ha

   - STATEID: MASTER

   - PRIORITYID: 150

   {% elif grains['fqdn'] == 'linux-node2.example.com' %}

   - ROUTEID: haproxy_ha

   - STATEID: BACKUP

   - PRIORITYID: 100

   {% endif %}

 service.running:

   - name: keepalived

   - enable: True

   - watch:

      - file: keepalived-service



# test

[root@linux-node1 cluster]# salt '*' state.sls cluster.haproxy-outside-keepalived env=prod test=True   

.....

Summary

-------------

Succeeded: 15 (changed=1)

Failed:     0

-------------

Total states run:     15



# add keepalived to top.sls

[root@linux-node1 cluster]#cd /srv/salt/base

[root@linux-node1 base]# vim top.sls

base:

  '*':

    - init.env_init


prod:

  'linux-node[1-2].example.com':

    - cluster.haproxy-outside

    - cluster.haproxy-outside-keepalived



# run the keepalived install and configuration

[root@linux-node1 base]# salt 'linux-node?.example.com' state.highstate

Summary

-------------

Succeeded: 29

Failed:     0

-------------

Total states run:     29

