Category: Big Data
2016-11-22 11:08:13
Docker-Shrike
Introduction to etcd for docker
etcd provides the basic registration and notification backbone for service discovery, similar to ZooKeeper (zk).
Download the rpm package
We will install an etcd cluster on three nodes: 172.28.4.23, 172.28.4.24 and 172.28.4.25.
Install the etcd service on each node:
yum -y localinstall etcd-2.3.1-1.el7.centos.x86_64.rpm
Modify the configuration file
grep -v "^#" /etc/etcd/etcd.conf
#Set ETCD_NAME to this node's infra name from ETCD_INITIAL_CLUSTER
ETCD_NAME=infra0
#Create ETCD_DATA_DIR manually
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS=""
ETCD_LISTEN_CLIENT_URLS=""
ETCD_INITIAL_ADVERTISE_PEER_URLS=""
ETCD_INITIAL_CLUSTER="infra0=,infra1=,infra2="
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="talkingdata"
ETCD_ADVERTISE_CLIENT_URLS=""
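The URL values above were lost when this post was captured. As an illustrative sketch only (the IPs come from the cluster list above; ports 2379 for clients and 2380 for peers are etcd's defaults and are assumed here), the configuration for node infra0 on 172.28.4.23 might look like:

```ini
# /etc/etcd/etcd.conf on infra0 (172.28.4.23) -- illustrative values only;
# the original post's URLs were stripped, so ports 2379/2380 are assumed defaults
ETCD_NAME=infra0
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://172.28.4.23:2380"
ETCD_LISTEN_CLIENT_URLS="http://172.28.4.23:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.28.4.23:2380"
ETCD_INITIAL_CLUSTER="infra0=http://172.28.4.23:2380,infra1=http://172.28.4.24:2380,infra2=http://172.28.4.25:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="talkingdata"
ETCD_ADVERTISE_CLIENT_URLS="http://172.28.4.23:2379"
```

On infra1 and infra2, only ETCD_NAME and the node's own addresses change; ETCD_INITIAL_CLUSTER stays identical on all three nodes.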
After editing the configuration file, start the service:
systemctl start etcd
#Equivalent command (parameters differ per node)
/usr/bin/etcd --name=infra0 --data-dir=/var/lib/etcd/default.etcd --initial-advertise-peer-urls= --listen-peer-urls= --listen-client-urls= --advertise-client-urls= --initial-cluster-token=talkingdata --initial-cluster=infra0=,infra1=http://192.168.0.2:2380,infra2=http://192.168.0.3:2380 --initial-cluster-state=new
Pick any one of the machines and run:
etcdctl member list
7e2823da4af95d3: name=infra0 peerURLs= clientURLs= isLeader=true
8527f1de344fb2e3: name=infra1 peerURLs= clientURLs= isLeader=false
87c8251404be0a8f: name=infra2 peerURLs= clientURLs= isLeader=false
This confirms the installation succeeded.
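A quick extra sanity check is to write a key through one node and read it back (etcdctl v2 syntax, matching the etcd 2.3.1 rpm installed above; the key name is arbitrary):

```shell
etcdctl set /sanity/check ok
etcdctl get /sanity/check
# both commands print the value, i.e. ok
```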
The oam-docker-ipam plugin allocates container IPs (it must be installed on every node that runs docker)
#Install it together with its dependencies
yum -y localinstall oam-docker-ipam-1.0.0-1.el7.centos.x86_64.rpm
Allocate the IP range for the docker hosts:
oam-docker-ipam host-range --ip-start 172.28.4.23/16 --ip-end 172.28.4.25/16 --gateway 172.28.7.254
Allocate the IP range for docker containers:
oam-docker-ipam ip-range --ip-start 172.28.4.200/24 --ip-end 172.28.4.253/24
Check the IP allocation:
etcdctl ls --recursive
Swarm-Manager
ip:172.28.4.23,172.28.4.24,172.28.4.25
Install
yum localinstall -y swarm-1.2.2-1.el7.centos.x86_64.rpm
cat /etc/swarm/swarm-manager.conf
# [manager]
SWARM_HOST=:4000
#Set SWARM_ADVERTISE to this machine's IP
SWARM_ADVERTISE=172.28.4.23:4000
SWARM_DISCOVERY=etcd://172.28.4.23:2379,172.28.4.24:2379,172.28.4.25:2379
Start the service:
systemctl start swarm-manager
Equivalent command:
/usr/bin/swarm --debug manage --replication --host=:4000 --advertise=172.28.4.23:4000 etcd://172.28.4.23:2379,172.28.4.24:2379,172.28.4.25:2379
Check the service status:
ps -ef | grep swarm
root 31832 1 0 11:42 ? 00:00:00 /usr/bin/swarm --debug manage --replication --host=:4000 --advertise=172.28.4.25:4000 etcd://172.28.4.23:2379,172.28.4.24:2379,172.28.4.25:2379
root 31845 23037 0 11:42 pts/0 00:00:00 grep --color=auto swarm
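To confirm the swarm cluster itself is reachable, you can point a plain docker client at the manager endpoint (port 4000, per the config above); on a healthy cluster this lists all joined nodes:

```shell
docker -H tcp://172.28.4.23:4000 info
```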
docker
yum -y localinstall docker-engine-selinux-1.11.1-1.el7.centos.noarch.rpm docker-engine-1.11.1-1.el7.centos.x86_64.rpm
rpm -aq | grep docker
Modify /usr/lib/systemd/system/docker.service:
grep ExecStart /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/docker daemon --debug -H=fd:// -H=tcp://0.0.0.0:2375 -H=unix:///var/run/docker.sock --insecure-registry=bj-xg-test-hadoop-vm-02.novalocal -s=overlay -g /ssdcache/docker
The --insecure-registry parameter is the image registry address; 172.28.4.22 --> bj-xg-test-hadoop-vm-02.novalocal
systemctl daemon-reload
systemctl restart docker
Equivalent command:
/usr/bin/docker daemon --debug -H=fd:// -H=tcp://0.0.0.0:2375 -H=unix:///var/run/docker.sock --insecure-registry=bj-xg-test-hadoop-vm-02.novalocal -s=overlay -g /ssdcache/docker
ps -ef | grep docker
OAM-DOCKER-IPAM
ip:172.28.4.23,172.28.4.24,172.28.4.25
Modify /etc/oam-docker-ipam/oam-docker-ipam.conf:
IPAM_CLUSTER_STORE=,,
systemctl start oam-docker-ipam
Equivalent command:
/usr/bin/oam-docker-ipam --debug=true --cluster-store=,, server
Create the custom network br0 on each node:
oam-docker-ipam --cluster-store=,, create-network --ip 172.28.4.24
172.28.4.23:root@bj-xg-test-hadoop-vm-03:/root]# oam-docker-ipam --cluster-store=,, create-network --ip 172.28.4.23
INFO[0000] Allocated host 172.28.4.23
FATA[0000] exit status 1no matching subnet for aux-address 172.28.7.254
The command failed because the gateway 172.28.7.254 is not inside the 172.28.4.0/24 container subnet.
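A gateway must share a subnet with the range it serves, which is why the /24 container range rejects 172.28.7.254 while the /16 host range accepts it. The small bash sketch below checks CIDR membership (the helper names ip_to_int/in_subnet are mine, not part of oam-docker-ipam); in practice the fix is either to declare the ip-range with a /16 mask or to pick a gateway inside 172.28.4.0/24.

```shell
#!/bin/bash
# Convert a dotted-quad IPv4 address to a 32-bit integer
ip_to_int() {
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# usage: in_subnet IP NETWORK PREFIXLEN -> prints yes/no
in_subnet() {
  local mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  if [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]; then
    echo yes
  else
    echo no
  fi
}

in_subnet 172.28.7.254 172.28.4.0 24   # prints no  -> hence the aux-address error
in_subnet 172.28.7.254 172.28.0.0 16   # prints yes -> the /16 host range accepts it
```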
etcdctl get /talkingdata/hosts/config
Swarm-Agent
yum localinstall -y swarm-1.2.2-1.el7.centos.x86_64.rpm
cat /etc/swarm/swarm-agent.conf
# [agent]
SWARM_ADVERTISE=172.28.4.23:2375
SWARM_DISCOVERY=etcd://172.28.4.23:2379,172.28.4.24:2379,172.28.4.25:2379
systemctl start swarm-agent
Equivalent command:
/usr/bin/swarm --debug join --advertise=172.28.4.23:2375 etcd://172.28.4.23:2379,172.28.4.24:2379,172.28.4.25:2379
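With an etcd discovery URL and no explicit path, standalone Swarm registers agents under /docker/swarm/nodes (the default path; assumed here since none was passed above). Listing that key is a quick way to confirm the agents joined:

```shell
etcdctl ls /docker/swarm/nodes
```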
docker
yum -y localinstall docker-engine-selinux-1.11.1-1.el7.centos.noarch.rpm docker-engine-1.11.1-1.el7.centos.x86_64.rpm
rpm -aq | grep docker
Modify /usr/lib/systemd/system/docker.service:
grep ExecStart /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/docker daemon --debug -H=fd:// -H=tcp://0.0.0.0:2375 -H=unix:///var/run/docker.sock --insecure-registry=bj-xg-test-hadoop-vm-02.novalocal -s=overlay -g /ssdcache/docker
The --insecure-registry parameter is the image registry address; here I use the local hostname.
--bip=192.168.0.1/18 specifies the network range for containers.
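Putting the two flags together, the ExecStart line shown earlier extended with --bip would look like this (an illustrative sketch; all other flags are unchanged from the grep output above, and 192.168.0.1/18 puts containers in 192.168.0.0/18 with the docker0 bridge at 192.168.0.1):

```shell
ExecStart=/usr/bin/docker daemon --debug -H=fd:// -H=tcp://0.0.0.0:2375 -H=unix:///var/run/docker.sock --insecure-registry=bj-xg-test-hadoop-vm-02.novalocal --bip=192.168.0.1/18 -s=overlay -g /ssdcache/docker
```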
systemctl daemon-reload
systemctl restart docker
Setting up a private docker registry (vmware/harbor)
Reference:
harbor
Version: harbor-offline-installer-0.4.5.tgz
Docker Compose is not installed by default alongside docker; install it with:
pip install -U docker-compose
tar zxf harbor-offline-installer-0.4.5.tgz
grep "hostname" harbor.cfg
#The IP address or hostname to access admin UI and registry service.
#Set hostname to the image server's hostname
hostname = reg.mydomain.com
user/passwd
the default username/password are admin/Harbor12345
Install harbor:
./install.sh
Installation complete:
✔ ----Harbor has been installed and started successfully.----
Now you should be able to visit the admin portal at .
For more details, please visit .
Pulling images
docker login bj-xg-test-hadoop-vm-02.novalocal
Username: admin
Password:
Login Succeeded
(admin/Harbor12345)
Pushing images
1. First, log in from Docker client:
docker login bj-xg-test-hadoop-vm-02.novalocal
2. Tag the image:
[172.28.4.22:root@bj-xg-test-hadoop-vm-02:/root]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
vmware/harbor-log 0.4.5 3a5eda96a382 2 weeks ago 190.1 MB
vmware/harbor-jobservice 0.4.5 81b6e53f361d 2 weeks ago 169.3 MB
vmware/harbor-ui 0.4.5 9bd00ef99049 2 weeks ago 232.5 MB
vmware/harbor-db 0.4.5 8d3d80fbf798 2 weeks ago 326.8 MB
registry 2.5.0 c6c14b3960bd 3 months ago 33.28 MB
nginx 1.9 c8c29d842c09 5 months ago 182.7 MB
[172.28.4.22:root@bj-xg-test-hadoop-vm-02:/root]# docker tag nginx:1.9 bj-xg-test-hadoop-vm-02.novalocal/library/nginx:1.9
3. Push the image:
[172.28.4.22:root@bj-xg-test-hadoop-vm-02:/root]# docker push bj-xg-test-hadoop-vm-02.novalocal/library/nginx:1.9
The push refers to a repository [bj-xg-test-hadoop-vm-02.novalocal/library/nginx]
5f70bf18a086: Pushed
49027b789c92: Pushed
20f8e7504ae5: Pushed
4dcab49015d4: Pushed
1.9: digest: sha256:311e9840c68d889e74eefa18227d0a6f995bc7a74f5453fdcd49fe3c334feb24 size: 1978
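To verify the push, the image can be pulled back from Harbor on any node that trusts the registry (i.e. one that lists it via --insecure-registry, as configured above):

```shell
docker pull bj-xg-test-hadoop-vm-02.novalocal/library/nginx:1.9
```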
Physical storage location of the image files:
[172.28.4.22:root@bj-xg-test-hadoop-vm-02:/data/registry/docker/registry/v2]# ls */*
blobs/sha256:
31 51 64 a3 a4 c8
repositories/library:
nginx
Commands to delete the br bridge interface:
ifconfig
ifconfig br-5675f805c311 down
brctl delbr br-5675f805c311
docker-compose up -d
Shipyard
We pick one physical machine to run Shipyard. Shipyard is an open-source project; we made a few small modifications to it, adding custom-network support and a CPU-core limit.
Here we install it on 172.28.4.22:
yum -y localinstall shipyard-3.0.4-1.el7.centos.x86_64.rpm
Pull rethinkdb from Docker Hub:
docker pull rethinkdb
After the download completes, run:
docker run -d rethinkdb
Get the container's IP address:
docker inspect 0e81f9d0b531 | grep 'IPAddress'
"SecondaryIPAddresses": null,
"IPAddress": "192.168.0.2",
"IPAddress": "192.168.0.2",
cat /etc/shipyard/shipyard.conf
# [shipyard]
SHIPYARD_LISTEN=:8080
SHIPYARD_DATABASE=192.168.0.2:28015
#The address of one of the swarm-managers
SHIPYARD_DOCKER=tcp://172.28.4.23:4000
After editing the configuration file, start the service:
systemctl start shipyard
ps -ef | grep shipy
root 14332 1 0 16:09 ? 00:00:00 /usr/bin/shipyard/shipyard --debug server --listen :8080 --rethinkdb-addr 192.168.0.2:28015 --docker tcp://172.28.4.23:4000
root 14358 12134 0 16:09 pts/0 00:00:00 grep --color=auto shipy
Open the web UI in a browser (Shipyard listens on port 8080)
Enter username: admin, password: shipyard