2020-06-11 15:48:32
OS: CentOS 7.4
Server: Kingsoft Cloud
[root@jsy-bj-test00 ~]# yum install -y ruby rubygems
[work@jsy-bj-test00 ~]$ cp -rp redis redis1
[work@jsy-bj-test00 ~]$ cp -rp redis redis2
[work@jsy-bj-test00 ~]$ cp -rp redis redis3
# The same changes are needed on all six nodes; node 6380 (redis1) is shown here as the example
[work@jsy-bj-test00 ~]$ vim redis1/etc/redis.conf

# Change the listening port
port 6380
[work@jsy-bj-test00 ~]$ sed -i 's/port 6379/port 6380/g' redis1/etc/redis.conf

# Uncomment to enable cluster mode
cluster-enabled yes

# Cluster configuration file
cluster-config-file nodes-6380.conf
[work@jsy-bj-test00 ~]$ sed -i 's/cluster-config-file nodes-6379.conf/cluster-config-file nodes-6380.conf/g' redis1/etc/redis.conf

# pid file
pidfile /var/run/redis_6380.pid
[work@jsy-bj-test00 ~]$ sed -i 's/pidfile \/var\/run\/redis_6379.pid/pidfile \/var\/run\/redis_6380.pid/g' redis1/etc/redis.conf

# Log file
logfile "/home/work/logs/redis/6380.log"
[work@jsy-bj-test00 ~]$ sed -i 's/logfile "\/home\/work\/logs\/redis\/6379.log"/logfile "\/home\/work\/logs\/redis\/6380.log"/g' redis1/etc/redis.conf

# RDB persistence file
dbfilename dump6380.rdb
[work@jsy-bj-test00 ~]$ sed -i 's/dbfilename dump6379.rdb/dbfilename dump6380.rdb/g' redis1/etc/redis.conf

# Request timeout, in milliseconds
cluster-node-timeout 5000

# Enable AOF persistence
appendonly yes

# AOF persistence file
appendfilename "appendonly6380.aof"
[work@jsy-bj-test00 ~]$ sed -i 's/appendfilename "appendonly6379.aof"/appendfilename "appendonly6380.aof"/g' redis1/etc/redis.conf
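The start script below expects per-port config files under /home/work/redis/etc/, so instead of editing every file by hand the remaining configs can be generated with a short loop. This is only a sketch: it assumes the base file is /home/work/redis/etc/redis.conf and that every port-derived setting simply embeds the string 6379, as in the sed commands above.

# Sketch: derive redis6380.conf ... redis6384.conf from the 6379 base config
BASE=/home/work/redis/etc/redis.conf
for port in 6380 6381 6382 6383 6384; do
    # Each derived file gets its port, cluster-config-file, pidfile,
    # logfile, dbfilename and appendfilename rewritten in one pass.
    cp -p "$BASE" "/home/work/redis/etc/redis${port}.conf"
    sed -i "s/6379/${port}/g" "/home/work/redis/etc/redis${port}.conf"
done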
Start script start_all.sh
/home/work/redis/bin/redis-server /home/work/redis/etc/redis.conf &
/home/work/redis/bin/redis-server /home/work/redis/etc/redis6380.conf &
/home/work/redis/bin/redis-server /home/work/redis/etc/redis6381.conf &
/home/work/redis/bin/redis-server /home/work/redis/etc/redis6382.conf &
/home/work/redis/bin/redis-server /home/work/redis/etc/redis6383.conf &
/home/work/redis/bin/redis-server /home/work/redis/etc/redis6384.conf &
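Once the script has run, a quick loop can confirm that all six instances are answering. This is only a hypothetical check; -a test matches the password used by the stop script below.

for port in 6379 6380 6381 6382 6383 6384; do
    # Every instance should reply PONG.
    /home/work/redis/bin/redis-cli -p "$port" -a test ping
done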
Stop script stop_all.sh
/home/work/redis/bin/redis-cli -p 6379 -a test shutdown
/home/work/redis/bin/redis-cli -p 6380 -a test shutdown
/home/work/redis/bin/redis-cli -p 6381 -a test shutdown
/home/work/redis/bin/redis-cli -p 6382 -a test shutdown
/home/work/redis/bin/redis-cli -p 6383 -a test shutdown
/home/work/redis/bin/redis-cli -p 6384 -a test shutdown
The following error is reported:
[work@jsy-bj-test00 src]$ ./redis-trib.rb create --replicas 1 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384
/usr/share/rubygems/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- redis (LoadError)
	from /usr/share/rubygems/rubygems/core_ext/kernel_require.rb:55:in `require'
	from ./redis-trib.rb:25:in `<main>'
[root@jsy-bj-test00 ~]# gem install redis
Fetching: redis-4.1.3.gem (100%)
ERROR:  Error installing redis:
	redis requires Ruby version >= 2.3.0.
Fix: the Ruby that ships with CentOS 7 (2.0.0) is older than the 2.3.0 the redis gem requires, so download the latest stable Ruby source release from the official site and build it from source.
[work@jsy-bj-test00 soft]$ tar zxvf ruby-2.7.0.tar.gz
[work@jsy-bj-test00 soft]$ cd ruby-2.7.0
[work@jsy-bj-test00 ruby-2.7.0]$ ./configure --prefix=/home/work/ruby && make && make install
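Because Ruby was installed with --prefix=/home/work/ruby, its bin directory has to come first on PATH so that the new ruby and gem are picked up instead of the system ones. A minimal sketch (add the export to ~/.bash_profile to make it permanent):

[work@jsy-bj-test00 ~]$ export PATH=/home/work/ruby/bin:$PATH
[work@jsy-bj-test00 ~]$ ruby --version
[work@jsy-bj-test00 ~]$ gem --version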
Install the redis gem (the Ruby client library that redis-trib.rb depends on)
[work@jsy-bj-test00 bin]$ gem install redis
Run the cluster creation again
[work@jsy-bj-test00 src]$ ./redis-trib.rb create --replicas 1 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
......
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 127.0.0.1:6379)
......
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Creation succeeded; check the cluster status
[work@jsy-bj-test00 ~]$ ./redis/bin/redis-cli -h 127.0.0.1 -p 6379 -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_sent:627
cluster_stats_messages_received:627
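As an optional smoke test, writing a key through any node with -c shows the routing in action: redis-cli follows the MOVED redirection, so the key is stored on whichever master owns its hash slot. The key and value below (foo/bar) are just examples.

[work@jsy-bj-test00 ~]$ ./redis/bin/redis-cli -h 127.0.0.1 -p 6379 -c set foo bar
[work@jsy-bj-test00 ~]$ ./redis/bin/redis-cli -h 127.0.0.1 -p 6379 -c get foo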
Check the cluster node information
127.0.0.1:6379> cluster nodes
565246bf31d8e05e464db7455521b1a9f165a9cd 127.0.0.1:6380 master - 0 1578447776230 2 connected 5461-10922
1b99b2a1e4b530501476ab48422c75f30423fd19 127.0.0.1:6383 slave 565246bf31d8e05e464db7455521b1a9f165a9cd 0 1578447778233 5 connected
735ad5778458059316794b9378d4b81aaff20322 127.0.0.1:6379 myself,master - 0 0 1 connected 0-5460
757c2c11ecebfc607aa10a6877e348d0e2da484f 127.0.0.1:6381 master - 0 1578447777732 3 connected 10923-16383
f22efc2bcfcd11cee6487ebc9c75de3b59f5e1d0 127.0.0.1:6382 slave 735ad5778458059316794b9378d4b81aaff20322 0 1578447776230 4 connected
b8bba94b9647caa8600363144fd7108082e45f56 127.0.0.1:6384 slave 757c2c11ecebfc607aa10a6877e348d0e2da484f 0 1578447777232 6 connected

# This is an important command. The fields we care about are:
# 1st field: the node ID
# 2nd field: IP:PORT@TCP-port -- a gotcha here: jedis versions before 2.9.0 fail to parse the '@' part
# 3rd field: flags (master, slave, myself, fail, ...)
# 4th field: for a slave, the node ID of its master
# last two fields: the link state and the slot ranges held by the node
# Add nodes to the cluster: first copy two config files and adjust their contents
[work@jsy-bj-test00 etc]$ cp -p redis.conf redis6385.conf
[work@jsy-bj-test00 etc]$ cp -p redis.conf redis6386.conf
[work@jsy-bj-test00 etc]$ sed -i 's/6379/6385/g' redis6385.conf
[work@jsy-bj-test00 etc]$ sed -i 's/6379/6386/g' redis6386.conf
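Before starting the new nodes, an optional grep confirms that the global sed rewrote every 6379-derived setting in the new files:

[work@jsy-bj-test00 etc]$ grep -E '^(port|pidfile|logfile|dbfilename|cluster-config-file|appendfilename)' redis6385.conf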
Start the 6385 node
[work@jsy-bj-test00 bin]$ ./redis-server /home/work/redis/etc/redis6385.conf &
Symlink the cluster management tool into the redis bin directory
[work@jsy-bj-test00 bin]$ ln -s /home/work/soft/redis-3.2.11/src/redis-trib.rb /home/work/redis/bin/redis-trib.rb
Add the new node to the cluster as a master
[work@jsy-bj-test00 bin]$ ./redis-trib.rb add-node 127.0.0.1:6385 127.0.0.1:6379
>>> Adding node 127.0.0.1:6385 to cluster 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
......
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:6385 to make it join the cluster.
[OK] New node added correctly.
Check the cluster status: the new node has not been assigned any slots yet
[work@jsy-bj-test00 bin]$ ./redis-cli -h 127.0.0.1 -p 6379 -c cluster nodes
d00d05f601df0b69df0c2cc532b636d2c83347be 127.0.0.1:6385 master - 0 1578450280191 0 connected
Assign slots to the 6385 node
[work@jsy-bj-test00 bin]$ ./redis-trib.rb reshard 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
......
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 500
What is the receiving node ID? d00d05f601df0b69df0c2cc532b636d2c83347be
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:all
Do you want to proceed with the proposed reshard plan (yes/no)? yes

# 1st prompt: the number of slots to move
# 2nd prompt: the node ID that will receive the slots
# 3rd prompt: enter "all" to take slots from all existing master nodes
# 4th prompt: enter "yes" to start moving the slots to the target node

# Check the 6385 node again; slots have now been assigned
[work@jsy-bj-test00 bin]$ ./redis-cli -h 127.0.0.1 -p 6379 -c cluster nodes
d00d05f601df0b69df0c2cc532b636d2c83347be 127.0.0.1:6385 master - 0 1578452422167 7 connected 0-165 5461-5627 10923-11088
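After a reshard it is also worth letting redis-trib re-verify the cluster; its check subcommand reports open slots and slot coverage:

[work@jsy-bj-test00 bin]$ ./redis-trib.rb check 127.0.0.1:6379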
Add a slave node: start the 6386 node and join it to the cluster
[work@jsy-bj-test00 bin]$ ./redis-server /home/work/redis/etc/redis6386.conf &
[work@jsy-bj-test00 bin]$ ./redis-trib.rb add-node --slave --master-id d00d05f601df0b69df0c2cc532b636d2c83347be 127.0.0.1:6386 127.0.0.1:6385
>>> Adding node 127.0.0.1:6386 to cluster 127.0.0.1:6385
>>> Performing Cluster Check (using node 127.0.0.1:6385)
......
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:6386 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 127.0.0.1:6385.
[OK] New node added correctly.
Check the status of the 6386 node
[work@jsy-bj-test00 bin]$ ./redis-cli -h 127.0.0.1 -p 6379 -c cluster nodes
6845878cbef3fe25f19a70a8db3eb29abb1b9ea6 127.0.0.1:6386 slave d00d05f601df0b69df0c2cc532b636d2c83347be 0 1578452805439 7 connected
Remove a node
[work@jsy-bj-test00 bin]$ ./redis-trib.rb del-node 127.0.0.1:6383 1b99b2a1e4b530501476ab48422c75f30423fd19
>>> Removing node 1b99b2a1e4b530501476ab48422c75f30423fd19 from cluster 127.0.0.1:6383
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
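Note that del-node succeeds directly here only because 127.0.0.1:6383 is a slave and holds no slots. To remove a master such as 6385, its slots have to be resharded away first; a rough sketch (the node ID is the 6385 ID from above, and the receiving node can be any remaining master):

# Move all of 6385's slots to another master first: answer the prompts with the
# remaining master's ID as the receiver and 6385's ID as the only source node,
# then the now-empty master can be removed.
[work@jsy-bj-test00 bin]$ ./redis-trib.rb reshard 127.0.0.1:6379
[work@jsy-bj-test00 bin]$ ./redis-trib.rb del-node 127.0.0.1:6385 d00d05f601df0b69df0c2cc532b636d2c83347be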