
Category: NOSQL

2019-03-05 17:13:41

Environment:
OS: CentOS 7
Redis: 3.2.11
Master                Slave
192.168.1.118:7001    192.168.1.85:8001
192.168.1.118:7002    192.168.1.85:8002
192.168.1.118:7003    192.168.1.85:8003


1.1 Download Redis
Download it from the official site; the version used here is redis-3.2.11.tar.gz.
Download URL:


-----------------------------Deployment on 192.168.1.118----------------------------
1.2 Install the software (on the 192.168.1.118 machine first)

First create the installation directory:
mkdir -p /opt/redis_cluster

1.2.1 Extract and build
[root@localhost soft]# tar -xvf redis-3.2.11.tar.gz
[root@localhost soft]# cp -R redis-3.2.11 /opt/
[root@localhost opt]# cd /opt/redis-3.2.11/
[root@localhost redis-3.2.11]# make
[root@localhost redis-3.2.11]# make test
cd src && make test
make[1]: Entering directory `/opt/redis-3.2.9/src'
You need tcl 8.5 or newer in order to run the Redis test
make[1]: *** [test] Error 1
make[1]: Leaving directory `/opt/redis-3.2.9/src'
make: *** [test] Error 2

make test requires Tcl, so install it first:
[root@localhost redis-3.2.11]# yum -y install tcl

Install Redis into the cluster directory:
[root@localhost src]# cd /opt/redis-3.2.11/src
[root@localhost src]# make PREFIX=/opt/redis_cluster install

After the command above, the /opt/redis_cluster/bin directory contains the following files:
[root@localhost bin]# ls
redis-benchmark  redis-check-aof  redis-check-rdb  redis-cli  redis-sentinel  redis-server

1.2.2 Create the cluster directories
[root@localhost redis_cluster]# cd /opt/redis_cluster
[root@localhost redis_cluster]# mkdir conf  ## config files
[root@localhost redis_cluster]# mkdir data  ## data files
[root@localhost redis_cluster]# mkdir log   ## log files, one per node
[root@localhost redis_cluster]# mkdir run   ## pid files
Each node gets its own directory under data:
[root@localhost redis_cluster]# mkdir -p ./data/c1
[root@localhost redis_cluster]# mkdir -p ./data/c2
[root@localhost redis_cluster]# mkdir -p ./data/c3
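The mkdir steps above can be collapsed into a single command. A small sketch, illustrated in a scratch directory rather than /opt/redis_cluster:

```shell
# One-liner equivalent of the mkdir steps above; run it inside
# /opt/redis_cluster on the real machine (scratch directory used here).
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p conf data/c1 data/c2 data/c3 log run
ls
```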

1.2.3 Create the config file for the first cluster node (c1.conf)
daemonize yes
pidfile /opt/redis_cluster/run/c1.pid
port 7001 ## change on the other nodes
tcp-backlog 511
timeout 0
tcp-keepalive 0
loglevel notice
logfile "/opt/redis_cluster/log/c1.log" ## change on the other nodes
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump-c1.rdb ## change on the other nodes
dir /opt/redis_cluster/data/c1 ## change on the other nodes
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly yes  ## changed: with yes the node recovers from the AOF file; with no it recovers from the RDB file
appendfilename "c1.aof" ## change on the other nodes
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
cluster-enabled yes   ## enable cluster mode
##cluster-config-file /opt/redis_cluster/conf/c1.conf ## change on the other nodes; an absolute path does not work here and fails with "Unrecoverable error: corrupted cluster config file"
cluster-config-file "c1.conf"
cluster-node-timeout 15000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
# Generated by CONFIG REWRITE
#masterauth "richinfo123"
#requirepass "richinfo123"
protected-mode no
bind 192.168.1.118 ## change to the machine's own IP, e.g. bind 192.168.1.85

1.2.4 Create the other config files
Copy the config file from step 1.2.3 twice, naming the copies c2.conf and c3.conf:
[root@localhost conf]# cp c1.conf c2.conf
[root@localhost conf]# cp c1.conf c3.conf

Then adjust the node-specific parameters in each copy (replace every c1 with c2 or c3 and change the port accordingly).
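The copy-and-edit step can be scripted with sed. A sketch that assumes the node-specific strings are exactly "c1" and "7001" (double-check the result); it is illustrated in a scratch directory with a minimal stand-in for c1.conf, but on the real machine the same loop would run inside /opt/redis_cluster/conf:

```shell
# Derive c2.conf/c3.conf from c1.conf by rewriting "c1" and the port.
# Scratch-directory illustration; run the loop in /opt/redis_cluster/conf.
tmp=$(mktemp -d) && cd "$tmp"
printf 'port 7001\npidfile /opt/redis_cluster/run/c1.pid\ndbfilename dump-c1.rdb\n' > c1.conf
for n in 2 3; do
  sed -e "s/c1/c${n}/g" -e "s/7001/700${n}/" c1.conf > "c${n}.conf"
done
grep -H . c2.conf c3.conf
```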


-----------------------------Deployment on 192.168.1.85--------------------------
Same steps as on 192.168.1.118; the key point is to change the port numbers in the config files (8001-8003 on this machine).


-----------------------------Startup------------------------------------------------
1. Start the 3 nodes on 192.168.1.118
/opt/redis_cluster/bin/redis-server /opt/redis_cluster/conf/c1.conf
/opt/redis_cluster/bin/redis-server /opt/redis_cluster/conf/c2.conf
/opt/redis_cluster/bin/redis-server /opt/redis_cluster/conf/c3.conf
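The three startup lines above can also be driven by a short loop; it is printed here (via echo) so it can be inspected first, and dropping the echo actually starts the nodes. The same loop works on the 192.168.1.85 side:

```shell
# Print the three startup commands; remove "echo" to execute them.
start_nodes() {
  for n in 1 2 3; do
    echo /opt/redis_cluster/bin/redis-server "/opt/redis_cluster/conf/c${n}.conf"
  done
}
start_nodes
```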


2. Check the processes
[root@localhost conf]# ps -ef|grep redis
root     13815     1  0 11:14 ?        00:00:00 /opt/redis_cluster/bin/redis-server 192.168.1.118:7001 [cluster]
root     13830     1  0 11:14 ?        00:00:00 /opt/redis_cluster/bin/redis-server 192.168.1.118:7002 [cluster]
root     13983     1  0 11:17 ?        00:00:00 /opt/redis_cluster/bin/redis-server 192.168.1.118:7003 [cluster]

3. Start the 3 nodes on 192.168.1.85
/opt/redis_cluster/bin/redis-server /opt/redis_cluster/conf/c1.conf
/opt/redis_cluster/bin/redis-server /opt/redis_cluster/conf/c2.conf
/opt/redis_cluster/bin/redis-server /opt/redis_cluster/conf/c3.conf


4. Check the processes
[root@localhost log]# ps -ef|grep redis
root     24210     1  0 11:19 ?        00:00:00 /opt/redis_cluster/bin/redis-server 192.168.1.85:8001 [cluster]
root     24294     1  0 11:20 ?        00:00:00 /opt/redis_cluster/bin/redis-server 192.168.1.85:8002 [cluster]
root     24343     1  0 11:21 ?        00:00:00 /opt/redis_cluster/bin/redis-server 192.168.1.85:8003 [cluster]



-----------------------------Cluster setup------------------------------------------------
This step only needs to be run on one machine; here it is run on 192.168.1.118.
1. Copy redis-trib.rb from the source tree to a convenient directory, since make install does not install it into bin:
[root@localhost src]# cd /opt/redis-3.2.11/src
[root@localhost src]# cp redis-trib.rb /opt/redis_cluster/


2. Initialize the cluster
/opt/redis_cluster/redis-trib.rb create --replicas 1 192.168.1.118:7001 192.168.1.118:7002 192.168.1.118:7003 192.168.1.85:8001 192.168.1.85:8002 192.168.1.85:8003
This fails with:
/usr/bin/env: ruby: No such file or directory
Fix:
yum -y install ruby


Run the cluster initialization again
[root@localhost redis_cluster]# /opt/redis_cluster/redis-trib.rb create --replicas 1 192.168.1.118:7001 192.168.1.118:7002 192.168.1.118:7003 192.168.1.85:8001 192.168.1.85:8002 192.168.1.85:8003
/usr/share/rubygems/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- redis (LoadError)
        from /usr/share/rubygems/rubygems/core_ext/kernel_require.rb:55:in `require'
        from /opt/redis_cluster/redis-trib.rb:25:in `'
Fix:
[root@localhost redis_cluster]# gem install redis
Fetching: redis-4.1.0.gem (100%)
ERROR:  Error installing redis:
        redis requires Ruby version >= 2.2.2


The redis gem requires a newer Ruby, so install one via RVM:
2.1 Install curl
yum install curl
2.2 Install RVM (for the exact commands see the Ruby website and its installation guide)
The RVM installation commands:
[root@linux ~]# gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
[root@linux ~]# curl -sSL | bash -s stable
[root@linux ~]# find / -name rvm -print
[root@linux ~]# source /usr/local/rvm/scripts/rvm


2.3 List the Ruby versions known to RVM
[root@linux ~]# rvm list known


2.4 Install a Ruby version
[root@linux ~]# rvm install 2.4.5


2.5 Switch to that Ruby version:
[root@linux ~]# rvm use 2.4.5


2.6 Make it the default (set Ruby 2.4.5 as the default, since 1.8.3 is also installed)
[root@linux ~]# rvm use 2.4.5 --default


2.7 Check the Ruby version:
[root@linux ~]# ruby --version


2.8 Install the redis gem:
[root@linux ~]# gem install redis


Run the cluster initialization again:


[root@localhost redis_cluster]# /opt/redis_cluster/redis-trib.rb create --replicas 1 192.168.1.118:7001 192.168.1.118:7002 192.168.1.118:7003 192.168.1.85:8001 192.168.1.85:8002 192.168.1.85:8003
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.1.118:7001
192.168.1.85:8001
192.168.1.118:7002
Adding replica 192.168.1.85:8002 to 192.168.1.118:7001
Adding replica 192.168.1.118:7003 to 192.168.1.85:8001
Adding replica 192.168.1.85:8003 to 192.168.1.118:7002
M: 4c5b3e7e8902137d9477f63c4e177c31a6680870 192.168.1.118:7001
   slots:0-5460 (5461 slots) master
M: 289f2f8007bc0eeedbaafe4ebf9d8c14025c400a 192.168.1.118:7002
   slots:10923-16383 (5461 slots) master
S: b2b698310e6bf18bcf42947cee0e824ec56cdc87 192.168.1.118:7003
   replicates cba3f434f3354b84b50999f10dee9ef2ba958aa2
M: cba3f434f3354b84b50999f10dee9ef2ba958aa2 192.168.1.85:8001
   slots:5461-10922 (5462 slots) master
S: 7e855735563529a3c6b828c2febf26307b1ada68 192.168.1.85:8002
   replicates 4c5b3e7e8902137d9477f63c4e177c31a6680870
S: b8bc2f7b1ea32bb7d55990ad4aa761d3acea85b4 192.168.1.85:8003
   replicates 289f2f8007bc0eeedbaafe4ebf9d8c14025c400a
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join..
>>> Performing Cluster Check (using node 192.168.1.118:7001)
M: 4c5b3e7e8902137d9477f63c4e177c31a6680870 192.168.1.118:7001
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: cba3f434f3354b84b50999f10dee9ef2ba958aa2 192.168.1.85:8001
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: b8bc2f7b1ea32bb7d55990ad4aa761d3acea85b4 192.168.1.85:8003
   slots: (0 slots) slave
   replicates 289f2f8007bc0eeedbaafe4ebf9d8c14025c400a
S: b2b698310e6bf18bcf42947cee0e824ec56cdc87 192.168.1.118:7003
   slots: (0 slots) slave
   replicates cba3f434f3354b84b50999f10dee9ef2ba958aa2
M: 289f2f8007bc0eeedbaafe4ebf9d8c14025c400a 192.168.1.118:7002
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 7e855735563529a3c6b828c2febf26307b1ada68 192.168.1.85:8002
   slots: (0 slots) slave
   replicates 4c5b3e7e8902137d9477f63c4e177c31a6680870
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@localhost redis_cluster]# 


Log in to the cluster
[root@localhost bin]# ./redis-cli -c -h 192.168.1.118 -p 7001
192.168.1.118:7001>  cluster nodes
4c5b3e7e8902137d9477f63c4e177c31a6680870 192.168.1.118:7001 myself,master - 0 0 1 connected 0-5460
cba3f434f3354b84b50999f10dee9ef2ba958aa2 192.168.1.85:8001 master - 0 1551765883486 4 connected 5461-10922
b8bc2f7b1ea32bb7d55990ad4aa761d3acea85b4 192.168.1.85:8003 slave 289f2f8007bc0eeedbaafe4ebf9d8c14025c400a 0 1551765881484 6 connected
b2b698310e6bf18bcf42947cee0e824ec56cdc87 192.168.1.118:7003 slave cba3f434f3354b84b50999f10dee9ef2ba958aa2 0 1551765879476 4 connected
289f2f8007bc0eeedbaafe4ebf9d8c14025c400a 192.168.1.118:7002 master - 0 1551765878474 2 connected 10923-16383
7e855735563529a3c6b828c2febf26307b1ada68 192.168.1.85:8002 slave 4c5b3e7e8902137d9477f63c4e177c31a6680870 0 1551765882485 5 connected


Check the cluster
[root@localhost redis_cluster]# ./redis-trib.rb check 192.168.1.118:7001
>>> Performing Cluster Check (using node 192.168.1.118:7001)
M: 4c5b3e7e8902137d9477f63c4e177c31a6680870 192.168.1.118:7001
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: cba3f434f3354b84b50999f10dee9ef2ba958aa2 192.168.1.85:8001
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: b8bc2f7b1ea32bb7d55990ad4aa761d3acea85b4 192.168.1.85:8003
   slots: (0 slots) slave
   replicates 289f2f8007bc0eeedbaafe4ebf9d8c14025c400a
S: b2b698310e6bf18bcf42947cee0e824ec56cdc87 192.168.1.118:7003
   slots: (0 slots) slave
   replicates cba3f434f3354b84b50999f10dee9ef2ba958aa2
M: 289f2f8007bc0eeedbaafe4ebf9d8c14025c400a 192.168.1.118:7002
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 7e855735563529a3c6b828c2febf26307b1ada68 192.168.1.85:8002
   slots: (0 slots) slave
   replicates 4c5b3e7e8902137d9477f63c4e177c31a6680870
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.


---------------------Verify replication-----------------------------
Log in to one of the master nodes:
[root@localhost bin]# ./redis-cli -c -h 192.168.1.118 -p 7001
Set some keys:
set name01 'huangxueliang01'
set name02 'huangxueliang02'
set name03 'huangxueliang03'
set name04 'huangxueliang04'
set name05 'huangxueliang05'
set name06 'huangxueliang06'
set name07 'huangxueliang07'
set name08 'huangxueliang08'
set name09 'huangxueliang09'
set name10 'huangxueliang10'
set name11 'huangxueliang11'
set name12 'huangxueliang12'
set name13 'huangxueliang13'
set name14 'huangxueliang14'
set name15 'huangxueliang15'
set name16 'huangxueliang16'
set name17 'huangxueliang17'
set name18 'huangxueliang18'
set name19 'huangxueliang19'
set name20 'huangxueliang20'
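Rather than typing the 20 SET commands by hand, they can be generated and piped into redis-cli in one go. The generator is shown first so the commands can be inspected; the commented-out last line (host/port as in this walkthrough) would run them against the cluster:

```shell
# Generate "set name01 'huangxueliang01'" .. "set name20 'huangxueliang20'".
gen_sets() {
  for i in $(seq -w 1 20); do
    printf "set name%s 'huangxueliang%s'\n" "$i" "$i"
  done
}
gen_sets
# gen_sets | /opt/redis_cluster/bin/redis-cli -c -h 192.168.1.118 -p 7001
```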


Log in to a slave node and GET a key to check whether the cluster replicated the data; the output below shows that it did.
[root@localhost bin]# ./redis-cli -c -h 192.168.1.85 -p 8002
192.168.1.85:8002> get name1
-> Redirected to slot [5798] located at 192.168.1.85:8001
"huangxueliang1"



--------------Changing the node ports---------------------
1. Persist the data
Log in to each of the 6 nodes and run SAVE or BGSAVE to persist the data.
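The six per-node saves can be issued from one shell. A sketch using the hosts and ports from this walkthrough; the commands are printed via echo so they can be reviewed first, and removing the echo runs them:

```shell
# Print (or, without "echo", run) a BGSAVE against all six nodes.
save_all() {
  for hp in 192.168.1.118:7001 192.168.1.118:7002 192.168.1.118:7003 \
            192.168.1.85:8001  192.168.1.85:8002  192.168.1.85:8003; do
    echo /opt/redis_cluster/bin/redis-cli -h "${hp%:*}" -p "${hp#*:}" bgsave
  done
}
save_all
```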


2. Shut down the cluster
192.168.1.118
./redis-cli -c -h 192.168.1.118 -p 7001 shutdown
./redis-cli -c -h 192.168.1.118 -p 7002 shutdown
./redis-cli -c -h 192.168.1.118 -p 7003 shutdown


192.168.1.85
./redis-cli -c -h 192.168.1.85 -p 8001 shutdown
./redis-cli -c -h 192.168.1.85 -p 8002 shutdown
./redis-cli -c -h 192.168.1.85 -p 8003 shutdown


3. Change the port numbers in the config files
192.168.1.118(c1.conf,c2.conf,c3.conf)
7001->1001
7002->1002
7003->1003


192.168.1.85(c1.conf,c2.conf,c3.conf)
8001->2001
8002->2002
8003->2003
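The renumbering above can be done in place with sed -i. A scratch-directory sketch with stand-in files for the 192.168.1.118 side (700n to 100n); on the real machine run the loop inside /opt/redis_cluster/conf, and mirror it with 800n to 200n on 192.168.1.85:

```shell
# Stand-ins for c1.conf..c3.conf, then the in-place port rewrite.
tmp=$(mktemp -d) && cd "$tmp"
for n in 1 2 3; do printf 'port 700%s\n' "$n" > "c${n}.conf"; done
for n in 1 2 3; do sed -i "s/700${n}/100${n}/g" "c${n}.conf"; done
grep -H port c1.conf c2.conf c3.conf
```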


4. Remove the following files
A. The file named by cluster-config-file
B. The aof file in the data directory (present when appendonly yes is set)
C. The rdb file in the data directory
Here the rdb file is renamed with mv instead of deleted:
[root@localhost c1]# rm c1.conf
[root@localhost c1]# mv dump-c1.rdb bak_dump-c1.rdb ## renamed so it can be renamed back once the cluster is rebuilt, which restores the original data
Do the same on the other nodes.


5. Start the nodes
192.168.1.118
/opt/redis_cluster/bin/redis-server /opt/redis_cluster/conf/c1.conf
/opt/redis_cluster/bin/redis-server /opt/redis_cluster/conf/c2.conf
/opt/redis_cluster/bin/redis-server /opt/redis_cluster/conf/c3.conf


192.168.1.85
/opt/redis_cluster/bin/redis-server /opt/redis_cluster/conf/c1.conf
/opt/redis_cluster/bin/redis-server /opt/redis_cluster/conf/c2.conf
/opt/redis_cluster/bin/redis-server /opt/redis_cluster/conf/c3.conf


6. Check the current cluster state
[root@localhost redis_cluster]# ./redis-trib.rb check 192.168.1.118:1001
>>> Performing Cluster Check (using node 192.168.1.118:1001)
M: 9573a3a6802d076c65a99a092cb8fafc27f8b2f7 192.168.1.118:1001
   slots: (0 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[ERR] Not all 16384 slots are covered by nodes.


At this point the whole cluster is empty.




7. Initialize the cluster
Log in to one of the nodes (192.168.1.118) and recreate the cluster:
[root@localhost redis_cluster]# /opt/redis_cluster/redis-trib.rb create --replicas 1 192.168.1.118:1001 192.168.1.118:1002 192.168.1.118:1003 192.168.1.85:2001 192.168.1.85:2002 192.168.1.85:2003
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.1.118:1001
192.168.1.85:2001
192.168.1.118:1002
Adding replica 192.168.1.85:2002 to 192.168.1.118:1001
Adding replica 192.168.1.118:1003 to 192.168.1.85:2001
Adding replica 192.168.1.85:2003 to 192.168.1.118:1002
M: 9573a3a6802d076c65a99a092cb8fafc27f8b2f7 192.168.1.118:1001
   slots:0-5460 (5461 slots) master
M: ea4eb2631a813ba878d96b040df0ef2e5de8dce5 192.168.1.118:1002
   slots:10923-16383 (5461 slots) master
S: 87b1a65e8e1207c615a4d96827475877f255e541 192.168.1.118:1003
   replicates 5fd7db9819bbb4d785f96880a8e2e04b834982a9
M: 5fd7db9819bbb4d785f96880a8e2e04b834982a9 192.168.1.85:2001
   slots:5461-10922 (5462 slots) master
S: 6ab475bbe6434560b3fda1d2a15c58ab33d560f0 192.168.1.85:2002
   replicates 9573a3a6802d076c65a99a092cb8fafc27f8b2f7
S: cd6bd556955e4074d923fe14191dfae6ee2afcc0 192.168.1.85:2003
   replicates ea4eb2631a813ba878d96b040df0ef2e5de8dce5
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join....
>>> Performing Cluster Check (using node 192.168.1.118:1001)
M: 9573a3a6802d076c65a99a092cb8fafc27f8b2f7 192.168.1.118:1001
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 87b1a65e8e1207c615a4d96827475877f255e541 192.168.1.118:1003
   slots: (0 slots) slave
   replicates 5fd7db9819bbb4d785f96880a8e2e04b834982a9
M: ea4eb2631a813ba878d96b040df0ef2e5de8dce5 192.168.1.118:1002
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: cd6bd556955e4074d923fe14191dfae6ee2afcc0 192.168.1.85:2003
   slots: (0 slots) slave
   replicates ea4eb2631a813ba878d96b040df0ef2e5de8dce5
M: 5fd7db9819bbb4d785f96880a8e2e04b834982a9 192.168.1.85:2001
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 6ab475bbe6434560b3fda1d2a15c58ab33d560f0 192.168.1.85:2002
   slots: (0 slots) slave
   replicates 9573a3a6802d076c65a99a092cb8fafc27f8b2f7
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.


After initialization completes, an rdb file is generated in each node's data directory.




8. Check whether the cluster still holds the earlier data
[root@localhost bin]# ./redis-cli -c -h 192.168.1.118 -p 1001
192.168.1.118:1001> keys *
(empty list or set)


It is empty, no data; next we restore the data.




----------------------Data recovery----------------------------------
1. Stop the current cluster
192.168.1.118
./redis-cli -c -h 192.168.1.118 -p 1001 shutdown
./redis-cli -c -h 192.168.1.118 -p 1002 shutdown
./redis-cli -c -h 192.168.1.118 -p 1003 shutdown


192.168.1.85
./redis-cli -c -h 192.168.1.85 -p 2001 shutdown
./redis-cli -c -h 192.168.1.85 -p 2002 shutdown
./redis-cli -c -h 192.168.1.85 -p 2003 shutdown


2. Rename the previously backed-up rdb file back to its original name, overwriting the current rdb file.
Note: if appendonly yes is enabled, recovery uses the aof file, so restore the backed-up aof file over the current one instead; testing showed that with appendonly yes the rdb file is not used for recovery.
[root@localhost c1]# mv bak_dump-c1.rdb dump-c1.rdb      
mv: overwrite ‘dump-c1.rdb’? y
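The rename-back step can be done non-interactively with mv -f, which skips the "overwrite?" prompt seen above. A scratch-directory stand-in for data/c1 (repeat per node):

```shell
# Stand-ins for the backup and current rdb, then the silent restore.
tmp=$(mktemp -d) && cd "$tmp"
touch bak_dump-c1.rdb dump-c1.rdb
mv -f bak_dump-c1.rdb dump-c1.rdb   # -f overwrites without prompting
ls
```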


3. Start the cluster
192.168.1.118
/opt/redis_cluster/bin/redis-server /opt/redis_cluster/conf/c1.conf
/opt/redis_cluster/bin/redis-server /opt/redis_cluster/conf/c2.conf
/opt/redis_cluster/bin/redis-server /opt/redis_cluster/conf/c3.conf


192.168.1.85
/opt/redis_cluster/bin/redis-server /opt/redis_cluster/conf/c1.conf
/opt/redis_cluster/bin/redis-server /opt/redis_cluster/conf/c2.conf
/opt/redis_cluster/bin/redis-server /opt/redis_cluster/conf/c3.conf


4. Check the restored data
[root@localhost bin]# ./redis-cli -c -h 192.168.1.118 -p 1001
192.168.1.118:1001> keys *
1) "name04"
2) "name13"
3) "name17"
4) "name08"
192.168.1.118:1001> 
192.168.1.118:1001> get name20
-> Redirected to slot [10589] located at 192.168.1.85:2001
"huangxueliang20"


The data is back on this node; check the other nodes one by one.

-- The End--