Category: MySQL/PostgreSQL
2012-09-28 21:37:15
MongoDB Replica Sets provide not only a high-availability solution but also a way to balance load. Adding and removing replica set nodes is very common in practice: when the application's read load spikes and a three-node set can no longer keep up, you add nodes to spread the pressure; when the load drops, you remove nodes to cut hardware costs. In short, this is ongoing, long-term work.
The official documentation gives us two ways to add a node: one uses the oplog alone, the other combines a database snapshot (--fastsync) with the oplog. Both are described below.
①、Configure and start the new node, giving it port 28013
[root@localhost ~]# mkdir -p /data/data/r3
[root@localhost ~]# echo "this is rs1 super secret key" > /data/key/r3
[root@localhost ~]# chmod 600 /data/key/r3
[root@localhost ~]# /Apps/mongo/bin/mongod --replSet rs1 --keyFile /data/key/r3 --fork --port 28013 --dbpath /data/data/r3 --logpath=/data/log/r3.log --logappend
all output going to: /data/log/r3.log
forked process: 10553
[root@localhost ~]#
②、Add this new node to the existing replica set
rs1:PRIMARY> rs.add("localhost:28013") { "ok" : 1 } |
③、By watching the replica set status we can clearly see, step by step, how the new 28013 node is added.
Step 1: Initialization
rs1:PRIMARY> rs.status()
{
    "set" : "rs1",
    "date" : ISODate("2012-05-31T12:17:44Z"),
    "myState" : 1,
    "members" : [
        ……
        {
            "_id" : 3,
            "name" : "localhost:28013",
            "health" : 0,
            "state" : 6,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : {
                "t" : 0,
                "i" : 0
            },
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2012-05-31T12:17:43Z"),
            "errmsg" : "still initializing"
        }
    ],
    "ok" : 1
}
Step 2: Data synchronization
rs1:PRIMARY> rs.status()
{
    "set" : "rs1",
    "date" : ISODate("2012-05-31T12:18:07Z"),
    "myState" : 1,
    "members" : [
        ……
        {
            "_id" : 3,
            "name" : "localhost:28013",
            "health" : 1,
            "state" : 3,
            "stateStr" : "RECOVERING",
            "uptime" : 16,
            "optime" : {
                "t" : 0,
                "i" : 0
            },
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2012-05-31T12:18:05Z"),
            "errmsg" : "initial sync need a member to be primary or secondary to do our initial sync"
        }
    ],
    "ok" : 1
}
Step 3: Initial sync completed
rs1:PRIMARY> rs.status()
{
    "set" : "rs1",
    "date" : ISODate("2012-05-31T12:18:08Z"),
    "myState" : 1,
    "members" : [
        ……
        {
            "_id" : 3,
            "name" : "localhost:28013",
            "health" : 1,
            "state" : 3,
            "stateStr" : "RECOVERING",
            "uptime" : 17,
            "optime" : {
                "t" : 1338466661000,
                "i" : 1
            },
            "optimeDate" : ISODate("2012-05-31T12:17:41Z"),
            "lastHeartbeat" : ISODate("2012-05-31T12:18:07Z"),
            "errmsg" : "initial sync done"
        }
    ],
    "ok" : 1
}
Step 4: Node added, state is normal
rs1:PRIMARY> rs.status()
{
    "set" : "rs1",
    "date" : ISODate("2012-05-31T12:18:10Z"),
    "myState" : 1,
    "members" : [
        ……
        {
            "_id" : 3,
            "name" : "localhost:28013",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 19,
            "optime" : {
                "t" : 1338466661000,
                "i" : 1
            },
            "optimeDate" : ISODate("2012-05-31T12:17:41Z"),
            "lastHeartbeat" : ISODate("2012-05-31T12:18:09Z")
        }
    ],
    "ok" : 1
}
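If you would rather not run rs.status() by hand until the state flips to SECONDARY, the same check can be scripted. A minimal polling sketch for the mongo shell on the primary ("localhost:28013" and the one-second interval are just this walkthrough's assumptions):

// poll rs.status() until the new member reports SECONDARY
var target = "localhost:28013";
while (true) {
    var member = null;
    rs.status().members.forEach(function (m) { if (m.name == target) member = m; });
    print(target + " : " + (member ? member.stateStr : "not in set"));
    if (member && member.stateStr == "SECONDARY") break;  // initial sync finished
    sleep(1000);                                          // check again in one second
}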
④、Verify that the data has been synchronized
[root@localhost data]# /Apps/mongo/bin/mongo -port 28013
MongoDB shell version: 1.8.1
connecting to: 127.0.0.1:28013/test
rs1:SECONDARY> rs.slaveOk()
rs1:SECONDARY> db.c1.find()
{ "_id" : ObjectId("4fc760d2383ede1dce14ef86"), "age" : 10 }
rs1:SECONDARY>
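Another way to confirm the new secondary has caught up, without logging into it, is to ask the primary how far each secondary has synced. A quick check from the primary's shell (output omitted here):

rs1:PRIMARY> // for each secondary: the oplog time it has synced to and how far it is behind the primary
rs1:PRIMARY> db.printSlaveReplicationInfo()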
Adding a node directly via the oplog is simple and needs little manual intervention, but the oplog is a capped collection that recycles its space in a circular fashion, so adding a node this way can leave the data inconsistent: the oplog entries needed for the catch-up may already have been overwritten. That is not a problem, though, because we can combine a database snapshot (--fastsync) with the oplog. The workflow is to take the physical files of an existing replica set member as the initial data, then let the oplog replay the rest, until the new node's data is consistent.
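Whether the plain-oplog approach is safe therefore depends on the oplog window, i.e. the time span between the oldest and newest entries still held in the capped collection. Before adding a node that way it may be worth checking the window on the primary; a minimal check:

rs1:PRIMARY> // prints the configured oplog size and the time range it currently covers,
rs1:PRIMARY> // i.e. how far back a new member could still catch up purely from the oplog
rs1:PRIMARY> db.printReplicationInfo()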
①、Take a physical copy of one replica set member's data files to use as the initial data
[root@localhost ~]# scp -r /data/data/r3 /data/data/r4
[root@localhost ~]# echo "this is rs1 super secret key" > /data/key/r4
[root@localhost ~]# chmod 600 /data/key/r4
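Note that the files here are copied from a running mongod; if writes land during the copy, the snapshot may not be internally consistent. One hedged way to guard against that, assuming you can briefly freeze the node you copy from, is to flush and lock it for the duration of the copy, for example:

rs1:SECONDARY> // flush data files to disk and block further writes on this node
rs1:SECONDARY> db.getSiblingDB("admin").runCommand({ fsync : 1, lock : 1 })
rs1:SECONDARY> // ... copy the dbpath files while the node is locked ...
rs1:SECONDARY> // release the lock afterwards (the classic unlock call in shells of this era)
rs1:SECONDARY> db.getSiblingDB("admin").$cmd.sys.unlock.findOne()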
②、After taking the physical copy, insert a new document into the c1 collection, so that at the end we can verify this update is synchronized as well
rs1:PRIMARY> db.c1.find()
{ "_id" : ObjectId("4fc760d2383ede1dce14ef86"), "age" : 10 }
rs1:PRIMARY> db.c1.insert({age:20})
rs1:PRIMARY> db.c1.find()
{ "_id" : ObjectId("4fc760d2383ede1dce14ef86"), "age" : 10 }
{ "_id" : ObjectId("4fc7748f479e007bde6644ef"), "age" : 20 }
rs1:PRIMARY>
③、Start the new node on port 28014
/Apps/mongo/bin/mongod --replSet rs1 --keyFile /data/key/r4 --fork --port 28014 --dbpath /data/data/r4 --logpath=/data/log/r4.log --logappend --fastsync
④、Add the 28014 node
rs1:PRIMARY> rs.add("localhost:28014") { "ok" : 1 } |
⑤、Verify that the data has been synchronized
[root@localhost data]# /Apps/mongo/bin/mongo -port 28014
MongoDB shell version: 1.8.1
connecting to: 127.0.0.1:28014/test
rs1:SECONDARY> rs.slaveOk()
rs1:SECONDARY> db.c1.find()
{ "_id" : ObjectId("4fc760d2383ede1dce14ef86"), "age" : 10 }
{ "_id" : ObjectId("4fc7748f479e007bde6644ef"), "age" : 20 }
rs1:SECONDARY>
Now let's remove the two newly added nodes, 28013 and 28014, from the replica set. All it takes is the rs.remove command:
rs1:PRIMARY> rs.remove("localhost:28014") { "ok" : 1 } rs1:PRIMARY> rs.remove("localhost:28013") { "ok" : 1 } |
Checking the replica set status, we can see that only the three members 28010, 28011 and 28012 remain; the newly added 28013 and 28014 have both been removed successfully.
rs1:PRIMARY> rs.status()
{
    "set" : "rs1",
    "date" : ISODate("2012-05-31T14:08:29Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "localhost:28010",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "optime" : {
                "t" : 1338473273000,
                "i" : 1
            },
            "optimeDate" : ISODate("2012-05-31T14:07:53Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "localhost:28011",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 34,
            "optime" : {
                "t" : 1338473273000,
                "i" : 1
            },
            "optimeDate" : ISODate("2012-05-31T14:07:53Z"),
            "lastHeartbeat" : ISODate("2012-05-31T14:08:29Z")
        },
        {
            "_id" : 2,
            "name" : "localhost:28012",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 34,
            "optime" : {
                "t" : 1338473273000,
                "i" : 1
            },
            "optimeDate" : ISODate("2012-05-31T14:07:53Z"),
            "lastHeartbeat" : ISODate("2012-05-31T14:08:29Z")
        }
    ],
    "ok" : 1
}
rs1:PRIMARY>
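Keep in mind that rs.remove only takes the members out of the replica set configuration; the mongod processes on ports 28013 and 28014 keep running until they are stopped separately. One way to stop them cleanly is to ask each removed node to shut itself down from its own shell, for example (if the keyFile has authentication enforced, you may need to authenticate as an admin user first):

[root@localhost ~]# /Apps/mongo/bin/mongo -port 28013
> use admin
> // ask this mongod to shut down cleanly; repeat the same for port 28014
> db.shutdownServer()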
-------------------------------------------------------------------