Friendly reminder: this article may already be out of date by the time you read it; the latest version at the moment is 2.2.0.
The mongo shard cluster I configured yesterday threw the exception below and went down completely today:
- [Balancer] caught exception while doing balance: error checking clock skew of cluster mongotest12:30011,mongotest22:30011,mongotest32:30011 :: caused by :: 13650 clock skew of the cluster mongotest12:30011,mongotest22:30011,mongotest32:30011 is too far out of bounds to allow distributed locking.
It seems the clocks of the MongoDB servers have to agree, or at least must not drift too far apart. I took a look at the source code and recorded my findings below:
The error message comes from the DistributedLock class in client/distlock.cpp. DistributedLock provides a way to synchronize the state of cluster-wide tasks through the config db. Each task must have a name that is unique within the cluster, for example the data-balancing task 'balancer'. The lock information is stored in the locks collection of the config database. Each lock only takes effect within a predefined time window; when the class is initialized it automatically maintains this time and checks whether the lock has timed out.
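For reference, the balancer's lock document in the config database's locks collection looks roughly like this (the values here are illustrative, not taken from the cluster above):
- {
-     "_id" : "balancer",
-     "process" : "mongotest12:30011:1342501234:16807",
-     "state" : 1,
-     "ts" : ObjectId("..."),
-     "when" : ISODate("..."),
-     "who" : "mongotest12:30011:1342501234:16807:Balancer:282475249",
-     "why" : "doing balance round"
- }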
The got function of DistributedLock:
- string got( DistributedLock& lock, unsigned long long sleepTime ) {
-     ....
-     // Check our clock skew
-     try {
-         if( lock.isRemoteTimeSkewed() ) {
-             throw LockException( str::stream() << "clock skew of the cluster " << conn.toString() << " is too far out of bounds to allow distributed locking." , 13650 );
-         }
-     }
-     catch( LockException& e) {
-         throw LockException( str::stream() << "error checking clock skew of cluster " << conn.toString() << causedBy( e ) , 13651);
-     }
-     ....
- }
- bool DistributedLock::isRemoteTimeSkewed() {
-     return !DistributedLock::checkSkew( _conn, NUM_LOCK_SKEW_CHECKS, _maxClockSkew, _maxNetSkew );
- }
- /**
- * Check the skew between a cluster of servers
- */
- static bool checkSkew( const ConnectionString& cluster, unsigned skewChecks = NUM_LOCK_SKEW_CHECKS, unsigned long long maxClockSkew = MAX_LOCK_CLOCK_SKEW, unsigned long long maxNetSkew = MAX_LOCK_NET_SKEW );
checkSkew is the function that checks the time difference between servers. It takes several parameters:
1. skewChecks: the number of checks to perform
2. maxClockSkew: the maximum allowed clock skew between servers
3. maxNetSkew: the maximum allowed network delay while performing a check
Each parameter has a default value when initialized; the defaults are defined in the distlock.h header file:
- #define LOCK_TIMEOUT (15 * 60 * 1000)
- #define LOCK_SKEW_FACTOR (30)
- #define LOCK_PING (LOCK_TIMEOUT / LOCK_SKEW_FACTOR)
- #define MAX_LOCK_NET_SKEW (LOCK_TIMEOUT / LOCK_SKEW_FACTOR)
- #define MAX_LOCK_CLOCK_SKEW (LOCK_TIMEOUT / LOCK_SKEW_FACTOR)
- #define NUM_LOCK_SKEW_CHECKS (3)
As you can see, skewChecks defaults to 3 checks, maxClockSkew defaults to 30 s, and maxNetSkew is also 30 s, which is a fairly tight window.
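Expanding the macros makes the numbers concrete; here is a quick standalone calculation (plain C++, not MongoDB code):
- #include <iostream>
- 
- int main() {
-     const long long lockTimeout   = 15 * 60 * 1000;           // LOCK_TIMEOUT: 900000 ms = 15 min
-     const long long skewFactor    = 30;                        // LOCK_SKEW_FACTOR
-     const long long maxClockSkew  = lockTimeout / skewFactor;  // MAX_LOCK_CLOCK_SKEW: 30000 ms = 30 s
-     const long long maxNetSkew    = lockTimeout / skewFactor;  // MAX_LOCK_NET_SKEW:   30000 ms = 30 s
-     const int       numSkewChecks = 3;                         // NUM_LOCK_SKEW_CHECKS
-     std::cout << maxClockSkew << " ms clock skew limit, "
-               << maxNetSkew   << " ms net skew limit, "
-               << numSkewChecks << " checks" << std::endl;
- }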
maxNetSkew bounds the time it takes for the serverStatus command to return when the checking machine runs it against the machine being checked:
- Date_t then = jsTime();
- bool success = conn->get()->runCommand( string("admin"),BSON( "serverStatus" << 1 ), result );
- delay = jsTime() - then;
If delay > 2 * MAX_LOCK_NET_SKEW, the check is considered to have timed out.
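That timing check can be modeled with a short standalone sketch. std::chrono stands in for jsTime(), and the remoteCall parameter is a hypothetical stand-in for the driver's runCommand round trip; this is not the real implementation:
- #include <chrono>
- #include <functional>
- 
- // Simplified model of the timing around the serverStatus call.
- // Returns false when the round trip takes too long for the time sample to be trusted.
- bool sampleWithinNetSkew( const std::function<bool()>& remoteCall,
-                           long long maxNetSkewMs, long long& delayMs ) {
-     using namespace std::chrono;
-     auto then = steady_clock::now();          // corresponds to "Date_t then = jsTime()"
-     bool success = remoteCall();              // runCommand("admin", {serverStatus: 1}, ...)
-     delayMs = duration_cast<milliseconds>( steady_clock::now() - then ).count();
-     // The sample is only usable if the command succeeded and the round trip
-     // stayed within 2 * maxNetSkew; otherwise the remote timestamp cannot be
-     // bounded tightly enough to estimate the skew.
-     return success && delayMs <= 2 * maxNetSkewMs;
- }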
checkSkew compares the servers' clocks against each other 3 times and takes the largest time gap found in the cluster; if it exceeds maxClockSkew, the exception is raised:
- // Make sure our max skew is not more than our pre-set limit
- if(totalSkew > (long long) maxClockSkew) {
-     log( logLvl + 1 ) << "total clock skew of " << totalSkew << "ms for servers " << cluster << " is out of " << maxClockSkew << "ms bounds." << endl;
-     return false;
- }
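Putting it together, the check boils down to something like the following standalone sketch. It models the logic described above rather than quoting the real checkSkew implementation, and it assumes the remote timestamp was taken roughly in the middle of the round trip:
- #include <algorithm>
- #include <vector>
- 
- struct TimeSample {
-     long long localMs;   // local clock when the command was sent
-     long long remoteMs;  // clock reported by the remote server
-     long long delayMs;   // round-trip time of the command
- };
- 
- // Returns true when the worst-case skew across all sampled servers stays
- // within maxClockSkewMs, mirroring the "totalSkew > maxClockSkew" test above.
- bool skewWithinBounds( const std::vector<TimeSample>& samples, long long maxClockSkewMs ) {
-     if( samples.empty() ) return true;
-     long long minSkew = 0, maxSkew = 0;
-     bool first = true;
-     for( const TimeSample& s : samples ) {
-         // Assume the remote reading happened mid round trip, so correct by delay / 2.
-         long long skew = s.remoteMs - ( s.localMs + s.delayMs / 2 );
-         if( first ) { minSkew = maxSkew = skew; first = false; }
-         minSkew = std::min( minSkew, skew );
-         maxSkew = std::max( maxSkew, skew );
-     }
-     long long totalSkew = maxSkew - minSkew;   // widest gap between any two servers
-     return totalSkew <= maxClockSkewMs;
- }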
At first we thought the outage was caused by the clock issue, but it later turned out that too many log-rotation scripts had been configured in crontab, all firing at the same time; that brought down one whole shard and then the whole cluster.
The log-rotation commands:
- killall -SIGUSR1 mongod
- killall -SIGUSR1 mongos
The problems that unsynchronized clocks can cause, however, still need further discussion.