Category: LINUX

2011-08-30 17:49:29

Scenario: a namenode and a datanode were started on the local machine, and then the datanode failed to start.

Error: the namespaceIDs do not match.

Cause: each namenode format creates a new namespaceID, while tmp/dfs/data still holds the ID from the previous format. Formatting the namenode clears the namenode's data but does not clear the datanode's data, so the datanode fails at startup. What you need to do is clear all the directories under tmp before each format.
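A quick way to confirm that this is what happened (a sketch, assuming the stock layout where dfs.name.dir and dfs.data.dir live under hadoop.tmp.dir, here /tmp/hadoop-${USER}; adjust the paths to your own configuration):

    # the datanode log usually reports "java.io.IOException: Incompatible namespaceIDs"
    grep -i "Incompatible namespaceIDs" $HADOOP_HOME/logs/hadoop-*-datanode-*.log

    # compare the two IDs; after a fresh namenode format they no longer match
    grep namespaceID /tmp/hadoop-${USER}/dfs/name/current/VERSION
    grep namespaceID /tmp/hadoop-${USER}/dfs/data/current/VERSION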

Solution: two approaches are given below.

Workaround 1: Start from scratch

I can testify that the following steps solve this error, but the side effects won't make you happy (me neither). The crude workaround I have found is to:

1.     stop the cluster

2.     delete the data directory on the problematic datanode: the directory is specified by dfs.data.dir in conf/hdfs-site.xml; if you followed this tutorial, the relevant directory is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data (here hadoop-hadoop is the login name on that machine)

3.     reformat the namenode (NOTE: all HDFS data is lost during this process!)

4.     restart the cluster

If deleting all the HDFS data and starting from scratch does not sound like a good idea (it might be ok during the initial setup/testing), you might give the second approach a try.

In other words: stop the hadoop cluster, then delete the local datanode's data directory. Its location is configured in conf/hdfs-site.xml; by default (per that tutorial) it is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data (hadoop-hadoop being the machine's login name), but on my machine it defaults to /tmp/hadoop-dev/dfs/data (dev being the user name hadoop was started as).
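A command-line sketch of workaround 1, assuming the stock start/stop scripts in $HADOOP_HOME and that dfs.data.dir resolves to /tmp/hadoop-${USER}/dfs/data (substitute the path configured in your conf/hdfs-site.xml):

    bin/stop-all.sh                         # 1. stop the cluster
    rm -rf /tmp/hadoop-${USER}/dfs/data     # 2. delete the datanode's data directory
    bin/hadoop namenode -format             # 3. reformat the namenode (all HDFS data is lost!)
    bin/start-all.sh                        # 4. restart the cluster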

Workaround 2: Updating namespaceID of problematic datanodes

Big thanks to Jared Stehler for the following suggestion. I have not tested it myself yet, but feel free to try it out and send me your feedback. This workaround is "minimally invasive" as you only have to edit one file on the problematic datanodes:

1.     stop the datanode

2.     edit the value of namespaceID in <dfs.data.dir>/current/VERSION to match the value of the current namenode

3.     restart the datanode

If you followed the instructions in my tutorials, the full path of the relevant file is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data/current/VERSION (background: dfs.data.dir is by default set to ${hadoop.tmp.dir}/dfs/data, and we set hadoop.tmp.dir to /usr/local/hadoop-datastore/hadoop-hadoop).

The second approach does not require restarting the whole hadoop cluster; you only need to operate on the individual problematic datanode.
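A minimal sketch of workaround 2, run on the problematic datanode only, assuming the same /tmp/hadoop-${USER} layout as above (the namenode's namespaceID is read from dfs.name.dir/current/VERSION and written into the datanode's VERSION file):

    bin/hadoop-daemon.sh stop datanode      # 1. stop only the datanode

    # 2. copy the namenode's namespaceID into the datanode's VERSION file
    NNID=$(grep '^namespaceID=' /tmp/hadoop-${USER}/dfs/name/current/VERSION | cut -d= -f2)
    sed -i "s/^namespaceID=.*/namespaceID=${NNID}/" /tmp/hadoop-${USER}/dfs/data/current/VERSION

    bin/hadoop-daemon.sh start datanode     # 3. restart the datanode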
