
Category: HADOOP

2013-08-15 16:27:44

     After several weeks of effort I finally decided to give up on the newer stable release I had been trying to install and fall back to an older version to deploy the Hadoop system. Until yesterday I was still struggling with the "Inconsistent configuration" error. Since a colleague's version had already been installed successfully, I might as well start from a setup that is known to work; putting the current problem aside and coming back to it later may well lead to a better solution. So today I am reinstalling Hadoop and HBase from scratch. My earlier installation notes were rather messy, so this is a good opportunity to go through the whole procedure again in an orderly way.

Part One: Installing Hadoop
     The version I am using is hadoop-1.0.3, a fairly old release that can be downloaded from the official Hadoop website. Before installing Hadoop, the system environment has to be prepared:
<1> Install Java 1.6. I had previously installed Java 1.7 without success, and I am not sure whether the Java version was the cause; in any case, this time I took the conservative route, registered on Oracle's site, downloaded jdk-6u45-linux-i586.bin, and unpacked it to get jdk1.6.0_45. Note that this is a .bin file, so you need to chmod it to 775 (executable) and then run ./filename.bin (a minimal sketch follows);
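A rough sketch of that JDK step, assuming the installer was downloaded into /home/hadoop/platform (the directory used for JAVA_HOME later); adjust the path to wherever your download actually sits:

    # make the self-extracting installer executable and run it (download path assumed)
    cd /home/hadoop/platform
    chmod 775 jdk-6u45-linux-i586.bin
    ./jdk-6u45-linux-i586.bin    # unpacks into ./jdk1.6.0_45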
<2> Install ssh and set up passwordless ssh login for the hadoop user
<2.1> Run sudo apt-get install ssh rsync, and run sudo apt-get install openjdk-6-jdk (for the jps command)
<2.2> Setting up passphrase-less ssh login to the local machine takes two main steps:
Run ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
   -t selects the key algorithm; dsa or rsa can be used;
   -P sets the passphrase; the two single quotes '' mean an empty passphrase;
   -f names the file in which the key is stored.
Run cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
   This appends the public key to the local authorized_keys file. After these two steps, run ssh localhost to check that passwordless login works. Next, go into the unpacked hadoop-1.0.3 directory to do the configuration. Hadoop's pseudo-distributed mode mainly requires editing the following configuration files:
<3> conf/hadoop-env.sh: this configures Hadoop's runtime environment; the only change needed here is pointing JAVA_HOME at your JDK 1.6 directory (the export JAVA_HOME line below)


    # Set Hadoop-specific environment variables here.

    # The only required environment variable is JAVA_HOME. All others are
    # optional. When running a distributed configuration it is best to
    # set JAVA_HOME in this file, so that it is correctly defined on
    # remote nodes.

    # The java implementation to use. Required.
    export JAVA_HOME=/home/hadoop/platform/jdk1.6.0_45

    # Extra Java CLASSPATH elements. Optional.
    # export HADOOP_CLASSPATH=

    # The maximum amount of heap to use, in MB. Default is 1000.
    # export HADOOP_HEAPSIZE=2000

    # Extra Java runtime options. Empty by default.
    # export HADOOP_OPTS=-server

    # Command specific options appended to HADOOP_OPTS when specified
    export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
    export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
    export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
    export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
    export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
    # export HADOOP_TASKTRACKER_OPTS=
    # The following applies to multiple commands (fs, dfs, fsck, distcp etc)
    # export HADOOP_CLIENT_OPTS

    # Extra ssh options. Empty by default.
    # export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"

    # Where log files are stored. $HADOOP_HOME/logs by default.
    # export HADOOP_LOG_DIR=${HADOOP_HOME}/logs

    # File naming remote slave hosts. $HADOOP_HOME/conf/slaves by default.
    # export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves

    # host:path where hadoop code should be rsync'd from. Unset by default.
    # export HADOOP_MASTER=master:/home/$USER/src/hadoop

    # Seconds to sleep between slave commands. Unset by default. This
    # can be useful in large clusters, where, e.g., slave rsyncs can
    # otherwise arrive faster than the master can service them.
    # export HADOOP_SLAVE_SLEEP=0.1

    # The directory where pid files are stored. /tmp by default.
    # export HADOOP_PID_DIR=/var/hadoop/pids

    # A string representing this instance of hadoop. $USER by default.
    # export HADOOP_IDENT_STRING=$USER

    # The scheduling priority for daemon processes. See 'man nice'.
    # export HADOOP_NICENESS=10
<4> conf/core-site.xml
     The main settings here are fs.default.name (which specifies the namenode) and hadoop.tmp.dir (the base location of HDFS's temporary directory). hadoop.tmp.dir can be left unset, in which case everything goes under the default /tmp and the data is lost every time the machine reboots.


    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
       <property>
           <name>fs.default.name</name>
           <value>hdfs://localhost:9000</value>
       </property>
       <property>
           <name>hadoop.tmp.dir</name>
           <value>/home/hadoop/hdfs/tmp</value>
       </property>
    </configuration>
<5> conf/hdfs-site.xml
     dfs.replication sets how many replicas each data block gets; the default is 3, but since this is a pseudo-distributed setup on a single machine it is set to 1. dfs.name.dir and dfs.data.dir are very important: they set the local directories in which the namenode and datanode store their data. If these are set up badly, several errors show up later. You can also leave them unset and use the defaults under /tmp, but again the data is lost on reboot. (A short sketch of creating these directories follows, right after the file.)


    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
       <property>
           <name>dfs.replication</name>
           <value>1</value>
       </property>
       <property>
           <name>dfs.name.dir</name>
           <value>/home/hadoop/hdfs/name</value>
       </property>
       <property>
           <name>dfs.data.dir</name>
           <value>/home/hadoop/hdfs/data</value>
       </property>
    </configuration>
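Since badly set storage directories cause trouble later (as described below), here is a minimal sketch of creating them up front; it assumes the daemons run as a 'hadoop' user, matching the /home/hadoop paths used above:

    # create the local directories referenced in core-site.xml and hdfs-site.xml
    mkdir -p /home/hadoop/hdfs/name /home/hadoop/hdfs/data /home/hadoop/hdfs/tmp
    # make sure the hadoop user owns them (assumes the daemons run as 'hadoop')
    chown -R hadoop:hadoop /home/hadoop/hdfs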
<6> conf/mapred-site.xml
     The only setting here is mapred.job.tracker, which gives the host and port the JobTracker listens on.


    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
       <property>
           <name>mapred.job.tracker</name>
           <value>localhost:9001</value>
       </property>
    </configuration>
      Then it is time to run a test. Add hadoop-1.0.3/bin to the PATH in /etc/profile so the Hadoop commands can be run from anywhere. Running start-all.sh hit a problem: jps showed that the namenode had not started, and hadoop namenode -format did not succeed either. The log said:

    It tells us that the HDFS storage directory either does not exist or is not accessible:
FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/hadoop/hdfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
A look at the filesystem showed that the hdfs directory had in fact been created, so the problem had to be permissions. Changing the permissions of the hadoop directory under /home from 755 to 775 and re-running Hadoop made it start successfully (a sketch of the whole sequence follows):
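A rough sketch of the sequence described above; the hadoop-1.0.3 install location is an assumption (the post does not say where it was unpacked), the rest follows the text:

    # make the Hadoop commands available (append this line to /etc/profile, then re-login or source it)
    export PATH=$PATH:/home/hadoop/platform/hadoop-1.0.3/bin    # install path assumed
    # the permission fix described above: open up the hadoop home directory
    chmod 775 /home/hadoop
    # format HDFS, start the daemons, and check what is running
    hadoop namenode -format
    start-all.sh
    jps    # should list NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker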

      Before installing HBase, let's test the pseudo-distributed Hadoop following the official instructions to see whether the installation works: first copy all the files under conf into an input directory on HDFS, then run the examples jar and write the result to an output directory on HDFS, and finally look at the result in the output directory:



    $ bin/hadoop fs -put conf input

    $ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'

    # Copy the output files from the distributed filesystem to the local filesystem and examine them:
    $ bin/hadoop fs -get output output
    $ cat output/*

    # or view the output files on the distributed filesystem:
    $ bin/hadoop fs -cat output/*

    # When you're done, stop the daemons with:
    $ bin/stop-all.sh

Part Two: Installing HBase
     The version chosen here is hbase-0.90.0; search for that release directly on Google, download it, and unpack it to get hbase-0.90.0. As with Hadoop, the main work is editing the configuration files:
<1> Edit /etc/hosts and change the line 127.0.1.1 hadoop so that it points to 127.0.0.1
<2> Raise the ulimits (a concrete sketch of both files follows this list):
Edit /etc/security/limits.conf and add:
hadoop  -  nofile  32768
hadoop  soft/hard  nproc  32000
Edit /etc/pam.d/common-session and add:
session required pam_limits.so
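A sketch of steps <1> and <2> as concrete file contents, with the soft/hard shorthand above expanded into two explicit nproc lines; here 'hadoop' is both the machine's hostname and the user running the daemons, as in the original lines:

    # /etc/hosts: map the hostname to 127.0.0.1 instead of 127.0.1.1
    127.0.0.1   localhost
    127.0.0.1   hadoop

    # /etc/security/limits.conf: raise file-descriptor and process limits for the hadoop user
    hadoop  -     nofile  32768
    hadoop  soft  nproc   32000
    hadoop  hard  nproc   32000

    # /etc/pam.d/common-session: make the limits apply to login sessions
    session required pam_limits.so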
<3> Edit conf/hbase-env.sh
     The main settings here are the JAVA_HOME directory, the HBASE_LOG_DIR path, and enabling HBase's bundled ZooKeeper (HBASE_MANAGES_ZK=true)


    #
    #/**
    # * Copyright 2007 The Apache Software Foundation
    # *
    # * Licensed to the Apache Software Foundation (ASF) under one
    # * or more contributor license agreements. See the NOTICE file
    # * distributed with this work for additional information
    # * regarding copyright ownership. The ASF licenses this file
    # * to you under the Apache License, Version 2.0 (the
    # * "License"); you may not use this file except in compliance
    # * with the License. You may obtain a copy of the License at
    # *
    # * http://
    # *
    # * Unless required by applicable law or agreed to in writing, software
    # * distributed under the License is distributed on an "AS IS" BASIS,
    # * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # * See the License for the specific language governing permissions and
    # * limitations under the License.
    # */

    # Set environment variables here.

    # The java implementation to use. Java 1.6 required.
    export JAVA_HOME=/home/hadoop/platform/jdk1.6.0_45

    # Extra Java CLASSPATH elements. Optional.
    # export HBASE_CLASSPATH=

    # The maximum amount of heap to use, in MB. Default is 1000.
    # export HBASE_HEAPSIZE=1000

    # Extra Java runtime options.
    # Below are what we set by default. May only work with SUN JVM.
    # For more on why as well as other possible settings,
    # see http://wiki.apache.org/hadoop/PerformanceTuning
    export HBASE_OPTS="$HBASE_OPTS -ea -XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"

    # Uncomment below to enable java garbage collection logging.
    # export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-hbase.log"

    # Uncomment and adjust to enable JMX exporting
    # See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.
    # More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
    #
    # export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
    # export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10101"
    # export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102"
    # export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103"
    # export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104"

    # File naming hosts on which HRegionServers will run. $HBASE_HOME/conf/regionservers by default.
    # export HBASE_REGIONSERVERS=${HBASE_HOME}/conf/regionservers

    # Extra ssh options. Empty by default.
    # export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"

    # Where log files are stored. $HBASE_HOME/logs by default.
    export HBASE_LOG_DIR=${HBASE_HOME}/logs

    # A string representing this instance of hbase. $USER by default.
    # export HBASE_IDENT_STRING=$USER

    # The scheduling priority for daemon processes. See 'man nice'.
    # export HBASE_NICENESS=10

    # The directory where pid files are stored. /tmp by default.
    # export HBASE_PID_DIR=/var/hadoop/pids

    # Seconds to sleep between slave commands. Unset by default. This
    # can be useful in large clusters, where, e.g., slave rsyncs can
    # otherwise arrive faster than the master can service them.
    # export HBASE_SLAVE_SLEEP=0.1

    # Tell HBase whether it should manage it's own instance of Zookeeper or not.
    export HBASE_MANAGES_ZK=true
<4> Configure conf/hbase-site.xml
     HBase has to be put into pseudo-distributed mode here, so besides the HBase root directory hbase.rootdir we also set hbase.cluster.distributed and hbase.zookeeper.quorum; zookeeper.znode.parent specifies the parent znode under which HBase keeps its data in ZooKeeper.


    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
    /**
     * Copyright 2010 The Apache Software Foundation
     *
     * Licensed to the Apache Software Foundation (ASF) under one
     * or more contributor license agreements. See the NOTICE file
     * distributed with this work for additional information
     * regarding copyright ownership. The ASF licenses this file
     * to you under the Apache License, Version 2.0 (the
     * "License"); you may not use this file except in compliance
     * with the License. You may obtain a copy of the License at
     *
     * http://
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */
    -->
    <configuration>
        <property>
            <name>hbase.rootdir</name>
            <value>hdfs://localhost:9000/hbase</value>
        </property>
        <property>
            <name>hbase.cluster.distributed</name>
            <value>true</value>
        </property>
        <property>
            <name>hbase.zookeeper.quorum</name>
            <value>localhost</value>
        </property>
        <property>
            <name>zookeeper.znode.parent</name>
            <value>/hbase</value>
        </property>
    </configuration>

<5> Run and test
      Add HBase's bin directory to the PATH in the same way. After running start-hbase.sh, HMaster was not running; the log showed the following:
FATAL: org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown. java.io.IOException:
Call to localhost/127.0.0.1:9000 failed on local exception: java.io.EOFException

      The cause is that the hadoop-core jar that ships with the installed Hadoop has to replace the hadoop-core-0.20-append-r1056947.jar under hbase-0.90.0/lib. After doing that, running again produced a new error: java.lang.NoClassDefFoundError


     Apparently some classes cannot be found, so copy all the jar files from hadoop-1.0.3's lib directory into HBase's lib directory as well. Run it again and everything goes through (a sketch of the whole jar swap follows):
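A rough sketch of that jar swap; the assumption that hadoop-1.0.3 and hbase-0.90.0 sit side by side under /home/hadoop/platform is mine, since the post does not give the install locations:

    cd /home/hadoop/platform
    # replace HBase's bundled hadoop-core jar with the one shipped by the running Hadoop
    rm hbase-0.90.0/lib/hadoop-core-0.20-append-r1056947.jar
    cp hadoop-1.0.3/hadoop-core-*.jar hbase-0.90.0/lib/
    # copy Hadoop's dependency jars as well, which clears the NoClassDefFoundError
    cp hadoop-1.0.3/lib/*.jar hbase-0.90.0/lib/
    # restart HBase and check the daemons
    stop-hbase.sh
    start-hbase.sh
    jps    # should now also list HMaster, HRegionServer and HQuorumPeer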

<6> Run the HBase shell
     HBase provides a shell interface for testing: use the create command to create a new table, put to add rows, and scan to view the table you built (a minimal session follows):
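A minimal shell session along those lines; the table name 'test' and column family 'cf' are just illustrative:

    $ hbase shell
    hbase(main):001:0> create 'test', 'cf'
    hbase(main):002:0> put 'test', 'row1', 'cf:a', 'value1'
    hbase(main):003:0> scan 'test'
    hbase(main):004:0> exit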

     That's it; the installation of Hadoop and HBase is basically done. Tomorrow I will start configuring Sleuthkit.


jzhx107  2015-05-14 15:24:04

Nice, thank you!

windhawkgyang  2013-08-20 09:38:19

jizhiwang: Haven't written anything for a few days; the errors you run into are a kind of accumulated experience too.

Got it. The installation has kept going wrong these past few days; even copying my colleague's setup verbatim still throws errors. I'll write up the problems I've run into today.


jizhiwang  2013-08-20 08:49:11

Haven't written anything for a few days; the errors you run into are a kind of accumulated experience too.

windhawkgyang  2013-08-16 22:44:47

javaxf: Nice, I'm learning this too.

Let's keep at it together!


javaxf  2013-08-16 21:13:48

Nice, I'm learning this too.