After several weeks of effort I have finally decided to give up on the current stable release and fall back to an older version to deploy the Hadoop system. Until yesterday I was still wrestling with the Inconsistent configuration error. Since a colleague has already installed this older version successfully, I might as well get a known-working setup running first; setting the current problem aside and coming back to it later may well lead to a better solution. So today I am officially reinstalling Hadoop and HBase. My earlier installation notes were rather messy, so I am taking this opportunity to walk through the whole procedure again.
I. Installing Hadoop
The version I am using is hadoop-1.0.3, a fairly old release that can be downloaded from the official Hadoop website. Before installing Hadoop, the system environment has to be prepared:
<1> Install Java 1.6. I had previously installed Java 1.7 without success, and I am not sure whether the Java version was to blame; in any case, this time I took the conservative route, registered on the Oracle site, downloaded jdk-6u45-linux-i586.bin, and unpacked it to get [jdk1.6.0_45]. Note that this is a .bin file, so it has to be made executable with chmod 775 and then run as ./filename.bin;
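For reference, unpacking the JDK looks roughly like this (a small sketch; /home/hadoop/platform is simply the install directory assumed throughout this post):
$ cd /home/hadoop/platform            # assumed install directory
$ chmod 775 jdk-6u45-linux-i586.bin   # make the self-extracting archive executable
$ ./jdk-6u45-linux-i586.bin           # unpacks into jdk1.6.0_45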
<2> Install ssh and set up passwordless ssh login for Hadoop
<2.1> Run sudo apt-get install ssh rsync, and run sudo apt-get install openjdk-6-jdk (for the jps command)
<2.2> Configuring passwordless ssh login to the local machine takes two steps:
Run ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
-t specifies the key type; either dsa or rsa can be used;
-P specifies the passphrase; the two single quotes '' mean an empty passphrase;
-f specifies the file in which to store the key
Run cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
This step appends the public key to the local authorized_keys file; once both steps are done, ssh localhost can be used to verify that passwordless login works. Next, go into the unpacked hadoop-1.0.3 directory for configuration. Hadoop's pseudo-distributed mode mainly requires editing the following configuration files:
<3> conf/hadoop-env.sh: this configures Hadoop's runtime environment. The only change needed here is pointing JAVA_HOME at your JDK 1.6 directory (the uncommented export JAVA_HOME line below)
# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME. All others are
# optional. When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use. Required.
export JAVA_HOME=/home/hadoop/platform/jdk1.6.0_45

# Extra Java CLASSPATH elements. Optional.
# export HADOOP_CLASSPATH=

# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=2000

# Extra Java runtime options. Empty by default.
# export HADOOP_OPTS=-server

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
# export HADOOP_TASKTRACKER_OPTS=
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
# export HADOOP_CLIENT_OPTS

# Extra ssh options. Empty by default.
# export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"

# Where log files are stored. $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs

# File naming remote slave hosts. $HADOOP_HOME/conf/slaves by default.
# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves

# host:path where hadoop code should be rsync'd from. Unset by default.
# export HADOOP_MASTER=master:/home/$USER/src/hadoop

# Seconds to sleep between slave commands. Unset by default. This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HADOOP_SLAVE_SLEEP=0.1

# The directory where pid files are stored. /tmp by default.
# export HADOOP_PID_DIR=/var/hadoop/pids

# A string representing this instance of hadoop. $USER by default.
# export HADOOP_IDENT_STRING=$USER

# The scheduling priority for daemon processes. See 'man nice'.
# export HADOOP_NICENESS=10
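Before moving on, it is worth a quick check that the JAVA_HOME set above really points at a working JDK (same assumed path as in the file):
$ /home/hadoop/platform/jdk1.6.0_45/bin/java -version   # should report java version "1.6.0_45"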
<4>conf/core-site.xml
This file mainly configures fs.default.name (which specifies the namenode address) and hadoop.tmp.dir (the base location of HDFS's temporary files). hadoop.tmp.dir can be left unset, in which case everything is kept under the default /tmp and the data is lost every time the machine reboots.
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hdfs/tmp</value>
  </property>
</configuration>
<5> conf/hdfs-site.xml
Here dfs.replication sets the number of replicas kept for each data block; the default is 3, but since this is a pseudo-distributed setup on a single machine it is set to 1. dfs.name.dir and dfs.data.dir are very important: they specify the local directories where the namenode metadata and the datanode blocks are stored. If these are misconfigured, a string of errors shows up later. They can also be left unset to fall back on the defaults under /tmp, but then the data is again lost on every reboot.
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/hdfs/data</value>
  </property>
</configuration>
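Since these properties point at local directories, it helps to create them up front and make sure the hadoop user owns them (paths as assumed above); a missing or inaccessible directory is exactly what triggers the InconsistentFSStateException seen later in this post:
$ mkdir -p /home/hadoop/hdfs/name /home/hadoop/hdfs/data /home/hadoop/hdfs/tmp
$ ls -ld /home/hadoop/hdfs/*    # all three should be owned and writable by the hadoop user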
<6> conf/mapred-site.xml
The only property needed here is mapred.job.tracker, which tells MapReduce where the JobTracker runs; in pseudo-distributed mode this is simply localhost:9001.
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
Next comes a test run. Add hadoop-1.0.3/bin to the PATH in /etc/profile so that Hadoop commands can be run from anywhere. Running start-all.sh ran into trouble: jps showed that the namenode had not started, and hadoop namenode -format did not fix it either, so I checked the log,
which complains that the HDFS storage directory either does not exist or is not accessible:
FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/hadoop/hdfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
A closer look showed that the hdfs directory had in fact been created, so the problem had to be permissions. After changing the permissions on the hadoop directory under /home from 755 to 775 and restarting Hadoop, everything came up successfully.
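Put together, the commands for this step look roughly like this (a sketch that assumes Hadoop was unpacked into /home/hadoop/platform/hadoop-1.0.3; adjust the path to your own layout):
$ echo 'export PATH=$PATH:/home/hadoop/platform/hadoop-1.0.3/bin' | sudo tee -a /etc/profile
$ source /etc/profile            # pick up the new PATH in the current shell
$ sudo chmod 775 /home/hadoop    # the permission fix described above
$ hadoop namenode -format        # initialize dfs.name.dir
$ start-all.sh
$ jps                            # should list NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker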
Before installing HBase, let's test the pseudo-distributed Hadoop the way the official guide does, to make sure the installation works: first copy everything under conf into an input directory on HDFS, then run the examples jar and store the result in an output directory on HDFS, and finally inspect the result from the output directory:
$ bin/hadoop fs -put conf input

$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'

Copy the output files from the distributed filesystem to the local filesystem and examine them:
$ bin/hadoop fs -get output output
$ cat output/*

or

View the output files on the distributed filesystem:
$ bin/hadoop fs -cat output/*

When you're done, stop the daemons with:
$ bin/stop-all.sh
II. Installing HBase
The version chosen here is hbase-0.90.0; searching for it directly on Google turns up a download, and unpacking it yields hbase-0.90.0. As with Hadoop, the main work is editing configuration files:
<1> Edit /etc/hosts and change the 127.0.1.1 hadoop entry to use 127.0.0.1
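On a stock Ubuntu box the relevant lines end up looking something like this (hadoop here is the hostname assumed in this post; the 127.0.1.1 alias is what confuses HBase when it resolves the local hostname):
127.0.0.1   localhost
127.0.0.1   hadoop    # was: 127.0.1.1 hadoop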
<2> Set ulimits:
Edit /etc/security/limits.conf and add:
hadoop - nofile 32768
hadoop soft nproc 32000
hadoop hard nproc 32000
Edit /etc/pam.d/common-session and add:
session required pam_limits.so
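After logging out and back in, the new limits can be double-checked as the hadoop user (just a sanity check, not required by the docs):
$ ulimit -n    # should now print 32768
$ ulimit -u    # should now print 32000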
<3> Edit conf/hbase-env.sh
The main things to set here are the JAVA_HOME directory and the HBASE_LOG_DIR path, and to enable the ZooKeeper instance bundled with HBase.
#
#/**
# * Copyright 2007 The Apache Software Foundation
# *
# * Licensed to the Apache Software Foundation (ASF) under one
# * or more contributor license agreements. See the NOTICE file
# * distributed with this work for additional information
# * regarding copyright ownership. The ASF licenses this file
# * to you under the Apache License, Version 2.0 (the
# * "License"); you may not use this file except in compliance
# * with the License. You may obtain a copy of the License at
# *
# * http://
# *
# * Unless required by applicable law or agreed to in writing, software
# * distributed under the License is distributed on an "AS IS" BASIS,
# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# * See the License for the specific language governing permissions and
# * limitations under the License.
# */

# Set environment variables here.

# The java implementation to use. Java 1.6 required.
export JAVA_HOME=/home/hadoop/platform/jdk1.6.0_45

# Extra Java CLASSPATH elements. Optional.
# export HBASE_CLASSPATH=

# The maximum amount of heap to use, in MB. Default is 1000.
# export HBASE_HEAPSIZE=1000

# Extra Java runtime options.
# Below are what we set by default. May only work with SUN JVM.
# For more on why as well as other possible settings,
# see http://wiki.apache.org/hadoop/PerformanceTuning
export HBASE_OPTS="$HBASE_OPTS -ea -XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"

# Uncomment below to enable java garbage collection logging.
# export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-hbase.log"

# Uncomment and adjust to enable JMX exporting
# See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.
# More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
#
# export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10101"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104"

# File naming hosts on which HRegionServers will run. $HBASE_HOME/conf/regionservers by default.
# export HBASE_REGIONSERVERS=${HBASE_HOME}/conf/regionservers

# Extra ssh options. Empty by default.
# export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"

# Where log files are stored. $HBASE_HOME/logs by default.
export HBASE_LOG_DIR=${HBASE_HOME}/logs

# A string representing this instance of hbase. $USER by default.
# export HBASE_IDENT_STRING=$USER

# The scheduling priority for daemon processes. See 'man nice'.
# export HBASE_NICENESS=10

# The directory where pid files are stored. /tmp by default.
# export HBASE_PID_DIR=/var/hadoop/pids

# Seconds to sleep between slave commands. Unset by default. This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HBASE_SLAVE_SLEEP=0.1

# Tell HBase whether it should manage it's own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=true
<4> Configure conf/hbase-site.xml
Here HBase is put into pseudo-distributed mode, so besides hbase.rootdir (HBase's root directory on HDFS, which must use the same host and port as fs.default.name in Hadoop's core-site.xml) we also need the hbase.cluster.distributed and hbase.zookeeper.quorum parameters; zookeeper.znode.parent specifies the parent znode under which HBase keeps its data in ZooKeeper.
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
 * Copyright 2010 The Apache Software Foundation
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-->
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>
</configuration>
<5> Run and test
Likewise, add HBase's bin directory to PATH. After running start-hbase.sh, HMaster failed to start, and the log showed the following:
FATAL: org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown. java.io.IOException:
Call to localhost/127.0.0.1:9000 failed on local exception: java.io.EOFException
The cause is that hadoop-core-1.0.3.jar from the Hadoop installation has to replace the hadoop-core-0.20-append-r1056947.jar shipped under hbase-0.90.0/lib. Running again raised a new error: java.lang.NoClassDefFoundError
Apparently some classes cannot be found, so the jar files under hadoop-1.0.3's lib directory also need to be copied into HBase's lib directory. Running once more, everything started cleanly.
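Put together, the jar swap looks roughly like this (a sketch assuming Hadoop and HBase are unpacked side by side under /home/hadoop/platform):
$ cd /home/hadoop/platform
$ rm hbase-0.90.0/lib/hadoop-core-0.20-append-r1056947.jar   # drop the bundled, incompatible core jar
$ cp hadoop-1.0.3/hadoop-core-1.0.3.jar hbase-0.90.0/lib/    # use the jar from the running Hadoop instead
$ cp hadoop-1.0.3/lib/*.jar hbase-0.90.0/lib/                # bring in its dependency jars as well
$ stop-hbase.sh && start-hbase.sh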
<6> Run the hbase shell
HBase provides a shell interface for testing: use the create command to create a new table, the put command to add rows, and the scan command to view the table just built:
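A minimal session along the lines of the HBase quickstart (the table name test and the column family cf are just example names):
$ hbase shell
hbase(main):001:0> create 'test', 'cf'                     # new table with a single column family
hbase(main):002:0> put 'test', 'row1', 'cf:a', 'value1'    # write one cell
hbase(main):003:0> scan 'test'                             # list what was just written
hbase(main):004:0> exit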
And that's it: the installation of Hadoop and HBase is basically done. Tomorrow I will start configuring Sleuthkit.