
Category: HADOOP

2014-10-28 13:38:32

 

Installing HBase on Linux

Environment:

OS: Red Hat Linux AS 5

hbase-0.95.2


1. Installation steps

1.1 Download the installation package

Download the installation package; the download link is:

Choose the version that fits your environment; the version I downloaded is hbase-0.95.2-hadoop1-bin.tar.gz.

The following steps only need to be performed on the master node (the NameNode).

1.2 Extract and install

Log in as the hadoop user and check HADOOP_HOME:

[hadoop1@node1 ~]$ echo $HADOOP_HOME

/usr1/hadoop

Copy the installation package to the following directory:

[root@node1 hbase]# cp hbase-0.95.2-hadoop1-bin.tar.gz /usr1

Extract it:

[root@node1 usr1]# tar -zxvf hbase-0.95.2-hadoop1-bin.tar.gz

Rename the directory:

[root@node1 usr1]# mv hbase-0.95.2-hadoop1 hbase

Grant ownership of the hbase directory to the hadoop user:

[root@node1 usr1]# chown -R hadoop1:hadoop1 ./hbase

 

1.3 Add environment variables

export HBASE_HOME=/usr1/hbase

Append $HBASE_HOME/bin to the PATH variable, as sketched below.
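A minimal sketch of the corresponding entries, assuming they are appended to the hadoop user's ~/.bash_profile (adjust to whatever shell profile your environment actually uses):

# HBase environment, appended to ~/.bash_profile of the hadoop user
export HBASE_HOME=/usr1/hbase
export PATH=$PATH:$HBASE_HOME/bin

Run source ~/.bash_profile afterwards so the new variables take effect in the current session.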

 

1.4 Edit the HBase configuration files

 

1.4.1 Configure hbase-env.sh

The default path of this file is /usr1/hbase/conf/.

Add the JAVA_HOME environment variable together with the settings below. HBASE_MANAGES_ZK=false tells HBase not to manage its own ZooKeeper, so a separately managed ZooKeeper quorum (configured in hbase-site.xml below) is required; HBASE_CLASSPATH points HBase at the Hadoop configuration directory.

export JAVA_HOME=/usr/java/jdk1.8.0_05

export HBASE_MANAGES_ZK=false

export HBASE_CLASSPATH=/usr1/hadoop/conf

 

1.4.2 Configure hbase-site.xml

The file is located in /usr1/hbase/conf.

Add the following properties:

  

<property>
    <name>hbase.rootdir</name>
    <value>hdfs://192.168.56.101:9000/hbase</value>
</property>

<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>

<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>

<property>
    <name>hbase.zookeeper.quorum</name>
    <value>192.168.56.101,192.168.56.102,192.168.56.103,192.168.56.104</value>
</property>

<property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop1/zookeeperdir/zookeeper-data</value>
</property>

The hbase.rootdir value must stay consistent with fs.default.name in Hadoop's core-site.xml, with your own subdirectory appended; the subdirectory I use here is hbase.
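For reference, a minimal sketch of the matching entry in Hadoop's core-site.xml, assuming the NameNode address used above:

<property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.56.101:9000</value>
</property>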

 

1.4.3 Configure regionservers

The file is located in /usr1/hbase/conf.

Add the following entries:

192.168.56.101

192.168.56.102

192.168.56.103

192.168.56.104

Once the configuration is complete, copy the entire hbase directory to the other nodes. The directory is packaged as hbase.tar first (see the sketch below) and then distributed:
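The packaging command is not shown in the original; a minimal sketch, assuming it is run from /usr1 on the master node:

[root@node1 usr1]# tar -cvf hbase.tar ./hbase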

scp hbase.tar root@192.168.56.102:/usr1/

scp hbase.tar root@192.168.56.103:/usr1/

scp hbase.tar root@192.168.56.104:/usr1/

[root@node2 usr1]# tar -xvf hbase.tar

[root@node2 usr1]# chown -R hadoop1:hadoop1 ./hbase

 

1.5 Start HBase

Log in as the hadoop user.

Start the whole cluster from the master node:

[hadoop1@node1 bin]$ ./start-hbase.sh
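A quick way to confirm the daemons came up is the JDK's jps tool (this check is not part of the original write-up; the process names are what HBase registers by default):

[hadoop1@node1 bin]$ jps        # on the master node, expect an HMaster process
[hadoop1@node2 ~]$ jps          # on each region server node, expect an HRegionServer process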

Once startup is complete, the following command opens the HBase shell:

[hadoop1@node1 bin]$ ./hbase shell

 

1.6 Verification

hbase(main):001:0> create 'test', 'cf'

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/usr1/hbase/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr1/hadoop/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See for an explanation.

0 row(s) in 33.5640 seconds

=> Hbase::Table - test

hbase(main):002:0> list

TABLE                                                                                                                               

hbase:namespace                                                                                                                     

test                                                                                                                               

2 row(s) in 6.3780 seconds

=> #<#:0x1aed682>

hbase(main):003:0>

hbase(main):003:0> put 'test', 'row1', 'cf:a', 'value1'

0 row(s) in 1.9870 seconds

hbase(main):004:0> put 'test', 'row2', 'cf:b', 'value2'

0 row(s) in 0.0220 seconds

hbase(main):005:0> put 'test', 'row3', 'cf:c', 'value3'

0 row(s) in 0.0680 seconds

hbase(main):006:0> scan 'test'

ROW              COLUMN+CELL                                                                                     

 row1            column=cf:a, timestamp=1414466774970, value=value1                                              

 row2            column=cf:b, timestamp=1414466783818, value=value2                                              

 row3            column=cf:c, timestamp=1414466790998, value=value3                                              

3 row(s) in 0.0530 seconds

 

1.7 Problems encountered

 

1.7.1 Error 1

WARNING! HBase file layout needs to be upgraded.  You have version 7 and I want version 8.  Is your hbase.rootdir valid?  If so, you

 may need to run 'hbase hbck -fixVersionFile'.

14/10/28 11:16:05 FATAL master.HMaster: Unhandled exception. Starting shutdown.

org.apache.hadoop.hbase.util.FileSystemVersionException: HBase file layout needs to be upgraded.  You have version 7 and I want vers

ion 8.  Is your hbase.rootdir valid?  If so, you may need to run 'hbase hbck -fixVersionFile'.

        at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:583)

        at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:456)

        at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:147)

        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:131)

        at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:761)

        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:578)

        at java.lang.Thread.run(Thread.java:745)

14/10/28 11:16:05 INFO master.HMaster: Aborting

14/10/28 11:16:05 INFO ipc.RpcServer: Stopping server on 60000

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=0,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=1,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=2,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=3,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=4,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=5,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=6,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=7,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=8,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=9,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=10,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=11,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=12,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=13,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=14,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=15,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=16,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=17,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=18,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=19,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=20,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=21,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=22,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=23,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=24,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=25,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=26,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=27,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=28,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=29,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: Replication.RpcServer.handler=0,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: Replication.RpcServer.handler=1,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: Replication.RpcServer.handler=2,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.listener,port=60000: stopping

14/10/28 11:16:05 INFO master.HMaster: Stopping infoServer

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.responder: stopped

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.responder: stopping

14/10/28 11:16:05 INFO mortbay.log: Stopped SelectChannelConnector@0.0.0.0:60010

14/10/28 11:16:05 INFO zookeeper.ZooKeeper: Session: 0x149547f5e0d0001 closed

14/10/28 11:16:05 INFO master.HMaster: HMaster main thread exiting

14/10/28 11:16:05 INFO zookeeper.ClientCnxn: EventThread shut down

14/10/28 11:16:05 ERROR master.HMasterCommandLine: Master exiting

java.lang.RuntimeException: HMaster Aborted

        at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:191)

        at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:134)

        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)

        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:78)

        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2812)

Cause:

The Hadoop version in use does not match the Hadoop version that this HBase release was built against.

Fix: use the hadoop-core-x.x.x.jar from the Hadoop installation directory to replace the hadoop-core-y.y.y.jar under hbase/lib, as sketched below.
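A minimal sketch of the jar swap; the wildcards stand in for the actual version numbers of your installation:

[root@node1 ~]# cd /usr1/hbase/lib
[root@node1 lib]# mv hadoop-core-*.jar /tmp/              # set aside the jar bundled with HBase
[root@node1 lib]# cp /usr1/hadoop/hadoop-core-*.jar ./    # copy in the jar that matches the running Hadoop
[root@node1 lib]# chown hadoop1:hadoop1 hadoop-core-*.jar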

 

1.7.2 Error 2

You have version null and I want version 8.  Is your hbase.rootdir valid?  If so, you may need to run 'hbase hbck -fixVersionFile'.

Fix: remove the /hbase directory in HDFS and let HBase recreate the layout on the next startup:

bin/hadoop fs -rm -r /hbase
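After the directory is removed, restart HBase so the master rebuilds the layout and version file; a sketch using the standard scripts in $HBASE_HOME/bin:

[hadoop1@node1 bin]$ ./stop-hbase.sh
[hadoop1@node1 bin]$ ./start-hbase.sh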

 

1.7.3 Error 3

hbase(main):003:0* scan 'test'

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/usr1/hbase/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr1/hadoop/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See for an explanation.

ROW           COLUMN+CELL                                                                                     

 row1         column=cf:a, timestamp=1414466774970, value=value1                                               

 row2         column=cf:b, timestamp=1414466783818, value=value2                                              

 row3         column=cf:c, timestamp=1414466790998, value=value3                                              

 row4         column=cf:d, timestamp=1414471915567, value=value4                                              

 row5         column=cf:e, timestamp=1414471877185, value=value5                                              

 row6         column=cf:f, timestamp=1414471898749, value=value6

Check which slf4j jars are involved:

[hadoop1@node1 logs]$ hbase classpath | tr ":" "\n" | grep -i slf4j

/usr1/hbase/lib/slf4j-api-1.6.4.jar

/usr1/hbase/lib/slf4j-log4j12-1.6.1.jar

/usr1/hadoop/libexec/../lib/slf4j-api-1.4.3.jar

/usr1/hadoop/libexec/../lib/slf4j-log4j12-1.4.3.jar

Solution: remove the slf4j jars under the hbase lib directory, and the warning goes away.

(Do not remove the jar files under Hadoop's lib directory; otherwise, starting Hadoop remotely via the start-all.sh shell script fails with an error that the log4j package cannot be found.)

[hadoop1@node1 logs]$ cd /usr1/hbase/lib

[hadoop1@node1 lib]$ ls -1 slf4j*

slf4j-api-1.6.4.jar

slf4j-log4j12-1.6.1.jar

[hadoop1@node1 lib]$ mv slf4j-api-1.6.4.jar  ./otherpath/

[hadoop1@node1 lib]$ mv slf4j-log4j12-1.6.1.jar ./otherpath/

Log in to the HBase shell again and run the query:

hbase(main):003:0* scan 'test'

ROW               COLUMN+CELL

 row1             column=cf:a, timestamp=1414466774970, value=value1

 row2             column=cf:b, timestamp=1414466783818, value=value2

 row3             column=cf:c, timestamp=1414466790998, value=value3

 row4             column=cf:d, timestamp=1414471915567, value=value4

 row5             column=cf:e, timestamp=1414471877185, value=value5

 row6             column=cf:f, timestamp=1414471898749, value=value6

6 row(s) in 0.1130 seconds

-- The End --
