Category: HADOOP
2014-10-28 13:38:32
Installing HBase on Linux
Environment:
OS: Red Hat Linux AS 5
HBase: hbase-0.95.2
1. Installation steps
Download the installation package; the download address is:
Choose a version appropriate for your setup; the version I downloaded here is hbase-0.95.2-hadoop1-bin.tar.gz.
The steps below only need to be performed on the master node (the NameNode).
Log in as the hadoop1 user:
[hadoop1@node1 ~]$ echo $HADOOP_HOME
/usr1/hadoop
Copy the installation package to the following directory:
[root@node1 hbase]# cp hbase-0.95.2-hadoop1-bin.tar.gz /usr1
Extract it:
[root@node1 usr1]# tar -zxvf hbase-0.95.2-hadoop1-bin.tar.gz
Rename the directory:
[root@node1 usr1]# mv hbase-0.95.2 hbase
Grant ownership of the hbase directory to the hadoop1 user:
[root@node1 usr1]# chown -R hadoop1:hadoop1 ./hbase
export HBASE_HOME=/usr1/hbase
Add $HBASE_HOME/bin to the PATH variable.
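Concretely, both settings above could go into the hadoop1 user's ~/.bash_profile (the file name is the usual convention, not stated in the original), for example:

```shell
# append to ~/.bash_profile of the hadoop1 user, then run: source ~/.bash_profile
export HBASE_HOME=/usr1/hbase
export PATH=$PATH:$HBASE_HOME/bin
```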
Edit hbase-env.sh; its default path is /usr1/hbase/conf/.
Add the JAVA_HOME environment variable:
export JAVA_HOME=/usr/java/jdk1.8.0_05
export HBASE_MANAGES_ZK=false        # HBase will not manage ZooKeeper itself; an external ZooKeeper is used
export HBASE_CLASSPATH=/usr1/hadoop/conf   # lets HBase find the Hadoop configuration
Edit hbase-site.xml in /usr1/hbase/conf and add the following parameters.
The value of hbase.rootdir must match fs.default.name in Hadoop's core-site.xml, with your own subdirectory appended; here I use hbase.
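The parameter list itself did not survive in this copy of the post; a minimal hbase-site.xml consistent with the description might look like the following (the HDFS port 9000 and the ZooKeeper quorum hosts are assumptions based on the addresses used elsewhere in this post):

```xml
<configuration>
  <!-- must match fs.default.name in core-site.xml, plus the /hbase subdirectory -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://192.168.56.101:9000/hbase</value>
  </property>
  <!-- run HBase in fully distributed mode -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- external ZooKeeper ensemble, since HBASE_MANAGES_ZK=false above -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>192.168.56.101,192.168.56.102,192.168.56.103</value>
  </property>
</configuration>
```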
In the same directory, /usr1/hbase/conf, edit the regionservers file and list the region server hosts, one per line:
192.168.56.101
192.168.56.102
192.168.56.103
192.168.56.104
Once configuration is complete, copy the entire hbase directory to the other nodes (packed into hbase.tar first):
scp hbase.tar root@192.168.56.102:/usr1/
scp hbase.tar root@192.168.56.103:/usr1/
scp hbase.tar root@192.168.56.104:/usr1/
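The scp commands ship an hbase.tar archive, so a pack step on the master is implied; a sketch of it (run against a throwaway directory here so it can be tried anywhere, though on the real master it would be run in /usr1 against the configured hbase tree):

```shell
cd "$(mktemp -d)"           # stand-in for /usr1 on the master node
mkdir -p hbase/conf         # stand-in for the configured hbase directory
tar -cf hbase.tar hbase     # pack the directory into the archive the scp commands copy
tar -tf hbase.tar           # list the archive contents to verify
```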
On each of the other nodes, extract the archive and grant ownership to the hadoop1 user:
[root@node2 usr1]# tar -xvf hbase.tar
[root@node2 usr1]# chown -R hadoop1:hadoop1 ./hbase
Log in as the hadoop1 user.
On the master node, start the whole cluster:
[hadoop1@node1 bin]$ ./start-hbase.sh
Once startup completes, the following command enters the HBase shell:
[hadoop1@node1 bin]$ ./hbase shell
hbase(main):001:0> create 'test', 'cf'
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr1/hbase/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr1/hadoop/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
0 row(s) in 33.5640 seconds
=> Hbase::Table - test
hbase(main):002:0> list
TABLE
hbase:namespace
test
2 row(s) in 6.3780 seconds
=> #<#
hbase(main):003:0> put 'test', 'row1', 'cf:a', 'value1'
0 row(s) in 1.9870 seconds
hbase(main):004:0> put 'test', 'row2', 'cf:b', 'value2'
0 row(s) in 0.0220 seconds
hbase(main):005:0> put 'test', 'row3', 'cf:c', 'value3'
0 row(s) in 0.0680 seconds
hbase(main):006:0> scan 'test'
ROW COLUMN+CELL
row1 column=cf:a, timestamp=1414466774970, value=value1
row2 column=cf:b, timestamp=1414466783818, value=value2
row3 column=cf:c, timestamp=1414466790998, value=value3
3 row(s) in 0.0530 seconds
A problem hit along the way: on one startup the HMaster aborted with the following error:
WARNING! HBase file layout needs to be upgraded. You have version 7 and I want version 8. Is your hbase.rootdir valid? If so, you may need to run 'hbase hbck -fixVersionFile'.
14/10/28 11:16:05 FATAL master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.hbase.util.FileSystemVersionException: HBase file layout needs to be upgraded. You have version 7 and I want version 8. Is your hbase.rootdir valid? If so, you may need to run 'hbase hbck -fixVersionFile'.
at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:583)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:456)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:147)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(...)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:761)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:578)
at java.lang.Thread.run(Thread.java:745)
14/10/28 11:16:05 INFO master.HMaster: Aborting
14/10/28 11:16:05 INFO ipc.RpcServer: Stopping server on 60000
14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=0,port=60000: exiting
14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=1,port=60000: exiting
(... handlers 2-29 and Replication.RpcServer handlers 0-2 print the same "exiting" message ...)
14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.listener,port=60000: stopping
14/10/28 11:16:05 INFO master.HMaster: Stopping infoServer
14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.responder: stopped
14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.responder: stopping
14/10/28 11:16:05 INFO mortbay.log: Stopped SelectChannelConnector@0.0.0.0:60010
14/10/28 11:16:05 INFO zookeeper.ZooKeeper: Session: 0x149547f5e0d0001 closed
14/10/28 11:16:05 INFO master.HMaster: HMaster main thread exiting
14/10/28 11:16:05 INFO zookeeper.ClientCnxn: EventThread shut down
14/10/28 11:16:05 ERROR master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: HMaster Aborted
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:191)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:134)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:78)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2812)
Cause:
The Hadoop version does not match the HBase version.
Fix: copy hadoop-core-x.x.x.jar from the hadoop directory into hbase/lib, replacing the hadoop-core-y.y.y.jar file there.
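A sketch of that jar swap (the version numbers below are hypothetical stand-ins for x.x.x and y.y.y, and temp directories stand in for /usr1/hadoop and /usr1/hbase so the sketch is runnable as-is):

```shell
HADOOP_HOME=$(mktemp -d)     # stand-in for /usr1/hadoop
HBASE_HOME=$(mktemp -d)      # stand-in for /usr1/hbase
mkdir -p "$HBASE_HOME/lib"
touch "$HADOOP_HOME/hadoop-core-1.2.1.jar"     # hypothetical cluster jar (x.x.x)
touch "$HBASE_HOME/lib/hadoop-core-1.0.4.jar"  # hypothetical bundled jar (y.y.y)

# remove the jar HBase shipped with and copy in the cluster's own hadoop-core jar
rm "$HBASE_HOME"/lib/hadoop-core-*.jar
cp "$HADOOP_HOME"/hadoop-core-*.jar "$HBASE_HOME/lib/"
ls "$HBASE_HOME/lib"
```

After the swap, restart HBase so the replacement jar is picked up.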
If the error instead reads "You have version null and I want version 8. Is your hbase.rootdir valid? If so, you may need to run 'hbase hbck -fixVersionFile'.",
remove the /hbase directory on HDFS and let HBase recreate it (note: this deletes any existing HBase data):
bin/hadoop fs -rm -r /hbase
A second issue: every HBase shell command was accompanied by SLF4J multiple-binding warnings:
hbase(main):003:0* scan 'test'
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr1/hbase/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr1/hadoop/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
ROW COLUMN+CELL
row1 column=cf:a, timestamp=1414466774970, value=value1
row2 column=cf:b, timestamp=1414466783818, value=value2
row3 column=cf:c, timestamp=1414466790998, value=value3
row4 column=cf:d, timestamp=1414471915567, value=value4
row5 column=cf:e, timestamp=1414471877185, value=value5
row6 column=cf:f, timestamp=1414471898749, value=value6
Check which slf4j jars are on the classpath:
[hadoop1@node1 logs]$ hbase classpath | tr ":" "\n" | grep -i slf4j
/usr1/hbase/lib/slf4j-api-1.6.4.jar
/usr1/hbase/lib/slf4j-log4j12-1.6.1.jar
/usr1/hadoop/libexec/../lib/slf4j-api-1.4.3.jar
/usr1/hadoop/libexec/../lib/slf4j-log4j12-1.4.3.jar
The fix: move the slf4j jars out of HBase's lib directory (here into a backup directory, otherpath, created beforehand); the warning then disappears.
(Do not remove the jars from Hadoop's lib directory; otherwise remotely starting Hadoop via the start-all.sh shell script fails with an error that the log4j package cannot be found.)
[hadoop1@node1 logs]$ cd /usr1/hbase/lib
[hadoop1@node1 lib]$ ls -1 slf4j*
slf4j-api-1.6.4.jar
slf4j-log4j12-1.6.1.jar
[hadoop1@node1 lib]$ mv slf4j-api-1.6.4.jar ./otherpath/
[hadoop1@node1 lib]$ mv slf4j-log4j12-1.6.1.jar ./otherpath/
Log in to the shell again and query:
hbase(main):003:0* scan 'test'
ROW COLUMN+CELL
row1 column=cf:a, timestamp=1414466774970, value=value1
row2 column=cf:b, timestamp=1414466783818, value=value2
row3 column=cf:c, timestamp=1414466790998, value=value3
row4 column=cf:d, timestamp=1414471915567, value=value4
row5 column=cf:e, timestamp=1414471877185, value=value5
row6 column=cf:f, timestamp=1414471898749, value=value6
6 row(s) in 0.1130 seconds
-- The End --