Category: LINUX

2017-08-11 15:55:31

Notes from a Hadoop installation and configuration.
Version: Hadoop 2.7.3.
Three machines: test, home-test, hbase-test (the hostnames can be anything, but they must be mapped in /etc/hosts on every node first).
Create a hadoop account to install Hadoop under (installing as root also works), and set up passwordless SSH between the three machines, roughly as follows (see the sketch below):
useradd hadoop; passwd hadoop; ssh-keygen -t rsa (press Enter through the prompts); cp -a ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys; then distribute authorized_keys to the other two machines and repeat the same steps there, until every machine can SSH into the other two without a password.
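A minimal sketch of the host mapping and key distribution, run as the hadoop user. The IP-to-hostname mapping is an assumption based on the addresses used in zoo.cfg later in this post; adjust it to your own network.

# /etc/hosts on every node (addresses assumed from the zoo.cfg below)
192.168.2.131 test
192.168.2.138 home-test
192.168.2.139 hbase-test

# on each node, as the hadoop user
ssh-keygen -t rsa                                  # accept the defaults
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys

# append this node's public key to the other two, e.g. from test:
ssh-copy-id hadoop@home-test
ssh-copy-id hadoop@hbase-test

# verify: this should not prompt for a password
ssh hadoop@home-test hostname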

Before installing Hadoop, set up the Java environment: extract the JDK into /usr/local/lib, then add the following to ~/.bashrc:


export JAVA_HOME=/usr/local/lib/jdk1.8.0_144
export JAVA_BIN=$JAVA_HOME/bin
export JAVA_LIB=$JAVA_HOME/lib
export CLASSPATH=.:$JAVA_LIB/tools.jar:$JAVA_LIB/dt.jar
export PATH=$JAVA_BIN:$PATH
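After editing ~/.bashrc, reload it and confirm the JDK is picked up (a quick check, assuming the JDK was unpacked to the path above):

source ~/.bashrc
java -version     # should report 1.8.0_144
which java        # should point at /usr/local/lib/jdk1.8.0_144/bin/java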
Now install Hadoop: extract the Hadoop tarball into /usr/local (so it ends up as /usr/local/hadoop), then add the environment variables, also in ~/.bashrc:


# Hadoop Environment Variables
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$PATH
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native
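Reload the shell once more and check that the Hadoop binaries are on the PATH (this assumes the extracted directory was renamed to /usr/local/hadoop):

source ~/.bashrc
hadoop version    # should report Hadoop 2.7.3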
Now configure hdfs-site.xml (the configuration files live under $HADOOP_HOME/etc/hadoop):


<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/data</value>
    </property>
    <property>
        <name>dfs.datanode.address</name>
        <value>0.0.0.0:50011</value>
    </property>
    <property>
        <name>dfs.datanode.http.address</name>
        <value>0.0.0.0:50076</value>
    </property>
    <property>
        <name>dfs.datanode.ipc.address</name>
        <value>0.0.0.0:50021</value>
    </property>
</configuration>
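The three dfs.datanode.* properties move the DataNode off its default ports (50010/50075/50020). hdfs getconf reads the local configuration, so you can confirm a value took effect right away:

hdfs getconf -confKey dfs.replication        # should print 2
hdfs getconf -confKey dfs.datanode.address   # should print 0.0.0.0:50011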
core-site.xml


<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://test:9000</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>test:2181,home-test:2181,hbase-test:2181</value>
    </property>
</configuration>
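fs.defaultFS uses the hostname test, so every node must resolve it consistently; the ha.zookeeper.quorum entry only matters for an HA NameNode setup and is unused in this single-NameNode cluster. A quick resolution check on each node:

getent hosts test home-test hbase-test   # all three should resolve via /etc/hosts
hdfs getconf -confKey fs.defaultFS       # should print hdfs://test:9000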
yarn-site.xml:


<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>test</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>test:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>test:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>test:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>test:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>test:8088</value>
    </property>
</configuration>
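The explicit yarn.resourcemanager.* addresses repeat the default ports and are already implied by yarn.resourcemanager.hostname, but spelling them out does no harm. Later, once start-yarn.sh has been run, you can confirm that both NodeManagers registered:

yarn node -list          # should list 2 running NodeManagers
yarn application -list   # empty at first, but confirms the ResourceManager answers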
mapred-site.xml


<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <final>true</final>
    </property>
    <property>
        <name>mapreduce.jobtracker.http.address</name>
        <value>test:50030</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>test:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>test:19888</value>
    </property>
    <property>
        <name>mapred.job.tracker</name>
        <value>http://test:9001</value>
    </property>
</configuration>
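With mapreduce.framework.name set to yarn, the JobTracker-era properties (mapreduce.jobtracker.http.address and mapred.job.tracker) are MRv1 leftovers and are ignored; they can stay or be removed. The jobhistory properties only take effect if the history server is actually started, which start-dfs.sh/start-yarn.sh do not do:

mr-jobhistory-daemon.sh start historyserver   # the web UI then answers on test:19888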
Add the hostnames of the two DataNodes to the slaves file (etc/hadoop/slaves), e.g. as shown below.
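Assuming test is the NameNode and the other two machines host the DataNodes (the jps output later in this post is consistent with that), the slaves file would look like:

$ cat /usr/local/hadoop/etc/hadoop/slaves
home-test
hbase-test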
Once all of the above is configured, copy the whole hadoop directory to the other two machines. Next, prepare ZooKeeper: extract it to /usr/local/zookeeper and edit its configuration file, conf/zoo.cfg:


# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zookeeper/data
dataLogDir=/opt/zookeeper/logs
# the port at which the clients will connect
clientPort=2181
server.0=192.168.2.131:2888:3888
server.1=192.168.2.138:2888:3888
server.2=192.168.2.139:2888:3888
# the maximum number of client connections.
# increase this if you need to handle more clients
maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
Create the data and log directories that zoo.cfg points at: mkdir -p /opt/zookeeper/{data,logs}
Then create a myid file inside the data directory containing this machine's server number. For example, on test, which matches server.0=192.168.2.131:2888:3888 in the config above, write 0 into myid.

Copy the whole zookeeper directory to the other two machines; the key point is to change myid to the matching number on each node.
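A minimal sketch of that distribution step, run as a user that can write to /usr/local and /opt on every node. The mapping of home-test and hbase-test to server.1 and server.2 is an assumption; match it to whatever you put in zoo.cfg.

# on test (server.0)
mkdir -p /opt/zookeeper/{data,logs}
echo 0 > /opt/zookeeper/data/myid

# push the installation and the data layout to the other nodes
scp -r /usr/local/zookeeper home-test:/usr/local/
scp -r /usr/local/zookeeper hbase-test:/usr/local/
ssh home-test  'mkdir -p /opt/zookeeper/{data,logs}; echo 1 > /opt/zookeeper/data/myid'
ssh hbase-test 'mkdir -p /opt/zookeeper/{data,logs}; echo 2 > /opt/zookeeper/data/myid'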

Next comes HBase: extract HBase to any directory (here /opt/hbase-1.2.6) and edit conf/hbase-site.xml:


<configuration>
    <!-- HBase data directory on HDFS -->
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://test:9000/hbase</value>
    </property>
    <!-- enable fully distributed mode -->
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <!-- HMaster web UI port -->
    <property>
        <name>hbase.master.info.port</name>
        <value>17010</value>
    </property>
    <!-- HRegionServer web UI port (default) -->
    <property>
        <name>hbase.regionserver.info.port</name>
        <value>16030</value>
    </property>
    <!-- use the standalone ZooKeeper ensemble instead of the bundled one -->
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>test,home-test</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/opt/zookeeper/data</value>
    </property>
</configuration>
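start-hbase.sh starts regionservers on the hosts listed in conf/regionservers (analogous to Hadoop's slaves file). The post does not show that file, so the contents below are an assumption based on which nodes run HRegionServer in the jps output further down:

$ cat /opt/hbase-1.2.6/conf/regionservers
home-test
hbase-test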
Configure the HBase environment in ~/.bashrc:


# HBase Environment
export HBASE_HOME=/opt/hbase-1.2.6
export PATH=$HBASE_HOME/bin:$PATH
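As before, reload the shell and make sure the hbase command resolves (a quick check only):

source ~/.bashrc
hbase version   # should report HBase 1.2.6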
HBase ships with its own ZooKeeper; since we are not using the bundled one, disable it in hbase-env.sh:


# Tell HBase whether it should manage its own instance of ZooKeeper or not.
export JAVA_HOME="/usr/local/lib/jdk1.8.0_144"
export HBASE_CLASSPATH=$HADOOP_HOME/etc/hadoop
export HBASE_MANAGES_ZK=false   # true would make HBase use its bundled ZooKeeper
Copy the whole hbase directory to the other two machines. That covers the basic setup, so everything can now be started. Start ZooKeeper first:
Run zkServer.sh start on every node, then check each node with zkServer.sh status:


ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower

ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader
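One node reports leader and the others follower. If nc is available, the ZooKeeper four-letter-word commands give another quick health check (a sketch; nc is not installed by default on every distribution):

echo ruok | nc test 2181   # a healthy server answers "imok"
echo stat | nc test 2181   # shows the mode, connections and ensemble state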
ZooKeeper is up. Now start Hadoop: start-dfs.sh && start-yarn.sh (remember that HDFS must be formatted once before the very first start; see the note near the end), then check with jps:


On the Master (NameNode):
1232 QuorumPeerMain
1608 SecondaryNameNode
1770 ResourceManager
8012 Jps
1407 NameNode

On a DataNode:
2275 QuorumPeerMain
2389 DataNode
4217 Jps
2506 NodeManager
Everything came up cleanly. Now start HBase with start-hbase.sh and check with jps again:


On the Master (NameNode):
1232 QuorumPeerMain
3732 HMaster
1608 SecondaryNameNode
1770 ResourceManager
8012 Jps
1407 NameNode

On a DataNode:
2275 QuorumPeerMain
2389 DataNode
3159 HRegionServer
4217 Jps
2506 NodeManager
The NameNode and the DataNodes each gained one extra process, HMaster and HRegionServer respectively, which shows HBase is running. Normally, starting HBase on the master also starts the other nodes; if a node did not come up, log into it and run: hbase-daemon.sh start regionserver

At this point the setup is essentially complete. One important note: after finishing the Hadoop configuration above, HDFS must be formatted before the first start. On the Master run: hdfs namenode -format
With the configuration above this creates the name and data directories under /usr/local/hadoop/tmp.
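Putting it together, the first-time startup order implied by this post looks like the following summary sketch (the format step is destructive and is only done once):

# on every node
zkServer.sh start

# on the Master only
hdfs namenode -format     # first start only -- wipes any existing HDFS metadata
start-dfs.sh
start-yarn.sh
start-hbase.sh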

Now open the hbase shell to verify HBase. With the configuration above the HBase master web UI is also reachable in a browser, at http://test:17010 (the hbase.master.info.port set earlier):


[hadoop@test ~]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 1.2.6, rUnknown, Mon May 29 02:25:32 CDT 2017

hbase(main):001:0> status
1 active master, 0 backup masters, 2 servers, 0 dead, 1.0000 average load

hbase(main):002:0>
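Beyond status, a quick read/write smoke test from the shell confirms the regionservers can actually serve data (the table name smoke_test is just a throwaway example):

hbase shell <<'EOF'
create 'smoke_test', 'cf'
put 'smoke_test', 'row1', 'cf:c1', 'value1'
scan 'smoke_test'
disable 'smoke_test'
drop 'smoke_test'
EOF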
Everything works. With the configuration above, HBase's data lives under /hbase on HDFS; take a look:


[hadoop@test ~]$ hdfs dfs -lsr /
lsr: DEPRECATED: Please use 'ls -R' instead.
drwxr-xr-x - hadoop supergroup 0 2017-08-11 14:19 /hbase
drwxr-xr-x - hadoop supergroup 0 2017-08-11 14:19 /hbase/.tmp
drwxr-xr-x - hadoop supergroup 0 2017-08-11 15:19 /hbase/MasterProcWALs
-rw-r--r-- 2 hadoop supergroup 0 2017-08-11 15:19 /hbase/MasterProcWALs/state-00000000000000000017.log
drwxr-xr-x - hadoop supergroup 0 2017-08-11 14:19 /hbase/WALs
drwxr-xr-x - hadoop supergroup 0 2017-08-11 10:53 /hbase/WALs/hbase-test,16020,1502414543892
drwxr-xr-x - hadoop supergroup 0 2017-08-11 15:19 /hbase/WALs/hbase-test,16020,1502432350982
-rw-r--r-- 2 hadoop supergroup 83 2017-08-11 15:19 /hbase/WALs/hbase-test,16020,1502432350982/hbase-test%2C16020%2C1502432350982.default.1502435957318
drwxr-xr-x - hadoop supergroup 0 2017-08-10 17:02 /hbase/WALs/home-test,16020,1502334255452
drwxr-xr-x - hadoop supergroup 0 2017-08-10 17:18 /hbase/WALs/home-test,16020,1502356239257
drwxr-xr-x - hadoop supergroup 0 2017-08-11 15:19 /hbase/WALs/home-test,16020,1502432347782
-rw-r--r-- 2 hadoop supergroup 83 2017-08-11 15:19 /hbase/WALs/home-test,16020,1502432347782/home-test%2C16020%2C1502432347782..meta.1502435954818.meta
-rw-r--r-- 2 hadoop supergroup 83 2017-08-11 15:19 /hbase/WALs/home-test,16020,1502432347782/home-test%2C16020%2C1502432347782.default.1502435952957
drwxr-xr-x - hadoop supergroup 0 2017-08-11 14:25 /hbase/archive
drwxr-xr-x - hadoop supergroup 0 2017-08-10 11:04 /hbase/data
drwxr-xr-x - hadoop supergroup 0 2017-08-10 14:24 /hbase/data/default
drwxr-xr-x - hadoop supergroup 0 2017-08-10 11:04 /hbase/data/hbase
drwxr-xr-x - hadoop supergroup 0 2017-08-10 11:04 /hbase/data/hbase/meta
drwxr-xr-x - hadoop supergroup 0 2017-08-10 11:04 /hbase/data/hbase/meta/.tabledesc
-rw-r--r-- 2 hadoop supergroup 398 2017-08-10 11:04 /hbase/data/hbase/meta/.tabledesc/.tableinfo.0000000001
drwxr-xr-x - hadoop supergroup 0 2017-08-10 11:04 /hbase/data/hbase/meta/.tmp
drwxr-xr-x - hadoop supergroup 0 2017-08-11 14:19 /hbase/data/hbase/meta/1588230740
-rw-r--r-- 2 hadoop supergroup 32 2017-08-10 11:04 /hbase/data/hbase/meta/1588230740/.regioninfo
drwxr-xr-x - hadoop supergroup 0 2017-08-11 14:26 /hbase/data/hbase/meta/1588230740/.tmp
drwxr-xr-x - hadoop supergroup 0 2017-08-11 14:26 /hbase/data/hbase/meta/1588230740/info
-rw-r--r-- 2 hadoop supergroup 5256 2017-08-11 14:26 /hbase/data/hbase/meta/1588230740/info/3fd729d52d074b0589519f6274e16d55
-rw-r--r-- 2 hadoop supergroup 7774 2017-08-11 14:19 /hbase/data/hbase/meta/1588230740/info/635a0727cf9f45bfaba3137b47ca7958
drwxr-xr-x - hadoop supergroup 0 2017-08-11 14:19 /hbase/data/hbase/meta/1588230740/recovered.edits
-rw-r--r-- 2 hadoop supergroup 0 2017-08-11 14:19 /hbase/data/hbase/meta/1588230740/recovered.edits/62.seqid
drwxr-xr-x - hadoop supergroup 0 2017-08-10 11:04 /hbase/data/hbase/namespace
drwxr-xr-x - hadoop supergroup 0 2017-08-10 11:04 /hbase/data/hbase/namespace/.tabledesc
-rw-r--r-- 2 hadoop supergroup 312 2017-08-10 11:04 /hbase/data/hbase/namespace/.tabledesc/.tableinfo.0000000001
drwxr-xr-x - hadoop supergroup 0 2017-08-10 11:04 /hbase/data/hbase/namespace/.tmp
drwxr-xr-x - hadoop supergroup 0 2017-08-10 17:10 /hbase/data/hbase/namespace/b89aee108e8c946ec1ef91e9ca9ff17a
-rw-r--r-- 2 hadoop supergroup 42 2017-08-10 11:04 /hbase/data/hbase/namespace/b89aee108e8c946ec1ef91e9ca9ff17a/.regioninfo
drwxr-xr-x - hadoop supergroup 0 2017-08-10 11:11 /hbase/data/hbase/namespace/b89aee108e8c946ec1ef91e9ca9ff17a/info
-rw-r--r-- 2 hadoop supergroup 4963 2017-08-10 11:11 /hbase/data/hbase/namespace/b89aee108e8c946ec1ef91e9ca9ff17a/info/a63013952e4749a5a44bcf6f15348e7c
drwxr-xr-x - hadoop supergroup 0 2017-08-11 14:19 /hbase/data/hbase/namespace/b89aee108e8c946ec1ef91e9ca9ff17a/recovered.edits
-rw-r--r-- 2 hadoop supergroup 0 2017-08-11 14:19 /hbase/data/hbase/namespace/b89aee108e8c946ec1ef91e9ca9ff17a/recovered.edits/26.seqid
-rw-r--r-- 2 hadoop supergroup 42 2017-08-10 11:04 /hbase/hbase.id
-rw-r--r-- 2 hadoop supergroup 7 2017-08-10 11:04 /hbase/hbase.version
drwxr-xr-x - hadoop supergroup 0 2017-08-11 15:30 /hbase/oldWALs
The data is there; everything checks out.

That wraps it up. Some small details may not be covered here; I'll fill them in over time. The next post will be about a Spark cluster.
















