Hadoop Fully Distributed Mode
Environment
- OS: CentOS 5.6
- JDK: jdk-6u26-linux-i586-rpm.bin
- Account: hadoop
- Directory: /usr/local/hadoop
- Hostnames: master, slave1, slave2
Goal
Build a three-node cluster:
- master: NameNode, JobTracker, DataNode, TaskTracker
- slave1: DataNode, TaskTracker
- slave2: DataNode, TaskTracker
This is not the ideal layout (the master should not normally double as a worker node); it is set up this way only to have multiple nodes to test against.
Installation
- Make sure the Sun JDK is installed on every machine, and install Hadoop into the same directory (/usr/local/hadoop) everywhere.
- Make sure JAVA_HOME=/usr/java/jdk1.6.0_26 is set, and set correctly, in hadoop/conf/hadoop-env.sh.
- Create the hadoop account on every machine:
# useradd hadoop
# passwd hadoop
- Set up passwordless SSH for the hadoop account:
$ ssh-keygen -t dsa    (I left the passphrase empty for easy testing; in a real environment, use keychain instead)
$ cd .ssh
$ cat id_dsa.pub >> authorized_keys
$ chmod 600 authorized_keys    (the permissions must be 600, otherwise sshd will not read the key file)
$ ssh-copy-id slave1
$ ssh-copy-id slave2
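The chmod 600 step matters because sshd (with the default StrictModes setting) ignores an authorized_keys file that is readable by group or others. As an illustration only (not part of Hadoop or OpenSSH), a small Python check for that condition:

```python
import os
import stat
import tempfile

def key_perm_ok(path: str) -> bool:
    """Return True if the file is accessible only by its owner
    (mode 600 or stricter), which is what sshd's StrictModes expects."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & 0o077 == 0

# Demo on a throwaway file (illustrative):
fd, demo = tempfile.mkstemp()
os.close(fd)
os.chmod(demo, 0o600)
```

Running `key_perm_ok` against `~/.ssh/authorized_keys` on each node is a quick way to rule out the permission problem before debugging anything else.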
Configuration file overview
- NameNode: core-site.xml
- JobTracker: mapred-site.xml
- DataNode: hdfs-site.xml
- master list: masters
- slave list: slaves
Configuration
$ vi core-site.xml

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.60.149:9000/</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/hadooptmp</value>
  </property>
</configuration>

$ vi mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.60.149:9001</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/usr/local/hadoop/mapred/local</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/tmp/hadoop/mapred/system</value>
  </property>
</configuration>

$ vi hdfs-site.xml

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/local/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/local/hadoop/hdfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>

- Next, update the configuration on slave1 and slave2.
- Apply the JobTracker settings on slave1 and slave2:
$ vi mapred-site.xml
(same mapred.job.tracker / mapred.local.dir / mapred.system.dir values as on master)
- Apply the DataNode settings on slave1 and slave2:
$ vi hdfs-site.xml
(same dfs.name.dir / dfs.data.dir / dfs.replication values as on master)
- Fill in the masters and slaves files:
$ vi masters
master

$ vi slaves
master
slave1
slave2
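The three *-site.xml files above all share one property-block shape. As an illustration (this helper is hypothetical, not part of Hadoop), a few lines of Python that render name/value pairs into that XML, which can be handy when stamping out identical configs for every node:

```python
def to_site_xml(props: dict) -> str:
    """Render name/value pairs as a Hadoop *-site.xml <configuration> block."""
    lines = ['<?xml version="1.0"?>', "<configuration>"]
    for name, value in props.items():
        lines += ["  <property>",
                  f"    <name>{name}</name>",
                  f"    <value>{value}</value>",
                  "  </property>"]
    lines.append("</configuration>")
    return "\n".join(lines)

# Values taken from the core-site.xml shown above:
core_site = to_site_xml({
    "fs.default.name": "hdfs://192.168.60.149:9000/",
    "hadoop.tmp.dir": "/usr/local/hadoop/hadooptmp",
})
```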
Running
$ bin/hadoop namenode -format
$ /usr/local/hadoop/bin/start-all.sh
starting namenode, logging to /usr/local/hadoop/bin/../logs/hadoop-yueyang-namenode-master.out
master: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-yueyang-datanode-master.out
slave2: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-yueyang-datanode-slave2.out
slave1: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-yueyang-datanode-slave1.out
master: starting secondarynamenode, logging to /usr/local/hadoop/bin/../logs/hadoop-yueyang-secondarynamenode-master.out
starting jobtracker, logging to /usr/local/hadoop/bin/../logs/hadoop-yueyang-jobtracker-master.out
slave1: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-yueyang-tasktracker-slave1.out
slave2: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-yueyang-tasktracker-slave2.out
master: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-yueyang-tasktracker-master.out
Testing
- Check that the node count shows 3, which confirms all three nodes joined the cluster.
- Create a pushtest directory for testing the distributed filesystem:
$ bin/hadoop dfs -mkdir pushtest
- Put conf/hadoop-env.sh into the pushtest directory as test data:
$ bin/hadoop dfs -put conf/hadoop-env.sh pushtest
- Click "Browse the filesystem"; if it redirects to slave1 or slave2, the distributed filesystem is working.
- Hadoop serves web status pages by default; the NameNode UI listens on port 50070 and the JobTracker UI on port 50030 (the 0.20-era defaults).
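The live-node count can also be checked from the command line with bin/hadoop dfsadmin -report. A small Python sketch that extracts the count from that report; the "Datanodes available:" line format is assumed from 0.20-era output and may differ in other versions:

```python
import re

def live_datanodes(report: str) -> int:
    """Pull the live-node count out of `hadoop dfsadmin -report` output.

    Assumes a summary line such as:
      Datanodes available: 3 (3 total, 0 dead)
    """
    m = re.search(r"Datanodes available:\s*(\d+)", report)
    if m is None:
        raise ValueError("no 'Datanodes available' line in report")
    return int(m.group(1))

# Sample report text (abridged, assumed format):
sample = "Configured Capacity: 0 (0 KB)\nDatanodes available: 3 (3 total, 0 dead)\n"
```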
Hadoop ships with some simple examples; let's try the word-count job.

$ bin/hadoop jar hadoop-examples-0.20.203.0.jar wordcount pushtest testoutput

While it runs, the job's progress and completion status can be watched in the web UI. For the actual word counts, look at the output files:

$ bin/hadoop fs -ls
drwxr-xr-x - hadoop supergroup 0 2011-07-11 11:13 /user/hadoop/test
drwxr-xr-x - hadoop supergroup 0 2011-07-11 11:15 /user/hadoop/testoutput
$ bin/hadoop fs -ls testoutput
Found 3 items
-rw-r--r-- 1 hadoop supergroup 0 2011-07-11 16:31 /user/hadoop/shanyang1/_SUCCESS
drwxr-xr-x - hadoop supergroup 0 2011-07-11 16:30 /user/hadoop/shanyang1/_logs
-rw-r--r-- 1 hadoop supergroup 32897 2011-07-11 16:31 /user/hadoop/shanyang1/part-r-00000
$ bin/hadoop fs -cat /user/hadoop/shanyang1/part-r-00000

The cat prints the detailed per-word counts.
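As a sanity check on what part-r-00000 should contain, the same computation is easy to reproduce locally. A minimal Python sketch; it tokenizes on whitespace, which roughly matches the StringTokenizer splitting used by the example job:

```python
from collections import Counter

def wordcount(text: str) -> Counter:
    """Split on whitespace and count occurrences - the same per-word
    totals the example MapReduce job writes to part-r-00000."""
    return Counter(text.split())

counts = wordcount("to be or not to be")
```

Comparing a few entries of this local result against the `fs -cat` output is a quick way to confirm the job ran correctly end to end.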