2012-10-24 16:21:12



Environment:
ubuntu server x86_64 12.04
hadoop 1.0.2

1) Edit the /etc/hosts file on the master and the slaves


  hadoop@hadoop-master:~$ cat /etc/hosts
  192.168.10.100 slave1 hadoop-slave1
  192.168.10.101 master hadoop-master
  192.168.10.102 slave2 hadoop-slave2
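
A quick sanity check before moving on (these are ordinary commands, not part of the original steps): every hostname should resolve on every node.

  hadoop@hadoop-master:~$ ping -c 1 hadoop-slave1
  hadoop@hadoop-master:~$ ping -c 1 hadoop-slave2
  hadoop@hadoop-master:~$ getent hosts hadoop-master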

2) Create a common hadoop user and set up SSH key login (so the master can log in to the slaves without a password)


  hadoop@hadoop-master:~$ sudo useradd -m -s /bin/bash -G sudo hadoop
  hadoop@hadoop-master:~$ sudo apt-get install ssh
  hadoop@hadoop-master:~$ sudo /etc/init.d/ssh start
  # generate an ssh key on hadoop-master
  hadoop@hadoop-master:~$ ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
  hadoop@hadoop-master:~$ ssh-copy-id -i ~/.ssh/id_rsa.pub localhost
  hadoop@hadoop-master:~$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop-slave1
  hadoop@hadoop-master:~$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop-slave2
NOTE: Log in to hadoop-slave1 and hadoop-slave2 from the master once now. The first SSH connection asks a yes/no host-key question, and answering it here prevents the later steps from failing when the master syncs to the slaves.
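
As a quick check, each of these should print the slave's hostname with no password prompt:

  hadoop@hadoop-master:~$ ssh hadoop-slave1 hostname
  hadoop@hadoop-master:~$ ssh hadoop-slave2 hostname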

3) Install the JDK


  hadoop@hadoop-master:~$ sudo apt-get install default-jdk
NOTE: It's roughly a 170 MB download, so it can crawl. (You could also use the .bin package, but I prefer apt... lazy.)
Configure /etc/profile:


  export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-amd64
  export HADOOP_HOME=/home/hadoop/hadoop-1.0.2
  export PATH=$PATH:$HADOOP_HOME/bin:$JAVA_HOME/bin
  export HADOOP_HOME_WARN_SUPPRESS=1  # suppress hadoop's "$HADOOP_HOME is deprecated" warning
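
To pick up the new variables in the current shell and confirm they took effect (a quick check, nothing more):

  hadoop@hadoop-master:~$ source /etc/profile
  hadoop@hadoop-master:~$ java -version
  hadoop@hadoop-master:~$ echo $JAVA_HOME $HADOOP_HOME
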
4) Install hadoop


  # download hadoop-1.0.2
  hadoop@hadoop-master:~$ wget -c http://archive.apache.org/dist/hadoop/core/hadoop-1.0.2/hadoop-1.0.2.tar.gz
  # unpack
  hadoop@hadoop-master:~$ tar xvzf hadoop-1.0.2.tar.gz
  # symlink
  hadoop@hadoop-master:~$ ln -s hadoop-1.0.2 hadoop
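
A quick check: the unpacked tree should identify itself as 1.0.2.

  hadoop@hadoop-master:~$ hadoop/bin/hadoop version
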
5) Configure hadoop

#conf/hadoop-env.sh


  # add the jdk
  export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-amd64
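
This JAVA_HOME is where default-jdk lands on 64-bit Ubuntu 12.04; the path may differ on your box, so confirm it exists before relying on it:

  hadoop@hadoop-master:~$ ls -d /usr/lib/jvm/java-6-openjdk-amd64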

#conf/mapred-site.xml


  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <!-- Put site-specific property overrides in this file. -->

  <configuration>

    <property>
      <name>mapred.job.tracker</name>
      <value>hadoop-master:9001</value>
    </property>

  </configuration>

#conf/hdfs-site.xml


  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <!-- Put site-specific property overrides in this file. -->

  <configuration>

    <property>
      <name>dfs.name.dir</name>
      <value>/home/hadoop/name</value>
    </property>

    <property>
      <name>dfs.data.dir</name>
      <value>/home/hadoop/data</value>
    </property>

    <property>
      <name>dfs.replication</name>
      <value>2</value> <!-- the default is 3 replicas -->
    </property>

  </configuration>
#conf/core-site.xml


  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <!-- Put site-specific property overrides in this file. -->

  <configuration>

    <property>
      <name>fs.default.name</name>
      <value>hdfs://hadoop-master:9000</value>
    </property>

    <property>
      <name>hadoop.tmp.dir</name>
      <value>/home/hadoop/tmp</value>
    </property>

  </configuration>

#conf/masters


  hadoop-master

#conf/slaves


  hadoop-slave1
  hadoop-slave2
NOTE: Do not create the name and data directories in advance; hadoop creates them automatically when it formats the namenode.

6) Copy the master's hadoop directory to the slaves


  # copy the real directory, not the hadoop symlink (scp would dereference it,
  # and step 10 expects ~/hadoop-1.0.2 to exist on the slaves)
  hadoop@hadoop-master:~$ scp -r hadoop-1.0.2 hadoop-slave1:
  hadoop@hadoop-master:~$ scp -r hadoop-1.0.2 hadoop-slave2:
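
To confirm the copy landed on each slave (a quick check):

  hadoop@hadoop-master:~$ ssh hadoop-slave1 ls hadoop-1.0.2/conf
  hadoop@hadoop-master:~$ ssh hadoop-slave2 ls hadoop-1.0.2/conf
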
7) Format the filesystem


  hadoop@hadoop-master:~$ cd hadoop-1.0.2/
  hadoop@hadoop-master:~/hadoop-1.0.2$ bin/hadoop namenode -format
  # successful output ends with:
  /************************************************************
  SHUTDOWN_MSG: Shutting down NameNode at v-jiwan-ubuntu-0/127.0.0.1
  ************************************************************/
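
After a successful format, the dfs.name.dir metadata directory should have appeared on its own:

  hadoop@hadoop-master:~$ ls ~/name/current
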
8) Start all nodes


  hadoop@hadoop-master:~/hadoop-1.0.2$ bin/start-all.sh
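
To verify everything came up, run jps on each node: with this layout the master should show NameNode, SecondaryNameNode, and JobTracker, and each slave should show DataNode and TaskTracker.

  hadoop@hadoop-master:~/hadoop-1.0.2$ jps
  hadoop@hadoop-master:~/hadoop-1.0.2$ ssh hadoop-slave1 jps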

9) File operations


  hadoop@hadoop-master:~$ hadoop dfs -mkdir os
  hadoop@hadoop-master:~/hadoop-1.0.2$ bin/hadoop dfs -put bin/start-all.sh os
  hadoop@hadoop-master:~/hadoop-1.0.2$ bin/hadoop dfs -ls os
  drwxr-xr-x - hadoop supergroup 0 2012-05-08 11:38 /user/hadoop/os/start-all.sh
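
Reading the file back out of HDFS confirms the round trip (a quick check):

  hadoop@hadoop-master:~/hadoop-1.0.2$ bin/hadoop dfs -cat os/start-all.sh | head -3
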
10) Starting on a slave


  hadoop@hadoop-slave1:~/hadoop-1.0.2$ bin/start-dfs.sh    # start the HDFS DataNodes separately
  hadoop@hadoop-slave1:~/hadoop-1.0.2$ bin/start-mapred.sh # start the Map/Reduce TaskTrackers separately
11) Stop all nodes


  hadoop@hadoop-master:~/hadoop-1.0.2$ bin/stop-all.sh
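
After stopping, jps on each node should list nothing but the Jps process itself:

  hadoop@hadoop-master:~/hadoop-1.0.2$ jps
  hadoop@hadoop-master:~/hadoop-1.0.2$ ssh hadoop-slave1 jps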







