分类: 云计算

2015-08-05 22:56:21

Disclaimer: this article is original; please credit the source when reposting. Most of my posts are notes taken while learning, and my skill is limited, so please point out any mistakes so we can improve together. For that reason the article is revised from time to time; for the latest version see: http://blog.chinaunix.net/uid/29454152.html
After the two steps above we can configure Hadoop itself. The JDK must be installed first, then Hadoop.
1. Install the JDK
    I used the command "sudo apt-get install openjdk-7-jdk";
    alternatively, download a newer JDK package and install it manually.
2. Download the Hadoop package
    search "hadoop releases" on Google to find the official Apache download page.
3. Install and configure the files
    (1) Install it
    Uncompress the package. In the "Downloads" directory run the commands below
    (create /home/warrior/bigData first if it does not exist):
    "tar -vxzf hadoop-2.6.0.tar.gz -C /home/warrior/bigData"
    "cd /home/warrior/bigData"
    "mv hadoop-2.6.0 hadoop"
    (2) Configure it
    First edit "/etc/profile" and add the following:


  export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/
  export HADOOP_INSTALL=/home/warrior/bigData/hadoop
  export PATH=$PATH:$HADOOP_INSTALL/bin
  export PATH=$PATH:$HADOOP_INSTALL/sbin
  export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
  export HADOOP_COMMON_HOME=$HADOOP_INSTALL
  export HADOOP_HDFS_HOME=$HADOOP_INSTALL
  export YARN_HOME=$HADOOP_INSTALL
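The exports above can be checked with a short sketch. The paths follow this post's layout and are assumptions for any other machine:

```shell
# Sketch: the same variables as added to /etc/profile above
# (paths follow this post's layout, an assumption elsewhere).
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/
export HADOOP_INSTALL=/home/warrior/bigData/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin

# After `source /etc/profile` on the real machine, confirm the shell sees them:
echo "$HADOOP_INSTALL"
```

Once the file is sourced, `hadoop version` should print the 2.6.0 banner if the PATH entries are correct.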
    (3) Edit /home/warrior/bigData/hadoop/etc/hadoop/core-site.xml:


  <?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <!--
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License. See accompanying LICENSE file.
  -->

  <!-- Put site-specific property overrides in this file. -->

  <configuration>
      <!-- URI of the HDFS namenode (fs.default.name is the deprecated
           alias of fs.defaultFS; only one of the two is needed) -->
      <property>
          <name>fs.defaultFS</name>
          <value>hdfs://warrior:9000</value>
      </property>
      <!-- temp folder -->
      <property>
          <name>hadoop.tmp.dir</name>
          <value>file:/home/warrior/tmp</value>
          <description>A base for other temporary directories.</description>
      </property>
  </configuration>
    (4) Edit /home/warrior/bigData/hadoop/etc/hadoop/yarn-site.xml:


  <?xml version="1.0"?>
  <!--
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License. See accompanying LICENSE file.
  -->
  <configuration>

  <!-- Site specific YARN configuration properties -->
      <property>
          <name>yarn.nodemanager.aux-services</name>
          <value>mapreduce_shuffle</value>
      </property>

      <property>
          <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
          <value>org.apache.hadoop.mapred.ShuffleHandler</value>
      </property>

      <property>
          <name>yarn.resourcemanager.address</name>
          <value>warrior:8032</value>
      </property>

      <property>
          <name>yarn.resourcemanager.scheduler.address</name>
          <value>warrior:8030</value>
      </property>

      <property>
          <name>yarn.resourcemanager.resource-tracker.address</name>
          <value>warrior:8031</value>
      </property>

      <property>
          <name>yarn.resourcemanager.admin.address</name>
          <value>warrior:8033</value>
      </property>

      <property>
          <name>yarn.resourcemanager.webapp.address</name>
          <value>warrior:8088</value>
      </property>
  </configuration>
    (5) Edit /home/warrior/bigData/hadoop/etc/hadoop/mapred-site.xml (if only mapred-site.xml.template exists, copy it to mapred-site.xml first):


  <configuration>
      <property>
          <name>mapreduce.framework.name</name>
          <value>yarn</value>
      </property>

      <property>
          <name>mapreduce.jobhistory.address</name>
          <value>warrior:10020</value>
      </property>

      <property>
          <name>mapred.job.tracker</name>
          <value>warrior:10001</value>
      </property>

      <property>
          <name>mapreduce.jobhistory.webapp.address</name>
          <value>warrior:19888</value>
      </property>
  </configuration>
    (6) Create the "namenode" and "datanode" directories under /home/warrior/bigData/hdfs/
    (the paths referenced below), then edit /home/warrior/bigData/hadoop/etc/hadoop/hdfs-site.xml:


  <?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <!--
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License. See accompanying LICENSE file.
  -->

  <!-- Put site-specific property overrides in this file. -->

  <configuration>
      <property>
          <name>dfs.namenode.secondary.http-address</name>
          <value>warrior:9001</value>
      </property>

      <property>
          <name>dfs.namenode.name.dir</name>
          <value>file:/home/warrior/bigData/hdfs/namenode</value>
      </property>

      <property>
          <name>dfs.datanode.data.dir</name>
          <value>file:/home/warrior/bigData/hdfs/datanode</value>
      </property>

      <property>
          <name>dfs.replication</name>
          <value>1</value>
      </property>

      <property>
          <name>dfs.webhdfs.enabled</name>
          <value>true</value>
      </property>
  </configuration>
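The directories from step (6) can be created in one command. A runnable sketch (a temp base is used here so it works anywhere; on the real node the base would be /home/warrior/bigData/hdfs):

```shell
# Create the storage dirs that dfs.namenode.name.dir and
# dfs.datanode.data.dir point at. On the cluster, set
# BASE=/home/warrior/bigData/hdfs; a temp dir keeps this sketch portable.
BASE="$(mktemp -d)"
mkdir -p "$BASE/namenode" "$BASE/datanode"
ls "$BASE"
```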
    (7) Set the "masters" and "slaves" files.
    Below is my setup:
    /home/warrior/bigData/hadoop/etc/hadoop/masters
    warrior@10.0.2.10
    /home/warrior/bigData/hadoop/etc/hadoop/slaves
    warrior@10.0.2.20
    warrior@10.0.2.30


4. Copy the directory to all the other machines with scp
    (the -r flag is required for a directory), e.g. from 10.0.2.10 to 10.0.2.20:
    "scp -r /home/warrior/bigData/hadoop warrior@10.0.2.20:/home/warrior/bigData/"
    Repeat for the other nodes.
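The copy to each slave can be looped. This dry-run sketch only prints the commands (remove the `echo` to actually copy; the IPs are this post's example nodes):

```shell
# Dry run: print one scp command per slave listed in this post.
# Remove `echo` to perform the copy; -r is required because hadoop/ is a directory.
for host in 10.0.2.20 10.0.2.30; do
    echo scp -r /home/warrior/bigData/hadoop "warrior@$host:/home/warrior/bigData/"
done
```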
5. Test the Hadoop cluster with the commands below (run from the hadoop directory):
    "./bin/hadoop namenode -format"
    "./sbin/start-all.sh"

