
Category: Big Data

2013-11-18 21:44:50

 

Installing Hadoop on Linux

Environment:

OS: Red Hat Linux AS 5

Hadoop: 1.2.1


1. Installation steps

1.1 Configure the hosts file

After each machine has been assigned an IP address and a hostname, add the following entries to the hosts file on every node:

[root@node4 ~]# more /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1               localhost.localdomain localhost

::1             localhost6.localdomain6 localhost6

#Public

192.168.56.101   node1        node1

192.168.56.102   node2        node2

192.168.56.103   node3        node3

192.168.56.104   node4        node4
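The same entries must be present on every node. As a shortcut, a hedged sketch (assuming root is allowed to log in over SSH to the other machines; otherwise edit /etc/hosts on each node by hand):

# Push the edited hosts file from node1 to the other nodes; run as root on node1
for h in node2 node3 node4; do
    scp /etc/hosts root@$h:/etc/hosts
done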

1.2 Node role assignment

Node role     IP               Hostname
Name node     192.168.56.101   node1
Datanode1     192.168.56.102   node2
Datanode2     192.168.56.103   node3
Datanode3     192.168.56.104   node4

There are four nodes in total: one namenode and three datanodes.

1.3 Create the hadoop user and group

Create the hadoop user and the hadoop group on every node, and set a password for the hadoop user:

[root@node1 ~]# groupadd hadoop

[root@node2 ~]# groupadd hadoop

[root@node3 ~]# groupadd hadoop

[root@node4 ~]# groupadd hadoop

[root@node1 ~]# useradd -g hadoop -G hadoop hadoop

[root@node1 ~]# passwd hadoop

[root@node2 ~]# useradd -g hadoop -G hadoop hadoop

[root@node2 ~]# passwd hadoop

[root@node3 ~]# useradd -g hadoop -G hadoop hadoop

[root@node3 ~]# passwd hadoop

[root@node4 ~]# useradd -g hadoop -G hadoop hadoop

[root@node4 ~]# passwd hadoop
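A quick check that the account and group were created as intended (run on any node):

# Show the uid, gid and group membership of the hadoop user
id hadoop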

1.4 Set up SSH equivalence between the name node and the data nodes

The name node and the data nodes communicate over SSH without passwords, so passwordless SSH logins must be configured between the name node and the other nodes. This step is performed as the hadoop user.

1.4.1 Set up equivalence between the name node and Datanode1

[hadoop@node1 ~]$ cd ~

[hadoop@node1 ~]$ mkdir ~/.ssh

[hadoop@node1 ~]$ chmod 700 ~/.ssh

[hadoop@node1 ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/hadoop/.ssh/id_rsa.

Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.

The key fingerprint is:

84:88:f0:dc:7c:a6:d0:e8:68:65:53:97:35:02:05:fb hadoop@node1

[hadoop@node1 ~]$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/hadoop/.ssh/id_dsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/hadoop/.ssh/id_dsa.

Your public key has been saved in /home/hadoop/.ssh/id_dsa.pub.

The key fingerprint is:

94:80:60:37:38:e6:aa:10:22:50:1a:c2:42:56:22:0a hadoop@node1

[hadoop@node1 .ssh]$ chmod 644 ~/.ssh/authorized_keys

[hadoop@node2 ~]$ cd ~

[hadoop@node2 ~]$ mkdir ~/.ssh

[hadoop@node2 ~]$ chmod 700 ~/.ssh

[hadoop@node2 ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/hadoop/.ssh/id_rsa.

Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.

The key fingerprint is:

00:ee:68:42:3a:16:a3:75:6c:d4:36:49:57:a5:5c:57 hadoop@node2

[hadoop@node2 ~]$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/hadoop/.ssh/id_dsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/hadoop/.ssh/id_dsa.

Your public key has been saved in /home/hadoop/.ssh/id_dsa.pub.

The key fingerprint is:

21:47:cd:13:d2:17:a0:cb:c9:a0:c5:fc:39:3e:c4:bb hadoop@node2

[hadoop@node1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

[hadoop@node1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

[hadoop@node1 ~]$ ssh node2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

The authenticity of host 'node2 (192.168.56.102)' can't be established.

RSA key fingerprint is 5b:13:97:1a:0c:4d:36:93:7b:b5:87:2f:ac:34:26:1f.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'node2,192.168.56.102' (RSA) to the list of known hosts.

hadoop@node2's password:

[hadoop@node1 ~]$ ssh node2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

hadoop@node2's password:

[hadoop@node1 ~]$ scp ~/.ssh/authorized_keys node2:~/.ssh/authorized_keys

hadoop@node2's password:

authorized_keys                                                                                   100% 1992     2.0KB/s   00:00

[hadoop@node1 .ssh]$ chmod 644 ~/.ssh/authorized_keys
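As a side note, the key exchange above can usually be shortened. A hedged alternative (assuming ssh-copy-id is available, which it normally is with the openssh-clients package):

# On node1, as the hadoop user: append node1's RSA public key to node2's authorized_keys
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@node2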

1.4.2 Set up equivalence between the name node and Datanode2

[hadoop@node3 ~]$ cd ~

[hadoop@node3 ~]$ mkdir ~/.ssh

[hadoop@node3 ~]$ chmod 700 ~/.ssh

[hadoop@node3 ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/hadoop/.ssh/id_rsa.

Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.

The key fingerprint is:

43:18:2a:22:37:19:3c:42:25:56:7d:a9:55:26:62:e6 hadoop@node3

[hadoop@node3 ~]$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/hadoop/.ssh/id_dsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/hadoop/.ssh/id_dsa.

Your public key has been saved in /home/hadoop/.ssh/id_dsa.pub.

The key fingerprint is:

a4:b9:29:98:6c:57:3e:ea:a8:95:23:4c:c2:25:37:23 hadoop@node3

[hadoop@node1 ~]$ ssh node3 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

The authenticity of host 'node3 (192.168.56.103)' can't be established.

RSA key fingerprint is 56:dc:97:60:22:59:17:f2:04:67:9f:fd:82:4c:64:cf.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'node3,192.168.56.103' (RSA) to the list of known hosts.

hadoop@node3's password:

[hadoop@node1 ~]$ ssh node3 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

hadoop@node3's password:

[hadoop@node1 ~]$ scp ~/.ssh/authorized_keys node3:~/.ssh/authorized_keys

hadoop@node3's password:

authorized_keys                                                                                   100% 2988     2.9KB/s   00:00

I ran into a case where the permissions on the authorized_keys file prevented the connection from being established; changing the file's permissions solved the problem:

[hadoop@node3 .ssh]$ chmod 644 ~/.ssh/authorized_keys

1.4.3 Set up equivalence between the name node and Datanode3

[hadoop@node4 ~]$ cd ~

[hadoop@node4 ~]$ mkdir ~/.ssh

[hadoop@node4 ~]$ chmod 700 ~/.ssh

[hadoop@node4 ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/hadoop/.ssh/id_rsa.

Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.

The key fingerprint is:

3d:27:cb:18:c4:ec:7b:5f:78:47:2a:5e:d4:c7:fa:75 hadoop@node4

[hadoop@node4 ~]$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/hadoop/.ssh/id_dsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/hadoop/.ssh/id_dsa.

Your public key has been saved in /home/hadoop/.ssh/id_dsa.pub.

The key fingerprint is:

09:17:df:34:85:76:ba:2a:4f:c1:9e:3b:46:68:05:47 hadoop@node4

[hadoop@node1 ~]$ ssh node4 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

hadoop@node4's password:

[hadoop@node1 ~]$ ssh node4 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

hadoop@node4's password:

[hadoop@node1 ~]$ scp ~/.ssh/authorized_keys node4:~/.ssh/authorized_keys

hadoop@node4's password:

authorized_keys                                                                                   100% 2988     2.9KB/s   00:00

1.4.4 Verify the SSH equivalence


 

1.4.4.1 From the name node to each node

[hadoop@node1 ~]$ ssh node1 date; ssh node2 date; ssh node3 date; ssh node4 date

1.4.4.2 From datanode1 to the name node

[hadoop@node2 ~]$ ssh node1 date

Thu Jul  3 09:19:51 CST 2014

1.4.4.3 From datanode2 to the name node

[hadoop@node3 ~]$ ssh node1 date

Thu Jul  3 09:19:57 CST 2014

1.4.4.4 From datanode3 to the name node

[hadoop@node4 ~]$ ssh node1 date

Thu Jul  3 09:20:03 CST 2014

1.5 Disable the firewall
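On this OS the firewall is managed by the iptables service. A minimal sketch for turning it off on each node (run as root; the same stop command is also used again in section 1.8.2):

# Stop the firewall for the current session
service iptables stop
# Keep it disabled across reboots (optional)
chkconfig iptables off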


 

1.6 Install the JDK

The system currently ships with JDK 1.4; a version of 1.6 or later is required.

[hadoop@node1 ~]$ java -version

java version "1.4.2"

gij (GNU libgcj) version 4.1.2 20080704 (Red Hat 4.1.2-51)

 

1.6.1 Download the JDK

The JDK can be downloaded from the Oracle website.

The package I downloaded is jdk-8u5-linux-i586.rpm.

1.6.2 Install the JDK

Since this is an RPM package, we can first inspect the installation paths contained in it:

[root@node1 soft]# rpm -qpl jdk-8u5-linux-i586.rpm

/usr/java/jdk1.8.0_05/man/man1/javap.1

/usr/java/jdk1.8.0_05/man/man1/javaws.1

/usr/java/jdk1.8.0_05/man/man1/jcmd.1

/usr/java/jdk1.8.0_05/man/man1/jconsole.1

/usr/java/jdk1.8.0_05/man/man1/jdb.1

/usr/java/jdk1.8.0_05/man/man1/jdeps.1

/usr/java/jdk1.8.0_05/man/man1/jhat.1

/usr/java/jdk1.8.0_05/man/man1/jinfo.1

/usr/java/jdk1.8.0_05/man/man1/jjs.1

/usr/java/jdk1.8.0_05/man/man1/jmap.1

/usr/java/jdk1.8.0_05/man/man1/jmc.1

/usr/java/jdk1.8.0_05/man/man1/jps.1

/usr/java/jdk1.8.0_05/man/man1/jrunscript.1

/usr/java/jdk1.8.0_05/man/man1/jsadebugd.1

/usr/java/jdk1.8.0_05/man/man1/jstack.1

/usr/java/jdk1.8.0_05/man/man1/jstat.1

/usr/java/jdk1.8.0_05/man/man1/jstatd.1

/usr/java/jdk1.8.0_05/man/man1/jvisualvm.1

/usr/java/jdk1.8.0_05/man/man1/keytool.1

/usr/java/jdk1.8.0_05/man/man1/native2ascii.1

/usr/java/jdk1.8.0_05/man/man1/orbd.1

/usr/java/jdk1.8.0_05/man/man1/pack200.1

/usr/java/jdk1.8.0_05/man/man1/policytool.1

/usr/java/jdk1.8.0_05/man/man1/rmic.1

/usr/java/jdk1.8.0_05/man/man1/rmid.1

/usr/java/jdk1.8.0_05/man/man1/rmiregistry.1

/usr/java/jdk1.8.0_05/man/man1/schemagen.1

/usr/java/jdk1.8.0_05/man/man1/serialver.1

/usr/java/jdk1.8.0_05/man/man1/servertool.1

/usr/java/jdk1.8.0_05/man/man1/tnameserv.1

/usr/java/jdk1.8.0_05/man/man1/unpack200.1

/usr/java/jdk1.8.0_05/man/man1/wsgen.1

/usr/java/jdk1.8.0_05/man/man1/wsimport.1

/usr/java/jdk1.8.0_05/man/man1/xjc.1

/usr/java/jdk1.8.0_05/release

/usr/java/jdk1.8.0_05/src.zip

This is only part of the output. It shows that installing the RPM directly places Java under /usr/java/ by default. When setting the JAVA_HOME environment variable for Hadoop you can point it at this directory; if it is not set, the system falls back to the Java installation found by default.

Install Java directly with the following command:

[root@node1 soft]# rpm -ivh jdk-8u5-linux-i586.rpm

Preparing...                ########################################### [100%]

   1:jdk                    ########################################### [100%]

Unpacking JAR files...

        rt.jar...

        jsse.jar...

        charsets.jar...

        tools.jar...

        localedata.jar...

        jfxrt.jar...

        plugin.jar...

        javaws.jar...

        deploy.jar...

After the installation, set the environment variables for the hadoop user as follows:

[hadoop@node1 ~]$ more .bash_profile

# .bash_profile

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

        . ~/.bashrc

fi

# User specific environment and startup programs

export JAVA_HOME=/usr/java/jdk1.8.0_05

export JRE_HOME=/usr/java/jdk1.8.0_05/jre

export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib

export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

Log out and log back in as the hadoop user, then check whether java now points to the newly installed version.

[hadoop@node1 ~]$ java -version

java version "1.8.0_05"

Java(TM) SE Runtime Environment (build 1.8.0_05-b13)

Java HotSpot(TM) Client VM (build 25.5-b02, mixed mode, sharing)

The output above shows that the JDK is configured. Carry out the same steps on the other nodes, i.e. the JDK must be configured on every node.

1.7 Install Hadoop

 

1.7.1 Download Hadoop

Download address:

The version I downloaded is: hadoop-1.2.1.tar.gz

 

1.7.2 Install Hadoop

Install on the name node first; once that works, install on the other nodes. The installation is done as the root user.

Copy the Hadoop archive to the /usr directory; Hadoop will be installed there.

[root@node1 soft]# cp hadoop-1.2.1.tar.gz /usr/

Unpack the Hadoop archive:

[root@node1 soft]# cd /usr

[root@node1 usr]# tar -zxvf hadoop-1.2.1.tar.gz

Rename the directory to hadoop:

[root@node1 usr]# mv hadoop-1.2.1 hadoop

Create the Hadoop temporary directory:

[root@node1 usr]# mkdir /usr/hadoop/tmp

Give ownership of the directory to the hadoop user:

[root@node1 usr]# chown -R hadoop:hadoop hadoop

1.7.3 Set Hadoop environment variables

Add the following lines to the .bash_profile of the hadoop user; this also puts HADOOP_HOME on the PATH.

export HADOOP_HOME=/usr/hadoop

export PATH=$HADOOP_HOME/bin:$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
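To make the new variables take effect in the current session and confirm that the hadoop command is found, something like the following should work:

# Reload the profile and check that the hadoop binary resolves
source ~/.bash_profile
which hadoop
hadoop version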

 

1.7.4 Configure Hadoop

 

1.7.4.1 Configure hadoop-env.sh

The file is located in /usr/hadoop/conf.

Edit the file and add JAVA_HOME at the end:

[hadoop@node1 hadoop]$ vi hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_05

 

1.7.4.2 Configure core-site.xml

The file is located in /usr/hadoop/conf.

Edit core-site.xml and add the following:

   

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>

    <property>
        <name>fs.default.name</name>
        <value>hdfs://192.168.56.101:9000</value>
    </property>

 

1.7.4.3 Configure hdfs-site.xml

The file is located in /usr/hadoop/conf.

Edit the file and add the following property:

   

    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>

 

1.7.4.4 Configure mapred-site.xml

This modifies Hadoop's MapReduce configuration file, which sets the address and port of the JobTracker.

Edit the file and add the following:

   

    <property>
        <name>mapred.job.tracker</name>
        <value></value>
    </property>
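The value of mapred.job.tracker is the JobTracker's host and port. A typical value for this cluster layout (an assumption for illustration, not taken from the output above) would be:

    <property>
        <name>mapred.job.tracker</name>
        <value>192.168.56.101:9001</value>   <!-- assumed JobTracker address:port -->
    </property>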

 

1.7.4.5 Configure the masters file

Edit the masters file and add the name node's IP address:

[hadoop@node1 conf]$ more masters

192.168.56.101

 

1.7.4.6 Configure the slaves file (name node only)

Edit the file and add the IP addresses of the datanodes:

[hadoop@node1 hadoop]$ more slaves

192.168.56.102

192.168.56.103

192.168.56.104

At this point the name node is fully configured; next, configure the datanodes.

1.7.4.7 Configure the datanodes

Copy the configuration files from node1 to datanode1:

scp hadoop-env.sh hadoop@192.168.56.102:/usr/hadoop/conf

scp core-site.xml hadoop@192.168.56.102:/usr/hadoop/conf

scp hdfs-site.xml hadoop@192.168.56.102:/usr/hadoop/conf

scp mapred-site.xml hadoop@192.168.56.102:/usr/hadoop/conf

scp masters hadoop@192.168.56.102:/usr/hadoop/conf

Use the same commands to copy these files to the other two nodes, as shown in the sketch below.
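A hedged sketch of pushing the files to all three datanodes in one pass (assuming they are copied from /usr/hadoop/conf on node1):

# Push the configuration files from node1 to every datanode
for ip in 192.168.56.102 192.168.56.103 192.168.56.104; do
    scp hadoop-env.sh core-site.xml hdfs-site.xml mapred-site.xml masters hadoop@$ip:/usr/hadoop/conf/
done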

1.8 Start and verify

 

1.8.1 Format the HDFS filesystem

 

Run the following command on the name node to format the filesystem:

[hadoop@node1 ~]$ hadoop namenode -format

This only needs to be executed once; it does not need to be run again on later startups.

1.8.2 Start Hadoop

 

As the root user on every node, run the following command to stop the firewall:

[root@node1 ~]# service iptables stop

[hadoop@node1 sbin]$ cd $HADOOP_HOME/bin

[hadoop1@node1 bin]$ ./start-all.sh

Warning: $HADOOP_HOME is deprecated.

starting namenode, logging to /usr1/hadoop/libexec/../logs/hadoop-hadoop1-namenode-node1.out

192.168.56.104: starting datanode, logging to /usr1/hadoop/libexec/../logs/hadoop-hadoop1-datanode-node4.out

192.168.56.103: starting datanode, logging to /usr1/hadoop/libexec/../logs/hadoop-hadoop1-datanode-node3.out

192.168.56.102: starting datanode, logging to /usr1/hadoop/libexec/../logs/hadoop-hadoop1-datanode-node2.out

192.168.56.101: starting secondarynamenode, logging to /usr1/hadoop/libexec/../logs/hadoop-hadoop1-secondarynamenode-node1.out

starting jobtracker, logging to /usr1/hadoop/libexec/../logs/hadoop-hadoop1-jobtracker-node1.out

192.168.56.104: starting tasktracker, logging to /usr1/hadoop/libexec/../logs/hadoop-hadoop1-tasktracker-node4.out

192.168.56.102: starting tasktracker, logging to /usr1/hadoop/libexec/../logs/hadoop-hadoop1-tasktracker-node2.out

192.168.56.103: starting tasktracker, logging to /usr1/hadoop/libexec/../logs/hadoop-hadoop1-tasktracker-node3.out
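For reference, the cluster is shut down in the same way with the companion script in the same directory:

# Stop all daemons (namenode, datanodes, jobtracker, tasktrackers)
./stop-all.sh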

1.8.3 Verify

Run the jps command on each node.

Name node:

[hadoop@node1 soft]$ jps

5409 SecondaryNameNode

15143 Jps

5241 NameNode

5485 JobTracker

Datanode1:

[hadoop@node2 logs]$ jps

13954 Jps

4971 DataNode

5069 TaskTracker

Datanode2:

[hadoop@node3 tmp]$ jps

13954 Jps

4971 DataNode

5069 TaskTracker

Datanode3:

[hadoop@node4 tmp]$ jps

13954 Jps

4971 DataNode

5069 TaskTracker

Open the web interfaces:

http://192.168.56.101:50030

http://192.168.56.101:50070
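Besides the web pages, the datanode status can also be checked from the command line; a quick sketch:

# Summarize HDFS capacity and list the live datanodes
hadoop dfsadmin -report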

 

1.8.4 Verify with wordcount

Create a file directory under /home/hadoop/:

[hadoop@node1 ~]$ mkdir /home/hadoop/file

Create two files in that directory:

[hadoop@node1 file]$ echo "Hello World">file1.txt

[hadoop@node1 file]$ echo "Hello Hadoop">file2.txt

[hadoop@node1 file]$ more file1.txt

Hello World

[hadoop@node1 file]$ more file2.txt

Hello Hadoop

Create the HDFS directory:

hadoop fs -mkdir /user/hadoop/input

Upload the newly created file1.txt and file2.txt to HDFS:

[hadoop@node1 file]$ hadoop fs -put /home/hadoop/file/file*.txt input

[hadoop1@node1 soft]$ hadoop fs -ls input

Warning: $HADOOP_HOME is deprecated.

Found 4 items

-rw-r--r--   1 hadoop1 supergroup         12 2014-07-08 11:29 /user/hadoop1/input/file1.txt

-rw-r--r--   1 hadoop1 supergroup         13 2014-07-08 11:29 /user/hadoop1/input/file2.txt

Run the wordcount program:

[hadoop1@node3 file]$ hadoop jar /usr1/hadoop/hadoop-examples-1.2.1.jar wordcount input output

Warning: $HADOOP_HOME is deprecated.

14/07/09 11:26:35 INFO input.FileInputFormat: Total input paths to process : 4

14/07/09 11:26:35 INFO util.NativeCodeLoader: Loaded the native-hadoop library

14/07/09 11:26:35 WARN snappy.LoadSnappy: Snappy native library not loaded

14/07/09 11:26:44 INFO mapred.JobClient: Running job: job_201407090921_0003

14/07/09 11:26:45 INFO mapred.JobClient:  map 0% reduce 0%

14/07/09 11:27:27 INFO mapred.JobClient:  map 25% reduce 0%

14/07/09 11:27:28 INFO mapred.JobClient:  map 50% reduce 0%

14/07/09 11:27:41 INFO mapred.JobClient:  map 50% reduce 16%

14/07/09 11:28:19 INFO mapred.JobClient:  map 75% reduce 16%

14/07/09 11:28:27 INFO mapred.JobClient:  map 75% reduce 25%

14/07/09 11:30:09 INFO mapred.JobClient:  map 100% reduce 25%

14/07/09 11:30:59 INFO mapred.JobClient:  map 100% reduce 100%

14/07/09 11:31:15 INFO mapred.JobClient: Job complete: job_201407090921_0003

14/07/09 11:31:47 INFO mapred.JobClient: Counters: 30

14/07/09 11:31:47 INFO mapred.JobClient:   Map-Reduce Framework

14/07/09 11:31:47 INFO mapred.JobClient:     Spilled Records=12

14/07/09 11:31:47 INFO mapred.JobClient:     Map output materialized bytes=97

14/07/09 11:31:47 INFO mapred.JobClient:     Reduce input records=6

14/07/09 11:31:47 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=1549914112

14/07/09 11:31:47 INFO mapred.JobClient:     Map input records=4

14/07/09 11:31:47 INFO mapred.JobClient:     SPLIT_RAW_BYTES=480

14/07/09 11:31:47 INFO mapred.JobClient:     Map output bytes=61

14/07/09 11:31:47 INFO mapred.JobClient:     Reduce shuffle bytes=97

14/07/09 11:31:47 INFO mapred.JobClient:     Physical memory (bytes) snapshot=615403520

14/07/09 11:31:47 INFO mapred.JobClient:     Reduce input groups=3

14/07/09 11:31:47 INFO mapred.JobClient:     Combine output records=6

14/07/09 11:31:47 INFO mapred.JobClient:     Reduce output records=3

14/07/09 11:31:47 INFO mapred.JobClient:     Map output records=6

14/07/09 11:31:47 INFO mapred.JobClient:     Combine input records=6

14/07/09 11:31:47 INFO mapred.JobClient:     CPU time spent (ms)=2090

14/07/09 11:31:47 INFO mapred.JobClient:     Total committed heap usage (bytes)=614481920

14/07/09 11:31:47 INFO mapred.JobClient:   File Input Format Counters

14/07/09 11:31:47 INFO mapred.JobClient:     Bytes Read=37

14/07/09 11:31:47 INFO mapred.JobClient:   FileSystemCounters

14/07/09 11:31:47 INFO mapred.JobClient:     HDFS_BYTES_READ=517

14/07/09 11:31:47 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=277804

14/07/09 11:31:47 INFO mapred.JobClient:     FILE_BYTES_READ=79

14/07/09 11:31:47 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=25

14/07/09 11:31:47 INFO mapred.JobClient:   Job Counters

14/07/09 11:31:47 INFO mapred.JobClient:     Launched map tasks=6

14/07/09 11:31:47 INFO mapred.JobClient:     Launched reduce tasks=1

14/07/09 11:31:47 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=210985

14/07/09 11:31:47 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0

14/07/09 11:31:47 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=192073

14/07/09 11:31:47 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0

14/07/09 11:31:47 INFO mapred.JobClient:     Rack-local map tasks=4

14/07/09 11:31:47 INFO mapred.JobClient:     Data-local map tasks=2

14/07/09 11:31:47 INFO mapred.JobClient:   File Output Format Counters

14/07/09 11:31:47 INFO mapred.JobClient:     Bytes Written=25


View the output:

[hadoop1@node3 file]$ hadoop fs -ls output

Warning: $HADOOP_HOME is deprecated.

Found 3 items

-rw-r--r--   1 hadoop1 supergroup          0 2014-07-09 11:31 /user/hadoop1/output/_SUCCESS

drwxr-xr-x   - hadoop1 supergroup          0 2014-07-09 11:26 /user/hadoop1/output/_logs

-rw-r--r--   1 hadoop1 supergroup         25 2014-07-09 11:30 /user/hadoop1/output/part-r-00000

The results are stored in the part-r-00000 file:

[hadoop1@node3 file]$ hadoop fs -cat  /user/hadoop/output/part-r-00000

Warning: $HADOOP_HOME is deprecated.

Hadoop  1

Hello   3

World   2
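If you want the result as a single local file, a hedged alternative to cat (the local filename is arbitrary):

# Merge the output files from HDFS into one local file
hadoop fs -getmerge output /home/hadoop/local_wc.txt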

-- The End --
