Category: HADOOP

2016-04-20 12:17:59

I had always used hadoop-streaming before; overcoming my inner resistance, I'm finally switching to Java!!

1. Installing the IntelliJ environment

Download link:
The free Community edition is more than enough for my needs ^_^!
Then follow the official instructions:
(1) Extract the archive:

  tar -zxvf idea-2016.1.1.tar.gz -C your_path

(2) Install:
Run idea.sh from the bin directory of the extracted folder.

2. Jar packages needed for basic development

2.1 Create a project

file->new->project

2.2 Choose the development JDK

Select Java development

Select the JDK: new->jdk

Choose the JDK version you need

Then click Next all the way through and name the project


2.3 After creating the project, add the jar packages needed for HADOOP development

(1) file->project structure
(2) Select Modules in the left-hand panel of the window

(3) Add the required jar packages

My jar packages are located at:
hadoop-common-2.6.0.jar under /home/warrior/bigData/hadoop/share/hadoop/common
hadoop-mapreduce-client-common-2.6.0.jar and hadoop-mapreduce-client-core-2.6.0.jar under /home/warrior/bigData/hadoop/share/hadoop/mapreduce


3. Basic structure of a map-reduce program

yourMapper extends Mapper ...... then override the map method
yourReducer extends Reducer ...... then override the reduce method
Finally, set up the Hadoop Configuration and Job in the main method


  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.hadoop.mapreduce.Reducer;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

  import java.io.IOException;
  import java.util.StringTokenizer;

  /**
   * Created by warrior on 16-4-15.
   */
  public class wordCount {
      public static class WCMapper
              extends Mapper<Object, Text, Text, IntWritable> {

          private final static IntWritable one = new IntWritable(1);
          private Text word = new Text();

          @Override
          public void map(Object key, Text value, Context context
          ) throws IOException, InterruptedException {
              StringTokenizer itr = new StringTokenizer(value.toString());
              while (itr.hasMoreTokens()) {
                  word.set(itr.nextToken());
                  context.write(word, one);
              }
          }
      }

      public static class WCReducer
              extends Reducer<Text, IntWritable, Text, IntWritable> {
          private IntWritable result = new IntWritable();

          @Override
          public void reduce(Text key, Iterable<IntWritable> values, Context context
          ) throws IOException, InterruptedException {
              int sum = 0;
              for (IntWritable val : values) {
                  sum += val.get();
              }
              result.set(sum);
              context.write(key, result);
          }
      }

      public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          Job job = Job.getInstance(conf, "word count!!");
          job.setJarByClass(wordCount.class);
          job.setMapperClass(WCMapper.class);
          job.setCombinerClass(WCReducer.class);
          job.setReducerClass(WCReducer.class);
          job.setOutputKeyClass(Text.class);
          job.setOutputValueClass(IntWritable.class);
          FileInputFormat.addInputPath(job, new Path(args[0]));
          FileOutputFormat.setOutputPath(job, new Path(args[1]));
          System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
  }
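To sanity-check what the mapper and reducer above actually compute, here is a small plain-Java sketch (a hypothetical local helper, not part of the Hadoop job) that mirrors the mapper's tokenize-and-emit step and the reducer's per-key sum using an in-memory map, so the logic can be verified without a cluster:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringTokenizer;

public class WordCountLocalCheck {
    // Mirrors the job: the mapper tokenizes and emits (word, 1),
    // the reducer sums the 1s for each word. Here merge() plays
    // the role of the reducer's per-key sum.
    public static Map<String, Integer> count(String text) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        // Same tokenizer the mapper uses: splits on whitespace
        StringTokenizer itr = new StringTokenizer(text);
        while (itr.hasMoreTokens()) {
            counts.merge(itr.nextToken(), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> c = count("hello world hello hadoop");
        System.out.println(c); // prints {hello=2, world=1, hadoop=1}
    }
}
```

Running this locally gives the same per-word counts the job would write to its output files, which is a quick way to convince yourself the tokenizing and summing logic is right before submitting to the cluster.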

4. Generating a runnable Hadoop jar with IntelliJ

(1) file->project structure
(2) Select Artifacts in the window

(3) Create an empty JAR, then name it in the Name field

(4) Then add the build output under output layout

(5) Then apply and you're done. Back in the IDE, build it via build->build artifacts->build or rebuild

5. Submitting the job after generating the jar

  hadoop jar ./out/artifacts/invertedList/invertedList.jar hdfs_input_path hdfs_output_path

References:

http://blog.sina.com.cn/s/blog_3fe961ae0102uy42.html

《深入理解大数据-大数据处理与编程实践》


