I had been using hadoop-streaming all along; time to get over my reluctance and start using Java!!
1. Installing the IntelliJ environment
Download link:
The free Community edition is plenty for my needs ^_^!
Then follow the official instructions:
(1) Extract the archive:
tar -zxvf idea-2016.1.1.tar.gz -C your_path
(2) Install:
Run idea.sh in the bin directory of the extracted folder
2. Jars needed for basic development
2.1 Create a project
File -> New -> Project
2.2 Choose the development JDK
Select a Java project
Select the JDK: New -> JDK
Choose the JDK version you need
Then click Next all the way through and name the project
2.3 After creating the project, add the jars Hadoop development needs
(1) File -> Project Structure
(2) Select Modules in the left pane of the window
(3) Add the required jars
My jar locations are:
hadoop-common-2.6.0.jar under /home/warrior/bigData/hadoop/share/hadoop/common
hadoop-mapreduce-client-common-2.6.0.jar and hadoop-mapreduce-client-core-2.6.0.jar under /home/warrior/bigData/hadoop/share/hadoop/mapreduce
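If you are not sure where these jars live on your machine, a quick way to list the Hadoop client jars (a sketch assuming the install prefix above; adjust HADOOP_HOME to your own path):

```shell
# Assumed install prefix from above -- adjust for your machine.
HADOOP_HOME="${HADOOP_HOME:-/home/warrior/bigData/hadoop}"

# List the client jars under common/ and mapreduce/; prints nothing
# (and is harmless) if the directories do not exist.
for d in common mapreduce; do
  find "$HADOOP_HOME/share/hadoop/$d" -maxdepth 1 -name 'hadoop-*.jar' 2>/dev/null || true
done
```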
3. Basic structure of a MapReduce program
yourMapper extends Mapper ...... then override the map method
yourReducer extends Reducer ...... then override the reduce method
Finally, the main method sets up the Hadoop job configuration
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;
import java.util.StringTokenizer;

/**
 * Created by warrior on 16-4-15.
 */
public class wordCount {
    public static class WCMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        // Emit (word, 1) for every whitespace-separated token in the line.
        @Override
        public void map(Object key, Text value, Context context
                        ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class WCReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        // Sum the counts for each word and emit (word, total).
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context
                           ) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count!!");
        job.setJarByClass(wordCount.class);
        job.setMapperClass(WCMapper.class);
        job.setCombinerClass(WCReducer.class);  // combiner reuses the reducer
        job.setReducerClass(WCReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
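The mapper/reducer logic above can be sanity-checked locally without a cluster. A minimal sketch (LocalWordCount is a hypothetical helper, not part of the job above): it tokenizes text the way WCMapper does and sums counts the way WCReducer does, with a TreeMap standing in for Hadoop's shuffle.

```java
import java.util.Map;
import java.util.StringTokenizer;
import java.util.TreeMap;

public class LocalWordCount {
    // Same tokenize-and-sum logic as WCMapper/WCReducer, minus the framework.
    public static Map<String, Integer> count(String text) {
        Map<String, Integer> counts = new TreeMap<>();
        StringTokenizer itr = new StringTokenizer(text);
        while (itr.hasMoreTokens()) {
            counts.merge(itr.nextToken(), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // Prints counts in sorted key order, e.g. {hadoop=1, hello=2, world=1}
        System.out.println(count("hello hadoop hello world"));
    }
}
```

Handy for convincing yourself the tokenization behaves as expected before paying the cost of a full job submission.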
4. Building a runnable Hadoop jar in IntelliJ
(1) File -> Project Structure
(2) Select Artifacts in the window
(3) Create an empty JAR, then name the jar under Name
(4) Then add the build output under Output Layout
(5) Then Apply to finish. Back in the IDE, build it via Build -> Build Artifacts -> Build or Rebuild
5. Submitting the job once the jar is built
hadoop jar ./out/artifacts/invertedList/invertedList.jar hdfs_input_path hdfs_output_path
References:
http://blog.sina.com.cn/s/blog_3fe961ae0102uy42.html
《深入理解大数据-大数据处理与编程实践》 (Understanding Big Data: Big Data Processing and Programming Practice)