MapReduce Programming Example: Word Frequency Count on Windows
1) First, MapReduce uses the default TextInputFormat component to read the input files (e.g. text1.txt and text2.txt), converting each line of data into a <byte offset, line text> key-value pair;
2) Next, the map() method is called to split each line into words and count them, emitting <word, 1> key-value pairs that become the input of the Reducer stage;
3) Finally, the reduce() method merges the counts for each word, and the sorted result is written to the output file by the TextOutputFormat component.
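The three steps above can be sketched without a cluster. The following plain-Java simulation (no Hadoop dependencies; the input lines and class name are illustrative, not from the original) mimics the map, shuffle, and reduce stages in memory:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class WordCountFlow {
    public static void main(String[] args) {
        // Input lines, as TextInputFormat would hand them to the Mapper
        String[] lines = {"hello world", "hello mapreduce"};

        // Map stage: split each line on spaces and emit a <word, 1> pair per word
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String line : lines) {
            for (String word : line.split(" ")) {
                pairs.add(Map.entry(word, 1));
            }
        }

        // Shuffle + Reduce stage: group pairs by key (TreeMap keeps keys sorted)
        // and sum the counts for each word
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            counts.merge(p.getKey(), p.getValue(), Integer::sum);
        }
        System.out.println(counts); // {hello=2, mapreduce=1, world=1}
    }
}
```

In the real job, the grouping and sorting between the two loops is done by Hadoop's shuffle phase, not by user code.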
Mapper stage:
1) Define a custom Mapper that extends the Mapper parent class;
2) The Mapper's input data is a kv pair, of the form <line offset, line contents>;
3) The Mapper-stage logic goes inside the map() method;
4) The Mapper's output data is also a kv pair;
5) The map() method is called once for every input kv pair;
package word.com;

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        String[] words = line.split(" ");
        for (String word : words) {
            context.write(new Text(word), new IntWritable(1));
        }
    }
}

Reduce stage:
1) Define a custom Reducer that extends the Reducer parent class;
2) The Reducer's input data matches the Mapper's output type, of the form <word, 1>;
3) The Reducer-stage logic goes inside the reduce() method;
4) The reduce() method is called once for each group of pairs with the same key;
package word.com;

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int count = 0;
        for (IntWritable value : values) {
            count += value.get();
        }
        context.write(key, new IntWritable(count));
    }
}

Driver stage:
Put simply, the Driver is the bridge connecting the Mapper and the Reducer: it configures the job and submits it to run.
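As a sketch of such a Driver (the class name WordDriver and the argument-based input/output paths are assumptions, not from the original), a typical driver for this word-count job wires the Mapper and Reducer classes into a Job and submits it:

```java
package word.com;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");

        // Wire up the job: which jar, which Mapper, which Reducer
        job.setJarByClass(WordDriver.class);
        job.setMapperClass(WordMapper.class);
        job.setReducerClass(WordReduce.class);

        // Output key/value types for the map and reduce stages
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Input and output paths taken from the command line (assumed convention)
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Submit the job and wait; exit code reflects success or failure
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Running this requires the Hadoop libraries on the classpath and an existing input path; the output path must not already exist, or the job will fail.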
Experiment results:

Possible problems:
When running the code on Windows, you may hit: Exception in thread "main" java.lang.NullPointerException
Fix: copy hadoop.dll into C:\Windows\System32 (placing it in hadoop-2.7.2\bin alone did not work),
and put winutils.exe into hadoop-2.7.2\bin (don't forget to configure the environment variables beforehand).
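For reference, the environment-variable setup mentioned above typically looks like this on Windows (the install path C:\hadoop-2.7.2 is an assumption; adjust it to your own machine):

```bat
:: Assumed Hadoop install location; change to match your setup
set HADOOP_HOME=C:\hadoop-2.7.2
:: Make winutils.exe in %HADOOP_HOME%\bin visible on PATH
set PATH=%PATH%;%HADOOP_HOME%\bin
```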



