
I tried to implement the word count example myself. This is my mapper implementation:

public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {

    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        Text word = new Text();
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, new IntWritable(1));
        }
    }
}

And the reducer:

public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values, Context context) throws IOException, InterruptedException {
        int sum = 0;
        while (values.hasNext())
            sum += values.next().get();
        context.write(key, new IntWritable(sum));
    }
}

But the output I get from running this code looks like just the mapper's output. For example, if the input is "hello world hello", the output is

hello 1
hello 1
world 1

I am also using a combiner between the map and reduce phases. Can anyone explain what is wrong with this code?

Thanks a lot!


2 Answers


Replace your reduce method with this:

@Override
protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable value : values) {
        sum += value.get();
    }
    context.write(key, new IntWritable(sum));
}

So the bottom line is that you were not overriding the correct method: the framework's reduce takes an Iterable<IntWritable>, not an Iterator<IntWritable>, so your version was just an unused overload that Hadoop never calls. Adding @Override makes the compiler catch this kind of mistake.
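
For reference, the reduce you failed to override is (roughly) an identity pass-through in the base class, which is exactly why your output looked like the mapper's output. A sketch of what org.apache.hadoop.mapreduce.Reducer does by default:

// Approximately the default reduce in org.apache.hadoop.mapreduce.Reducer;
// it writes every incoming (key, value) pair straight through unchanged,
// so each (word, 1) pair from the mapper reaches the output as-is.
@SuppressWarnings("unchecked")
protected void reduce(KEYIN key, Iterable<VALUEIN> values, Context context)
        throws IOException, InterruptedException {
    for (VALUEIN value : values) {
        context.write((KEYOUT) key, (VALUEOUT) value);
    }
}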

Also make sure you set Reduce.class as the reducer class, not Reducer.class!
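
To be concrete, here is a minimal driver sketch, assuming Map and Reduce are nested inside a WordCount class (the class name, job name, and argument handling are my assumptions, not from the question):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Inside the (hypothetical) WordCount class:
public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "wordcount");   // Job.getInstance(conf, ...) on newer Hadoop
    job.setJarByClass(WordCount.class);
    job.setMapperClass(Map.class);
    job.setCombinerClass(Reduce.class);     // the reducer doubles as the combiner here
    job.setReducerClass(Reduce.class);      // Reduce.class, NOT Reducer.class
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}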

;) HTH Johannes

Answered 2011-03-26T02:07:48.557

If you do not want to spell out the generic argument types in the overriding reduce method's signature, an alternative (which compiles only when your class extends the raw Reducer type, so the erased signature matches) is:

@Override
protected void reduce(Object key, Iterable values, Context context)
        throws IOException, InterruptedException {
    int sum = 0;
    // Unchecked cast: the raw Iterable is assumed to contain IntWritable values.
    Iterable<IntWritable> v = values;
    Iterator<IntWritable> itr = v.iterator();
    while (itr.hasNext()) {
        sum += itr.next().get();
    }
    context.write(key, new IntWritable(sum));
}
Answered 2017-09-03T07:39:59.690