
I'm new to Cloudera and Hadoop, and the output of the Cloudera WordCount 1.0 example (part-00000) is empty. The steps and files I'm using are here. I'm glad to provide any job log information that would help, and likewise version details — I just need some guidance on where to find them. Below are the job output and the source. Of the other parts written (part-00001 through part-00011), the non-empty ones are part-00001 (Bye 1), part-00002 (Hadoop 2), part-00004 (Goodbye 1), part-00005 (World 2), and part-00009 (Hello 2). Any help would be great.

Here are the commands and their output:

[me@server ~]$ hadoop fs -cat /user/me/wordcount/input/file0
Hello World Bye World

[me@server ~]$ hadoop fs -cat /user/me/wordcount/input/file1
Hello Hadoop Goodbye Hadoop

[me@server ~]$ hadoop jar wordcount.jar org.myorg.WordCount /user/me/wordcount/input /user/me/wordcount/output
13/11/12 10:39:41 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/11/12 10:39:41 INFO mapred.FileInputFormat: Total input paths to process : 2
13/11/12 10:39:42 INFO mapred.JobClient: Running job: job_201311051201_0014
13/11/12 10:39:43 INFO mapred.JobClient:  map 0% reduce 0%
13/11/12 10:39:49 INFO mapred.JobClient:  map 33% reduce 0%
13/11/12 10:39:52 INFO mapred.JobClient:  map 67% reduce 0%
13/11/12 10:39:53 INFO mapred.JobClient:  map 100% reduce 0%
13/11/12 10:39:58 INFO mapred.JobClient:  map 100% reduce 25%
13/11/12 10:40:01 INFO mapred.JobClient:  map 100% reduce 100%
13/11/12 10:40:04 INFO mapred.JobClient: Job complete: job_201311051201_0014
13/11/12 10:40:04 INFO mapred.JobClient: Counters: 33
13/11/12 10:40:04 INFO mapred.JobClient:   File System Counters
13/11/12 10:40:04 INFO mapred.JobClient:     FILE: Number of bytes read=313
13/11/12 10:40:04 INFO mapred.JobClient:     FILE: Number of bytes written=2695420
13/11/12 10:40:04 INFO mapred.JobClient:     FILE: Number of read operations=0
13/11/12 10:40:04 INFO mapred.JobClient:     FILE: Number of large read operations=0
13/11/12 10:40:04 INFO mapred.JobClient:     FILE: Number of write operations=0
13/11/12 10:40:04 INFO mapred.JobClient:     HDFS: Number of bytes read=410
13/11/12 10:40:04 INFO mapred.JobClient:     HDFS: Number of bytes written=41
13/11/12 10:40:04 INFO mapred.JobClient:     HDFS: Number of read operations=18
13/11/12 10:40:04 INFO mapred.JobClient:     HDFS: Number of large read operations=0
13/11/12 10:40:04 INFO mapred.JobClient:     HDFS: Number of write operations=24
13/11/12 10:40:04 INFO mapred.JobClient:   Job Counters
13/11/12 10:40:04 INFO mapred.JobClient:     Launched map tasks=3
13/11/12 10:40:04 INFO mapred.JobClient:     Launched reduce tasks=12
13/11/12 10:40:04 INFO mapred.JobClient:     Data-local map tasks=3
13/11/12 10:40:04 INFO mapred.JobClient:     Total time spent by all maps in occupied slots (ms)=16392
13/11/12 10:40:04 INFO mapred.JobClient:     Total time spent by all reduces in occupied slots (ms)=61486
13/11/12 10:40:04 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/11/12 10:40:04 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/11/12 10:40:04 INFO mapred.JobClient:   Map-Reduce Framework
13/11/12 10:40:04 INFO mapred.JobClient:     Map input records=2
13/11/12 10:40:04 INFO mapred.JobClient:     Map output records=8
13/11/12 10:40:04 INFO mapred.JobClient:     Map output bytes=82
13/11/12 10:40:04 INFO mapred.JobClient:     Input split bytes=357
13/11/12 10:40:04 INFO mapred.JobClient:     Combine input records=8
13/11/12 10:40:04 INFO mapred.JobClient:     Combine output records=6
13/11/12 10:40:04 INFO mapred.JobClient:     Reduce input groups=5
13/11/12 10:40:04 INFO mapred.JobClient:     Reduce shuffle bytes=649
13/11/12 10:40:04 INFO mapred.JobClient:     Reduce input records=6
13/11/12 10:40:04 INFO mapred.JobClient:     Reduce output records=5
13/11/12 10:40:04 INFO mapred.JobClient:     Spilled Records=12
13/11/12 10:40:04 INFO mapred.JobClient:     CPU time spent (ms)=15650
13/11/12 10:40:04 INFO mapred.JobClient:     Physical memory (bytes) snapshot=3594293248
13/11/12 10:40:04 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=18375352320
13/11/12 10:40:04 INFO mapred.JobClient:     Total committed heap usage (bytes)=6497697792
13/11/12 10:40:04 INFO mapred.JobClient:   org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter
13/11/12 10:40:04 INFO mapred.JobClient:     BYTES_READ=50

[me@server ~]$ hadoop fs -cat /user/me/wordcount/output/part-00000

[me@server ~]$ hdfs dfs -ls -R /user/me/wordcount/output
-rw-r--r--   3 me me          0 2013-11-12 10:40 /user/me/wordcount/output/_SUCCESS
drwxr-xr-x   - me me          0 2013-11-12 10:39 /user/me/wordcount/output/_logs
drwxr-xr-x   - me me          0 2013-11-12 10:39 /user/me/wordcount/output/_logs/history
-rw-r--r--   3 me me      67134 2013-11-12 10:40 /user/me/wordcount/output/_logs/history/job_201311051201_0014_1384270782432_me_wordcount
-rw-r--r--   3 me me      81866 2013-11-12 10:39 /user/me/wordcount/output/_logs/history/job_201311051201_0014_conf.xml
-rw-r--r--   3 me me          0 2013-11-12 10:39 /user/me/wordcount/output/part-00000
-rw-r--r--   3 me me          6 2013-11-12 10:39 /user/me/wordcount/output/part-00001
-rw-r--r--   3 me me          9 2013-11-12 10:39 /user/me/wordcount/output/part-00002
-rw-r--r--   3 me me          0 2013-11-12 10:39 /user/me/wordcount/output/part-00003
-rw-r--r--   3 me me         10 2013-11-12 10:39 /user/me/wordcount/output/part-00004
-rw-r--r--   3 me me          8 2013-11-12 10:39 /user/me/wordcount/output/part-00005
-rw-r--r--   3 me me          0 2013-11-12 10:39 /user/me/wordcount/output/part-00006
-rw-r--r--   3 me me          0 2013-11-12 10:39 /user/me/wordcount/output/part-00007
-rw-r--r--   3 me me          0 2013-11-12 10:39 /user/me/wordcount/output/part-00008
-rw-r--r--   3 me me          8 2013-11-12 10:39 /user/me/wordcount/output/part-00009
-rw-r--r--   3 me me          0 2013-11-12 10:39 /user/me/wordcount/output/part-00010
-rw-r--r--   3 me me          0 2013-11-12 10:39 /user/me/wordcount/output/part-00011
[me@server ~]$

Here is the source:

package org.myorg;

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;

public class WordCount {

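  // Mapper (old org.apache.hadoop.mapred API): emits (word, 1) for every whitespace-separated token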
  public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
      String line = value.toString();
      StringTokenizer tokenizer = new StringTokenizer(line);
      while (tokenizer.hasMoreTokens()) {
        word.set(tokenizer.nextToken());
        output.collect(word, one);
      }
    }
  }

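  // Reducer, also reused as the combiner below: sums the counts emitted for each word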
  public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
      int sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
      }
      output.collect(key, new IntWritable(sum));
    }
  }

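  // Driver: note it never calls conf.setNumReduceTasks(), so the cluster default applies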
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(WordCount.class);
    conf.setJobName("wordcount");

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    conf.setMapperClass(Map.class);
    conf.setCombinerClass(Reduce.class);
    conf.setReducerClass(Reduce.class);

    conf.setInputFormat(TextInputFormat.class);
    conf.setOutputFormat(TextOutputFormat.class);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);
  }
}

4 Answers


You are launching 12 reduce tasks (Launched reduce tasks=12) even though the mappers produce only five distinct outputs: per the tutorial, you expect five output records. In CDH3 the number of reducers was set to the number of mapper outputs; most likely that behavior changed in CDH4 — look through your configuration files and see whether you have mapred.reduce.tasks or something similar set.
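
If that is the case, one fix (a sketch using the same old-style JobConf API as the posted driver) is to pin the reducer count in the job itself, so a site-wide default cannot override it:

// In the posted main(), before JobClient.runJob(conf):
conf.setNumReduceTasks(1); // one reducer => all five words land in a single part-00000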

answered 2013-11-12T16:24:18.553

This happens because the number of reducers used in your job exceeds the number of keys (i.e., distinct words) you actually have, so some of the reducers' output files are empty. Check how the default partitioner, HashPartitioner, assigns keys to reducers based on the number of reducers: Link
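
For reference, HashPartitioner's getPartition boils down to a modulo over the reducer count (essentially what the Hadoop source does):

public int getPartition(Text key, IntWritable value, int numReduceTasks) {
  // Mask off the sign bit, then bucket the key by reducer count
  return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
}

With only five distinct words spread over twelve reducers, at least seven partitions receive no keys at all, and nothing guarantees that "Hello" hashes to partition 0 — hence the empty part-00000.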

answered 2013-11-12T16:29:57.940

Alternatively, you can run a simple command to combine the output of all the part files:

cat part-* > output.txt
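
For output still sitting in HDFS, a programmatic equivalent is below — a sketch assuming Hadoop 1.x/2.x, where FileUtil.copyMerge still exists:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
// Concatenates every file directly under the output dir into one file
FileUtil.copyMerge(fs, new Path("/user/me/wordcount/output"),
                   fs, new Path("/user/me/wordcount/merged.txt"),
                   false /* keep the part files */, conf, null);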

answered 2015-01-06T18:53:11.543

OK, big thanks to Binary01 and davek3 for pointing me in the right direction. I'll have to do some reading to understand what's going on, but for posterity's sake I'll share the details here in an answer: I got it to work by compiling the v2.0 code so that it would accept "-D mapred.reduce.tasks=1", which produced the correct output. Just for kicks, I ran it on Hamlet without the -D and it also worked.
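
For anyone landing here later: the reason the original jar ignored -D is visible in the very first log line — "Use GenericOptionsParser for parsing the arguments. Applications should implement Tool". A driver rewritten around Tool/ToolRunner (a sketch against the old mapred API used above; only the changed parts are shown) picks up generic options like -D mapred.reduce.tasks=1 automatically:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {
  // Map and Reduce classes exactly as posted above...

  @Override
  public int run(String[] args) throws Exception {
    // getConf() already contains any -D options parsed by ToolRunner
    JobConf conf = new JobConf(getConf(), WordCount.class);
    conf.setJobName("wordcount");
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    conf.setMapperClass(Map.class);
    conf.setCombinerClass(Reduce.class);
    conf.setReducerClass(Reduce.class);
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    JobClient.runJob(conf);
    return 0;
  }

  public static void main(String[] args) throws Exception {
    // ToolRunner strips the generic options before handing the remaining args to run()
    System.exit(ToolRunner.run(new Configuration(), new WordCount(), args));
  }
}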

answered 2013-11-12T18:26:26.680