
Here is a simple MapReduce job. For now it is just meant to copy the files in an input directory to an output directory. The map phase completes, but the reduce phase just hangs. What am I doing wrong? It is a small amount of code; here is the whole job:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MapDemo {

    // Mapper: emits each input line unchanged as the key, with a NullWritable value.
    public static class Map extends Mapper<Object, Text, Text, NullWritable> {
        private Text word = new Text();
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            String line = value.toString();
            word.set(line);
            context.write(word, NullWritable.get());
        }
    }

    // Reducer: writes each distinct key once, with no value.
    public static class Reduce extends Reducer<Text, NullWritable, Text, NullWritable> {
        public void reduce(Text key, Iterable<NullWritable> values, Context context) throws IOException, InterruptedException {
            context.write(key, NullWritable.get());
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        Job job = new Job(configuration, "MapDemo");
        job.setJarByClass(MapDemo.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setNumReduceTasks(10);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

}

It runs up to this point and then just hangs:

$ hadoop jar target/map-demo.jar /Users/dwilliams/input /Users/dwilliams/output
2013-09-16 11:51:19.131 java[6041:1703] Unable to load realm info from SCDynamicStore
13/09/16 11:51:19 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/09/16 11:51:19 INFO input.FileInputFormat: Total input paths to process : 1
13/09/16 11:51:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/09/16 11:51:19 WARN snappy.LoadSnappy: Snappy native library not loaded
13/09/16 11:51:19 INFO mapred.JobClient: Running job: job_201309150844_0012
13/09/16 11:51:20 INFO mapred.JobClient:  map 0% reduce 0%
13/09/16 11:51:25 INFO mapred.JobClient:  map 100% reduce 0%
... then nothing

What's wrong here? How do I fix this?


2 Answers


My problem was memory. I was running Hadoop in VirtualBox with the default 512M of RAM. After increasing the memory to 2G, everything worked fine.
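If you would rather change the allocation from the host's command line than from the VirtualBox GUI, something like this should work (the VM name "hadoop-vm" is only a placeholder, use whatever name VBoxManage list vms reports for your machine, and power the VM off before changing it):

$ VBoxManage list vms
$ VBoxManage modifyvm "hadoop-vm" --memory 2048
$ VBoxManage startvm "hadoop-vm"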

Answered 2014-11-24T14:09:44.353

I needed to reformat the namenode and restart the daemons. This was on my Mac OS X machine, and it was possibly related to the machine going to sleep.
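For reference, that amounts to roughly the following on a Hadoop 1.x single-node setup, assuming the bin/ scripts are on your PATH. Be warned that formatting the namenode wipes everything in HDFS, so only do this on a disposable dev cluster:

$ stop-all.sh
$ hadoop namenode -format
$ start-all.sh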

Answered 2013-09-23T20:08:05.313