
I wrote a Hadoop program and ran it on a single machine, where it worked fine. But when I moved it to a cluster (one namenode, 12 datanodes), it ran into the following problem: the job never really starts, and completes immediately after the map phase begins.

Command run in the terminal:

hadoop jar VOConeSearch.jar input output 142.82 -3.32 1

(Here `input` is a directory in HDFS used as input, `output` is the HDFS directory the program writes to, which does not exist in HDFS before execution, and 142.82, -3.32, 1 are three extra parameters.)

Cluster output when running the program (the input directory contains 167537 files):

11/06/11 09:33:49 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/06/11 09:33:50 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
11/06/11 09:33:50 WARN mapreduce.JobSubmitter: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/06/11 09:33:57 INFO input.FileInputFormat: Total input paths to process : 167537
11/06/11 09:37:36 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
11/06/11 09:37:36 INFO mapreduce.JobSubmitter: number of splits:1
11/06/11 09:37:36 INFO mapreduce.JobSubmitter: adding the following namenodes' delegation tokens:null
11/06/11 09:37:36 INFO mapreduce.Job: Running job: job_201106081653_0011
11/06/11 09:37:37 INFO mapreduce.Job:  map 0% reduce 0%
11/06/11 09:37:37 INFO mapreduce.Job: Job complete: job_201106081653_0011
11/06/11 09:37:37 INFO mapreduce.Job: Counters: 4
    Job Counters 
        Total time spent by all maps waiting after reserving slots (ms)=0
        Total time spent by all reduces waiting after reserving slots (ms)=0
        SLOTS_MILLIS_MAPS=0
        SLOTS_MILLIS_REDUCES=0

It seems the job completes in 0 seconds, and there is no output directory in HDFS. The same program works on a single machine (namenode and datanode on the same host), but with only one file in the (HDFS) input directory.

Single-node output with one file in the input directory:

11/06/11 10:07:54 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/06/11 10:07:54 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
11/06/11 10:07:54 WARN mapreduce.JobSubmitter: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/06/11 10:07:54 INFO input.FileInputFormat: Total input paths to process : 1
11/06/11 10:07:54 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
11/06/11 10:07:54 INFO mapreduce.JobSubmitter: number of splits:1
11/06/11 10:07:55 INFO mapreduce.JobSubmitter: adding the following namenodes' delegation tokens:null
11/06/11 10:07:55 INFO mapreduce.Job: Running job: job_201106111004_0001
11/06/11 10:07:56 INFO mapreduce.Job:  map 0% reduce 0%
11/06/11 10:08:11 INFO mapreduce.Job:  map 100% reduce 0%
11/06/11 10:08:17 INFO mapreduce.Job:  map 100% reduce 100%
11/06/11 10:08:19 INFO mapreduce.Job: Job complete: job_201106111004_0001
11/06/11 10:08:19 INFO mapreduce.Job: Counters: 33
    FileInputFormatCounters
        BYTES_READ=66580278
    FileSystemCounters
        FILE_BYTES_READ=6562
        FILE_BYTES_WRITTEN=13156
        HDFS_BYTES_READ=66580392
        HDFS_BYTES_WRITTEN=6941
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    Job Counters 
        Data-local map tasks=1
        Total time spent by all maps waiting after reserving slots (ms)=0
        Total time spent by all reduces waiting after reserving slots (ms)=0
        SLOTS_MILLIS_MAPS=8744
        SLOTS_MILLIS_REDUCES=3189
        Launched map tasks=1
        Launched reduce tasks=1
    Map-Reduce Framework
        Combine input records=0
        Combine output records=0
        Failed Shuffles=0
        GC time elapsed (ms)=867
        Map input records=118249
        Map output bytes=6512
        Map output records=11
        Merged Map outputs=1
        Reduce input groups=1
        Reduce input records=11
        Reduce output records=11
        Reduce shuffle bytes=6562
        Shuffled Maps =1
        Spilled Records=22
        SPLIT_RAW_BYTES=114

Part of the Hadoop program:

public static void main(String[] args) throws Exception {

    if (args.length != 5) {
        System.out.println("Usage : HadoopTest <input path> <output path> <ra> <dec> <sr>");
        System.exit(-1);
    }

    Job job = new Job();
    job.setJarByClass(HadoopTest.class);
    Configuration conf = job.getConfiguration();

    if (!isDouble(args[2]) || !isDouble(args[3]) || !isDouble(args[4])) {
        System.out.println("RA DEC SR should be real numbers");
        System.exit(-1);
    }

    // Pass the three search parameters to the tasks via the job configuration
    DefaultStringifier.store(conf, new DoubleWritable(Double.parseDouble(args[2])), "ra");
    DefaultStringifier.store(conf, new DoubleWritable(Double.parseDouble(args[3])), "dec");
    DefaultStringifier.store(conf, new DoubleWritable(Double.parseDouble(args[4])), "sr");

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    job.setMapperClass(ConeSearchMap.class);
    job.setMapOutputKeyClass(ScircleWritableComparable.class);
    job.setMapOutputValueClass(Text.class);

    job.setReducerClass(ConeSearchReduce.class);
    job.setOutputKeyClass(ScircleWritableComparable.class);
    job.setOutputValueClass(Text.class);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
}
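For context, the parameters stored above with `DefaultStringifier.store` would typically be read back in the mapper's `setup()` method. The question does not show `ConeSearchMap`, so the following is only a hypothetical sketch of that retrieval:

```java
// Hypothetical mapper setup (ConeSearchMap's real code is not shown
// in the question; names here are illustrative).
@Override
protected void setup(Context context) throws IOException {
    Configuration conf = context.getConfiguration();
    // Read back the values stored by the driver under the same keys
    double ra  = DefaultStringifier.load(conf, "ra",  DoubleWritable.class).get();
    double dec = DefaultStringifier.load(conf, "dec", DoubleWritable.class).get();
    double sr  = DefaultStringifier.load(conf, "sr",  DoubleWritable.class).get();
}
```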

1 Answer


The idea is to set the input format on your job object:

   SequenceFileInputFormat.addInputPath(job, in);
   SequenceFileOutputFormat.setOutputPath(job, out);
   job.setInputFormatClass(SequenceFileInputFormat.class);
   job.setOutputFormatClass(SequenceFileOutputFormat.class);

So you should set a concrete subclass of FileInputFormat, such as TextInputFormat or SequenceFileInputFormat.

The same applies to your output.
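Applied to the driver from the question, that suggestion would look like the sketch below (assuming the input files are plain text, so TextInputFormat/TextOutputFormat; SequenceFile variants would be used instead if the data were sequence files):

```java
// Fragment of the question's main(), with explicit format classes set.
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));

// Concrete input/output formats instead of relying on defaults:
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
```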

Answered 2011-06-12T07:44:39.610