
I have the following Mapper class. I want to write to HDFS from my map function, so I need access to the Configuration object that I retrieve in the setup() method. But it comes back as null and I get an NPE. Can you tell me what I am doing wrong?

Here is the stack trace:

 hduser@nikhil-VirtualBox:/usr/local/hadoop/hadoop-1.0.4$ bin/hadoop jar GWASMapReduce.jar /user/hduser/tet.gpg /user/hduser/output3
12/11/04 08:50:17 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/11/04 08:50:24 INFO mapred.FileInputFormat: Total input paths to process : 1
12/11/04 08:50:28 INFO mapred.JobClient: Running job: job_201211031924_0008
12/11/04 08:50:29 INFO mapred.JobClient:  map 0% reduce 0%
12/11/04 08:51:35 INFO mapred.JobClient: Task Id : attempt_201211031924_0008_m_000000_0, Status : FAILED
 java.lang.NullPointerException
at org.apache.hadoop.fs.FileSystem.getDefaultUri(FileSystem.java:131)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
at com.test.GWASMapper.writeCsvFileSmry(GWASMapper.java:208)
at com.test.GWASMapper.checkForNulls(GWASMapper.java:153)
at com.test.GWASMapper.map(GWASMapper.java:51)
at com.test.GWASMapper.map(GWASMapper.java:1)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)

attempt_201211031924_0008_m_000000_0: ******************************************************************************************************************************************************************************************
attempt_201211031924_0008_m_000000_0: null
attempt_201211031924_0008_m_000000_0: ******************************************************************************************************************************************************************************************
12/11/04 08:51:37 INFO mapred.JobClient: Task Id : attempt_201211031924_0008_m_000001_0, Status : FAILED

Here is my driver class:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class GWASMapReduce extends Configured implements Tool{

/**
 * @param args
 */
public static void main(String[] args) throws Exception {
    Configuration configuration = new Configuration();
    ToolRunner.run(configuration, new GWASMapReduce(), args);
}

@Override
public int run(String[] arg0) throws Exception {
    JobConf conf = new JobConf();
    conf.setInputFormat(GWASInputFormat.class);
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(Text.class);
    conf.setJarByClass(GWASMapReduce.class);
    conf.setMapperClass(GWASMapper.class);
    conf.setNumReduceTasks(0);
    FileInputFormat.addInputPath(conf, new Path(arg0[0]));
    FileOutputFormat.setOutputPath(conf, new Path(arg0[1]));
    JobClient.runJob(conf);
    return 0;
}
}

The mapper class:

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

import com.google.common.base.Strings;

public class GWASMapper extends MapReduceBase implements Mapper<LongWritable, GWASGenotypeBean, Text, Text> {

private static Configuration conf;


@SuppressWarnings("rawtypes")
public void setup(org.apache.hadoop.mapreduce.Mapper.Context context) throws IOException {

    conf = context.getConfiguration();
    // conf is null here 
}


@Override
public void map(LongWritable inputKey, GWASGenotypeBean inputValue, OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
    // mapper code
}


}

2 Answers


Just a tip for anyone else facing a similar problem:
make sure you set your configuration values first and declare the Job afterwards.

For example:

Configuration conf = new Configuration(); 
conf.set("a","2"); 
conf.set("inputpath",args[0]);
//Must be set before the below line:
Job myjob = new Job(conf); 

Hope this helps.
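Applied to the asker's old-API driver, the same principle would mean building the JobConf from the configuration that ToolRunner already populated (via getConf()), instead of from a fresh, empty one. A minimal sketch of what run() could look like under that assumption:

```java
// Sketch only: build the JobConf from the Tool's configuration so that
// settings supplied via ToolRunner/GenericOptionsParser reach the tasks.
@Override
public int run(String[] args) throws Exception {
    // JobConf(Configuration, Class) copies the Tool's conf and sets the job jar.
    JobConf conf = new JobConf(getConf(), GWASMapReduce.class);
    conf.setInputFormat(GWASInputFormat.class);
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(Text.class);
    conf.setMapperClass(GWASMapper.class);
    conf.setNumReduceTasks(0);
    FileInputFormat.addInputPath(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    JobClient.runJob(conf);
    return 0;
}
```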

answered 2015-02-14T21:16:12.190

I think you are missing this:

JobClient jobClient = new JobClient();
jobClient.setConf(conf);

JobClient.runJob(conf);

The conf parameter is not being passed to the JobClient. Try this and see if it helps.

I would suggest using the new mapreduce library. Check the WordCount v2.0 example: http://hadoop.apache.org/docs/mapreduce/r0.22.0/mapred_tutorial.html#Example%3A+WordCount+v2.0

Also try JobConf job = new JobConf(new Configuration());

I think the Configuration object is not getting initialized here. Besides, there is nothing special in your Configuration object, so you could also just create a new Configuration object inside the mapper, although that is not good practice.
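For completeness: the mapper in the question mixes the two APIs. setup(Context) belongs to the new org.apache.hadoop.mapreduce API and is never invoked by the old mapred runtime, which is why conf stays null. With MapReduceBase the lifecycle hook is configure(JobConf). A sketch of the mapper under that assumption:

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class GWASMapper extends MapReduceBase
        implements Mapper<LongWritable, GWASGenotypeBean, Text, Text> {

    private JobConf conf;

    // In the old mapred API the framework calls configure(JobConf)
    // before the first map() call; cache the job configuration here.
    @Override
    public void configure(JobConf job) {
        this.conf = job;
    }

    @Override
    public void map(LongWritable inputKey, GWASGenotypeBean inputValue,
                    OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        // conf is now non-null here, e.g. for
        // FileSystem fs = FileSystem.get(conf);
    }
}
```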

answered 2012-11-04T19:01:18.133