I am running Hadoop MapReduce code written in Java. It works fine on my system, but when I try to run the same program on someone else's system (the final system it needs to run on), it produces the error below. The error appears to be at line 81, where the Double.parseDouble() call is. It runs perfectly on my system. What could be the problem?
13/06/25 12:07:05 INFO input.FileInputFormat: Total input paths to process : 2
13/06/25 12:07:05 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/06/25 12:07:05 WARN snappy.LoadSnappy: Snappy native library not loaded
13/06/25 12:07:06 INFO mapred.JobClient: Running job: job_201306101543_0158
13/06/25 12:07:07 INFO mapred.JobClient: map 0% reduce 0%
13/06/25 12:07:11 INFO mapred.JobClient: map 100% reduce 0%
13/06/25 12:07:18 INFO mapred.JobClient: map 100% reduce 33%
13/06/25 12:07:20 INFO mapred.JobClient: Task Id : attempt_201306101543_0158_r_000000_0, Status : FAILED
java.lang.NullPointerException
at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1008)
at java.lang.Double.parseDouble(Double.java:540)
at Transpose$Reduce.reduce(Transpose.java:89)
at Transpose$Reduce.reduce(Transpose.java:61)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:650)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:418)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
attempt_201306101543_0158_r_000000_0: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201306101543_0158_r_000000_0: log4j:WARN Please initialize the log4j system properly.
13/06/25 12:07:21 INFO mapred.JobClient: map 100% reduce 0%
13/06/25 12:07:29 INFO mapred.JobClient: map 100% reduce 33%
13/06/25 12:07:31 INFO mapred.JobClient: Task Id : attempt_201306101543_0158_r_000000_1, Status : FAILED
java.lang.NullPointerException
at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1008)
at java.lang.Double.parseDouble(Double.java:540)
at Transpose$Reduce.reduce(Transpose.java:89)
at Transpose$Reduce.reduce(Transpose.java:61)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:650)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:418)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Relevant code (the complete Reduce function):
public static class Reduce extends Reducer<Text, Text, Text, Text> {
    private Text mult = new Text();

    public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
        HashMap<Integer, String> mMap = new HashMap<Integer, String>();
        HashMap<Integer, String> mMap1 = new HashMap<Integer, String>();
        for (Text value : values) {
            String[] line = value.toString().split(",", 3);
            Integer row = Integer.valueOf(line[1]);
            if (line[0].equals("M")) {
                mMap.put(row, line[2]);
            }
            else if (line[0].equals("Mt")) {
                mMap1.put(row, line[2]);
            }
        }
        double sum = 0.0;
        for (Integer i = 1; i <= mMap.size(); i++) {
            String val1 = mMap.get(i);
            String val2 = mMap1.get(i);
            double mij = Double.parseDouble(val1);
            double mjk = Double.parseDouble(val2);
            sum += mij * mjk;
        }
        String str = Double.toString(sum);
        mult.set(str);
        context.write(key, mult);
    }
}
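For context, the stack trace (sun.misc.FloatingDecimal.readJavaFormatString → Double.parseDouble) is exactly what Double.parseDouble(null) produces, since HashMap.get returns null for a missing key. The sketch below is not from the original post; it is a minimal standalone reproduction, assuming (as a hypothesis) that on the other system some row index present in mMap has no matching "Mt" entry in mMap1:

```java
import java.util.HashMap;

// Minimal sketch: Double.parseDouble(null) throws a NullPointerException
// from inside FloatingDecimal.readJavaFormatString, matching the logged
// stack trace. The hypothetical scenario: key 2 was never put into the
// map, so get(2) returns null and parseDouble receives it.
public class ParseDoubleNpeDemo {
    public static void main(String[] args) {
        HashMap<Integer, String> mMap1 = new HashMap<Integer, String>();
        mMap1.put(1, "2.5");           // key 2 is intentionally missing

        String val2 = mMap1.get(2);    // returns null, not an exception
        try {
            double d = Double.parseDouble(val2);
            System.out.println("parsed: " + d);
        } catch (NullPointerException e) {
            System.out.println("NullPointerException from parseDouble(null)");
        }
    }
}
```

If that is the cause, guarding the lookup (e.g. checking val1 and val2 for null before parsing, or logging the offending key) would confirm which input record differs between the two systems.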