
I am really new to this area. Following http://java.dzone.com/articles/hadoop-practice, I wrote https://github.com/studhadoop/xmlparsing-hadoop/blob/master/XmlParser11.java

I created the jar file and ran the MapReduce program. My XML file is

<configuration>
    <property>
            <name>dfs.replication</name>
            <value>1</value>
            <type>tr</type>
    </property>
</configuration>

root# javac -classpath /var/root/hadoop-1.0.4/hadoop-core-1.0.4.jar -d xml11 XmlParser11.java

root# jar -cvf /var/root/xmlparser11/xmlparser.jar -C xml11/ .

root# bin/hadoop jar /var/root/xmlparser11/xmlparser.jar com.org.XmlParser11 /user/root/xmlfiles/conf.xml /user/root/xmlfiles-outputjava3

UPDATE

13/03/30 09:39:58 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/03/30 09:39:58 INFO input.FileInputFormat: Total input paths to process : 1
13/03/30 09:39:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/03/30 09:39:58 WARN snappy.LoadSnappy: Snappy native library not loaded
13/03/30 09:39:58 INFO mapred.JobClient: Running job: job_201303300855_0004
13/03/30 09:39:59 INFO mapred.JobClient:  map 0% reduce 0%
13/03/30 09:40:13 INFO mapred.JobClient: Task Id : attempt_201303300855_0004_m_000000_0, Status : FAILED
java.io.IOException: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, recieved java.lang.String
    at com.org.XmlParser11$Map.map(XmlParser11.java:186)
    at com.org.XmlParser11$Map.map(XmlParser11.java:148)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, recieved java.lang.String
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1014)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:691)
    at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
    at com.org.XmlParser11$Map.map(XmlParser11.java:184)
    ... 9 more

attempt_201303300855_0004_m_000000_0: '<property>
attempt_201303300855_0004_m_000000_0:             <name>dfs.replication</name>
attempt_201303300855_0004_m_000000_0:                 <value>1</value>
attempt_201303300855_0004_m_000000_0:             </property>'
13/03/30 09:40:19 INFO mapred.JobClient: Task Id : attempt_201303300855_0004_m_000000_1, Status : FAILED
13/03/30 09:40:25 INFO mapred.JobClient: Task Id : attempt_201303300855_0004_m_000000_2, Status : FAILED

(attempts _1 and _2 fail with the same exception and print the same <property> block as attempt _0)
13/03/30 09:40:37 INFO mapred.JobClient: Job complete: job_201303300855_0004
13/03/30 09:40:37 INFO mapred.JobClient: Counters: 7
13/03/30 09:40:37 INFO mapred.JobClient:   Job Counters 
13/03/30 09:40:37 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=27296
13/03/30 09:40:37 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/03/30 09:40:37 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/03/30 09:40:37 INFO mapred.JobClient:     Launched map tasks=4
13/03/30 09:40:37 INFO mapred.JobClient:     Data-local map tasks=4
13/03/30 09:40:37 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
13/03/30 09:40:37 INFO mapred.JobClient:     Failed map tasks=1

What is wrong with the map-reduce code, and how can I correct it?


2 Answers


Please use a SAX parser or Hadoop Streaming to parse the XML. For further reference, see these links:

http://xmlandhadoop.blogspot.in/ http://www.undercloud.org/?p=408

Regards, Sudhakar Reddy
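To illustrate the SAX approach on the question's XML, here is a minimal, self-contained sketch (the class and method names are mine, not from the linked posts; a real job would run this handler over each XML chunk inside the mapper):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class PropertySaxDemo {

    // Parses <name>/<value> elements and returns them as "tag=text" lines.
    static String parse(String xml) throws Exception {
        final StringBuilder out = new StringBuilder();
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")), new DefaultHandler() {
            private final StringBuilder text = new StringBuilder();

            @Override
            public void startElement(String uri, String local, String qName, Attributes atts) {
                text.setLength(0);  // reset the character buffer for each element
            }

            @Override
            public void characters(char[] ch, int start, int len) {
                text.append(ch, start, len);
            }

            @Override
            public void endElement(String uri, String local, String qName) {
                if (qName.equals("name") || qName.equals("value")) {
                    out.append(qName).append('=').append(text.toString().trim()).append('\n');
                }
            }
        });
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<configuration><property>"
                   + "<name>dfs.replication</name><value>1</value>"
                   + "</property></configuration>";
        System.out.print(parse(xml));
        // prints:
        // name=dfs.replication
        // value=1
    }
}
```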

answered 2013-03-27T15:27:39.417

Try this command and check whether it contains the expected output:

root# bin/hadoop fs -cat /user/root/xmlfiles-outputjava3/part-r-00000

The output you pasted is just the standard output produced while the MapReduce job runs; the job's actual results are written to HDFS.

UPDATE

You need to add System.out.println calls:

if (currentElement.equalsIgnoreCase("name")) {
    propertyName += reader.getText();
    System.out.println(propertyName);
} else if (currentElement.equalsIgnoreCase("value")) {
    propertyValue += reader.getText();
    System.out.println(propertyValue);
}

to see whether the property name and value are actually being set. If they are not, that is what you need to investigate.

UPDATE 2

context.write(propertyName.trim(), propertyValue.trim());

propertyName and propertyValue are Strings, but you have declared the Mapper to output Text for both the key and the value.

Change it to this:

Text name = new Text();
Text value = new Text();
name.set(propertyName.trim());
value.set(propertyValue.trim());
context.write(name, value);
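For context, the check that produces this error happens at runtime, when the map output is collected, not at compile time. A simplified stand-in for it in plain Java (this is not Hadoop's actual code, and `StringBuilder` here merely stands in for `org.apache.hadoop.io.Text`, since the Hadoop jar is not assumed to be on the classpath):

```java
import java.io.IOException;

public class TypeCheckDemo {

    // Simplified stand-in for Hadoop's collect-time check: the runtime
    // class of each emitted key must match the configured key class.
    static void collect(Class<?> expectedKeyClass, Object key) throws IOException {
        if (key.getClass() != expectedKeyClass) {
            throw new IOException("Type mismatch in key from map: expected "
                    + expectedKeyClass.getName() + ", received " + key.getClass().getName());
        }
    }

    public static void main(String[] args) {
        try {
            // Emitting a String where another key class is expected fails,
            // just like context.write(propertyName.trim(), ...) did.
            collect(StringBuilder.class, "dfs.replication");
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```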
answered 2013-03-27T06:13:55.847