Over the past few days I have tested several versions of Hadoop (1.0.1, 1.0.2, 1.1.4). In each case I can run the WordCount program without any trouble using the following command line:
hadoop jar hadoop-examples-1.1.1.jar wordcount /input output
Since the command above completes successfully, I assume my Hadoop configuration is correct. However, when I try to run the very same program with exactly the same input from Eclipse, I get the error message below with every version. Can anyone tell me why it won't run from Eclipse?
Dec 12, 2012 2:19:41 PM org.apache.hadoop.util.NativeCodeLoader <clinit>
WARNING: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Dec 12, 2012 2:19:41 PM org.apache.hadoop.mapred.JobClient copyAndConfigureFiles
WARNING: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
****file:/tmp/wordcount/in
Dec 12, 2012 2:19:42 PM org.apache.hadoop.mapred.JobClient$2 run
INFO: Cleaning up the staging area file:/tmp/hadoop-root/mapred/staging/root-41981592/.staging/job_local_0001
Dec 12, 2012 2:19:42 PM org.apache.hadoop.security.UserGroupInformation doAs
SEVERE: PriviledgedActionException as:root cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/input
Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/input
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:235)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:252)
at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:962)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:979)
at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:174)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:897)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
at com.igalia.wordcount.WordCount.run(WordCount.java:94)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at com.igalia.wordcount.App.main(App.java:28)
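For reference, this is roughly what my driver looks like; treat it as a sketch rather than my exact source. The separate App class from the stack trace (its main just delegates to ToolRunner.run) is folded into WordCount here, and the mapper and reducer are the standard ones from the Hadoop examples.

package com.igalia.wordcount;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {

    // Standard tokenizing mapper: emits (word, 1) for every token in a line.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Standard summing reducer: adds up the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        Job job = new Job(getConf(), "wordcount");   // Hadoop 1.x Job constructor
        job.setJarByClass(WordCount.class);

        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));   // args[0] = input path
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // args[1] = output path

        return job.waitForCompletion(true) ? 0 : 1;
    }

    // In my project this lives in App.main, as shown in the stack trace.
    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new WordCount(), args));
    }
}

From Eclipse I launch this as a plain Java application and pass the same input and output paths as program arguments in the run configuration, so the arguments should match the command-line invocation above.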