4

I am running the Hadoop wordcount example in a single-node setup on Ubuntu 12.04 in VMware. I run the example like this:

hadoop@master:~/hadoop$ hadoop jar hadoop-examples-1.0.4.jar wordcount    
/home/hadoop/gutenberg/ /home/hadoop/gutenberg-output

I have the input files at:

/home/hadoop/gutenberg

The output location is:

    /home/hadoop/gutenberg-output

When I run the wordcount program, I get the following error:

    13/04/18 06:02:10 INFO mapred.JobClient: Cleaning up the staging area hdfs://localhost:54310/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201304180554_0001
    13/04/18 06:02:10 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory /home/hadoop/gutenberg-output already exists
    org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory /home/hadoop/gutenberg-output already exists
        at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:137)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:887)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:416)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
        at org.apache.hadoop.examples.WordCount.main(WordCount.java:67)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

    hadoop@master:~/hadoop$ bin/stop-all.sh
    Warning: $HADOOP_HOME is deprecated.
    stopping jobtracker
    localhost: stopping tasktracker
    stopping namenode
    localhost: stopping datanode
    localhost: stopping secondarynamenode
    hadoop@master:~/hadoop$

4 Answers

9

Delete the output directory that already exists, or output to a different location.

(I'm somewhat curious what other interpretations of the error message you considered.)
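Hadoop deliberately refuses to submit a job whose output directory already exists, so results can never be silently overwritten. As a rough illustration of that check (this is not Hadoop's actual code; `OutputSpecCheck` is a made-up name, and it uses the local filesystem via `java.io.File` instead of Hadoop's `FileSystem` API):

```java
import java.io.File;
import java.io.IOException;

// Simplified stand-in for the validation that
// FileOutputFormat.checkOutputSpecs performs before a job is submitted.
public class OutputSpecCheck {

    // Throws if the output directory already exists, mirroring
    // Hadoop's FileAlreadyExistsException behaviour.
    static void checkOutputSpecs(File outputDir) throws IOException {
        if (outputDir.exists()) {
            throw new IOException(
                "Output directory " + outputDir.getPath() + " already exists");
        }
    }

    public static void main(String[] args) throws IOException {
        // A path that does not exist passes the check.
        checkOutputSpecs(new File("gutenberg-output-new"));
        System.out.println("fresh path accepted");

        // The system temp directory always exists, so this is rejected.
        try {
            checkOutputSpecs(new File(System.getProperty("java.io.tmpdir")));
        } catch (IOException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```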

answered 2013-04-18T14:00:34.960
2

As Dave (and the exception) said, your output directory already exists. You either need to output to a different directory or delete the existing one first, using:

hadoop fs -rmr /home/hadoop/gutenberg-output

answered 2013-04-19T01:38:12.907
2

If you created your own .jar and are trying to run it, pay attention:

In order to run your job, you have to write something like this:

hadoop jar <jar-path> <package-path> <input-in-hdfs-path> <output-in-hdfs-path>

But if you look closely at your driver code, you'll see that it sets args[0] as your input and args[1] as your output... I'll show it:

FileInputFormat.addInputPath(conf, new Path(args[0]));
FileOutputFormat.setOutputPath(conf, new Path(args[1]));

However, hadoop takes args[0] as <package-path> instead of <input-in-hdfs-path>, and args[1] as <input-in-hdfs-path> instead of <output-in-hdfs-path>.

So, in order to make it work, you should use:

FileInputFormat.addInputPath(conf, new Path(args[1]));
FileOutputFormat.setOutputPath(conf, new Path(args[2]));

With args[1] and args[2] it will pick up the right paths! :) Hope it helps. Cheers.
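The argument shift this answer describes can be sketched in plain Java, with no Hadoop involved (`ArgShiftDemo` and the paths are hypothetical names for illustration): if the command line places the package/class name in front of the paths and the whole list reaches the driver, the input and output land at indexes 1 and 2, not 0 and 1.

```java
// Illustration only: shows which indexes the driver must read when the
// command line is
//   hadoop jar <jar-path> <package-path> <input-in-hdfs-path> <output-in-hdfs-path>
// and <package-path> occupies args[0].
public class ArgShiftDemo {

    // Picks the input and output paths out of the raw argument array,
    // skipping args[0], which holds the package/class name in this scenario.
    static String[] inputAndOutput(String[] args) {
        return new String[] { args[1], args[2] };
    }

    public static void main(String[] args) {
        String[] raw = {
            "org.myorg.WordCount",           // args[0]: package-path
            "/home/hadoop/gutenberg",        // args[1]: input in HDFS
            "/home/hadoop/gutenberg-output"  // args[2]: output in HDFS
        };
        String[] io = inputAndOutput(raw);
        System.out.println("input  = " + io[0]);
        System.out.println("output = " + io[1]);
    }
}
```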

answered 2015-08-07T20:13:40.813
1

Check whether there is a "tmp" folder:

hadoop fs -ls /

If you see the output folder or "tmp", delete both (making sure no active jobs are running):

hadoop fs -rmr /tmp

answered 2013-09-27T10:17:15.083