
I set up Hadoop on Ubuntu and followed all the necessary steps: 1. created the HDFS file system, 2. moved the text files into the input directory, 3. made sure I have access rights to all the directories. But the simple word count example below fails when I run it:

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class wordcount {

 public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, one);
        }
    }
 } 

 public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {

    public void reduce(Text key, Iterable<IntWritable> values, Context context) 
      throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(key, new IntWritable(sum));
    }
 }

 public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    conf.addResource(new Path("/HADOOP_HOME/conf/core-site.xml"));
    conf.addResource(new Path("/HADOOP_HOME/conf/hdfs-site.xml"));

    Job job = new Job(conf, "wordcount");

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    job.setJarByClass(wordcount.class);

    job.setMapperClass(Map.class);
    job.setReducerClass(Reduce.class);

    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);

    // FileInputFormat.addInputPath(job, new Path(args[0]));
    // FileOutputFormat.setOutputPath(job, new Path(args[1]));

    FileInputFormat.setInputPaths(job, new Path("/user/gabriele/input"));
    FileOutputFormat.setOutputPath(job, new Path("/user/gabriele/output"));

    job.waitForCompletion(true);
 }

}

However, the input path is valid (I can verify it from the command line), and I can even browse the files at that path from within Eclipse itself, so please help me find what I am doing wrong.
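To see how the job itself resolves that path, here is a minimal standalone check (a sketch only; the class name is illustrative). It builds a plain Configuration the same way the job does, then asks which filesystem it maps to and whether the input directory exists there:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PathCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            // With no core-site.xml on the classpath this prints file:///,
            // i.e. the job is looking at the local filesystem, not HDFS.
            System.out.println("Default filesystem: " + fs.getUri());
            System.out.println("Input path exists: "
                    + fs.exists(new Path("/user/gabriele/input")));
        }
    }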

One suggested solution says to add the following two lines:

config.addResource(new Path("/HADOOP_HOME/conf/core-site.xml"));
config.addResource(new Path("/HADOOP_HOME/conf/hdfs-site.xml"));

But it still does not work.
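One way to tell whether those addResource calls actually find the XML files is to print the default-filesystem property afterwards; Hadoop silently skips a Path resource that does not exist. A minimal sketch (fs.default.name is the Hadoop 1.x property; 2.x calls it fs.defaultFS):

    Configuration config = new Configuration();
    config.addResource(new Path("/HADOOP_HOME/conf/core-site.xml"));
    config.addResource(new Path("/HADOOP_HOME/conf/hdfs-site.xml"));
    // If the files were not found, this stays at the default "file:///",
    // which matches the "file:/user/gabriele/input" in the error below.
    System.out.println(config.get("fs.default.name"));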

Here is the error (from Run As -> Run on Hadoop):

    13/11/08 08:39:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    13/11/08 08:39:12 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
    13/11/08 08:39:12 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
    13/11/08 08:39:12 INFO mapred.JobClient: Cleaning up the staging area file:/tmp/hadoop-gabriele/mapred/staging/gabriele481581440/.staging/job_local481581440_0001
    13/11/08 08:39:12 ERROR security.UserGroupInformation: PriviledgedActionException as:gabriele cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/user/gabriele/input
    Exception in thread "main" org.apache.

Thanks


1 Answer


Unless your Hadoop installation is really rooted at /HADOOP_HOME, I would suggest changing those lines so that HADOOP_HOME is replaced with wherever Hadoop is actually installed (/usr/lib/hadoop, /opt/hadoop, or wherever you put it):

conf.addResource(new Path("/usr/lib/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/usr/lib/hadoop/conf/hdfs-site.xml"));

Or, in Eclipse, add the /usr/lib/hadoop/conf folder (or wherever Hadoop is installed) to the build classpath.
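If editing the build classpath is inconvenient, a variant of the same idea (a sketch; the namenode URI below is a placeholder, use whatever your core-site.xml actually declares) is to set the default filesystem directly in code:

    Configuration conf = new Configuration();
    // Placeholder URI -- replace with the fs.default.name value
    // from your actual core-site.xml.
    conf.set("fs.default.name", "hdfs://localhost:9000");

That would also explain the "file:/user/gabriele/input" in your error: with no loaded configuration, fs.default.name keeps its default of file:///, so the input path is resolved against the local filesystem instead of HDFS. Separately, the "No job jar file set" warning appears because the job is launched from Eclipse rather than from a packaged jar, so setJarByClass cannot locate one; if you have built a jar, you can point the job at it explicitly (placeholder path):

    conf.set("mapred.jar", "/path/to/wordcount.jar");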

Answered 2013-11-08T12:17:52.480