
I'm trying to convert a csv file into a sequence file so that I can train and run a classifier on the data. I have a job java file that I compile and then add to the mahout job jar. When I try to hadoop jar my job using that mahout jar, I get a java.lang.ClassNotFoundException: org.apache.mahout.math.VectorWritable. I don't know why this happens, because if I look inside the mahout jar, that class is definitely there (there's a quick check for this after the steps below).

Here are the steps I'm following:

# get a fresh copy of the mahout job jar
rm iris.jar
cp /home/stephen/home/libs/mahout-distribution-0.7/core/target/mahout-core-0.7-job.jar iris.jar
# compile the job class against the hadoop and mahout jars
javac -cp :/home/stephen/home/libs/hadoop-1.0.4/hadoop-core-1.0.4.jar:/home/stephen/home/libs/mahout-distribution-0.7/core/target/mahout-core-0.7-job.jar -d bin/ src/edu/iris/seq/CsvToSequenceFile.java
# add the compiled class to the jar, then run the job
jar ufv iris.jar -C bin .
hadoop jar iris.jar edu.iris.seq.CsvToSequenceFile iris-data iris-seq
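
To double-check that VectorWritable really is in the jar that gets submitted, the jar contents can be listed with the JDK jar tool (an illustrative check; the class should show up under org/apache/mahout/math/):

# list the jar contents and look for the missing class
jar tf iris.jar | grep VectorWritable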

This is what my java file looks like:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;
import org.apache.mahout.math.VectorWritable;

public class CsvToSequenceFile {

    public static void main(String[] args) throws IOException,
            InterruptedException, ClassNotFoundException {

        String inputPath = args[0];
        String outputPath = args[1];

        Configuration conf = new Configuration();
        Job job = new Job(conf);
        job.setJobName("Csv to SequenceFile");
        job.setJarByClass(Mapper.class);

        // identity mapper, no reduce phase
        job.setMapperClass(Mapper.class);
        job.setReducerClass(Reducer.class);
        job.setNumReduceTasks(0);

        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(VectorWritable.class);

        // read plain text in, write a sequence file out
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        job.setInputFormatClass(TextInputFormat.class);

        TextInputFormat.addInputPath(job, new Path(inputPath));
        SequenceFileOutputFormat.setOutputPath(job, new Path(outputPath));

        // submit and wait for completion
        job.waitForCompletion(true);
    }

}

And this is the error on the command line:

12/10/30 10:43:32 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/10/30 10:43:33 INFO input.FileInputFormat: Total input paths to process : 1
12/10/30 10:43:33 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/10/30 10:43:33 WARN snappy.LoadSnappy: Snappy native library not loaded
12/10/30 10:43:34 INFO mapred.JobClient: Running job: job_201210300947_0005
12/10/30 10:43:35 INFO mapred.JobClient:  map 0% reduce 0%
12/10/30 10:43:50 INFO mapred.JobClient: Task Id : attempt_201210300947_0005_m_000000_0, Status : FAILED
java.lang.RuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: org.apache.mahout.math.VectorWritable
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:899)
    at org.apache.hadoop.mapred.JobConf.getOutputValueClass(JobConf.java:929)
    at org.apache.hadoop.mapreduce.JobContext.getOutputValueClass(JobContext.java:145)
    at org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getRecordWriter(SequenceFileOutputFormat.java:61)
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:628)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:753)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: org.apache.mahout.math.VectorWritable
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:867)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:891)
    ... 11 more

Any ideas how to fix this, or am I even going about this the right way? I'm new to hadoop and mahout, so if I'm doing this the hard way, please let me know. Thanks!


3 Answers


Try specifying the classpath explicitly: instead of hadoop jar iris.jar edu.iris.seq.CsvToSequenceFile iris-data iris-seq, try something like java -cp ...
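
For illustration, an explicit-classpath invocation along those lines might look like the following sketch (the jar paths are the ones from the question; this is a sketch of the idea, not a verified command for this setup):

# run the driver class directly, with both jars on the classpath
java -cp iris.jar:/home/stephen/home/libs/hadoop-1.0.4/hadoop-core-1.0.4.jar:/home/stephen/home/libs/mahout-distribution-0.7/core/target/mahout-core-0.7-job.jar edu.iris.seq.CsvToSequenceFile iris-data iris-seq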

Answered 2012-10-30T13:46:11.127

When creating the jar for the map/reduce job, create a jar with dependencies.

For reference, with maven you can add the code below to your pom.xml and build with mvn clean package assembly:single. This creates the jar with dependencies in the target folder, and the resulting jar will have a name like <>-1.0-SNAPSHOT-jar-with-dependencies.jar.

<build>
  <plugins>
    <plugin>
      <artifactId>maven-assembly-plugin</artifactId>
      <configuration>
        <descriptorRefs>
          <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
      </configuration>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>single</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
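
The job can then be submitted with that self-contained jar instead of the hand-assembled iris.jar, roughly like this (the jar name is a placeholder for whatever assembly:single produces in target/):

hadoop jar target/<>-1.0-SNAPSHOT-jar-with-dependencies.jar edu.iris.seq.CsvToSequenceFile iris-data iris-seq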

Hope this answers your question.

Answered 2015-04-21T14:32:37.383