
I am trying to cluster some hand-crafted data with k-means in Mahout. I created 6 files, each containing barely 1 or 2 words of text, and built a sequence file from them with ./mahout seqdirectory. When I try to convert the sequence file into vectors with the ./mahout seq2sparse command, I get a java.lang.OutOfMemoryError: Java heap space error. The sequence file is only 0.215 KB in size.

Command: ./mahout seq2sparse -i mokha/output -o mokha/vector -ow
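
For context, the rough sequence of commands was the following (a sketch; the input directory name mokha/input is my placeholder for wherever the 6 text files live):

# Step 1: convert the directory of small text files into a SequenceFile
./mahout seqdirectory -i mokha/input -o mokha/output
# Step 2: convert the SequenceFile into sparse vectors -- this is the step that fails
./mahout seq2sparse -i mokha/output -o mokha/vector -ow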

Error log:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/bitnami/mahout/mahout-distribution-0.5/mahout-examples-0.5-job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/bitnami/mahout/mahout-distribution-0.5/lib/slf4j-jcl-1.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
Apr 24, 2013 2:25:11 AM org.slf4j.impl.JCLLoggerAdapter warn
WARNING: No seq2sparse.props found on classpath, will use command-line arguments only
Apr 24, 2013 2:25:12 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Maximum n-gram size is: 1
Apr 24, 2013 2:25:12 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Deleting mokha/vector
Apr 24, 2013 2:25:12 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Minimum LLR value: 1.0
Apr 24, 2013 2:25:12 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Number of reduce tasks: 1
Apr 24, 2013 2:25:12 AM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Initializing JVM Metrics with processName=JobTracker, sessionId=
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Running job: job_local_0001
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate
INFO:
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0001_m_000000_0 is allowed to commit now
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0001_m_000000_0' to mokha/vector/tokenized-documents
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate
INFO:
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0001_m_000000_0' done.
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO:  map 100% reduce 0%
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Job complete: job_local_0001
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO: Counters: 5
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:   FileSystemCounters
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:     FILE_BYTES_READ=1471400
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:     FILE_BYTES_WRITTEN=1496783
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:   Map-Reduce Framework
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:     Map input records=6
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:     Spilled Records=0
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:     Map output records=6
Apr 24, 2013 2:25:13 AM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Running job: job_local_0002
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.MapTask$MapOutputBuffer <init>
INFO: io.sort.mb = 100
Apr 24, 2013 2:25:14 AM org.apache.hadoop.mapred.LocalJobRunner$Job run
WARNING: job_local_0002
java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:781)
        at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:524)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:613)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
Apr 24, 2013 2:25:14 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO:  map 0% reduce 0%
Apr 24, 2013 2:25:14 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Job complete: job_local_0002
Apr 24, 2013 2:25:14 AM org.apache.hadoop.mapred.Counters log
INFO: Counters: 0
Apr 24, 2013 2:25:14 AM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
Apr 24, 2013 2:25:15 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Apr 24, 2013 2:25:15 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Running job: job_local_0003
Apr 24, 2013 2:25:15 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Apr 24, 2013 2:25:15 AM org.apache.hadoop.mapred.MapTask$MapOutputBuffer <init>
INFO: io.sort.mb = 100
Apr 24, 2013 2:25:15 AM org.apache.hadoop.mapred.LocalJobRunner$Job run
WARNING: job_local_0003
java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:781)
        at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:524)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:613)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO:  map 0% reduce 0%
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Job complete: job_local_0003
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapred.Counters log
INFO: Counters: 0
Apr 24, 2013 2:25:16 AM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 0
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Running job: job_local_0004
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 0
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapred.LocalJobRunner$Job run
WARNING: job_local_0004
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
        at java.util.ArrayList.RangeCheck(ArrayList.java:547)
        at java.util.ArrayList.get(ArrayList.java:322)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:124)
Apr 24, 2013 2:25:17 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO:  map 0% reduce 0%
Apr 24, 2013 2:25:17 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Job complete: job_local_0004
Apr 24, 2013 2:25:17 AM org.apache.hadoop.mapred.Counters log
INFO: Counters: 0
Apr 24, 2013 2:25:17 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Deleting mokha/vector/partial-vectors-0
Apr 24, 2013 2:25:17 AM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/home/bitnami/mahout/mahout-distribution-0.5/bin/mokha/vector/tf-vectors
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:224)
        at org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:55)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:241)
        at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
        at org.apache.mahout.vectorizer.tfidf.TFIDFConverter.startDFCounting(TFIDFConverter.java:350)
        at org.apache.mahout.vectorizer.tfidf.TFIDFConverter.processTfIdf(TFIDFConverter.java:151)
        at org.apache.mahout.vectorizer.SparseVectorsFromSequenceFiles.run(SparseVectorsFromSequenceFiles.java:262)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.mahout.vectorizer.SparseVectorsFromSequenceFiles.main(SparseVectorsFromSequenceFiles.java:52)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
        at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:187)

2 Answers


The bin/mahout script reads the environment variable MAHOUT_HEAPSIZE (in megabytes) and, if it is set, uses it to set the JAVA_HEAP_MAX variable. The Mahout version I am using (0.8) defaults JAVA_HEAP_MAX to 3G. Running

export MAHOUT_HEAPSIZE=10000m

before a canopy clustering run seemed to help me get further on a single machine. However, I suspect the best solution is to switch to running on a cluster.
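
For reference, the heap handling in bin/mahout looks roughly like the sketch below (paraphrased from the description above rather than copied from the script, so treat the exact lines as an approximation):

# approximate sketch of bin/mahout's heap logic, not the literal script
JAVA_HEAP_MAX=-Xmx3g                      # default in the 0.8 script, as noted above
if [ "$MAHOUT_HEAPSIZE" != "" ]; then
  # MAHOUT_HEAPSIZE is read as a number of megabytes and turned into an -Xmx flag
  JAVA_HEAP_MAX="-Xmx${MAHOUT_HEAPSIZE}m"
fi
# ...JAVA_HEAP_MAX is later passed to the java command that launches the Mahout driver

If the script appends the m suffix itself, a bare megabyte value (for example export MAHOUT_HEAPSIZE=4000) may be the form it expects.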

For reference, there is another related post: Mahout running out of heap space

Answered 2014-01-09T00:24:16.483

I don't know whether you have already tried this, but I am posting it in case you missed it.

"Set the environment variable MAVEN_OPTS to allow for more memory via export MAVEN_OPTS=-Xmx1024m"

See here (in the FAQ section).
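
In case it is useful, a minimal sketch of how that FAQ suggestion is applied; note that, as far as I can tell, MAVEN_OPTS only affects JVMs started by Maven (for example when building Mahout or running its tests from source) and does not change the heap of the bin/mahout script:

# give Maven-launched JVMs more heap; only relevant when building/running via mvn
export MAVEN_OPTS=-Xmx1024m
mvn clean install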

Answered 2013-04-24T03:48:51.470