
I am trying to run KMeans from MLlib on a (large) collection of text documents (TF-IDF vectors). The documents are sent through a Lucene English analyzer, and the sparse vectors are created by the HashingTF.transform() function. Whatever degree of parallelism I use (via the coalesce function), KMeans.train always throws the OutOfMemoryError below. Any ideas on how to tackle this issue?

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at scala.reflect.ManifestFactory$$anon$12.newArray(Manifest.scala:138)
at scala.reflect.ManifestFactory$$anon$12.newArray(Manifest.scala:136)
at breeze.linalg.Vector$class.toArray(Vector.scala:80)
at breeze.linalg.SparseVector.toArray(SparseVector.scala:48)
at breeze.linalg.Vector$class.toDenseVector(Vector.scala:75)
at breeze.linalg.SparseVector.toDenseVector(SparseVector.scala:48)
at breeze.linalg.Vector$class.toDenseVector$mcD$sp(Vector.scala:74)
at breeze.linalg.SparseVector.toDenseVector$mcD$sp(SparseVector.scala:48)
at org.apache.spark.mllib.clustering.BreezeVectorWithNorm.toDense(KMeans.scala:422)
at org.apache.spark.mllib.clustering.KMeans$$anonfun$initKMeansParallel$1.apply(KMeans.scala:285)
at org.apache.spark.mllib.clustering.KMeans$$anonfun$initKMeansParallel$1.apply(KMeans.scala:284)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at org.apache.spark.mllib.clustering.KMeans.initKMeansParallel(KMeans.scala:284)
at org.apache.spark.mllib.clustering.KMeans.runBreeze(KMeans.scala:143)
at org.apache.spark.mllib.clustering.KMeans.run(KMeans.scala:126)
at org.apache.spark.mllib.clustering.KMeans$.train(KMeans.scala:338)
at org.apache.spark.mllib.clustering.KMeans$.train(KMeans.scala:348)
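
For reference, a minimal sketch of the pipeline that triggers this (paths and parameters are placeholders, and a plain whitespace split stands in for the Lucene EnglishAnalyzer):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.mllib.clustering.KMeans
    import org.apache.spark.mllib.feature.{HashingTF, IDF}

    object TfIdfKMeansRepro {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("TfIdfKMeansRepro"))

        // Placeholder path; the real job tokenizes each document with a
        // Lucene EnglishAnalyzer, approximated here by a whitespace split.
        val docs = sc.textFile("hdfs:///corpus/*.txt")
          .map(_.toLowerCase.split("\\s+").toSeq)

        val tf = new HashingTF().transform(docs)   // default 2^20 features, sparse
        tf.cache()
        val tfidf = new IDF().fit(tf).transform(tf)

        // The OOM is thrown inside train(), during k-means|| initialization.
        val model = KMeans.train(tfidf.coalesce(8), 1000, 20)
      }
    }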

1 Answer


After some investigation, it turns out that the problem is related to the new HashingTF().transform(v) method. Although creating sparse vectors with the hashing trick is really helpful (especially when the number of features is unknown), the vectors must remain sparse. The default size of HashingTF vectors is 2^20. With 64-bit doubles, each vector therefore needs 2^20 × 8 bytes ≈ 8 MB once converted to a dense vector, regardless of any dimensionality reduction we might apply afterwards.

Sadly, KMeans uses the toDense method internally (at least for the cluster centers), which triggers the OutOfMemoryError: imagine k = 1000, where runs × k centers are densified at ~8 MB apiece, i.e. about 8 GB of heap for a single run. The relevant MLlib source:

  private def initRandom(data: RDD[BreezeVectorWithNorm]) : Array[Array[BreezeVectorWithNorm]] = {
    val sample = data.takeSample(true, runs * k, new XORShiftRandom().nextInt()).toSeq
    Array.tabulate(runs)(r => sample.slice(r * k, (r + 1) * k).map { v =>
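      // every sampled sparse vector is densified here (~8 MB each with the default 2^20 features)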
      new BreezeVectorWithNorm(v.vector.toDenseVector, v.norm)
    }.toArray)
  }
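
Until MLlib keeps the centers sparse, one possible mitigation (a sketch, assuming a smaller feature space is acceptable for the corpus; tokenizedDocs stands for the RDD[Seq[String]] produced by the analysis step) is to shrink the HashingTF dimensionality so the densified centers fit in the heap:

    import org.apache.spark.mllib.feature.HashingTF

    // 2^18 features instead of the default 2^20: a densified center then
    // costs 2^18 * 8 B = 2 MB, so k = 1000 centers need ~2 GB instead of ~8 GB.
    val hashingTF = new HashingTF(1 << 18)
    val tf = hashingTF.transform(tokenizedDocs)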
answered 2014-10-21T19:38:49.143