
I wrote the following code:

import org.apache.spark.mllib.feature.{HashingTF, IDF}
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.rdd.RDD

val hashingTF = new HashingTF()

// one term-frequency vector per article
val tfv: RDD[Vector] = sparkContext.parallelize(articlesList.map { t => hashingTF.transform(t.words) })
tfv.cache()

// fit IDF on the whole corpus, then re-weight the term frequencies
val idf = new IDF().fit(tfv)
val rate: RDD[Vector] = idf.transform(tfv)

How can I get the top 5 keywords from the rate RDD for each item of articlesList?

Update:

articlesList contains objects of this type:

case class ArticleInfo (val url: String, val author: String, val date: String, val keyWords: List[String], val words: List[String])

words contains all the words of the article.

I don't understand the structure of rate; the documentation only says:

@return an RDD of TF-IDF vectors
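
If each element of rate is a Vector whose index for a given word is hashingTF.indexOf(word), then per article I am after something like the lookup below (just a sketch, I'm not sure it is correct; it assumes rate keeps the same order as articlesList):

// look up the TF-IDF weight of each word of the first article
val firstVector = rate.first()
articlesList.head.words.foreach { word =>
  println(s"$word -> ${firstVector(hashingTF.indexOf(word))}")
}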

1 Answer


My solution is:

import scala.collection.mutable

(articlesList, rate.collect()).zipped.foreach { (art, tfidf) =>
  // current best (word, TF-IDF) candidates for this article
  val keywords = new mutable.TreeSet[(String, Double)]
  art.words.foreach { word =>
    val wordHash = hashingTF.indexOf(word)
    val wordTFIDF = tfidf.apply(wordHash)

    if (keywords.size == KEYWORD_COUNT) {
      // already have KEYWORD_COUNT candidates: replace the weakest one if this word scores higher
      val minimum = keywords.minBy(_._2)
      if (minimum._2 < wordTFIDF) {
        keywords.remove(minimum)
        keywords.add((word, wordTFIDF))
      }
    } else {
      keywords.add((word, wordTFIDF))
    }
  }

  // note: keyWords must be declared as a var in ArticleInfo for this assignment to compile
  art.keyWords = keywords.toList.map(_._1)
}
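
A more compact variant of the same idea, just as a sketch (it reuses hashingTF, rate, articlesList and KEYWORD_COUNT from above, and sorts each article's distinct words by weight instead of maintaining a TreeSet):

val topKeywords: List[List[String]] =
  (articlesList, rate.collect()).zipped.map { (art, tfidf) =>
    art.words.distinct
      .map(word => (word, tfidf(hashingTF.indexOf(word)))) // (word, TF-IDF weight)
      .sortBy(-_._2)                                        // highest weight first
      .take(KEYWORD_COUNT)
      .map(_._1)
  }

This keeps ArticleInfo immutable, at the cost of sorting all distinct words of each article.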
Answered 2015-01-06T06:33:09.453