12

I'm trying to write, in the simplest possible way, a program that counts the occurrences of words in a file in Scala. So far I have this code:

import scala.io.Codec.string2codec
import scala.io.Source
import scala.reflect.io.File

object WordCounter {
    val SrcDestination: String = ".." + File.separator + "file.txt"
    val Word = "\\b([A-Za-z\\-])+\\b".r

    def main(args: Array[String]): Unit = {

        val counter = Source.fromFile(SrcDestination)("UTF-8")
                .getLines
                .map(l => Word.findAllIn(l.toLowerCase()).toSeq)
                .toStream
                .groupBy(identity)
                .mapValues(_.length)

        println(counter)
    }
}

Don't mind the regex. I would like to know how to extract the single words from the sequence retrieved in this line:

map(l => Word.findAllIn(l.toLowerCase()).toSeq)

in order to count each word's occurrences. Currently I'm getting a map with counted sequences of words.
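
For illustration (the object name GroupByLineDemo and the two sample lines are made up, not my real file), this is roughly what happens now: the grouping is done on whole line sequences rather than on single words:

object GroupByLineDemo extends App {
  val Word = "\\b([A-Za-z\\-])+\\b".r
  val lines = Seq("Foo bar", "foo bar") // stand-in for the file contents

  val counter = lines
    .map(l => Word.findAllIn(l.toLowerCase).toList) // one List of words per line
    .groupBy(identity)                              // groups equal lists, not single words
    .map { case (words, group) => words -> group.length }

  println(counter) // Map(List(foo, bar) -> 2), i.e. keyed by line sequences, not words
}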

5 Answers

37

You can split the file's lines into words using the regex "\\W+" (flatMap is lazy, so it doesn't need to load the whole file into memory). To count the occurrences you can fold over a Map[String, Int], updating it with each word (much more memory- and time-efficient than using groupBy):

scala.io.Source.fromFile("file.txt")
  .getLines
  .flatMap(_.split("\\W+"))
  .foldLeft(Map.empty[String, Int]){
     (count, word) => count + (word -> (count.getOrElse(word, 0) + 1))
  }
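
If you then want to look at the result, one possible follow-up (the val name counts and the top-10 cut are just examples) is to sort by count and print the most frequent words:

val counts: Map[String, Int] =
  scala.io.Source.fromFile("file.txt")
    .getLines
    .flatMap(_.split("\\W+"))
    .foldLeft(Map.empty[String, Int]) { (count, word) =>
      count + (word -> (count.getOrElse(word, 0) + 1))
    }

// print the ten most frequent words, highest count first
counts.toSeq.sortBy(-_._2).take(10).foreach { case (word, n) => println(s"$word: $n") }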
Answered 2013-03-18T22:03:40.357
16

I find the following easier to understand:

Source.fromFile("file.txt").
  getLines().
  flatMap(_.split("\\W+")).
  toList.
  groupBy((word: String) => word).
  mapValues(_.length)
Answered 2013-08-17T22:01:20.783
1

I'm not 100% sure what you're asking, but I think I see the problem. Try using flatMap instead of map:

flatMap(l => Word.findAllIn(l.toLowerCase()).toSeq)

This will concatenate all of your sequences together, so that groupBy is done on individual words instead of at the line level.
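
For reference, dropped back into the counter from your question (this reuses the SrcDestination and Word definitions from there), that part would look roughly like this:

val counter = Source.fromFile(SrcDestination)("UTF-8")
        .getLines
        .flatMap(l => Word.findAllIn(l.toLowerCase()).toSeq) // all words from all lines
        .toStream
        .groupBy(identity)                                   // now groups single words
        .mapValues(_.length)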


A note about your regex

I know you said not to worry about your regex, but there are a couple of changes you can make to make it more readable. Here is what you have now:

val Word = "\\b([A-Za-z\\-])+\\b".r

First, you can use Scala's triple-quoted strings so that you don't have to escape the backslashes:

val Word = """\b([A-Za-z\-])+\b""".r

Second, if you put the - at the start of the character class, then you don't need to escape it:

val Word = """\b([-A-Za-z])+\b""".r
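
As a quick, made-up check, the rewritten pattern still picks up hyphenated words the same way as the original:

val Word = """\b([-A-Za-z])+\b""".r
println(Word.findAllIn("A well-known example, with punctuation!").toList)
// prints: List(A, well-known, example, with, punctuation)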
Answered 2013-03-18T22:03:47.843
1

Starting in Scala 2.13, apart from retrieving the words with Source, we can also use the groupMapReduce method, which (as its name suggests) is an equivalent of a groupBy followed by a mapping over the grouped values and a reduce step:

import scala.io.Source

Source.fromFile("file.txt")
  .getLines.to(LazyList)
  .flatMap(_.split("\\W+"))
  .groupMapReduce(identity)(_ => 1)(_ + _)

The groupMapReduce stage, similar to Hadoop's map/reduce logic,

  • groups the words by themselves (identity) (the group part of groupMapReduce)

  • maps each grouped word occurrence to 1 (the map part of groupMapReduce)

  • reduces the values within a group of words (_ + _) by summing them (the reduce part of groupMapReduce).

This is a one-pass version of what could otherwise be written as:

seq.groupBy(identity).mapValues(_.map(_ => 1).reduce(_ + _))
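
For instance, on a small made-up word sequence both expressions produce the same counts:

val seq = Seq("foo", "bar", "foo", "baz", "foo")

val viaGroupMapReduce = seq.groupMapReduce(identity)(_ => 1)(_ + _)
val viaGroupBy        = seq.groupBy(identity).mapValues(_.map(_ => 1).reduce(_ + _)).toMap

println(viaGroupMapReduce)               // e.g. Map(foo -> 3, bar -> 1, baz -> 1)
println(viaGroupMapReduce == viaGroupBy) // true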

Also note the conversion from Iterator to LazyList in order to use a collection which provides groupMapReduce (we don't use a Stream, since starting in Scala 2.13, LazyList is the recommended replacement for Streams).


Based on the same principle, a for-comprehension version is also possible:

(for {
  line <- Source.fromFile("file.txt").getLines.to(LazyList)
  word <- line.split("\\W+")
} yield word)
.groupMapReduce(identity)(_ => 1)(_ + _)
Answered 2018-10-16T21:18:15.160
0

This is what I did. It will chop up a file. A hash map is a good bet for performance and will outperform any kind of sort. There are also more concise sort-and-slice functions in there that you can look at.

import java.io.FileNotFoundException

/**
 * Cohesive static method object for file handling.
 */
object WordCountFileHandler {

  val FILE_FORMAT = "utf-8"

  /**
   * Take input from file. Split on spaces.
   * @param fileLocationAndName string location of file
   * @return option of string iterator
   */
  def apply (fileLocationAndName: String) : Option[Iterator[String]] = {
    apply (fileLocationAndName, " ")
  }

  /**
   * Split on separator parameter.
   * Speculative generality :P
   * @param fileLocationAndName string location of file
   * @param wordSeperator split on this string
   * @return
   */
  def apply (fileLocationAndName: String, wordSeperator: String): Option[Iterator[String]] = {
    try{
      val words = scala.io.Source.fromFile(fileLocationAndName).getLines() //scala io.Source is a bit hackey. No need to close file.

      //Get rid of anything funky... collapse runs of spaces (needed for files like the README.md) so the split doesn't produce empty or glued-together tokens.
      val wordList = words.reduceLeft(_ + wordSeperator + _).replaceAll("[^a-zA-Z\\s]", "").replaceAll(" {2,}", " ").split(wordSeperator)
      //wordList.foreach(println(_))
      wordList.length match {
        case 0 => return None
        case _ => return Some(wordList.toIterator)
      }
    } catch {
      case _:FileNotFoundException => println("file not found: " + fileLocationAndName); return None
      case e:Exception => println("Unknown exception occurred during file handling: \n\n" + e.getStackTrace); return None
    }
  }
}

import collection.mutable

/**
 * Static method object.
 * Takes a processed map and spits out the needed info
 * While a small performance hit is made in not doing this during the word list analysis,
 * this does demonstrate cohesion and open/closed much better.
 * author: jason goodwin
 */
object WordMapAnalyzer {

  /**
   * get input size
   * @param input
   * @return
   */
  def getNumberOfWords(input: mutable.Map[String, Int]): Int = {
    input.size
  }

  /**
   * Should be roughly O(n log n), given merge sort performance is generally about O(6n log2 n + 6n).
   * See below for more performant method.
   * @param input
   * @return
   */

  def getTopCWordsDeclarative(input: mutable.HashMap[String, Int], c: Int): Map[String, Int] = {
    val sortedInput = input.toList.sortWith(_._2 > _._2)
    sortedInput.take(c).toMap
  }

  /**
   * Imperative style is used here for much better performance relative to the above.
   * Growth can be reasoned at linear growth on random input.
   * Probably upper bounded around O(3n + nc) in worst case (ie a sorted input from small to high).
   * @param input
   * @param c
   * @return
   */
  def getTopCWordsImperative(input: mutable.Map[String, Int], c: Int): mutable.Map[String, Int] = {
    var bottomElement: (String, Int) = ("", 0)
    val topList = mutable.HashMap[String, Int]()

    for (x <- input) {
      if (x._2 >= bottomElement._2 && topList.size == c ){
        topList -= (bottomElement._1)
        topList +=((x._1, x._2))
        bottomElement = topList.toList.minBy(_._2)
      } else if (topList.size < c ){
        topList +=((x._1, x._2))
        bottomElement = topList.toList.minBy(_._2)
      }
    }
    //println("Size: " + topList.size)

    topList.asInstanceOf[mutable.Map[String, Int]]
  }
}

object WordMapCountCalculator {

  /**
   * Take a list and return a map keyed by words with a count as the value.
   * @param wordList List[String] to be analysed
   * @return HashMap[String, Int] with word as key and count as value.
   * */

   def apply (wordList: Iterator[String]): mutable.Map[String, Int] = {
    wordList.foldLeft(new mutable.HashMap[String, Int])((acc, word) => {
      acc.get(word) match {
        case Some(x) => acc += (word -> (x + 1))   //if in map already, increment count
        case None    => acc += (word -> 1)         //otherwise, set to 1
      }
    }).asInstanceOf[mutable.Map[String, Int]]
  }
}
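
One possible way to wire these objects together (the driver object name WordCountApp and the path "file.txt" are just placeholders):

object WordCountApp extends App {
  WordCountFileHandler("file.txt") match {
    case Some(words) =>
      val counts = WordMapCountCalculator(words)   // word -> occurrence count
      println("distinct words: " + WordMapAnalyzer.getNumberOfWords(counts))
      println("top 10: " + WordMapAnalyzer.getTopCWordsImperative(counts, 10))
    case None =>
      println("no words could be read")
  }
}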
Answered 2013-03-19T00:19:31.660