
Everything works fine when I run example-6-llda-learn.scala as follows:

val source = CSVFile("pubmed-oa-subset.csv") ~> IDColumn(1);

val tokenizer = {
  SimpleEnglishTokenizer() ~>            // tokenize on space and punctuation
  CaseFolder() ~>                        // lowercase everything
  WordsAndNumbersOnlyFilter() ~>         // ignore non-words and non-numbers
  MinimumLengthFilter(3)                 // take terms with >=3 characters
}

val text = {
  source ~>                              // read from the source file
  Column(4) ~>                           // select column containing text
  TokenizeWith(tokenizer) ~>             // tokenize with tokenizer above
  TermCounter() ~>                       // collect counts (needed below)
  TermMinimumDocumentCountFilter(4) ~>   // filter terms in <4 docs
  TermDynamicStopListFilter(30) ~>       // filter out 30 most common terms
  DocumentMinimumLengthFilter(5)         // take only docs with >=5 terms
}

// define fields from the dataset we are going to slice against
val labels = {
  source ~>                              // read from the source file
  Column(2) ~>                           // take column two, the year
  TokenizeWith(WhitespaceTokenizer()) ~> // turns label field into an array
  TermCounter() ~>                       // collect label counts
  TermMinimumDocumentCountFilter(10)     // filter labels in < 10 docs
}

val dataset = LabeledLDADataset(text, labels);

// define the model parameters
val modelParams = LabeledLDAModelParams(dataset);

// Name of the output model folder to generate
val modelPath = file("llda-cvb0-"+dataset.signature+"-"+modelParams.signature);

// Trains the model, writing to the given output path
TrainCVB0LabeledLDA(modelParams, dataset, output = modelPath, maxIterations = 1000);
// or could use TrainGibbsLabeledLDA(modelParams, dataset, output = modelPath, maxIterations = 1500);

But training fails when I change the last line from: TrainCVB0LabeledLDA(modelParams, dataset, output = modelPath, maxIterations = 1000); to: TrainGibbsLabeledLDA(modelParams, dataset, output = modelPath, maxIterations = 1500);

Meanwhile, the CVB0 method consumes a huge amount of memory: training on a corpus of 10,000 documents, each with about 10 labels, consumes 30 GB of memory.


1 Answer


I ran into the same situation, and I do believe it is a bug. Check GibbsLabeledLDA.scala in the edu.stanford.nlp.tmt.model.llda package under src/main/scala, starting at line 204:

val z = doc.labels(zI);

val pZ = (doc.theta(z)+topicSmoothing(z)) *
         (countTopicTerm(z)(term)+termSmooth) /
         (countTopic(z)+termSmoothDenom);

doc.labels is self-explanatory, and doc.theta records the distribution (actually the counts) of its labels; it has the same size as doc.labels.

zI is the index variable iterating over doc.labels, while z takes the actual label number. Here is the problem: a document may have only a single label (say, 1000), so zI is 0 while z is 1000, and doc.theta(z) then goes out of bounds.

I think the fix is to change doc.theta(z) to doc.theta(zI).
(I am still checking whether the results make sense; in any case, this bug makes me less confident in this toolbox.)
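The index-vs-value confusion can be reproduced with a minimal standalone sketch (the names theta, labels, z and zI mirror the snippet above, but this is illustrative code, not the toolbox's actual data structures):

```scala
import scala.util.Try

object ThetaIndexBug {
  def main(args: Array[String]): Unit = {
    // A document with a single label whose *value* is large.
    val labels = Array(1000)                     // stands in for doc.labels
    val theta  = Array.fill(labels.length)(0.0)  // doc.theta has the same size as doc.labels

    val zI = 0          // index into labels
    val z  = labels(zI) // actual label number: 1000

    // Buggy access: theta(z) = theta(1000), out of bounds for a size-1 array.
    println(s"theta(z) succeeds:  ${Try(theta(z)).isSuccess}")  // false

    // Fixed access: theta(zI) = theta(0), always in range.
    println(s"theta(zI) succeeds: ${Try(theta(zI)).isSuccess}") // true
  }
}
```

Since theta is sized by the number of labels the document has, it must be indexed by the label's position (zI), never by the label's numeric id (z); the two only coincide when label ids happen to be small dense integers.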

answered 2014-05-22T09:21:07.643