
I'm looking for an efficient way to build a term co-occurrence matrix for (each of) the target word(s) in a corpus, such that each occurrence of a word constitutes its own vector (row) in the tcm, where the columns are the context words (i.e., a token-based co-occurrence model). This is in contrast with the more common approach in vector semantics, where each term (type) gets one row and one column in a symmetric tcm, and values are aggregated across the (co-)occurrences of the tokens of each type.

Obviously this could be done from scratch with base R functionality, or hacked by filtering the tcm generated by one of the existing packages that perform these operations, but the corpus data I'm working with is rather large (millions of words) - and there are already nice corpus/NLP packages for R that do these tasks efficiently and store the results in memory-friendly sparse matrices - e.g. text2vec (function tcm), quanteda (fcm), and tidytext (cast_dtm). So it seems pointless to reinvent the wheel (in terms of iterators, hashing, and so on). But I also cannot find a straightforward way to create a token-based tcm with any of these; hence this question.

Minimal example:

  library(text2vec)
  library(Matrix)
  library(magrittr)

  # default approach to tcm with text2vec:
  corpus = strsplit(c("here is a short document", "here is a different short document"), " ")
  it = itoken(corpus) 
  tcm = create_vocabulary(it)  %>% vocab_vectorizer() %>% create_tcm(it, . , skip_grams_window = 2, weights = rep(1,2))

  # results in this:
  print(as.matrix(forceSymmetric(tcm, "U")))

            different here short document is a
  different         0    0     1        1  1 1
  here              0    0     0        0  2 2
  short             1    0     0        2  1 2
  document          1    0     2        0  0 1
  is                1    2     1        0  0 2
  a                 1    2     2        1  2 0

Attempt to get a token-based model for the target word "short":

  i=0
  corpus = lapply(corpus, function(x) 
   ifelse(x == "short", {i<<-i+1;paste0("short", i)}, x  ) 
   ) # appends index to each occurrence so itoken distinguishes them
  it = itoken(corpus) 
  tcm = create_vocabulary(it)  %>% vocab_vectorizer() %>% create_tcm(it, . , skip_grams_window = 2, weights = rep(1,2))
  attempt = as.matrix(forceSymmetric(tcm, "U") %>% 
   .[grep("^short", rownames(.)), -grep("^short", colnames(.))] 
   ) # filters the resulting full tcm

  # yields intended result but is hacky/slow:
  print(attempt)

         different here document is a
  short2         1    0        1  0 1
  short1         0    0        1  1 1

What is a better/faster alternative to this approach for deriving a token-based tcm like in the last example? (possibly using one of the R packages that already do type-based tcms)


1 Answer


quanteda's fcm is a very efficient way to create feature co-occurrence matrices, either at the document level or within a user-defined context. This results in a sparse, symmetric feature-by-feature matrix. But it sounds like you want each unique occurrence of the target word to be its own row, with the words around it as the features.
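For contrast with the token-based result derived below, here is a minimal sketch of that standard type-based fcm on the question's toy corpus. This assumes quanteda v3+, where fcm() operates on a tokens object rather than on raw character vectors:

```r
library("quanteda")
txt <- c("here is a short document", "here is a different short document")

# type-level co-occurrence counts within a +/- 2 word window;
# one row and one column per type, and by default only the
# upper triangle of the symmetric matrix is stored (tri = TRUE)
fcm(tokens(txt), context = "window", window = 2)
```

This yields one 6 x 6 matrix for the six types in the corpus, which is exactly the aggregation over tokens that the question wants to avoid for the target word.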

It looks from the example that you want a context window of +/- 2 words, so I have done that for the target word "short".

First, we get the context using keywords-in-context:

library("quanteda")
txt <- c("here is a short document", "here is a different short document")

(shortkwic <- kwic(txt, "short", window = 2))
#                                          
# [text1, 4]        is a | short | document
# [text2, 5] a different | short | document

Then create a corpus from the context, with the keyword as a unique document name:

shortcorp <- corpus(shortkwic, split_context = FALSE, extract_keyword = TRUE)
docnames(shortcorp) <- make.unique(docvars(shortcorp, "keyword"))
texts(shortcorp)
#                 short                      short.1 
# "is a short document" "a different short document" 

Then create a dfm, selecting all words, but removing the target:

dfm(shortcorp) %>%
  dfm_select(dfm(txt)) %>%
  dfm_remove("short")
# Document-feature matrix of: 2 documents, 5 features (40% sparse).
# 2 x 5 sparse Matrix of class "dfm"
#          features
# docs      here is a document different
#   short      0  1 1        1         0
#   short.1    0  0 1        1         1
Answered 2018-10-23T18:12:50.777