Is there a faster alternative to R quanteda::tokens_lookup()?
I use tokens() from the 'quanteda' R package to tokenize a data frame of 2,000 documents. Each document is 50-600 words long. This takes only a few seconds on my PC (Microsoft R Open 3.4.1, Intel MKL with 2 cores).
I have a dictionary object made from a data frame of almost 600,000 words (TERMS) and their corresponding lemmas (PARENT). There are 80,000 distinct lemmas.
I use tokens_lookup() to replace the elements of the tokens list with the lemmas found in the dictionary, but this takes at least 1.5 hours. That function is too slow for my problem. Is there a faster way that still gives me a tokens list?
I want to transform the tokens list directly, so that I can make ngrams after applying the dictionary. If I only wanted onegrams, I could easily do this by joining the document-feature matrix with the dictionary.
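For the onegram case, this is roughly what I mean (only a sketch, using the tokens and dict_df objects from the example code further down; my_dfm and my_dfm_lemma are names I made up, and I am not sure that assigning colnames() directly on a dfm is supported in every quanteda version):

my_dfm <- dfm(tokens)
# rename every feature to its lemma, keeping features that are not in dict_df unchanged
feat <- featnames(my_dfm)
idx <- match(feat, as.character(dict_df$TERM))
colnames(my_dfm) <- ifelse(is.na(idx), feat, as.character(dict_df$LEMMA)[idx])
# sum up the columns that now share the same lemma name
my_dfm_lemma <- dfm_compress(my_dfm, margin = "features")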
How can I do this faster? For example, by converting the tokens list to a data frame, joining in the dictionary, and converting back to an ordered tokens list? (A rough sketch of what I have in mind follows the example code below.)
Here is example code:
library(quanteda)
myText <- c("the man runs home", "our men ran to work")
myDF <- data.frame(myText)
myDF$myText <- as.character(myDF$myText)
tokens <- tokens(myDF$myText, what = "word",
                 remove_numbers = TRUE, remove_punct = TRUE,
                 remove_symbols = TRUE, remove_hyphens = TRUE)
tokens
# tokens from 2 documents.
# text1 :
# [1] "the" "man" "runs" "home"
#
# text2 :
# [1] "our" "men" "ran" "to" "work"
term <- c("man", "men", "woman", "women", "run", "runs", "ran")
lemma <- c("human", "human", "human", "human", "run", "run", "run")
dict_df <- data.frame(TERM=term, LEMMA=lemma)
dict_df
#    TERM LEMMA
# 1   man human
# 2   men human
# 3 woman human
# 4 women human
# 5   run   run
# 6  runs   run
# 7   ran   run
dict_list <- list( "human" = c("man", "men", "woman", "women") , "run" = c("run", "runs", "ran"))
dict <- quanteda::dictionary(dict_list)
dict
# Dictionary object with 2 key entries.
# - human:
# - man, men, woman, women
# - run:
# - run, runs, ran
tokens_lemma <- tokens_lookup(tokens, dictionary=dict, exclusive = FALSE, capkeys = FALSE)
tokens_lemma
# tokens from 2 documents.
# text1 :
# [1] "the" "human" "run" "home"
#
# text2 :
# [1] "our" "human" "run" "to" "work"
tokens_ngrams <- tokens_ngrams(tokens_lemma, n = 1:2)
tokens_ngrams
# tokens from 2 documents.
# text1 :
# [1] "the" "human" "run" "home" "the_human" "human_run" "run_home"
#
# text2 :
# [1] "our" "human" "run" "to" "work" "our_human" "human_run" "run_to" "to_work"