
I want to lemmatize Polish texts using a large external dictionary (in the format of the txt variable below). I am not lucky enough to have Polish supported by the popular text-mining packages. The answer by @DmitriySelivanov, https://stackoverflow.com/a/45790325/3480717, works with a simple vector of texts. (I have also removed Polish diacritics from both the dictionary and the corpus.) The function works well on a vector of texts.

Unfortunately, it does not work with the corpus format produced by tm. Let me paste Dmitriy's code:

library(hashmap)
library(data.table)
txt = 
  "Abadan  Abadanem
  Abadan  Abadanie
  Abadan  Abadanowi
  Abadan  Abadanu
  abadańczyk  abadańczycy
  abadańczyk  abadańczykach
  abadańczyk  abadańczykami
  "
dt = fread(txt, header = F, col.names = c("lemma", "word"))
lemma_hm = hashmap(dt$word, dt$lemma)

lemma_hm[["Abadanu"]]
#"Abadan"


lemma_tokenizer = function(x, lemma_hashmap, 
                           tokenizer = text2vec::word_tokenizer) {
  tokens_list = tokenizer(x)
  for(i in seq_along(tokens_list)) {
    tokens = tokens_list[[i]]
    replacements = lemma_hashmap[[tokens]]
    ind = !is.na(replacements)
    tokens_list[[i]][ind] = replacements[ind]
  }
  tokens_list
}
texts = c("Abadanowi abadańczykach OutOfVocabulary", 
          "abadańczyk Abadan OutOfVocabulary")
lemma_tokenizer(texts, lemma_hm)

#[[1]]
#[1] "Abadan"          "abadańczyk"      "OutOfVocabulary"
#[[2]]
#[1] "abadańczyk"      "Abadan"          "OutOfVocabulary"

Now I want to apply it to a tm corpus "docs". Here is an example of the syntax I would use with the tm package, on a corpus generated by tm:

docs <- tm_map(docs, function(x) lemma_tokenizer(x, lemma_hashmap="lemma_hm"))

Another syntax I tried:

LemmaTokenizer <- function(x) lemma_tokenizer(x, lemma_hashmap="lemma_hm")

docsTDM <-
  DocumentTermMatrix(docs, control = list(wordLengths = c(4, 25), tokenize=LemmaTokenizer))

It throws an error:

 Error in lemma_hashmap[[tokens]] : 
  attempt to select more than one element in vectorIndex 

The function works with a vector of texts, but not with a tm corpus. Thanks in advance for your suggestions (or even pointers to using this function with other text-mining packages, if it will not work with tm).


4 Answers


Try quanteda's dictionary() function, after creating a dictionary that maps each variant (as a dictionary value) to its lemma (as the dictionary key). Below, it looks up your values and then pastes the tokens back into a text. (If you want tokens, you do not need the last paste() operation.)

txt <-  
    "Abadan  Abadanem
Abadan  Abadanie
Abadan  Abadanowi
Abadan  Abadanu
abadańczyk  abadańczycy
abadańczyk  abadańczykach
abadańczyk  abadańczykami"

list_temp <- strsplit(readLines(textConnection(txt)), "\\s+")
list_temp2 <- lapply(list_temp, "[", 2)
names(list_temp2) <- sapply(list_temp, "[", 1)

library("quanteda")
polish_lemma_dict <- dictionary(list_temp2)
# Dictionary object with 7 key entries.
# - Abadan:
#   - abadanem
# - Abadan:
#   - abadanie
# - Abadan:
#   - abadanowi
# - Abadan:
#   - abadanu
# - abadańczyk: 
#   - abadańczycy
# - abadańczyk:
#   - abadańczykach
# - abadańczyk:
#   - abadańczykami

texts <- c("Abadanowi abadańczykach OutOfVocabulary", 
           "abadańczyk Abadan OutOfVocabulary")

Now texts can be converted into tokens, and quanteda's tokens_lookup() function used to replace the dictionary values (the inflected forms) with the dictionary keys (the lemmas). In the last step I paste the tokens back together; you can skip this if you want tokens rather than a full text.

require(magrittr)
texts %>%
    tokens() %>%
    tokens_lookup(dictionary = polish_lemma_dict, exclusive = FALSE, capkeys = FALSE) %>%
    as.character() %>%
    paste(collapse = " ")
# [1] "Abadan abadańczyk OutOfVocabulary abadańczyk Abadan OutOfVocabulary"
Answered 2017-09-08T21:23:40.167

I see two problems here: 1) your custom function returns a list, while it should return a vector of strings; and 2) you are passing a wrong lemma_hashmap argument.
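
As a quick illustration of the second point (a sketch of mine, not part of the original answer): when lemma_hashmap is the string "lemma_hm" rather than the hashmap object, lemma_hashmap[[tokens]] indexes a plain character vector with more than one element, which is exactly what produces the error reported in the question.

lemma_hashmap <- "lemma_hm"                # a string, not a hashmap
tokens <- c("Abadanowi", "abadańczykach")  # more than one token
# lemma_hashmap[[tokens]]
# Error in lemma_hashmap[[tokens]] :
#   attempt to select more than one element in vectorIndex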

A quick workaround for the first problem is to paste() the tokens back together with sapply() before returning the result.

lemma_tokenizer = function(x, lemma_hashmap, 
                           tokenizer = text2vec::word_tokenizer) {
  tokens_list = tokenizer(x)
  for(i in seq_along(tokens_list)) {
    tokens = tokens_list[[i]]
    replacements = lemma_hashmap[[tokens]]
    ind = !is.na(replacements)
    tokens_list[[i]][ind] = replacements[ind]
  }

  # paste together, return a vector
  sapply(tokens_list, (function(i){paste(i, collapse = " ")}))
}

We can run the same example from your post.

texts = c("Abadanowi abadańczykach OutOfVocabulary", 
          "abadańczyk Abadan OutOfVocabulary")
lemma_tokenizer(texts, lemma_hm)
[1] "Abadan abadańczyk OutOfVocabulary" "abadańczyk Abadan OutOfVocabulary"

Now we can use tm_map. Just make sure to pass lemma_hm (i.e., the variable) and not "lemma_hm" (a string) as the argument.

docs <- SimpleCorpus(VectorSource(texts))
out <- tm_map(docs, (function(x) {lemma_tokenizer(x, lemma_hashmap=lemma_hm)}))
out[[1]]$content
[1] "Abadan abadańczyk OutOfVocabulary"
Answered 2017-09-08T20:18:29.660

For Polish lemmatization, please see this script https://github.com/MarcinKosinski/trigeR5/blob/master/R/lematyzacja.R that uses this polmorfologik dictionary https://github.com/MarcinKosinski/trigeR5/tree/master/dicts (stop words are included there as well).

Answered 2017-09-09T20:16:52.950

Here is my complete code, built on the answers. Thanks go to many people; I list all sources at the bottom. I realise it is crude, but it works for me, i.e. I can use a txt dictionary of lemmas and my own stop words to classify Polish texts. Thanks to Damiano Fantini, Dmitriy Selivanov and many others.

#----1. Set up. ----
setwd(dirname(rstudioapi::getActiveDocumentContext()$path))


library(readtext)
library(tm)
library(proxy)
library(stringi)
library(stringr)
library(hashmap)
library(data.table)
library(text2vec)

# For reading n-grams
library(RWeka) #(*)
BigramTokenizer <- 
           function(x) NGramTokenizer(x, Weka_control(min = 1, max = 3)) #(*)


#----2. Read data. ----
stopwordsPL <- as.vector(str_split(readLines("polish.stopwords.text",encoding = "UTF-8"), pattern = " ",simplify = T))


docs <- VCorpus(DirSource(pattern="txt"))
titles <- rownames(summary(docs))

docs <- tm_map(docs, removeWords, words=stopwordsPL)
docs <- tm_map(docs, tolower)
docs <- tm_map(docs, function(x) stri_trans_general(x, "Latin-ASCII"))
docs <- tm_map(docs, removeNumbers)
docs <- tm_map(docs, removePunctuation)
docs <- tm_map(docs, stripWhitespace)

# for English texts it would be simpler
# docs <- tm_map(docs, removeWords, stopwords("english")) #can add other words to remove
# docs <- tm_map(docs, stemDocument, "english")

#====3. Lemmatize ====
# # Dictionary from http://www.lexiconista.com/datasets/lemmatization/
# lemmadict_file = "lemmatization-pl.text"
# dt = fread(file= lemmadict_file, header = F, col.names = c("lemma", "word"), data.table=T, encoding="UTF-8")
# # I threw away Polish letters, maybe changing locales may help.
# dt$lemma <- stri_trans_general(dt$lemma, "Latin-ASCII;lower")
# dt$word <- stri_trans_general(dt$word, "Latin-ASCII;lower")
# dt <- unique(dt)
# 
# # Creating hash dictionary
# lemma_hm = hashmap(dt$word, dt$lemma)
# 
# # Test if it works
# lemma_hm[["mnozyl"]]
# # [1] "mnozyc"
# 
# save_hashmap(lemma_hm, file="lemma_hm", overwrite = TRUE, compress = TRUE)

lemma_hm <- load_hashmap(file="lemma_hm")

lemma_tokenizer = function(x, lemma_hashmap, 
                           tokenizer = text2vec::word_tokenizer) {
  tokens_list = tokenizer(x)
  for(i in seq_along(tokens_list)) {
    tokens = tokens_list[[i]]
    replacements = lemma_hashmap[[tokens]]
    ind = !is.na(replacements)
    tokens_list[[i]][ind] = replacements[ind]
  }
  # paste together, return a vector
  sapply(tokens_list, (function(i){paste(i, collapse = " ")}))
}

docs <- tm_map(docs, (function(x) {lemma_tokenizer(x, lemma_hashmap=lemma_hm)}))
docs <- tm_map(docs, PlainTextDocument)

#====4. Create document term matrix====

docsTDM <-
  DocumentTermMatrix(docs, control = list(wordLengths = c(5, 25),tokenize = BigramTokenizer))  #  tokenize=LemmaTokenizer, tokenize = BigramTokenizer (*)


docsTDM$dimnames

#====5. Remove sparse and common words====

docsTDM <- removeSparseTerms(docsTDM, .90)

# https://stackoverflow.com/questions/25905144/removing-overly-common-words-occur-in-more-than-80-of-the-documents-in-r

removeCommonTerms <- function (x, pct) 
{
  stopifnot(inherits(x, c("DocumentTermMatrix", "TermDocumentMatrix")), 
            is.numeric(pct), pct > 0, pct < 1)
  m <- if (inherits(x, "DocumentTermMatrix")) 
    t(x)
  else x
  t <- table(m$i) < m$ncol * (pct)
  termIndex <- as.numeric(names(t[t]))
  if (inherits(x, "DocumentTermMatrix")) 
    x[, termIndex]
  else x[termIndex, ]
}


docsTDM <-
  removeCommonTerms(docsTDM, .8) #remove terms that are in >=80% of the documents
docsTDM$dimnames


#====6. Cluster data (hclust). ====


docsdissim <- dist(as.matrix(docsTDM), method = "cosine")

docsdissim2 <- as.matrix(docsdissim)
dim(docsdissim2)

rownames(docsdissim2) <- titles
colnames(docsdissim2) <- titles

h <- hclust(docsdissim, method = "ward.D2")

plot(h, labels = titles, sub = "")

# Library hclust with p-values (pvclust)

library(pvclust)

h_pv <- pvclust(docsdissim2, method.hclust = "ward.D2", method.dist ="correlation")

plot(h_pv)

data.frame(cutree(tree = h_pv$hclust, k = 4))


# pvclust provides two types of p-values: AU (Approximately Unbiased) p-value and BP (Bootstrap Probability) value. 
# AU p-value, which is computed by multiscale bootstrap resampling, is a better approximation to unbiased p-value 
# than BP value computed by normal bootstrap resampling.
# AU p-value > 0.95 we can assume the clusters exist and may stably be 
# observed if we increase the number of observations. 
# (http://stat.sys.i.kyoto-u.ac.jp/prog/pvclust/)
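
# (Added sketch, not part of the original script:) pvclust can also highlight,
# directly on the dendrogram, the clusters whose AU p-value exceeds a threshold:
pvrect(h_pv, alpha = 0.95)  # draw rectangles around clusters with AU > 0.95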

#==== Literature:====
# Original article:
# http://www.rexamine.com/2014/06/text-mining-in-r-automatic-categorization-of-wikipedia-articles/

# Updates to make it work after some functions became obsolete:
# https://stackoverflow.com/questions/34423823/r-automatic-categorization-of-wikipedia-articles
# https://stackoverflow.com/questions/34372166/error-using-termdocumentmatrix-and-dist-functions-in-r
#
# Based on that:
# http://brazenly.blogspot.co.uk/2015/02/r-categorization-clustering-of.html
#
# Sparse terms:
# https://stackoverflow.com/questions/28763389/how-does-the-removesparseterms-in-r-work

# Lemmatizing function:
# https://stackoverflow.com/questions/46122591/a-lemmatizing-function-using-a-hash-dictionary-does-not-work-with-tm-package-in
# https://stackoverflow.com/questions/45762559/lemmatization-using-txt-file-with-lemmes-in-r/45790325#45790325
Answered 2017-09-11T11:54:12.430