
I am trying to remove spelling mistakes from the data in my text analysis, so I am using the dictionary feature of the quanteda package. It works fine for unigrams, but it produces unexpected output for bigrams. I am not sure how to handle the spelling mistakes so that they do not sneak into my bigrams and trigrams.

library(quanteda)

ZTestCorp1 <- c("The new law included a capital gains tax, and an inheritance tax.", 
                "New York City has raised a taxes: an income tax and a sales tax.")

ZcObj <- corpus(ZTestCorp1)

mydict <- dictionary(list("the"="the", "new"="new", "law"="law", 
                      "capital"="capital", "gains"="gains", "tax"="tax", 
                      "inheritance"="inheritance", "city"="city")) 

Zdfm1 <- dfm(ZcObj, ngrams=2, concatenator=" ", 
         what = "fastestword", 
         toLower=TRUE, removeNumbers=TRUE,
         removePunct=TRUE, removeSeparators=TRUE,
         removeTwitter=TRUE, stem=FALSE,
         ignoredFeatures=NULL,
         language="english", 
         dictionary=mydict, valuetype="fixed")

wordsFreq1 <- colSums(sort(Zdfm1))

Current output:

> wordsFreq1
    the         new         law     capital       gains         tax inheritance        city 
      0           0           0           0           0           0           0           0 

Without the dictionary, the output looks like this:

> wordsFreq
    tax and         the new         new law    law included      included a       a capital 
          2               1               1               1               1               1 
capital gains       gains tax          and an  an inheritance inheritance tax        new york 
          1               1               1               1               1               1 
  york city        city has      has raised        raised a         a taxes        taxes an 
          1               1               1               1               1               1 
  an income      income tax           and a         a sales       sales tax 
          1               1               1               1               1

Expected bigrams:

The new
new law
law capital
capital gains
gains tax
tax inheritance
inheritance city  

P.S. I assumed that tokenization happens after the dictionary matching, but from the results I am seeing, that does not appear to be the case.

On the other hand, I tried creating my dictionary object as

mydict <- dictionary(list(mydict=c("the", "new", "law", "capital", "gains", 
                      "tax", "inheritance", "city"))) 

but it did not work, so I had to use the approach above, which I do not think is efficient.

Update: adding the output based on Ken's solution:

> (myDfm1a <- dfm(ZcObj, verbose = FALSE, ngrams=2, 
+                keptFeatures = c("the", "new", "law", "capital", "gains",  "tax", "inheritance", "city")))
Document-feature matrix of: 2 documents, 14 features.
2 x 14 sparse Matrix of class "dfmSparse"
       features
docs    the_new new_law law_included a_capital capital_gains gains_tax tax_and an_inheritance
  text1       1       1            1         1             1         1       1              1
  text2       0       0            0         0             0         0       1              0
       features
docs    inheritance_tax new_york york_city city_has income_tax sales_tax
  text1               1        0         0        0          0         0
  text2               0        1         1        1          1         1

1 Answer


Updated 2017-12-21 for newer versions of quanteda

Glad to see you are using the package! I think you are struggling with two issues. The first is how to apply feature selection before forming the ngrams. The second is how to define feature selection in quanteda more generally.

First issue: how to apply feature selection before forming the ngrams. Here you have defined a dictionary to do this. (As I show below, that is not actually necessary here.) You want to remove all terms that are not on your selection list, and then form the bigrams. quanteda does not do this by default, because it is not the standard sense of a "bigram", in which words are collocated within some window strictly defined by adjacency. For example, in your expected results, law capital is not a pair of adjacent terms, which is the usual definition of a bigram.
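As an aside, this ordering is exactly what produces the all-zero counts in your original output: if the bigrams are formed first, none of them matches a single-word dictionary value. A quick check (not part of the original answer; toksAll and toksNgramsFirst are illustrative names), reusing ZcObj and mydict from the question:

# hypothetical check: form the bigrams first, then try to select with the
# single-word dictionary -- no bigram equals a single dictionary word,
# so nothing is kept
toksAll <- tokens_tolower(tokens(ZcObj, remove_punct = TRUE))
toksNgramsFirst <- tokens_ngrams(toksAll, n = 2, concatenator = " ")
tokens_select(toksNgramsFirst, mydict, selection = "keep")
## expected: a tokens object with no tokens left in either document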

However, we can override this behaviour by constructing the document-feature matrix more "manually".

First, tokenize the texts.

# tokenize the original
toks <- tokens(ZcObj, remove_punct = TRUE, remove_numbers = TRUE) %>%
  tokens_tolower()
toks
## tokens object from 2 documents.
## text1 :
##  [1] "the"         "new"         "law"         "included"    "a"           "capital"     "gains"       "tax"         "and"         "an"          "inheritance" "tax"        
## 
## text2 :
##  [1] "new"    "york"   "city"   "has"    "raised" "a"      "taxes"  "an"     "income" "tax"    "and"    "a"      "sales"  "tax"  

Now we apply your dictionary mydict to the tokenized texts using tokens_select():

(toksDict <- tokens_select(toks, mydict, selection = "keep"))
## tokens object from 2 documents.
## text1 :
##  [1] "the"         "new"         "law"         "capital"     "gains"       "tax"         "inheritance" "tax"        
## 
## text2 :
##  [1] "new"  "city" "tax"  "tax" 

From this selected set of tokens, we can now form the bigrams (or we could supply toksDict directly to dfm(); there is a short sketch of that option further below):

(toks2 <- tokens_ngrams(toksDict, n = 2, concatenator = " "))
## tokens object from 2 documents.
## text1 :
##  [1] "the new"         "new law"         "law capital"     "capital gains"   "gains tax"       "tax inheritance" "inheritance tax"
## 
## text2 :
##  [1] "new city" "city tax" "tax tax" 

# now create the dfm
(myDfm2 <- dfm(toks2))
## Document-feature matrix of: 2 documents, 10 features.
## 2 x 10 sparse Matrix of class "dfm"
##        features
## docs    the new new law law capital capital gains gains tax tax inheritance inheritance tax new city city tax tax tax
##   text1       1       1           1             1         1               1               1        0        0       0
##   text2       0       0           0             0         0               0               0        1        1       1
topfeatures(myDfm2)
#     the new         new law     law capital   capital gains       gains tax tax inheritance inheritance tax        new city        city tax         tax tax 
#           1               1               1               1               1               1               1               1               1               1 

The feature list is now very close to the one you wanted.
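
If all you needed were unigram counts of the selected words, toksDict could (as mentioned above) be passed straight to dfm() instead of going through tokens_ngrams(); a quick sketch, output not shown:

# hypothetical shortcut: unigram dfm built directly from the selected tokens
dfm(toksDict)
## gives per-document counts of the kept words only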

The second issue is why your dictionary approach seems inefficient. That is because you are creating a dictionary to perform feature selection, but not really using it as a dictionary: a dictionary in which every key is equal to itself as its only value is not really a dictionary at all. Just give it a character vector of the tokens to select, and it works fine, for example:

(myDfm1 <- dfm(ZcObj, verbose = FALSE, 
               keptFeatures = c("the", "new", "law", "capital", "gains", "tax", "inheritance", "city")))
## Document-feature matrix of: 2 documents, 8 features.
## 2 x 8 sparse Matrix of class "dfm"
##        features
## docs    the new law capital gains tax inheritance city
##   text1   1   1   1       1     1   2           1    0
##   text2   0   1   0       0     0   2           0    1
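
If you prefer to skip the dictionary object in the bigram pipeline as well, the same character vector should also work as the pattern for tokens_select(); a minimal sketch (keepTerms and toksKept are illustrative names), assuming it selects the same tokens as the dictionary above since the values are identical:

# hypothetical variant: select with a plain character vector, then form bigrams
keepTerms <- c("the", "new", "law", "capital", "gains", "tax", "inheritance", "city")
toksKept  <- tokens_select(toks, keepTerms, selection = "keep")
dfm(tokens_ngrams(toksKept, n = 2, concatenator = " "))
## should reproduce myDfm2 from the first part of the answer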