
I'm trying to do some stemming in R, but it only seems to work on single documents. My end goal is a term-document matrix that shows the frequency of each term in each document.

Here is an example:

require(RWeka)
require(tm)
require(Snowball)

worder1<- c("I am taking","these are the samples",
"He speaks differently","This is distilled","It was placed")
df1 <- data.frame(id=1:5, words=worder1)

> df1
  id                 words
1  1           I am taking
2  2 these are the samples
3  3 He speaks differently
4  4     This is distilled
5  5         It was placed

This approach works for the stemming part, but not for the term-document matrix part:

> corp1 <- Corpus(VectorSource(df1$words))
> inspect(corp1)
A corpus with 5 text documents

The metadata consists of 2 tag-value pairs and a data frame
Available tags are:
  create_date creator 
Available variables in the data frame are:
  MetaID 

[[1]]
I am taking

[[2]]
these are the samples

[[3]]
He speaks differently

[[4]]
This is distilled

[[5]]
It was placed

> corp1 <- tm_map(corp1, SnowballStemmer)
> inspect(corp1)
A corpus with 5 text documents

The metadata consists of 2 tag-value pairs and a data frame
Available tags are:
  create_date creator 
Available variables in the data frame are:
  MetaID 

[[1]]
[1] I am tak

[[2]]
[1] these are the sampl

[[3]]
[1] He speaks differ

[[4]]
[1] This is distil

[[5]]
[1] It was plac

>  class(corp1)
[1] "VCorpus" "Corpus"  "list"   
> tdm1 <- TermDocumentMatrix(corp1)
Error in UseMethod("Content", x) : 
  no applicable method for 'Content' applied to an object of class "character"

So I tried creating the term-document matrix first, but this time the words do not get stemmed:

> corp1 <- Corpus(VectorSource(df1$words))
> tdm1 <- TermDocumentMatrix(corp1, control=list(stemDocument=TRUE))
>  as.matrix(tdm1)
             Docs
Terms         1 2 3 4 5
  are         0 1 0 0 0
  differently 0 0 1 0 0
  distilled   0 0 0 1 0
  placed      0 0 0 0 1
  samples     0 1 0 0 0
  speaks      0 0 1 0 0
  taking      1 0 0 0 0
  the         0 1 0 0 0
  these       0 1 0 0 0
  this        0 0 0 1 0
  was         0 0 0 0 1

The words here are clearly not stemmed.

Any suggestions?


4 Answers


The RTextTools package on CRAN lets you do this.

library(RTextTools)
worder1<- c("I am taking","these are the samples",
"He speaks differently","This is distilled","It was placed")
df1 <- data.frame(id=1:5, words=worder1)

matrix <- create_matrix(df1, stemWords=TRUE, removeStopwords=FALSE, minWordLength=2)
colnames(matrix) # SEE THE STEMMED TERMS

This returns a DocumentTermMatrix that can be used with the tm package. You can use other parameters (e.g. removing stopwords, changing the minimum word length, using stemmers for other languages) to get the results you need. Displaying the example with as.matrix produces the following term matrix:

                         Terms
Docs                      am are differ distil he is it place sampl speak take the these this was
  1 I am taking            1   0      0      0  0  0  0     0     0     0    1   0     0    0   0
  2 these are the samples  0   1      0      0  0  0  0     0     1     0    0   1     1    0   0
  3 He speaks differently  0   0      1      0  1  0  0     0     0     1    0   0     0    0   0
  4 This is distilled      0   0      0      1  0  1  0     0     0     0    0   0     0    1   0
  5 It was placed          0   0      0      0  0  0  1     1     0     0    0   0     0    0   1
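The same call can be tuned further. For instance, a variant that also drops English stopwords and lowercases the text might look like the sketch below (the parameter values are illustrative, not taken from the answer above):

matrix2 <- create_matrix(df1, language="english", stemWords=TRUE,
                         removeStopwords=TRUE,   # drops "the", "is", "was", ...
                         minWordLength=2, toLower=TRUE)
colnames(matrix2)  # stemmed terms, stopwords removed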
Answered on 2012-08-14T19:46:59.923

This works as expected in R with version 0.6 of tm. You had a few small mistakes that kept the stemming from working properly; perhaps they come from an older version of tm? In any case, here is how to make it work:

require(RWeka)
require(tm)

The stemming package you need is not Snowball but SnowballC:

require(SnowballC)

worder1<- c("I am taking","these are the samples",
            "He speaks differently","This is distilled","It was placed")
df1 <- data.frame(id=1:5, words=worder1)
corp1 <- Corpus(VectorSource(df1$words))
inspect(corp1)

Change SnowballStemmer to stemDocument on the next line, like this:

corp1 <- tm_map(corp1, stemDocument)
inspect(corp1)

As expected, the words are stemmed:

<<VCorpus (documents: 5, metadata (corpus/indexed): 0/0)>>

[[1]]
<<PlainTextDocument (metadata: 7)>>
I am take

[[2]]
<<PlainTextDocument (metadata: 7)>>
these are the sampl

[[3]]
<<PlainTextDocument (metadata: 7)>>
He speak differ

[[4]]
<<PlainTextDocument (metadata: 7)>>
This is distil

[[5]]
<<PlainTextDocument (metadata: 7)>>
It was place
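As a side note (an observation about newer tm releases, not something this answer needs): built-in transformations such as stemDocument can be passed to tm_map directly, while an arbitrary character-level function has to be wrapped in content_transformer, for example:

corp_lower <- tm_map(corp1, content_transformer(tolower))  # only needed for non-tm functions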

Now build the term-document matrix:

corp1 <- Corpus(VectorSource(df1$words))

Change stemDocument to stemming in the control list:

tdm1 <- TermDocumentMatrix(corp1, control=list(stemming=TRUE))
as.matrix(tdm1)

As expected, we get a stemmed TDM:

        Docs
Terms    1 2 3 4 5
  are    0 1 0 0 0
  differ 0 0 1 0 0
  distil 0 0 0 1 0
  place  0 0 0 0 1
  sampl  0 1 0 0 0
  speak  0 0 1 0 0
  take   1 0 0 0 0
  the    0 1 0 0 0
  these  0 1 0 0 0
  this   0 0 0 1 0
  was    0 0 0 0 1
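Since the stated end goal is the frequency of each term, the stemmed TDM built above can also be collapsed into overall counts, for example:

freqs <- sort(rowSums(as.matrix(tdm1)), decreasing=TRUE)  # total frequency per stemmed term
freqs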

There you go. Perhaps reading the tm documentation more carefully would have saved you some time ;)

Answered on 2014-11-02T06:15:50.550

Yes, to extract the word stems of the documents in a corpus you need the RWeka, Snowball, and tm packages.

Use the following instructions:

> library(tm)
# Set your working directory first; suppose you have set it to "F:/St",
# then the next command is:
> a <- Corpus(DirSource("/st"),
              readerControl=list(language="english"))  # "/st" is the path to your directory
> a <- tm_map(a, stemDocument, language="english")
> inspect(a)
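To get from this stemmed corpus to the term-document matrix the question asks for, one more step would be needed (a sketch, reusing the corpus a from above):

> tdm <- TermDocumentMatrix(a)
> as.matrix(tdm)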

You should find the result you want.

Answered on 2012-08-23T05:12:22.440

Another solution is to hard-code it. It simply splits the text, stems each word, and then reconcatenates:

library(SnowballC)

# 'veri' is a data frame whose 2nd column holds the raw text;
# the stemmed text is written to its 4th column.
i=1
# Snowball stemming
while(i<=nrow(veri)){
  metin=veri[i,2]
  stemmed_metin=""
  parcali=unlist(strsplit(metin,split=" "))  # split the text into words
  for(klm in parcali){
    stemmed_klm=wordStem(klm,language = "turkish")  # stem word by word
    stemmed_metin=sprintf("%s %s",stemmed_metin,stemmed_klm)  # reconcatenate
  }

  veri[i,4]=stemmed_metin  # write to new column

  i=i+1
}
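For comparison, the same split-stem-rejoin idea can be written without the explicit loop. The sketch below assumes, as above, a data frame veri with the text in column 2 and the result in column 4, and keeps the Turkish stemmer:

library(SnowballC)

# Stem every word of a sentence and paste it back together
stem_sentence <- function(s, lang="turkish") {
  paste(wordStem(unlist(strsplit(s, split=" ")), language=lang), collapse=" ")
}

veri[,4] <- sapply(veri[,2], stem_sentence)  # same result as the while loop above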
Answered on 2017-08-15T13:52:23.470