I want to use the tm package in R to stem the documents in a corpus of plain text documents. When I apply the SnowballStemmer function to all the documents of the corpus, only the last word of each document gets stemmed.
library(tm)
library(Snowball)
library(RWeka)
library(rJava)
path <- "C:/path/to/directory"
corp <- Corpus(DirSource(path),
               readerControl = list(reader = readPlain, language = "en_US",
                                    load = TRUE))
tm_map(corp, SnowballStemmer)  # stemDocument has the same problem
I think this has to do with the way the documents are read into the corpus. To illustrate this with some simple examples:
> vec<-c("running runner runs","happyness happies")
> stemDocument(vec)
[1] "running runner run" "happyness happi"
> vec2<-c("running","runner","runs","happyness","happies")
> stemDocument(vec2)
[1] "run" "runner" "run" "happy" "happi" <-
> corp<-Corpus(VectorSource(vec))
> corp<-tm_map(corp, stemDocument)
> inspect(corp)
A corpus with 2 text documents
The metadata consists of 2 tag-value pairs and a data frame
Available tags are:
create_date creator
Available variables in the data frame are:
MetaID
[[1]]
run runner run
[[2]]
happy happi
> corp2<-Corpus(DirSource(path), readerControl=list(reader=readPlain, language="en_US", load=TRUE))
> corp2<-tm_map(corp2, stemDocument)
> inspect(corp2)
A corpus with 2 text documents
The metadata consists of 2 tag-value pairs and a data frame
Available tags are:
create_date creator
Available variables in the data frame are:
MetaID
$`1.txt`
running runner runs
$`2.txt`
happyness happies
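If the problem really is that the stemmers only ever see the last word of each document, the same word-by-word wrapper sketched above could in principle be mapped over the corpus instead of stemDocument. This assumes tm_map hands each document to the function as a character string, as it appears to do in the tm version used here; in newer tm versions the function would need to be wrapped in content_transformer():

corp2 <- tm_map(corp2, stemString)   # stemString is the word-by-word helper sketched above
inspect(corp2)                       # every word in each file should now be stemmed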