My documents are as follows:
doc1 = very good, very bad, you are great
doc2 = very bad, good restaurent, nice place to visit
I want my corpus to be split up so that my final DocumentTermMatrix becomes:
      terms
docs  very good  very bad  you are great  good restaurent  nice place to visit
doc1  tf-idf     tf-idf    tf-idf         0                0
doc2  0          tf-idf    0              tf-idf           tf-idf
I know how to compute a DocumentTermMatrix for individual words, but I don't know how to build a corpus in R in which the terms are separated per phrase. An R solution is preferred, but a Python solution is also welcome.
What I have tried:
> library(tm)
> library(RWeka)
> BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 1, max = 3))
> options(mc.cores=1)
> texts <- c("very good, very bad, you are great","very bad, good restaurent, nice place to visit")
> corpus <- Corpus(VectorSource(texts))
> a <- TermDocumentMatrix(corpus, control = list(tokenize = BigramTokenizer))
> as.matrix(a)
What I am getting is:
                         Docs
Terms                    1 2
  bad good restaurent    0 1
  bad you are            1 0
  good restaurent nice   0 1
  good very bad          1 0
  nice place to          0 1
  place to visit         0 1
  restaurent nice place  0 1
  very bad good          0 1
  very bad you           1 0
  very good very         1 0
  you are great          1 0
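The combinations appear because NGramTokenizer discards the commas and emits every contiguous run of one to three words, so the n-grams cross phrase boundaries. Calling the tokenizer directly (a quick check, using the same RWeka setup as above) makes this visible:

> NGramTokenizer("very good, very bad, you are great", Weka_control(min = 1, max = 3))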
What I want is not these combinations of words, but only the phrases shown in my matrix above.
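One possible approach (a minimal sketch, assuming the comma is a reliable phrase delimiter in the data; PhraseTokenizer is a hypothetical helper defined here, not a tm function): instead of an n-gram tokenizer, pass a tokenizer that splits each document on commas, so every comma-delimited phrase becomes a single term.

library(tm)

# Hypothetical tokenizer: treat each comma-delimited chunk as one term.
PhraseTokenizer <- function(x) {
  trimws(unlist(strsplit(as.character(x), ",", fixed = TRUE)))
}

texts <- c("very good, very bad, you are great",
           "very bad, good restaurent, nice place to visit")
# VCorpus rather than Corpus: recent tm versions build a SimpleCorpus
# from Corpus(VectorSource(...)), which may ignore custom tokenizers.
corpus <- VCorpus(VectorSource(texts))

# wordLengths = c(1, Inf) keeps phrases of any length; weightTfIdf
# applies the tf-idf weighting from the desired matrix.
dtm <- DocumentTermMatrix(corpus, control = list(
  tokenize = PhraseTokenizer,
  weighting = weightTfIdf,
  wordLengths = c(1, Inf)
))
as.matrix(dtm)

One caveat: with only two documents, a phrase that occurs in both of them ("very bad") has idf = log2(2/2) = 0, so its tf-idf entries come out as 0 rather than the nonzero values sketched in the desired matrix; drop the weighting option to get raw term-frequency counts instead.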