
I'm trying to use quanteda on a corpus in R, but I get:

Error in data.frame(texts = x, row.names = names(x), check.rows = TRUE,  : 
  duplicate row.names: character(0)

I don't have much experience with this. Here is a download of the dataset: https://www.dropbox.com/s/ho5tm8lyv06jgxi/TwitterSelfDriveShrink.csv?dl=0

Here is the code:

library(tm)

tweets = read.csv("TwitterSelfDriveShrink.csv", stringsAsFactors=FALSE)
corpus = Corpus(VectorSource(tweets$Tweet))
corpus = tm_map(corpus, tolower)
corpus = tm_map(corpus, PlainTextDocument)
corpus <- tm_map(corpus, removePunctuation)
corpus = tm_map(corpus, removeWords, c(stopwords("english")))
corpus = tm_map(corpus, stemDocument)

library(quanteda)
quanteda.corpus <- corpus(corpus)

2 Answers


The processing you're doing with tm is preparing an object *for tm*, and quanteda doesn't know what to do with it... quanteda performs all of these steps itself; see help("dfm") for the options.

If you try the following, you should be able to move forward:

dfm(tweets$Tweet, verbose = TRUE, toLower = TRUE, removeNumbers = TRUE,
    removePunct = TRUE, removeTwitter = TRUE, language = "english",
    ignoredFeatures = stopwords("english"), stem = TRUE)

## Creating a dfm from a character vector ...
## ... lowercasing
## ... tokenizing
## ... indexing documents: 6,943 documents
## ... indexing features: 15,164 feature types
## ... removed 161 features, from 174 supplied (glob) feature types
## ... stemming features (English), trimmed 2175 feature variants
## ... created a 6943 x 12828 sparse dfm
## ... complete. 
## Elapsed time: 0.756 seconds.

HTH
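For what it's worth, the call above uses the 2016-era quanteda API; in current quanteda releases (v2+), the cleanup options have moved from dfm() into tokens(), and removeTwitter no longer exists as an argument. A rough sketch of the equivalent pipeline under that newer API (argument names may still shift between versions):

```r
library(quanteda)

# tokenize with cleanup options; handling of @handles/#hashtags is left
# to the tokenizer in newer versions (no removeTwitter argument)
toks <- tokens(tweets$Tweet, remove_numbers = TRUE, remove_punct = TRUE)
toks <- tokens_tolower(toks)
toks <- tokens_remove(toks, stopwords("english"))
toks <- tokens_wordstem(toks, language = "english")

# build the document-feature matrix from the processed tokens
dfmat <- dfm(toks)
```

The counts you get will differ slightly from the output shown above, since the tokenizer itself changed between versions.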

answered 2016-04-14T10:30:50.523

There's no need to start with the tm package, or even to use read.csv() at all -- that's what readtext, the quanteda companion package, is for.

So to read in the data, you can send the object created by readtext::readtext() straight to the corpus constructor:

myCorpus <- corpus(readtext("~/Downloads/TwitterSelfDriveShrink.csv", text_field = "Tweet"))
summary(myCorpus, 5)
## Corpus consisting of 6943 documents, showing 5 documents.
## 
## Text Types Tokens Sentences Sentiment Sentiment_Confidence
## text1    19     21         1         2               0.7579
## text2    18     20         2         2               0.8775
## text3    23     24         1        -1               0.6805
## text5    17     19         2         0               1.0000
## text4    18     19         1        -1               0.8820
## 
## Source:  /Users/kbenoit/Dropbox/GitHub/quanteda/* on x86_64 by kbenoit
## Created: Thu Apr 14 09:22:11 2016
## Notes: 

From there, you can perform all of your pre-processing, including stemming, directly in the call to dfm(), including selecting ngrams:

# just unigrams
dfm1 <- dfm(myCorpus, stem = TRUE, remove = stopwords("english"))
## Creating a dfm from a corpus ...
## ... lowercasing
## ... tokenizing
## ... indexing documents: 6,943 documents
## ... indexing features: 15,577 feature types
## ... removed 161 features, from 174 supplied (glob) feature types
## ... stemming features (English), trimmed 2174 feature variants
## ... created a 6943 x 13242 sparse dfm
## ... complete. 
## Elapsed time: 0.662 seconds.

# just bigrams
dfm2 <- dfm(myCorpus, stem = TRUE, remove = stopwords("english"), ngrams = 2)
## Creating a dfm from a corpus ...
## ... lowercasing
## ... tokenizing
## ... indexing documents: 6,943 documents
## ... indexing features: 52,433 feature types
## ... removed 24,002 features, from 174 supplied (glob) feature types
## ... stemming features (English), trimmed 572 feature variants
## ... created a 6943 x 27859 sparse dfm
## ... complete. 
## Elapsed time: 1.419 seconds.
answered 2016-04-14T13:27:24.450