
I have been using the tm package to run some text analysis. My problem is with creating a list of words and their associated frequencies.

library(tm)
library(RWeka)

txt <- read.csv("HW.csv", header = TRUE)
df <- do.call("rbind", lapply(txt, as.data.frame))
names(df) <- "text"

myCorpus <- Corpus(VectorSource(df$text))
myStopwords <- c(stopwords('english'),"originally", "posted")
myCorpus <- tm_map(myCorpus, removeWords, myStopwords)

#building the TDM

btm <- function(x) NGramTokenizer(x, Weka_control(min = 3, max = 3))
myTdm <- TermDocumentMatrix(myCorpus, control = list(tokenize = btm))

I typically use the following code to generate a list of words within a frequency range:

frq1 <- findFreqTerms(myTdm, lowfreq=50)

Is there any way to automate this so that we get a data frame with all words and their frequencies?

The other problem I face is converting the term-document matrix into a data frame. When I work with large samples of data, I run into memory errors. Is there a simple solution for this?


6 Answers


Try this:

data("crude")
myTdm <- as.matrix(TermDocumentMatrix(crude))
FreqMat <- data.frame(ST = rownames(myTdm), 
                      Freq = rowSums(myTdm), 
                      row.names = NULL)
head(FreqMat, 10)
#            ST Freq
# 1       "(it)    1
# 2     "demand    1
# 3  "expansion    1
# 4        "for    1
# 5     "growth    1
# 6         "if    1
# 7         "is    2
# 8        "may    1
# 9       "none    2
# 10      "opec    2
answered 2013-11-17T13:11:54.087

I have the following lines in R that can help create word frequencies and put them in a table. It reads a text file in .txt format and builds the word frequencies; I hope this helps anyone interested.

avisos <- scan("anuncio.txt", what = "character", sep = "\n")  # one element per line
avisos1 <- tolower(avisos)
avisos2 <- strsplit(avisos1, "\\W")   # split on non-word characters
avisos3 <- unlist(avisos2)
freq <- table(avisos3)
freq1 <- sort(freq, decreasing = TRUE)
temple.sorted.table <- paste(names(freq1), freq1, sep = "\t")  # "\t", not "\\t", for a real tab
cat("Word\tFREQ", temple.sorted.table, file = "anuncio_freq.txt", sep = "\n")  # write to a new file so the input is not overwritten
answered 2015-05-20T17:18:29.637

Looking at the source of findFreqTerms, it seems that slam::row_sums does the heavy lifting when the function is called on a term-document matrix. Try, for example:

library(tm)

data(crude)
slam::row_sums(TermDocumentMatrix(crude))
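
Since row_sums returns a named vector, it is a short step from there to both the threshold behaviour of findFreqTerms and the full data frame the question asks for. A sketch, assuming the myTdm from the question:

counts <- slam::row_sums(myTdm)             # named vector: term -> frequency
names(counts[counts >= 50])                 # same terms as findFreqTerms(myTdm, lowfreq = 50)
freq_df <- data.frame(term = names(counts), freq = unname(counts))
freq_df <- freq_df[order(-freq_df$freq), ]  # most frequent first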
answered 2015-07-18T16:43:22.617

Depending on your needs, some tidyverse functions might offer a rough-and-ready solution, with some flexibility in how capitalization, punctuation, and stop words are handled:

text_string <- 'I have been using the tm package to run some text analysis. My problem is with creating a list with words and their frequencies associated with the same. I typically use the following code for generating list of words in a frequency range. Is there any way to automate this such that we get a dataframe with all words and their frequency?
The other problem that i face is with converting the term document matrix into a data frame. As i am working on large samples of data, I run into memory errors. Is there a simple solution for this?'

stop_words <- c('a', 'and', 'for', 'the') # just a sample list of words I don't care about

library(tidyverse)
tibble(text = text_string) %>% 
  mutate(text = tolower(text)) %>% 
  mutate(text = str_remove_all(text, '[[:punct:]]')) %>% 
  mutate(tokens = str_split(text, "\\s+")) %>%
  unnest(tokens) %>% 
  count(tokens) %>% 
  filter(!tokens %in% stop_words) %>% 
  mutate(freq = n / sum(n)) %>% 
  arrange(desc(n))


# A tibble: 64 x 3
  tokens      n   freq
  <chr>   <int>  <dbl>
1 i           5 0.0581
2 with        5 0.0581
3 is          4 0.0465
4 words       3 0.0349
5 into        2 0.0233
6 list        2 0.0233
7 of          2 0.0233
8 problem     2 0.0233
9 run         2 0.0233
10 that       2 0.0233
# ... with 54 more rows
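
If you are open to one more package, tidytext (not used above, so treat this as an assumption) collapses the lowercasing, punctuation stripping, and tokenizing into a single call. A sketch reusing the same text_string and stop_words:

library(dplyr)
library(tidytext)

tibble(text = text_string) %>%
  unnest_tokens(word, text) %>%        # lowercases and strips punctuation by default
  filter(!word %in% stop_words) %>%
  count(word, sort = TRUE) %>%
  mutate(freq = n / sum(n))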
answered 2018-07-13T02:18:52.090
library(plyr)  # count(df, vars = ...) comes from plyr

a <- scan(file = '~/Desktop/test.txt', what = "character")
a1 <- data.frame(lst = a)
count(a1, vars = "lst")

This seems to get simple frequencies. I used scan because I had a txt file, but it should also work with read.csv.
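
If you would rather not pull in plyr just for this, base R's table() gives the same counts directly; a one-line sketch:

sort(table(a), decreasing = TRUE)   # named counts, most frequent first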

answered 2013-08-07T10:39:08.840

Does apply(myTdm, 1, sum) or rowSums(as.matrix(myTdm)) give you the ngram counts you're after?
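
Worth noting for the memory issue in the question: rowSums(as.matrix(myTdm)) makes a dense copy first. On a large TDM, the sparse equivalent from an answer above avoids that:

slam::row_sums(myTdm)   # same counts, no dense intermediate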

answered 2013-08-22T16:07:10.133