Let me start with the following complete working code from the tidytext introduction on CRAN:
library(janeaustenr)
library(dplyr)
library(stringr)

original_books <- austen_books() %>%
  group_by(book) %>%
  mutate(linenumber = row_number(),
         chapter = cumsum(str_detect(text, regex("^chapter [\\divxlc]",
                                                 ignore_case = TRUE)))) %>%
  ungroup()
original_books

library(tidytext)
tidy_books <- original_books %>%
  unnest_tokens(word, text)
tidy_books

data("stop_words")
cleaned_books <- tidy_books %>%
  anti_join(stop_words)
So far so good. I have all six Jane Austen novels, with the standard stop words removed.
unique(cleaned_books$book)
That gives me: Sense & Sensibility, Pride & Prejudice, Mansfield Park, Emma, Northanger Abbey, Persuasion.
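Just to double-check the shape of the data (a quick snippet of my own, not from the tidytext intro), each novel should contribute one row per remaining word occurrence, so counting rows per book confirms everything tokenized:

cleaned_books %>%
  count(book)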
So if I want a standard term-frequency word cloud of all six together, no problem. Like so (adding color):
library(wordcloud)
library(RColorBrewer)

dark2 <- brewer.pal(8, "Dark2")
cleaned_books %>%
  count(word) %>%
  with(wordcloud(word, n, colors = dark2, max.words = 100))
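One detail worth noting: count() produces a column named n, and with() evaluates the wordcloud() call inside that data frame, so word and n become the words/frequencies vectors. To peek at what actually feeds the cloud (my own quick check):

cleaned_books %>%
  count(word, sort = TRUE) %>%
  head(10)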
Works beautifully. But how do I do a commonality.cloud() across all six novels, and a comparison.cloud() of the same? All the data I need is in cleaned_books; I just don't know how to reshape it. Thanks for your help!
Got it. Thanks. Leaving this here in case anyone else has a similar question. The key point is that comparison.cloud() and commonality.cloud() expect a term matrix (one row per word, one column per document), so the tidy one-word-per-row data has to be cast wide first. The code above, plus:
set1 <- brewer.pal(8, "Set1")  ## a second palette, just for the other cloud type
library(reshape2)

# title size and scale optional, obviously
cleaned_books %>%
  group_by(book) %>%
  count(word) %>%
  acast(word ~ book, value.var = "n", fill = 0) %>%
  comparison.cloud(colors = dark2, title.size = 1, scale = c(3, 0.3),
                   random.order = FALSE, max.words = 100)

cleaned_books %>%
  group_by(book) %>%
  count(word) %>%
  acast(word ~ book, value.var = "n", fill = 0) %>%
  commonality.cloud(colors = set1, title.size = 1, scale = c(3, 0.3),
                    random.order = FALSE, max.words = 100)
Works great.
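For what it's worth, if you'd rather avoid reshape2, the same word-by-book matrix can be built with tidyr and tibble instead. This is just a sketch of an alternative (assuming both packages are installed; term_matrix is a name I made up):

library(tidyr)
library(tibble)

# Cast the tidy counts wide: one row per word, one column per book,
# filling missing word/book combinations with 0.
term_matrix <- cleaned_books %>%
  count(book, word) %>%
  pivot_wider(names_from = book, values_from = n, values_fill = 0) %>%
  column_to_rownames("word") %>%
  as.matrix()

comparison.cloud(term_matrix, colors = dark2, max.words = 100)

Either way, the point is the same: both cloud functions just need that wide term matrix rather than the tidy long format.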