
I am trying to work out a way to calculate the proximity of words to specific terms in a document, as well as the average proximity (by word). I know there are similar questions on SO, but nothing that gives me the answer I need or even points me somewhere helpful. So say I have the following text:

song <- "Far over the misty mountains cold To dungeons deep and caverns old We 
must away ere break of day To seek the pale enchanted gold. The dwarves of 
yore made mighty spells, While hammers fell like ringing bells In places deep, 
where dark things sleep, In hollow halls beneath the fells. For ancient king 
and elvish lord There many a gleaming golden hoard They shaped and wrought, 
and light they caught To hide in gems on hilt of sword. On silver necklaces 
they strung The flowering stars, on crowns they hung The dragon-fire, in 
twisted wire They meshed the light of moon and sun. Far over the misty 
mountains cold To dungeons deep and caverns old We must away, ere break of 
day, To claim our long-forgotten gold. Goblets they carved there for 
themselves And harps of gold; where no man delves There lay they long, and 
many a song Was sung unheard by men or elves. The pines were roaring on the 
height, The winds were moaning in the night. The fire was red, it flaming 
spread; The trees like torches blazed with light. The bells were ringing in 
the dale And men they looked up with faces pale; The dragon’s ire more fierce 
than fire Laid low their towers and houses frail. The mountain smoked beneath 
the moon; The dwarves they heard the tramp of doom. They fled their hall to 
dying fall Beneath his feet, beneath the moon. Far over the misty mountains 
grim To dungeons deep and caverns dim We must away, ere break of day,
To win our harps and gold from him!"

I want to be able to see every word that appears within 15 words (I want this number to be interchangeable) on either side (15 to the left and 15 to the right) of each occurrence of the word "fire" (also interchangeable). I want to see each word and the number of times it appears within that 15-word span for each instance of "fire". So, for example, "fire" is used 3 times. Of those 3 times, the word "light" falls within 15 words, on either side, twice. I want a table that shows the word, the number of times it appears within the specified proximity of 15, the maximum distance (which in this case is 12), the minimum distance (which is 7), and the average distance (which is 9.5).

I figured I would need several steps and packages to make this work. My first thought was to use the "kwic" function from quanteda, since it lets you choose a "window" around a specific term. Then a frequency count of the terms in the kwic results is not that hard (with stopwords removed for the frequency count, but not for the word-proximity measure). My real problem is finding the maximum, minimum, and average distances from the focus term, and then getting the results into a nice tidy table with the terms arranged in descending order by frequency and columns giving me the frequency count, maximum distance, minimum distance, and average distance.

Here is what I have so far:

library(quanteda)
library(tm)

# lowercase the text before tokenizing
mysong <- char_tolower(song)

# tokenize, dropping hyphens, punctuation, numbers, and symbols
toks <- tokens(mysong, remove_hyphens = TRUE, remove_punct = TRUE, 
remove_numbers = TRUE, remove_symbols = TRUE)

# keywords-in-context: a window of 15 words on either side of "fire"
mykwic <- kwic(toks, "fire", window = 15, valuetype = "fixed")
thekwic <- as.character(mykwic)

# clean the kwic strings with tm before counting term frequencies
thekwic <- removePunctuation(thekwic)
thekwic <- removeNumbers(thekwic)
thekwic <- removeWords(thekwic, stopwords("en"))

kwicFreq <- termFreq(thekwic)

Any help is much appreciated.


2 Answers


I'd suggest tackling this with a combination of my tidytext and fuzzyjoin packages.

You can start by tokenizing the text into a one-row-per-word data frame, adding a position column, and removing stopwords:

library(tidytext)
library(dplyr)

all_words <- data_frame(text = song) %>%
  unnest_tokens(word, text) %>%
  mutate(position = row_number()) %>%       # record each word's position
  filter(!word %in% tm::stopwords("en"))    # drop stopwords, keep original positions

You can then find just the word fire and use difference_inner_join() from fuzzyjoin to find all rows within 15 words of those rows. After that, you can use group_by() and summarize() to get the desired statistics for each word.

library(fuzzyjoin)

# for each occurrence of "fire", join every word within 15 positions
# and record its absolute distance from the focus term
nearby_words <- all_words %>%
  filter(word == "fire") %>%
  select(focus_term = word, focus_position = position) %>%
  difference_inner_join(all_words, by = c(focus_position = "position"), max_dist = 15) %>%
  mutate(distance = abs(focus_position - position))

# count and compute the max/min/mean distance for each co-occurring word
words_summarized <- nearby_words %>%
  group_by(word) %>%
  summarize(number = n(),
            maximum_distance = max(distance),
            minimum_distance = min(distance),
            average_distance = mean(distance)) %>%
  arrange(desc(number))

The output in this case:

# A tibble: 49 × 5
       word number maximum_distance minimum_distance average_distance
      <chr>  <int>            <dbl>            <dbl>            <dbl>
 1     fire      3                0                0              0.0
 2    light      2               12                7              9.5
 3     moon      2               13                9             11.0
 4    bells      1               14               14             14.0
 5  beneath      1               11               11             11.0
 6   blazed      1               10               10             10.0
 7   crowns      1                5                5              5.0
 8     dale      1               15               15             15.0
 9   dragon      1                1                1              1.0
10 dragon’s      1                5                5              5.0
# ... with 39 more rows

Note that this approach also lets you run the analysis on multiple focus words at once. All you would have to do is change filter(word == "fire") to filter(word %in% c("fire", "otherword")), and change group_by(word) to group_by(focus_term, word) (the column created by the select() step above).
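
As a minimal sketch of that variant, with "gold" as an illustrative second focus word (any term appearing in the text would do), the pipeline might look like this:

# hypothetical multi-focus-word version of the pipeline above;
# "gold" is only an example term, not part of the original question
nearby_words <- all_words %>%
  filter(word %in% c("fire", "gold")) %>%
  select(focus_term = word, focus_position = position) %>%
  difference_inner_join(all_words, by = c(focus_position = "position"), max_dist = 15) %>%
  mutate(distance = abs(focus_position - position))

words_summarized <- nearby_words %>%
  group_by(focus_term, word) %>%                # one set of statistics per focus word
  summarize(number = n(),
            maximum_distance = max(distance),
            minimum_distance = min(distance),
            average_distance = mean(distance)) %>%
  arrange(focus_term, desc(number))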

Answered 2017-05-18T21:19:01.680

The tidytext answer is a fine one, but there are tools in quanteda that can be adapted to this problem. The main function for counting within a window is not kwic() but rather fcm() (feature co-occurrence matrix).

require(quanteda)

# tokenize so that intra-word hyphens and punctuation are removed
toks <- tokens(song, remove_punct = TRUE, remove_hyphens = TRUE)

# all co-occurrences
head(fcm(toks, window = 15, context = "window", count = "frequency")[, "fire"])
## Feature co-occurrence matrix of: 155 by 1 feature.
## (showing first 6 documents and first feature)
##            features
## features    fire
##   Far          1
##   over         1
##   the          5
##   misty        1
##   mountains    0
##   cold         0

head(fcm(toks, window = 15, context = "window", count = "frequency")["light", "fire"])
## Feature co-occurrence matrix of: 1 by 1 feature.
## 1 x 1 sparse Matrix of class "fcm"
##         features
## features fire
##    light    2

To get the average distance of each word from the target requires a bit of a hack of the weights function for distance. Below, the weights are applied according to position, so that when the weighted counts are summed and then divided by the total frequency within the window, the result is a weighted mean. For your "light" example, for instance:

# average distance
fcm(toks, window = 15, context = "window", count = "weighted", weights = 1:15)["light", "fire"] /
    fcm(toks, window = 15, context = "window", count = "frequency")["light", "fire"]
## 1 x 1 Matrix of class "dgeMatrix"
##         features
## features fire
##    light  9.5

Getting the minimum and maximum position is a bit more complicated, and while I can figure out a way to "hack" it using a combination of weights that place a binary mask at each position and then converting those to distances, it is not pretty. (Too hard to show here, so I recommend the tidy solution unless I think of a more elegant way.)
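
For what it's worth, here is a minimal sketch of that binary-mask idea, assuming fcm() accepts a weights vector containing zeros (not verified here): weight exactly one position at a time, so each cell counts co-occurrences at exactly that distance, and the nonzero entries reveal the observed distances.

# sketch only: a one-hot weights vector isolates co-occurrences at
# exactly distance d; nonzero counts then give min and max distance
dist_counts <- sapply(1:15, function(d) {
  w <- rep(0, 15)
  w[d] <- 1
  fcm(toks, window = 15, context = "window",
      count = "weighted", weights = w)["light", "fire"]
})
range(which(dist_counts > 0))
## if this works as expected, it should return 7 and 12 for "light",
## matching the tidy answer above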

Answered 2017-05-20T22:04:31.230