
I am trying to build a tf-idf weighted corpus (where I want tf to be a proportion per document rather than a simple count). I would expect all the classic text-mining libraries to return the same values, but I am getting different ones. Is there an error in my code (e.g. do I need to transpose an object?), or do the default parameters of the tf-idf calculation differ between packages?

library(tm)
library(tidyverse)
library(tidytext)  # needed for unnest_tokens(), bind_tf_idf(), cast_dtm(), cast_dfm()
library(quanteda)
df <- data.frame(doc  = c("doc1", "doc2"),
                 text = c("the quick brown fox jumps over the lazy dog",
                          "The quick brown foxy ox jumps over the lazy god"),
                 stringsAsFactors = FALSE)

df.count1 <- df %>% unnest_tokens(word, text) %>% 
  count(doc, word) %>% 
  bind_tf_idf(word, doc, n) %>% 
  select(doc, word, tf_idf) %>% 
  spread(word, tf_idf, fill = 0) 

df.count2 <- df %>% unnest_tokens(word, text) %>% 
  count(doc, word) %>% 
  cast_dtm(document = doc, term = word, value = n, weighting = weightTfIdf) %>% 
  as.matrix() %>% as.data.frame()

df.count3 <- df %>% unnest_tokens(word, text) %>% 
  count(doc, word) %>% 
  cast_dfm(document = doc, term = word, value = n) %>% 
  dfm_tfidf() %>% as.data.frame()

> df.count1
# A tibble: 2 x 12
  doc   brown    dog    fox   foxy    god jumps  lazy  over     ox quick   the
  <chr> <dbl>  <dbl>  <dbl>  <dbl>  <dbl> <dbl> <dbl> <dbl>  <dbl> <dbl> <dbl>
1 doc1      0 0.0770 0.0770 0      0          0     0     0 0          0     0
2 doc2      0 0      0      0.0693 0.0693     0     0     0 0.0693     0     0

> df.count2
     brown       dog       fox jumps lazy over quick the foxy god  ox
doc1     0 0.1111111 0.1111111     0    0    0     0   0  0.0 0.0 0.0
doc2     0 0.0000000 0.0000000     0    0    0     0   0  0.1 0.1 0.1

> df.count3
     brown     dog     fox jumps lazy over quick the    foxy     god      ox
doc1     0 0.30103 0.30103     0    0    0     0   0 0.00000 0.00000 0.00000
doc2     0 0.00000 0.00000     0    0    0     0   0 0.30103 0.30103 0.30103

1 Answer


You have stumbled upon the differences in how term frequencies are calculated.

The standard definitions:

TF, term frequency: TF(t) = (number of times term t appears in a document) / (total number of terms in the document).

IDF, inverse document frequency: IDF(t) = log(total number of documents / number of documents containing term t).

The tf-idf weight is the product of these quantities: TF * IDF.
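Put together, the two definitions amount to a one-line function in which the log base is an explicit parameter, and that base turns out to be exactly where the packages disagree. Here is a minimal sketch (in Python, purely for illustration; the function name `tf_idf` is mine, not from any of the R packages):

```python
import math

def tf_idf(term_count, doc_length, n_docs, n_docs_with_term, base=math.e):
    """Proportional term frequency times inverse document frequency."""
    tf = term_count / doc_length                     # a proportion, not a raw count
    idf = math.log(n_docs / n_docs_with_term, base)  # the log base is the knob
    return tf * idf
```

For example, `tf_idf(1, 9, 2, 1)` is the weight of "dog" in doc1 under the natural log.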

Looks simple, doesn't it? It isn't. Let's calculate the tf_idf of the word "dog" in doc1.

First the TF for dog: that is 1 occurrence out of the 9 terms in the document:

1/9 = 0.1111111

Now the IDF for dog: the log of (2 documents / 1 document containing the term). Here there are several possibilities, namely log (the natural logarithm), log2, or log10!

log(2) = 0.6931472
log2(2) = 1
log10(2) = 0.30103

#tf_idf on log:
1/9 * log(2) = 0.07701635

#tf_idf on log2:
1/9 * log2(2)  = 0.11111

#tf_idf on log10:
1/9 * log10(2) = 0.03344778
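These three candidates are easy to check by hand (done in Python below; R's `log()`, `log2()` and `log10()` return exactly the same numbers):

```python
import math

tf = 1 / 9                  # "dog" appears once among the 9 tokens of doc1

print(tf * math.log(2))     # natural log -> 0.0770..., what tidytext reports
print(tf * math.log2(2))    # log2        -> 0.1111..., what tm reports
print(tf * math.log10(2))   # log10       -> 0.0334..., quanteda's base
```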

Now it gets interesting. tidytext gives you the weight based on the natural log. tm returns the tf_idf based on log2. I expected a value of 0.03344778 from quanteda, because its base is log10.

But on closer inspection, quanteda returns its result correctly given its defaults: it just uses the raw count by default instead of the proportional count. To get the proportional term frequency, try the following code:

df.count3 <- df %>% unnest_tokens(word, text) %>% 
  count(doc, word) %>% 
  cast_dfm(document = doc,term = word, value = n)


dfm_tfidf(df.count3, scheme_tf = "prop", scheme_df = "inverse")
Document-feature matrix of: 2 documents, 11 features (22.7% sparse).
2 x 11 sparse Matrix of class "dfm"
      features
docs        brown        dog        fox jumps lazy over quick the     foxy      god       ox
  doc1          0 0.03344778 0.03344778     0    0    0     0   0 0        0        0
  doc2          0 0          0              0    0    0     0   0 0.030103 0.030103 0.030103

That looks better; these values are indeed based on log10.
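The two rows differ slightly because the documents are not the same length: doc1 has 9 tokens while doc2 has 10, so their proportional term frequencies are 1/9 and 1/10. Both weights can be reproduced by hand (again in Python, for illustration only):

```python
import math

idf = math.log10(2)     # 2 documents; each term of interest appears in exactly 1

print((1 / 9)  * idf)   # "dog" and "fox" in doc1 (9 tokens)      -> 0.0334...
print((1 / 10) * idf)   # "foxy", "god", "ox" in doc2 (10 tokens) -> 0.0301...
```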

If you use quanteda, you can reproduce the tidytext or tm results by adjusting the base argument:

# same as tidytext: the natural log
dfm_tfidf(df.count3, scheme_tf = "prop", scheme_df = "inverse", base = exp(1))

# same as tm: log2
dfm_tfidf(df.count3, scheme_tf = "prop", scheme_df = "inverse", base = 2)
answered Feb 15, 2018 at 21:03