
I have a very large corpus (about 400,000 unique sentences). I just want to get the TF-IDF score for each word. I tried to compute each word's score by scanning every word and counting its frequency, but it takes too long.

I used:

  X= tfidfVectorizer(corpus)

from sklearn, but it directly returns a vector representation of each sentence. Is there any way to get the TF-IDF score of every word in the corpus?


1 Answer


Use sklearn.feature_extraction.text.TfidfVectorizer (example taken from the docs):

>>> from sklearn.feature_extraction.text import TfidfVectorizer
>>> corpus = [
...     'This is the first document.',
...     'This document is the second document.',
...     'And this is the third one.',
...     'Is this the first document?',
... ]
>>> vectorizer = TfidfVectorizer()
>>> X = vectorizer.fit_transform(corpus)
>>> print(vectorizer.get_feature_names())
['and', 'document', 'first', 'is', 'one', 'second', 'the', 'third', 'this']
>>> print(X.shape)
(4, 9)
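A side note that goes beyond the original answer: in recent scikit-learn versions, get_feature_names() was deprecated (1.0) and later removed (1.2) in favor of get_feature_names_out(), which returns a NumPy array instead of a list:

>>> print(vectorizer.get_feature_names_out())  # scikit-learn >= 1.0
['and' 'document' 'first' 'is' 'one' 'second' 'the' 'third' 'this']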

Now, if I print X.toarray():

[[0.         0.46979139 0.58028582 0.38408524 0.         0.
  0.38408524 0.         0.38408524]
 [0.         0.6876236  0.         0.28108867 0.         0.53864762
  0.28108867 0.         0.28108867]
 [0.51184851 0.         0.         0.26710379 0.51184851 0.
  0.26710379 0.51184851 0.26710379]
 [0.         0.46979139 0.58028582 0.38408524 0.         0.
  0.38408524 0.         0.38408524]]

Each row of this 2-D array corresponds to one document, and each element in a row is the TF-IDF score of the corresponding word. To find out which word each element represents, look at the .get_feature_names() function, which returns the list of words. For example, in this case, take the row for the first document:

[0., 0.46979139, 0.58028582, 0.38408524, 0., 0., 0.38408524, 0., 0.38408524]

In the example, .get_feature_names() returns:

['and', 'document', 'first', 'is', 'one', 'second', 'the', 'third', 'this']

So you can map the scores onto the words like this:

>>> dict(zip(vectorizer.get_feature_names(), X.toarray()[0]))
{'and': 0.0, 'document': 0.46979139, 'first': 0.58028582, 'is': 0.38408524, 'one': 0.0, 'second': 0.0, 'the': 0.38408524, 'third': 0.0, 'this': 0.38408524}
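A further note beyond the original answer: if what you actually want is a single score per word over the whole corpus rather than per document, you can aggregate the columns of X. The sketch below takes the column-wise mean (taking the max per column is another common choice); which aggregation is right for you is an assumption here, not something the question specifies. It also stays on the sparse matrix, which matters at ~400k sentences, since X.toarray() would densify the entire matrix.

import numpy as np

# Assumes `vectorizer` and `X` from the answer above.
# Column-wise mean of the sparse TF-IDF matrix: one score per word.
# .mean(axis=0) works directly on the scipy sparse matrix, so the
# full matrix is never densified.
mean_scores = np.asarray(X.mean(axis=0)).ravel()
word_scores = dict(zip(vectorizer.get_feature_names(), mean_scores))

# Top 5 words by mean TF-IDF:
for word, score in sorted(word_scores.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(word, round(score, 4))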