
I'm new to text mining and Python, and I'm trying to do a simple task. I want to build a TF matrix from these sentences: ['This is the first sentence','This is the second sentence','This is the third sentence']

and then compare new sentences against that matrix in a loop (or in some other way).

I found good examples on Stack Overflow, but in my case the TF matrix for the sample sentences plus the new sentence gets recomputed every time, which will be slow on a large dataset.

from sklearn.feature_extraction.text import TfidfVectorizer

vect = TfidfVectorizer()
text = ['This is the first sentence', 'This is the second sentence', 'This is the third sentence']
text.append('new sentence')
# Refitting on every new sentence -- this is the slow part
tfidf = vect.fit_transform(text)

# Get an array of pairwise similarity results
results = (tfidf * tfidf.T).A

I'd like to know whether there is another, more accurate way to do this. Thanks.


1 Answer


We can first fit the vectorizer on the original sentences only:

from sklearn.feature_extraction.text import TfidfVectorizer
vect = TfidfVectorizer()
text = ['This is the first test ','This is the sentence', 'this is a third sentence']
vect.fit(text)

tfidf = vect.transform(text).A
>>> tfidf
array([[0.55249005, 0.32630952, 0.        , 0.55249005, 0.42018292,
    0.        , 0.32630952],
   [0.        , 0.43370786, 0.55847784, 0.        , 0.55847784,
    0.        , 0.43370786],
   [0.        , 0.39148397, 0.50410689, 0.        , 0.        ,
    0.66283998, 0.39148397]])

Then use it to transform the new ones (any word not seen during fitting is simply ignored, which is why 'new', '1' and '2' contribute nothing below):

new = vect.transform(['this sentence 1','new sentence 2']).A
>>> new
array([[0.        , 0.        , 0.78980693, 0.        , 0.        ,
        0.        , 0.61335554],
       [0.        , 0.        , 1.        , 0.        , 0.        ,
        0.        , 0.        ]])

Then use some distance metric to compute the similarity between sentences:

from scipy.spatial.distance import cdist
>>> cdist(tfidf, new, 'euclidean')
array([[1.26479741, 1.41421356],
       [0.76536686, 0.93970438],
       [0.85056925, 0.99588464]])
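As a side note, cosine similarity is often preferred over Euclidean distance for TF-IDF vectors, since it is insensitive to document length. A minimal sketch using scikit-learn's `cosine_similarity`, which accepts the sparse matrices directly (no `.A` conversion needed); higher values mean more similar:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

vect = TfidfVectorizer()
text = ['This is the first test ', 'This is the sentence', 'this is a third sentence']
tfidf = vect.fit_transform(text)                      # fit once on the reference sentences
new = vect.transform(['this sentence 1', 'new sentence 2'])

# One row per reference sentence, one column per new sentence
sim = cosine_similarity(tfidf, new)
print(sim)                                            # shape (3, 2), values in [0, 1]
```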
answered 2018-09-29T11:39:33.190