I took a bunch of documents and computed tf*idf for every token in each document, creating one vector per document (each n-dimensional, where n is the number of unique words in the corpus). I can't figure out how to cluster these vectors with sklearn.cluster.MeanShift.
TfidfVectorizer turns the documents into a sparse matrix of numbers. MeanShift requires the data passed to it to be dense. Below, I show how to do the conversion inside a pipeline (credit), but if memory allows, you can simply convert the sparse matrix to a dense one with toarray() or todense().
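If the corpus fits in memory, a minimal sketch of the direct (non-pipeline) conversion might look like this; the variable names are illustrative, and the documents are the same toy corpus used below:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import MeanShift

documents = ['this is document one',
             'this is document two',
             'document one is fun',
             'document two is mean',
             'document is really short',
             'how fun is document one?',
             'mean shift... what is that']

# fit_transform returns a scipy sparse matrix
X_sparse = TfidfVectorizer().fit_transform(documents)

# toarray() gives a plain dense ndarray, which MeanShift accepts
X_dense = X_sparse.toarray()

labels = MeanShift().fit_predict(X_dense)
print(labels)  # one cluster label per document
```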
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import MeanShift
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer
documents = ['this is document one',
             'this is document two',
             'document one is fun',
             'document two is mean',
             'document is really short',
             'how fun is document one?',
             'mean shift... what is that']

pipeline = Pipeline(
    steps=[
        ('tfidf', TfidfVectorizer()),
        # toarray() yields a plain ndarray; todense() returns np.matrix,
        # which newer versions of scikit-learn reject
        ('trans', FunctionTransformer(lambda x: x.toarray(), accept_sparse=True)),
        ('clust', MeanShift())
    ])
pipeline.fit(documents)
pipeline.named_steps['clust'].labels_
result = [(label, doc) for doc, label in zip(documents, pipeline.named_steps['clust'].labels_)]
for label, doc in sorted(result):
    print(label, doc)
Prints:
0 document two is mean
0 this is document one
0 this is document two
1 document one is fun
1 how fun is document one?
2 mean shift... what is that
3 document is really short
You can tweak the hyperparameters, but this should give you the general idea, I think.
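The main hyperparameter for MeanShift is the kernel bandwidth. A hedged sketch of tuning it via estimate_bandwidth (the quantile=0.5 value here is just an illustrative choice, not a recommendation):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import MeanShift, estimate_bandwidth

documents = ['this is document one',
             'this is document two',
             'document one is fun',
             'document two is mean',
             'document is really short',
             'how fun is document one?',
             'mean shift... what is that']

X = TfidfVectorizer().fit_transform(documents).toarray()

# A smaller quantile gives a smaller bandwidth and hence more,
# finer-grained clusters; a larger quantile merges clusters.
bandwidth = estimate_bandwidth(X, quantile=0.5)
labels = MeanShift(bandwidth=bandwidth).fit_predict(X)
print(bandwidth, labels)
```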
answered 2017-09-13T04:22:16.523