I am using scikit-learn to extract text features from a "bag of words" text (text tokenized on single words). To do so, I use a TfidfVectorizer, which also down-weights very frequent words (e.g. "a", "the", etc.).
from sklearn.feature_extraction.text import TfidfVectorizer

text = 'Some text, with a lot of words...'
tfidf_vectorizer = TfidfVectorizer(
    min_df=1,                 # minimum document frequency for the vocabulary
    max_features=4000,        # maximum number of features
    strip_accents='unicode',  # replace accented unicode chars
                              # with their corresponding ASCII chars
    analyzer='word',          # features made of words
    token_pattern=r'\w{4,}',  # tokenize only words of 4+ chars
    ngram_range=(1, 1),       # features made of single tokens
    use_idf=True,             # enable inverse-document-frequency reweighting
    smooth_idf=True,          # prevents zero division for unseen words
    sublinear_tf=False)
# vectorize and re-weight
desc_vect = tfidf_vectorizer.fit_transform([text])
I would now like to link each extracted feature with its corresponding tfidf float value, storing them in a dict
{'feature1': tfidf1, 'feature2': tfidf2, ...}
I achieved it by using
d = dict(zip(tfidf_vectorizer.get_feature_names(), desc_vect.data))
I would like to know whether there is a better, scikit-learn-native way to do such a thing.
Thank you very much.