
I would like to know whether there is any way to make CountVectorizer() ignore words that appear in fewer than x documents and have fewer than y characters — similar to the wordLengths and bounds parameters of DocumentTermMatrix() in R's tm package.

Example

This corpus:

corpus = [
    'This is the first document.',
    'This document is the second document.',
    'And this is the third one.',
    'Is this the first document?',
]

currently becomes this:

>>> vectorizer = CountVectorizer()
>>> X = vectorizer.fit_transform(corpus)
>>> print(vectorizer.get_feature_names())
['and', 'document', 'first', 'is', 'one', 'second', 'the', 'third', 'this']
>>> print(X.toarray())
[[0 1 1 1 0 0 1 0 1]
 [0 2 0 1 0 1 1 0 1]
 [1 0 0 1 1 0 1 1 1]
 [0 1 1 1 0 0 1 0 1]]

With x and y both set to 2, I want this instead:

>>> vectorizer = CountVectorizer()
>>> X = vectorizer.fit_transform(corpus)
>>> print(vectorizer.get_feature_names())
['document', 'first', 'the', 'this']
>>> print(X.toarray())
[[1 1 1 1]
 [2 0 1 1]
 [0 0 1 1]
 [1 1 1 1]]

1 Answer


You may want to:

  • Set min_df=2, which takes care of x by ignoring terms that appear in fewer than 2 documents.
  • Define token_pattern=r"(?u)\b[a-zA-Z]{3,}\b", which takes care of y by matching only tokens of at least 3 letters (you can try token_pattern=r"(?u)\b[a-zA-Z0-9_]{3,}\b" to also allow digits and underscores in the token definition).

Demo:

from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "This is the first document.",
    "This document is the second document.",
    "And this is the third one.",
    "Is this the first document?",
]

vectorizer = CountVectorizer(min_df=2, token_pattern=r"(?u)\b[a-zA-Z]{3,}\b")
X = vectorizer.fit_transform(corpus)
print(X.toarray())

[[1 1 1 1]
 [2 0 1 1]
 [0 0 1 1]
 [1 1 1 1]]
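To double-check which terms survived both filters, you can inspect the fitted vocabulary. This is a minimal sketch continuing the demo above; vocabulary_ is the standard fitted attribute of CountVectorizer that maps each kept term to its column index:

```python
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "This is the first document.",
    "This document is the second document.",
    "And this is the third one.",
    "Is this the first document?",
]

# min_df=2 drops terms found in fewer than 2 documents;
# the token pattern keeps only tokens of 3 or more letters.
vectorizer = CountVectorizer(min_df=2, token_pattern=r"(?u)\b[a-zA-Z]{3,}\b")
vectorizer.fit(corpus)

# vocabulary_ maps each surviving term to its column index;
# sorting the keys gives the columns in order.
print(sorted(vectorizer.vocabulary_))  # ['document', 'first', 'the', 'this']
```

Terms like "is" (too short) and "second" (appears in only one document) are gone, which is exactly the x=2, y=2 behaviour asked for.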
answered 2021-01-04T16:59:23.057