from stemming.porter2 import stem

documents = ['got', 'get']

documents = [[stem(word) for word in sentence.split(" ")] for sentence in documents]
print(documents)

The result is:

[['got'], ['get']]

Can someone help explain this? Thanks!


1 Answer


What you want is a lemmatizer, not a stemmer. The difference is subtle.

Generally, a stemmer strips suffixes as aggressively as possible and, in some cases, maintains an exception list for words whose normalized form cannot be reached by simply removing suffixes.

A lemmatizer tries to find the "base"/root/infinitive form of a word, and it usually requires specialized rules for different languages.
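This is exactly why `stem('got')` returns `'got'`: "got" has no suffix to strip, and only a dictionary of irregular forms can map it back to "get". A toy illustration of the two approaches (this is deliberately simplified, not the real Porter2 algorithm, and the irregular-verb table is a tiny hypothetical lookup):

```python
# A stemmer only strips suffixes by rule, so irregular forms
# like "got" pass through unchanged.
SUFFIXES = ("ing", "ed", "es", "s")

def toy_stem(word):
    for suf in SUFFIXES:
        # Only strip if a plausible stem (>= 3 letters) remains.
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[:-len(suf)]
    return word

# A lemmatizer, by contrast, consults a vocabulary of irregular forms
# before falling back to rule-based stripping.
IRREGULAR_VERBS = {"got": "get", "went": "go", "was": "be"}

def toy_lemmatize(word):
    return IRREGULAR_VERBS.get(word, toy_stem(word))

print(toy_stem("got"))       # 'got'  — no suffix to strip
print(toy_lemmatize("got"))  # 'get'  — found in the irregular-form table
print(toy_stem("jumping"))   # 'jump' — suffix stripping works for regular forms
```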


Lemmatization using NLTK's implementation of the morphy lemmatizer requires the correct part-of-speech (POS) tag to be reasonably accurate.

Avoid (or, really, never) trying to lemmatize individual words in isolation. Instead, lemmatize a fully POS-tagged sentence, e.g.:

from nltk import word_tokenize, pos_tag
from nltk.corpus import wordnet as wn

def penn2morphy(penntag, returnNone=False, default_to_noun=False):
    # Map the first two characters of a Penn Treebank tag
    # to the corresponding WordNet POS constant.
    morphy_tag = {'NN': wn.NOUN, 'JJ': wn.ADJ,
                  'VB': wn.VERB, 'RB': wn.ADV}
    try:
        return morphy_tag[penntag[:2]]
    except KeyError:
        if returnNone:
            return None
        elif default_to_noun:
            return 'n'
        else:
            return ''
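As a quick sanity check that needs no tagger or corpus download: NLTK's WordNet POS constants are just the single letters `'n'`, `'a'`, `'v'` and `'r'`, so the tag mapping can be exercised on its own with stand-in constants (the function body is repeated here only so the snippet runs self-contained):

```python
# Stand-ins for the nltk.corpus.wordnet POS constants,
# which are just these single letters.
class wn:
    NOUN, ADJ, VERB, ADV = 'n', 'a', 'v', 'r'

# Same mapping logic as penn2morphy, repeated so this snippet is standalone.
def penn2morphy(penntag, returnNone=False, default_to_noun=False):
    morphy_tag = {'NN': wn.NOUN, 'JJ': wn.ADJ, 'VB': wn.VERB, 'RB': wn.ADV}
    try:
        return morphy_tag[penntag[:2]]
    except KeyError:
        return None if returnNone else ('n' if default_to_noun else '')

print(penn2morphy('VBD'))  # 'v' — past-tense verb tags start with 'VB'
print(penn2morphy('NNS'))  # 'n' — plural-noun tags start with 'NN'
print(penn2morphy('DT'))   # ''  — determiners have no morphy category
```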

Using the penn2morphy helper function, you need to convert the POS tags from pos_tag() into morphy tags, and then you can:

>>> from nltk.stem import WordNetLemmatizer
>>> wnl = WordNetLemmatizer()
>>> sent = "He got up in bed at 8am."
>>> [(token, penn2morphy(tag)) for token, tag in pos_tag(word_tokenize(sent))]
[('He', ''), ('got', 'v'), ('up', ''), ('in', ''), ('bed', 'n'), ('at', ''), ('8am', ''), ('.', '')]
>>> [wnl.lemmatize(token, pos=penn2morphy(tag, default_to_noun=True)) for token, tag in pos_tag(word_tokenize(sent))]
['He', 'get', 'up', 'in', 'bed', 'at', '8am', '.']

For convenience, you can also try the pywsd lemmatizer:

>>> from pywsd.utils import lemmatize_sentence
Warming up PyWSD (takes ~10 secs)... took 7.196984529495239 secs.
>>> sent = "He got up in bed at 8am."
>>> lemmatize_sentence(sent)
['he', 'get', 'up', 'in', 'bed', 'at', '8am', '.']

See also https://stackoverflow.com/a/22343640/610569

Answered 2018-08-26T07:02:31.600