
In a large corpus of text, I am interested in extracting every sentence that contains a specific (verb, noun) or (adjective, noun) pair somewhere in the sentence. I have a long list of such pairs, but here is a sample. In my MWE I am trying to extract sentences with "write/wrote/writing/writes" and "book/s". I have about 30 pairs of words like this.

Here is what I have tried, but it does not catch most of the sentences:

import spacy
nlp = spacy.load('en_core_web_sm')
from spacy.matcher import Matcher
matcher = Matcher(nlp.vocab)

doc = nlp(u'Graham Greene is his favorite author. He wrote his first book when he was a hundred and fifty years old.\
While writing this book, he had to fend off aliens and dinosaurs. Greene\'s second book might not have been written by him. \
Greene\'s cat in its deathbed testimony alleged that it was the original writer of the book. The fact that plot of the book revolves around \
rats conquering the world, lends credence to the idea that only a cat could have been the true writer of such an inane book.')

matcher = Matcher(nlp.vocab)
pattern = [{"LEMMA": "write"},{"TEXT": {"REGEX": ".+"}},{"LEMMA": "book"}]
matcher.add("testy", None, pattern)

for sent in doc.sents:
    if matcher(nlp(sent.lemma_)):
        print(sent.text)

Unfortunately, I only get one match:

"While writing this book, he had to fend off aliens and dinosaurs."

However, I would have expected to also get the sentence "He wrote his first book when he was a hundred and fifty years old." The other sentences about writing books use "writer" as a noun, so it is a good thing that they do not match.


1 Answer


The problem is that in the Matcher, by default, each dictionary in the pattern corresponds to exactly one token. So your regex is not matching any number of characters; it is matching any single token, which is not what you want.

To get what you want, you can use the OP value to specify that you want to match any number of tokens. See the operators and quantifiers section in the docs.
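For illustration, here is a minimal sketch of the `OP` quantifier (using a blank pipeline and `LOWER` instead of `LEMMA`, so no trained model is needed):

```python
import spacy
from spacy.matcher import Matcher

# A blank pipeline only tokenizes, which is enough to demonstrate "OP".
nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)

# {"OP": "*"} matches zero or more arbitrary tokens between the two anchors.
pattern = [{"LOWER": "wrote"}, {"OP": "*"}, {"LOWER": "book"}]
matcher.add("WROTE_BOOK", [pattern])

doc = nlp("He wrote his first book last year.")
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)  # wrote his first book
```

With `"OP": "*"` the intervening tokens ("his first") no longer have to be a single token, which is what the regex-on-one-token pattern got wrong.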

However, given your problem, you probably want to use the DependencyMatcher instead, so I have rewritten your code to use it as well. Try this:

import spacy
from spacy.matcher import Matcher

nlp = spacy.load('en_core_web_sm')
matcher = Matcher(nlp.vocab)

doc = nlp("""
Graham Greene is his favorite author. He wrote his first book when he was a hundred and fifty years old.
While writing this book, he had to fend off aliens and dinosaurs. Greene's second book might not have been written by him. 
Greene's cat in its deathbed testimony alleged that it was the original writer of the book. The fact that plot of the book revolves around 
rats conquering the world, lends credence to the idea that only a cat could have been the true writer of such an inane book.""")

pattern = [{"LEMMA": "write"},{"OP": "*"},{"LEMMA": "book"}]
matcher.add("testy", [pattern])

print("----- Using Matcher -----")
for sent in doc.sents:
    if matcher(sent):
        print(sent.text)

print("----- Using Dependency Matcher -----")

deppattern = [
        # anchor token: any token whose lemma is "write"
        {"RIGHT_ID": "wrote", "RIGHT_ATTRS": {"LEMMA": "write"}},
        # a direct syntactic child of the anchor whose lemma is "book"
        {"LEFT_ID": "wrote", "REL_OP": ">", "RIGHT_ID": "book",
            "RIGHT_ATTRS": {"LEMMA": "book"}}
        ]

from spacy.matcher import DependencyMatcher

dmatcher = DependencyMatcher(nlp.vocab)

dmatcher.add("BOOK", [deppattern])

# each match is (match_id, token_ids), one token index per pattern node
for _, (anchor, _child) in dmatcher(doc):
    print(doc[anchor].sent)

One other, less important thing: the way you call the matcher is a little odd. You can pass the matcher Docs or Spans, but they should definitely be natural text, so calling .lemma_ on the sentence and creating a new doc from that happens to work in your case, but in general it should be avoided.

answered 2021-05-29T08:53:20.603