How do I extract noun phrases from text with spaCy?
I am not referring to part-of-speech tags. In the documentation I can't find anything about noun phrases or regular parse trees.
Viewed 32,092 times
5 Answers
64
If you want base NPs, i.e. NPs without coordination, prepositional phrases, or relative clauses, you can use the noun_chunks iterator on the Doc and Span objects:
>>> from spacy.en import English
>>> nlp = English()
>>> doc = nlp(u'The cat and the dog sleep in the basket near the door.')
>>> for np in doc.noun_chunks:
>>>     np.text
u'The cat'
u'the dog'
u'the basket'
u'the door'
If you need something else, the best way is to iterate over the words of the sentence and use the syntactic context to decide whether a word governs the kind of phrase you want. If it does, yield its subtree:
from spacy.symbols import *

np_labels = set([nsubj, nsubjpass, dobj, iobj, pobj])  # Probably others too

def iter_nps(doc):
    for word in doc:
        if word.dep in np_labels:
            yield word.subtree
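Each item yielded by iter_nps is an iterator over the Tokens of that subtree. A minimal usage sketch (not part of the original answer) that joins the tokens back into phrase text:
for np in iter_nps(doc):
    # np is a generator of Tokens spanning the whole subtree
    print(' '.join(token.text for token in np))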
Answered 2015-11-04T01:26:34.933
4
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp('Bananas are an excellent source of potassium.')

for np in doc.noun_chunks:
    print(np.text)
'''
Bananas
an excellent source
potassium
'''
for word in doc:
    print('word.dep:', word.dep, ' | ', 'word.dep_:', word.dep_)
'''
word.dep: 429 | word.dep_: nsubj
word.dep: 8206900633647566924 | word.dep_: ROOT
word.dep: 415 | word.dep_: det
word.dep: 402 | word.dep_: amod
word.dep: 404 | word.dep_: attr
word.dep: 443 | word.dep_: prep
word.dep: 439 | word.dep_: pobj
word.dep: 445 | word.dep_: punct
'''
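As an aside (a small sketch, not in the original answer): dep is the integer ID of the dependency label and dep_ is its string form; the vocabulary's StringStore maps the ID back to the label:
token = doc[0]                       # "Bananas"
print(token.dep, token.dep_)         # integer label ID and its string form
print(nlp.vocab.strings[token.dep])  # the StringStore resolves the ID to 'nsubj'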
from spacy.symbols import *
np_labels = set([nsubj, nsubjpass, dobj, iobj, pobj])
print('np_labels:', np_labels)
'''
np_labels: {416, 422, 429, 430, 439}
'''
See https://www.geeksforgeeks.org/use-yield-keyword-instead-return-keyword-python/ for the difference between yield and return.
def iter_nps(doc):
    for word in doc:
        if word.dep in np_labels:
            yield word.dep_
iter_nps(doc)
'''
<generator object iter_nps at 0x7fd7b08b5bd0>
'''
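The generator only produces values once it is iterated; a small sketch (not in the original answer) of actually consuming it:
print(list(iter_nps(doc)))
'''
['nsubj', 'pobj']
'''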
## Modified method:
def iter_nps(doc):
    for word in doc:
        if word.dep in np_labels:
            print(word.text, word.dep_)
iter_nps(doc)
'''
Bananas nsubj
potassium pobj
'''
doc = nlp('BRCA1 is a tumor suppressor protein that functions to maintain genomic stability.')
for np in doc.noun_chunks:
    print(np.text)
'''
BRCA1
a tumor suppressor protein
genomic stability
'''
iter_nps(doc)
'''
BRCA1 nsubj
that nsubj
stability dobj
'''
Answered 2019-12-21T00:52:29.617
3
You can also get the nouns from a sentence like this:
import spacy

nlp = spacy.load("en_core_web_sm")
# doc text is from the spaCy website
doc = nlp("When Sebastian Thrun started working on self-driving cars at "
          "Google in 2007, few people outside of the company took him "
          "seriously. “I can tell you very senior CEOs of major American "
          "car companies would shake my hand and turn away because I wasn’t "
          "worth talking to,” said Thrun, in an interview with Recode earlier "
          "this week.")

for x in doc:
    if x.pos_ == "NOUN" or x.pos_ == "PROPN" or x.pos_ == "PRON":
        print(x)
# here you get nouns, proper nouns, and pronouns
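Equivalently (a small sketch, not part of the original answer), the same filter can be written with a set of POS tags and a list comprehension:
noun_pos = {"NOUN", "PROPN", "PRON"}
nouns = [token.text for token in doc if token.pos_ in noun_pos]
print(nouns)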
Answered 2021-03-13T12:00:59.910
0
from spacy.en import English
may give you the error:
No module named 'spacy.en'
All language data has been moved to spacy.lang submodules in spaCy 2.0+.
Use from spacy.lang.en import English instead,
then follow all the remaining steps in @syllogism_'s answer.
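For example, a minimal sketch assuming spaCy 2.0+/3.x with the en_core_web_sm model installed; note that a blank English() pipeline has no parser, so noun_chunks still needs a trained pipeline:
from spacy.lang.en import English  # new location of the English class in spaCy 2.0+
import spacy

nlp = spacy.load("en_core_web_sm")  # trained pipeline provides the parser that noun_chunks requires
doc = nlp(u'The cat and the dog sleep in the basket near the door.')
for np in doc.noun_chunks:
    print(np.text)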
Answered 2021-03-12T08:21:33.623