
Is there a way to use Wikipedia2Vec to extract all Wikipedia entities from a text? Or is there another way to do this?

Example:

Text : "Scarlett Johansson is an American actress."  
Entities : [ 'Scarlett Johansson' , 'American' ]

I want to do this in Python.

Thanks


2 Answers


Here is an NLTK version (probably not as good as spaCy):

from nltk import Tree
from nltk import ne_chunk, pos_tag, word_tokenize

def get_continuous_chunks(text, chunk_func=ne_chunk):
    # Tokenize, POS-tag, and run the NLTK named-entity chunker.
    chunked = chunk_func(pos_tag(word_tokenize(text)))
    continuous_chunk = []
    current_chunk = []

    for subtree in chunked:
        if type(subtree) == Tree:
            # Inside a named-entity subtree: collect its tokens.
            current_chunk.append(" ".join([token for token, pos in subtree.leaves()]))
        elif current_chunk:
            # A non-entity token ends the current entity span; flush it.
            named_entity = " ".join(current_chunk)
            if named_entity not in continuous_chunk:
                continuous_chunk.append(named_entity)
            current_chunk = []
        else:
            continue

    # Flush a trailing entity if the text ends with one.
    if current_chunk:
        named_entity = " ".join(current_chunk)
        if named_entity not in continuous_chunk:
            continuous_chunk.append(named_entity)

    return continuous_chunk


text = 'Scarlett Johansson is an American actress.'
print(get_continuous_chunks(text))
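
Note that word_tokenize, pos_tag and ne_chunk rely on NLTK data packages that may need to be downloaded once; a one-time setup sketch (package names as used in recent NLTK releases):

import nltk

# One-time downloads required by word_tokenize, pos_tag, and ne_chunk.
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('maxent_ne_chunker')
nltk.download('words')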
Answered on 2019-04-23T07:17:07.903

You can use spaCy:

import spacy
import en_core_web_sm

# Load the small English pipeline and run it over the text.
nlp = en_core_web_sm.load()
doc = nlp('Scarlett Johansson is an American actress.')
print([(X.text, X.label_) for X in doc.ents])

You get:

[('Scarlett Johansson', 'PERSON'), ('American', 'NORP')]

See the spaCy documentation for more information.
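
Since the question mentions Wikipedia2Vec, a minimal sketch of looking the spaCy entities up against a Wikipedia2Vec model follows; it assumes a pretrained model file is available locally (the path 'enwiki_model.pkl' is a placeholder) and uses Wikipedia2Vec's load/get_entity API:

import spacy
from wikipedia2vec import Wikipedia2Vec

# Assumption: a pretrained Wikipedia2Vec model file has been downloaded locally.
wiki2vec = Wikipedia2Vec.load('enwiki_model.pkl')  # placeholder path

nlp = spacy.load('en_core_web_sm')
doc = nlp('Scarlett Johansson is an American actress.')

for ent in doc.ents:
    # get_entity returns None when the span does not match a Wikipedia entity title.
    entity = wiki2vec.get_entity(ent.text)
    if entity is not None:
        print(ent.text, '->', entity.title)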

Answered on 2019-04-18T08:53:23.700