
I want to generate a one-sentence summary from this text. I am using textacy. Here is my code:

import textacy
import textacy.extract
import spacy
nlp = spacy.load('en_core_web_sm')
text = '''Sauti said, 'O thou that art blest with longevity, I shall narrate the history of Astika as I heard it from my father. 
          O Brahmana, in the golden age, Prajapati had two daughters. 
          O sinless one, the sisters were endowed with wonderful beauty. 
          Named Kadru and Vinata, they became the wives of Kasyapa. 
          Kasyapa derived great pleasure from his two wedded wives and being gratified he, resembling Prajapati himself, offered to give each of them a boon. 
          Hearing that their lord was willing to confer on them their choice blessings, those excellent ladies felt transports of joy. 
          Kadru wished to have for sons a thousand snakes all of equal splendour. 
          And Vinata wished to bring forth two sons surpassing the thousand offsprings of Kadru in strength, energy, size of body, and prowess. 
          Unto Kadru her lord gave that boon about a multitude of offspring. 
          And unto Vinata also, Kasyapa said, 'Be it so!' Then Vinata, having; obtained her prayer, rejoiced greatly. 
          Obtaining two sons of superior prowess, she regarded her boon fulfilled. 
          Kadru also obtained her thousand sons of equal splendour. 
          'Bear the embryos carefully,' said Kasyapa, and then he went into the forest, leaving his two wives pleased with his blessings.'''

# Parse the text with spaCy; textacy's extractors accept the resulting Doc directly,
# so a separate textacy.make_spacy_doc() call is not needed.
doc = nlp(text)
triples = textacy.extract.subject_verb_object_triples(doc)

summary = ''
for i, (subject, verb, obj) in enumerate(triples):
    print('Fact ' + str(i + 1) + ': ' + str(subject) + ' : ' + str(verb) + ' : ' + str(obj))
    summary += 'Fact ' + str(i + 1) + ': ' + str(obj) + ' '

Results are as follows:
    Fact 1: I : shall narrate : history
    Fact 2: I : heard : it
    Fact 3: they : became : wives
    Fact 4: Kasyapa : derived : pleasure
    Fact 5: ladies : felt : transports
    Fact 6: Kadru : wished : have
    Fact 7: Vinata : wished : to bring
    Fact 8: lord : gave : boon
    Fact 9: Kasyapa : said : Be
    Fact 10: Vinata : obtained : prayer
    Fact 11: she : regarded : boon
    Fact 12: Kadru : obtained : sons

I have also tried the following (a sketch of how they are called is after the list):

textacy.extract.words
textacy.extract.entities
textacy.extract.ngrams
textacy.extract.noun_chunks
textacy.ke.textrank
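
For reference, a minimal sketch of how these extractors can be called on the doc above; module paths and signatures differ between textacy versions, so this is illustrative only:

import textacy.extract
import textacy.ke   # keyterm module; lives elsewhere in some textacy versions

# `doc` is the spaCy Doc created from `text` above
words = list(textacy.extract.words(doc))
entities = list(textacy.extract.entities(doc))
bigrams = list(textacy.extract.ngrams(doc, 2))
chunks = list(textacy.extract.noun_chunks(doc))
keyterms = textacy.ke.textrank(doc)   # (term, score) pairs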

Everything runs as described in the book, but the results are not perfect. I would like something like "Kasyapa married the sisters Kadru and Vinata" or "Kasyapa gave Kadru and Vinata a boon". Can you suggest how I can do this, or recommend an alternative package?
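
For what it is worth, here is a rough sketch of the direction I have in mind, using only spaCy's dependency parse (the helper names expand and fuller_facts are made up for illustration, and the exact phrases depend on how the parser analyses each sentence): expanding each subject and object to its full subtree turns the bare heads into fuller phrases such as "they became the wives of Kasyapa".

import spacy

nlp = spacy.load('en_core_web_sm')

def expand(token):
    # Join the token's whole dependency subtree in document order, so that
    # 'wives' comes out as 'the wives of Kasyapa' rather than the bare head noun.
    return ' '.join(t.text for t in sorted(token.subtree, key=lambda t: t.i))

def fuller_facts(doc):
    # Pair each verb's expanded subject(s) with its expanded object(s)/attribute(s).
    facts = []
    for verb in doc:
        if verb.pos_ != 'VERB':
            continue
        subjects = [t for t in verb.lefts if t.dep_ in ('nsubj', 'nsubjpass')]
        objects = [t for t in verb.rights if t.dep_ in ('dobj', 'attr', 'dative', 'oprd')]
        for s in subjects:
            for o in objects:
                facts.append(expand(s) + ' ' + verb.text + ' ' + expand(o))
    return facts

for fact in fuller_facts(nlp(text)):   # `text` is the passage defined above
    print(fact)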


1 Answer


Just an update. I have been able to run PageRank on the "Sauti" sentences. Here are the results in descending order of pagerank score:

(0.0869526908422304, ['O', 'Brahmana', ',', 'in', 'the', 'golden', 'age', ',', 'Prajapati', 'had', 'two', 'daughters', '.']), 
(0.08675152795526771, ['Named', 'Kadru', 'and', 'Vinata', ',', 'they', 'became', 'the', 'wives', 'of', 'Kasyapa', '.']), 
(0.08607926397402169, ['And', 'Vinata', 'wished', 'to', 'bring', 'forth', 'two', 'sons', 'surpassing', 'the', 'thousand', 'offsprings', 'of', 'Kadru', 'in', 'strength', ',', 'energy', ',', 'size', 'of', 'body', ',', 'and', 'prowess', '.']), 
(0.08096858541855065, ['Kasyapa', 'derived', 'great', 'pleasure', 'from', 'his', 'two', 'wedded', 'wives', 'and', 'being', 'gratified', 'he', ',', 'resembling', 'Prajapati', 'himself', ',', 'offered', 'to', 'give', 'each', 'of', 'them', 'a', 'boon', '.']), 
(0.08025844559654187, ['And', 'unto', 'Vinata', 'also', ',', 'Kasyapa', 'said', ',', '("\'Be",', "'VBD", 'it', 'so', '!', '("\'",', '"\'\'"),', 'Then', 'Vinata', ',', 'having', ';', 'obtained', 'her', 'prayer', ',', 'rejoiced', 'greatly', '.']), 
(0.07764697882919071, ['Obtaining', 'two', 'sons', 'of', 'superior', 'prowess', ',', 'she', 'regarded', 'her', 'boon', 'fulfilled', '.']), 
(0.07717129674341844, ['("\'Bear",', "'IN", 'the', 'embryos', 'carefully', ',', '("\'",', '"\'\'"),', 'said', 'Kasyapa', ',', 'and', 'then', 'he', 'went', 'into', 'the', 'forest', ',', 'leaving', 'his', 'two', 'wives', 'pleased', 'with', 'his', 'blessings', '.']), 
(0.0768816552210493, ['Kadru', 'also', 'obtained', 'her', 'thousand', 'sons', 'of', 'equal', 'splendour', '.']), 
(0.07172005226142254, ['Kadru', 'wished', 'to', 'have', 'for', 'sons', 'a', 'thousand', 'snakes', 'all', 'of', 'equal', 'splendour', '.']), 
(0.06953411123175395, ['Unto', 'Kadru', 'her', 'lord', 'gave', 'that', 'boon', 'about', 'a', 'multitude', 'of', 'offspring', '.']), 
(0.06943939082844, ['Sauti\\', 'said', ',', '("\'",', '"\'\'"),', 'O', 'thou', 'that', 'art', 'blest', 'with', 'longevity', ',', 'I', 'shall', 'narrate', 'the', 'history', 'of', 'Astika', 'as', 'I', 'heard', 'it', 'from', 'my', 'father', '.']), 
(0.06888390365265022, ['O', 'sinless', 'one', ',', 'the', 'sisters', 'were', 'endowed', 'with', 'wonderful', 'beauty', '.']), 
(0.0677120974454628, ['Hearing', 'that', 'their', 'lord', 'was', 'willing', 'to', 'confer', 'on', 'them', 'their', 'choice', 'blessings', ',', 'those', 'excellent', 'ladies', 'felt', 'transports', 'of', 'joy', '.'])]   

The results are not what I wanted, but they are impressive. I used the following libraries (a rough sketch of how they fit together is below the imports):

import nltk.tokenize as tk 
from nltk import sent_tokenize, word_tokenize
from nltk.cluster.util import cosine_distance
from nltk.corpus import brown, stopwords
import networkx as nx
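
Here is a minimal sketch of one way to wire these libraries together (the helper names sentence_similarity and rank_sentences are illustrative, and it assumes a simple bag-of-words similarity between sentences rather than being my exact implementation):

import numpy as np
import networkx as nx
from nltk import sent_tokenize, word_tokenize
from nltk.cluster.util import cosine_distance
from nltk.corpus import stopwords   # may need nltk.download('punkt') and nltk.download('stopwords')

def sentence_similarity(sent1, sent2, stop_words):
    # Bag-of-words vectors over the union vocabulary;
    # cosine_distance returns 1 - cosine similarity.
    words1 = [w.lower() for w in sent1 if w.isalnum() and w.lower() not in stop_words]
    words2 = [w.lower() for w in sent2 if w.isalnum() and w.lower() not in stop_words]
    if not words1 or not words2:
        return 0.0
    vocab = sorted(set(words1 + words2))
    v1 = [words1.count(w) for w in vocab]
    v2 = [words2.count(w) for w in vocab]
    return 1 - cosine_distance(v1, v2)

def rank_sentences(text, top_n=3):
    stop_words = set(stopwords.words('english'))
    sentences = [word_tokenize(s) for s in sent_tokenize(text)]
    n = len(sentences)
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                sim[i, j] = sentence_similarity(sentences[i], sentences[j], stop_words)
    # PageRank over the sentence-similarity graph; highest-scoring sentences first.
    scores = nx.pagerank(nx.from_numpy_array(sim))
    ranked = sorted(((scores[i], s) for i, s in enumerate(sentences)),
                    key=lambda pair: pair[0], reverse=True)
    return ranked[:top_n]

print(rank_sentences(text))   # `text` is the "Sauti" passage from the question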

Just wanted to share this with everyone.

Thanks

Answered 2020-08-27T16:09:26.417