
Suppose I have a paragraph of text. I split it into sentences with sent_tokenize:

variable = ['By the 1870s the scientific community and much of the general public had accepted evolution as a fact.',
    'However, many favoured competing explanations and it was not until the emergence of the modern evolutionary synthesis from the 1930s to the 1950s that a broad consensus developed in which natural selection was the basic mechanism of evolution.',
    'Darwin published his theory of evolution with compelling evidence in his 1859 book On the Origin of Species, overcoming scientific rejection of earlier concepts of transmutation of species.']

Now I split each sentence into words and append them to a variable. How can I find the two sentences that share the most words? I don't know how to do this. If I have 10 sentences, that would be 90 checks (one between every pair of sentences). Thanks.
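As a side note, if each unordered pair is checked only once, 10 sentences need 45 comparisons rather than 90; `itertools.combinations` from the standard library enumerates exactly those pairs. A quick sketch:

```python
from itertools import combinations

# Each unordered pair of 10 sentences appears exactly once:
# n * (n - 1) / 2 = 10 * 9 / 2 = 45 comparisons.
pairs = list(combinations(range(10), 2))
print(len(pairs))  # 45
```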


2 Answers


You can use the intersection of Python sets.

If you have three sentences like these:

a = "a b c d"
b = "a c x y"
c = "a q v"

you can check how many identical words appear in two of them like this:

sameWords = set.intersection(set(a.split(" ")), set(c.split(" ")))
numberOfWords = len(sameWords)
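Running that snippet on the sentences `a` and `c` from above illustrates the idea: only the word `"a"` occurs in both, so the intersection has size 1.

```python
a = "a b c d"
c = "a q v"

# Words shared by both sentences; only "a" appears in both.
sameWords = set.intersection(set(a.split(" ")), set(c.split(" ")))
print(sameWords)       # {'a'}
print(len(sameWords))  # 1
```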

With that, you can iterate over your list of sentences and find the two that share the most words. That gives us:

sentences = ["a b c d", "a d e f", "c x y", "a b c d x"]

def similar(s1, s2):
    sameWords = set.intersection(set(s1.split(" ")), set(s2.split(" ")))
    return len(sameWords)

currentSimilar = 0
s1 = ""
s2 = ""

# Compare each unordered pair once; slicing from i + 1 avoids comparing
# a sentence with itself (more robust than an `is` identity check, which
# can misfire when the list contains duplicate strings) and halves the work.
for i, sentence in enumerate(sentences):
    for sentence2 in sentences[i + 1:]:
        similarity = similar(sentence, sentence2)
        if similarity > currentSimilar:
            s1 = sentence
            s2 = sentence2
            currentSimilar = similarity

print(s1, s2)

If performance is a concern, there may be a dynamic-programming approach to this problem.
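Before reaching for dynamic programming, one cheap optimization is to split each sentence into a word set only once, instead of re-splitting both sentences on every comparison inside the nested loop. A sketch using the same example list:

```python
from itertools import combinations

sentences = ["a b c d", "a d e f", "c x y", "a b c d x"]

# Split each sentence exactly once, up front.
word_sets = [set(s.split()) for s in sentences]

# Pick the pair of indices whose word sets overlap the most.
best = max(combinations(range(len(sentences)), 2),
           key=lambda pair: len(word_sets[pair[0]] & word_sets[pair[1]]))

print(sentences[best[0]], "|", sentences[best[1]])
```

For this list the winner is the pair `"a b c d"` and `"a b c d x"`, which share four words.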

Answered 2013-11-07T15:57:33.833
import itertools

sentences = ["There is no subtle meaning in this.", "Don't analyze this!", "What is this sentence?"]

# Pair each sentence's index with its set of words. Note that strip()
# only removes punctuation at the ends of the whole sentence, not
# punctuation attached to interior words.
decomposedsentences = ((index, set(sentence.strip(".?!,").split(" ")))
                       for index, sentence in enumerate(sentences))

# Take the pair of sentences whose word sets have the largest intersection.
s1, s2 = max(itertools.combinations(decomposedsentences, 2),
             key=lambda pair: len(pair[0][1] & pair[1][1]))

print("The two sentences with the most common words:", sentences[s1[0]], sentences[s2[0]])
Answered 2013-11-07T16:21:08.367