
From Section 15.2 of Programming Pearls.

The C code can be viewed here: http://www.cs.bell-labs.com/cm/cs/pearls/longdup.c

When I implement it in Python using a suffix array:

example = open("iliad10.txt").read()
def comlen(p, q):
    # length of the common prefix of p and q
    i = 0
    for x in zip(p, q):
        if x[0] == x[1]:
            i += 1
        else:
            break
    return i

suffix_list = []
example_len = len(example)
idx = list(range(example_len))
# sort suffix start indices by comparing the full suffixes (Python 2 cmp-style sort)
idx.sort(cmp = lambda a, b: cmp(example[a:], example[b:]))  # VERY VERY SLOW

max_len = -1
for i in range(example_len - 1):
    this_len = comlen(example[idx[i]:], example[idx[i+1]:])
    print this_len
    if this_len > max_len:
        max_len = this_len
        maxi = i

I found that the idx.sort step is very slow. I think it is slow because Python has to pass the substrings by value instead of by pointer (as the C code above does).
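
A tiny illustration of that cost (the string below is made up, not taken from the Iliad file; sys.getsizeof just shows that a slice is a full copy):

import sys
s = "x" * 1000000
suffix = s[1:]               # slicing copies the whole tail of the string
print(sys.getsizeof(suffix)) # roughly 1 MB - about as large as the original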

The test file can be downloaded from here.

The C code finishes in only 0.3 seconds:

time cat iliad10.txt |./longdup 
On this the rest of the Achaeans with one voice were for
respecting the priest and taking the ransom that he offered; but
not so Agamemnon, who spoke fiercely to him and sent him roughly
away. 

real    0m0.328s
user    0m0.291s
sys 0m0.006s

But the Python code never finishes on my computer (I waited 10 minutes and then killed it).

Does anyone know how to make the code efficient? (e.g., less than 10 seconds)

4 Answers

My solution is based on suffix arrays. It is constructed by prefix doubling of the longest common prefix. The worst-case complexity is O(n (log n)^2). The file "iliad.mb.txt" takes 4 seconds on my laptop. The longest_common_substring function is short and can easily be modified, e.g. to search for the 10 longest non-overlapping substrings. This Python code is faster than the original C code from the question if the repeated strings are longer than 10000 characters.

from itertools import groupby
from operator import itemgetter

def longest_common_substring(text):
    """Get the longest common substrings and their positions.
    >>> longest_common_substring('banana')
    {'ana': [1, 3]}
    >>> text = "not so Agamemnon, who spoke fiercely to "
    >>> sorted(longest_common_substring(text).items())
    [(' s', [3, 21]), ('no', [0, 13]), ('o ', [5, 20, 38])]

    This function can be easily modified for any criteria, e.g. for searching
    the ten longest non-overlapping repeated substrings.
    """
    sa, rsa, lcp = suffix_array(text)
    maxlen = max(lcp)
    result = {}
    for i in range(1, len(text)):
        if lcp[i] == maxlen:
            j1, j2, h = sa[i - 1], sa[i], lcp[i]
            assert text[j1:j1 + h] == text[j2:j2 + h]
            substring = text[j1:j1 + h]
            if substring not in result:
                result[substring] = [j1]
            result[substring].append(j2)
    return dict((k, sorted(v)) for k, v in result.items())

def suffix_array(text, _step=16):
    """Analyze all common strings in the text.

    Short substrings of the length _step are first pre-sorted. The results are
    then repeatedly merged so that the guaranteed number of compared
    characters is doubled in every iteration until all substrings are
    sorted exactly.

    Arguments:
        text:  The text to be analyzed.
        _step: Is only for optimization and testing. It is the optimal length
               of substrings used for initial pre-sorting. The bigger value is
               faster if there is enough memory. Memory requirements are
               approximately (estimate for 32 bit Python 3.3):
                   len(text) * (29 + (_size + 20 if _size > 2 else 0)) + 1MB

    Return value:      (tuple)
      (sa, rsa, lcp)
        sa:  Suffix array                  for i in range(1, size):
               assert text[sa[i-1]:] < text[sa[i]:]
        rsa: Reverse suffix array          for i in range(size):
               assert rsa[sa[i]] == i
        lcp: Longest common prefix         for i in range(1, size):
               assert text[sa[i-1]:sa[i-1]+lcp[i]] == text[sa[i]:sa[i]+lcp[i]]
               if sa[i-1] + lcp[i] < len(text):
                   assert text[sa[i-1] + lcp[i]] < text[sa[i] + lcp[i]]
    >>> suffix_array(text='banana')
    ([5, 3, 1, 0, 4, 2], [3, 2, 5, 1, 4, 0], [0, 1, 3, 0, 0, 2])

    Explanation: 'a' < 'ana' < 'anana' < 'banana' < 'na' < 'nana'
    The Longest Common String is 'ana': lcp[2] == 3 == len('ana')
    It is between  tx[sa[1]:] == 'ana' < 'anana' == tx[sa[2]:]
    """
    tx = text
    size = len(tx)
    step = min(max(_step, 1), len(tx))
    sa = list(range(len(tx)))
    sa.sort(key=lambda i: tx[i:i + step])
    grpstart = size * [False] + [True]  # a boolean map for iteration speedup.
    # It helps to skip yet resolved values. The last value True is a sentinel.
    rsa = size * [None]
    stgrp, igrp = '', 0
    for i, pos in enumerate(sa):
        st = tx[pos:pos + step]
        if st != stgrp:
            grpstart[igrp] = (igrp < i - 1)
            stgrp = st
            igrp = i
        rsa[pos] = igrp
        sa[i] = pos
    grpstart[igrp] = (igrp < size - 1 or size == 0)
    while grpstart.index(True) < size:
        # assert step <= size
        nextgr = grpstart.index(True)
        while nextgr < size:
            igrp = nextgr
            nextgr = grpstart.index(True, igrp + 1)
            glist = []
            for ig in range(igrp, nextgr):
                pos = sa[ig]
                if rsa[pos] != igrp:
                    break
                newgr = rsa[pos + step] if pos + step < size else -1
                glist.append((newgr, pos))
            glist.sort()
            for ig, g in groupby(glist, key=itemgetter(0)):
                g = [x[1] for x in g]
                sa[igrp:igrp + len(g)] = g
                grpstart[igrp] = (len(g) > 1)
                for pos in g:
                    rsa[pos] = igrp
                igrp += len(g)
        step *= 2
    del grpstart
    # create LCP array
    lcp = size * [None]
    h = 0
    for i in range(size):
        if rsa[i] > 0:
            j = sa[rsa[i] - 1]
            while i != size - h and j != size - h and tx[i + h] == tx[j + h]:
                h += 1
            lcp[rsa[i]] = h
            if h > 0:
                h -= 1
    if size > 0:
        lcp[0] = 0
    return sa, rsa, lcp
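
A small usage sketch on the test file mentioned above (iliad.mb.txt; any plain-text file works the same way):

with open('iliad.mb.txt') as f:
    text = f.read()
result = longest_common_substring(text)
for substring, positions in result.items():
    print("%d %s" % (len(substring), positions))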

I prefer this solution over a more complicated O(n log n) one because Python has a very fast list sorting algorithm (Timsort). Python's sort is probably faster than the necessary linear-time operations in the method from that article, which should be O(n) only under very special assumptions of random strings together with a small alphabet (typical for DNA genome analysis). I read in Gog 2011 that the worst-case O(n log n) of my algorithm can in practice be faster than many O(n) algorithms that cannot use the CPU memory cache.

The code based on grow_chains in another answer is 19 times slower than the original example from the question if the text contains an 8 kB long repeated string. Long repeated texts are not typical for classical literature, but they frequently occur e.g. in "independent" school homework collections. The program should not freeze on such input.
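
A hedged sketch of that kind of stress input (the block size and filler length are arbitrary choices, not taken from any real data), fed to the longest_common_substring function above:

import random, string
random.seed(0)
alphabet = string.ascii_lowercase + ' '
block = ''.join(random.choice(alphabet) for _ in range(8192))   # ~8 kB repeated block
filler = ''.join(random.choice(alphabet) for _ in range(50000))
stress_text = filler + block + filler[::-1] + block
found = max(longest_common_substring(stress_text), key=len)
print(len(found))   # at least 8192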

I wrote an example and tested it with the same code for Python 2.7 and 3.3 - 3.6.

Answered 2012-12-03T23:45:41.637

The main problem seems to be that Python does slicing by copying: https://stackoverflow.com/a/5722068/538551

You'll have to use a memoryview instead to get a reference instead of a copy. When I did this, the program hung after the idx.sort function (which was very fast).

I'm sure with a little work, you can get the rest working.
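
A minimal sketch of the idea, assuming Python 3 semantics (memoryview there needs a bytes object, so the question's file is read in binary mode):

data = open("iliad10.txt", "rb").read()   # bytes, not str
mv = memoryview(data)
suffix = mv[1000:]          # O(1): a view into the same buffer, no copy
print(suffix.obj is data)   # True - still backed by the original bytes object
print(bytes(suffix[:20]))   # materialize only the few bytes actually needed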

EDIT:

The above change will not work as a drop-in replacement because cmp does not work the same way as strcmp. For example, try the following C code:

#include <stdio.h>
#include <string.h>

int main() {
    char* test1 = "ovided by The Internet Classics Archive";
    char* test2 = "rovided by The Internet Classics Archive.";
    printf("%d\n", strcmp(test1, test2));
}

And compare the result to this Python:

test1 = "ovided by The Internet Classics Archive"
test2 = "rovided by The Internet Classics Archive."
print(cmp(test1, test2))

The C code prints -3 on my machine, while the Python version prints -1. It looks like the example C code is abusing the return value of strcmp (it IS used in qsort, after all). I couldn't find any documentation on when strcmp returns something other than [-1, 0, 1], but adding a printf to pstrcmp in the original code showed lots of values outside of that range (3, -31, 5 were the first 3 values).

To make sure that -3 wasn't some error code: if we reverse test1 and test2, we get 3.

EDIT:

The above is interesting trivia, but not actually correct in terms of affecting either code block. I realized this just as I shut my laptop and left the wifi zone... Really should double-check everything before I hit Save.

FWIW, cmp most certainly works on memoryview objects (prints -1 as expected):

print(cmp(memoryview(test1), memoryview(test2)))

I'm not sure why the code isn't working as expected. Printing out the list on my machine doesn't look as expected. I'll look into this and try to find a better solution instead of grasping at straws.

Answered 2012-11-26T07:30:28.267

The translation of the algorithm into Python:

from itertools import imap, izip, starmap, tee
from os.path   import commonprefix

def pairwise(iterable): # itertools recipe
    a, b = tee(iterable)
    next(b, None)
    return izip(a, b)

def longest_duplicate_small(data):
    suffixes = sorted(data[i:] for i in xrange(len(data))) # O(n*n) in memory
    return max(imap(commonprefix, pairwise(suffixes)), key=len)

buffer() allows getting a substring without copying:

def longest_duplicate_buffer(data):
    n = len(data)
    sa = sorted(xrange(n), key=lambda i: buffer(data, i)) # suffix array
    def lcp_item(i, j):  # find longest common prefix array item
        start = i
        while i < n and data[i] == data[i + j - start]:
            i += 1
        return i - start, start
    size, start = max(starmap(lcp_item, pairwise(sa)), key=lambda x: x[0])
    return data[start:start + size]

It takes 5 seconds on my machine for the iliad.mb.txt.

In principle it is possible to find the duplicate in O(n) time and O(n) memory using a suffix array augmented with an LCP array.
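
A hedged sketch of the linear-time part (Kasai's algorithm for the LCP array, given an already built suffix array; the O(n) suffix array construction itself, e.g. SA-IS, is not shown):

def kasai_lcp(s, sa):
    # lcp[k] = length of the common prefix of the suffixes at sa[k-1] and sa[k]
    n = len(s)
    rank = [0] * n
    for k, p in enumerate(sa):
        rank[p] = k
    lcp = [0] * n
    h = 0
    for i in range(n):          # iterate suffixes in text order, reusing h
        if rank[i] > 0:
            j = sa[rank[i] - 1]
            while i + h < n and j + h < n and s[i + h] == s[j + h]:
                h += 1
            lcp[rank[i]] = h
            if h:
                h -= 1          # the next suffix can lose at most one character
    return lcp

The longest duplicate then starts at sa[lcp.index(max(lcp))] and has length max(lcp).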


Note: the *_memoryview() version is deprecated in favor of the *_buffer() version.

More memory efficient version (compared to longest_duplicate_small()):

def cmp_memoryview(a, b):
    for x, y in izip(a, b):
        if x < y:
            return -1
        elif x > y:
            return 1
    return cmp(len(a), len(b))

def common_prefix_memoryview((a, b)):
    for i, (x, y) in enumerate(izip(a, b)):
        if x != y:
            return a[:i]
    return a if len(a) < len(b) else b

def longest_duplicate(data):
    mv = memoryview(data)
    suffixes = sorted((mv[i:] for i in xrange(len(mv))), cmp=cmp_memoryview)
    result = max(imap(common_prefix_memoryview, pairwise(suffixes)), key=len)
    return result.tobytes()

It takes 17 seconds on my machine for the iliad.mb.txt. The result is:

On this the rest of the Achaeans with one voice were for respecting
the priest and taking the ransom that he offered; but not so Agamemnon,
who spoke fiercely to him and sent him roughly away. 

I had to define custom functions to compare memoryview objects because memoryview comparison either raises an exception in Python 3 or produces a wrong result in Python 2:

>>> s = b"abc"
>>> memoryview(s[0:]) > memoryview(s[1:])
True
>>> memoryview(s[0:]) < memoryview(s[1:])
True

Related questions:

Find the longest repeating string and the number of times it repeats in a given string

finding long repeated substrings in a massive string

Answered 2012-11-26T23:18:44.520

This version takes about 17 secs on my circa-2007 desktop, using a totally different algorithm:

#!/usr/bin/env python

ex = open("iliad.mb.txt").read()

chains = dict()

# populate initial chains dictionary
for (a,b) in enumerate(zip(ex,ex[1:])) :
    s = ''.join(b)
    if s not in chains :
        chains[s] = list()

    chains[s].append(a)

def grow_chains(chains) :
    new_chains = dict()
    for (string,pos) in chains :
        offset = len(string)
        for p in pos :
            if p + offset >= len(ex) : break

            # add one more character
            s = string + ex[p + offset]

            if s not in new_chains :
                new_chains[s] = list()

            new_chains[s].append(p)
    return new_chains

# grow and filter, grow and filter
while len(chains) > 1 :
    print 'length of chains', len(chains)

    # remove chains that appear only once
    chains = [(i,chains[i]) for i in chains if len(chains[i]) > 1]

    print 'non-unique chains', len(chains)
    print [i[0] for i in chains[:3]]

    chains = grow_chains(chains)

The basic idea is to create a list of substrings and the positions where they occur, thus eliminating the need to compare the same strings again and again. The resulting list looks like [('ind him, but', [466548, 739011]), (' bulwark bot', [428251, 428924]), (' his armour,', [121559, 124919, 193285, 393566, 413634, 718953, 760088])]. Unique strings are removed. Then every list member grows by 1 character and a new list is created. Unique strings are removed again. And so on and so forth...

Answered 2012-11-26T10:05:04.950