
I want to build a graph from a list of words with a Hamming distance of (say) 1, or to put it differently: two words are connected if they differ by only one letter (lol -> lot).

So given

words = [ lol, lot, bot ]

the graph would be

{
  'lol' : [ 'lot' ],
  'lot' : [ 'lol', 'bot' ],
  'bot' : [ 'lot' ]
}

The naive approach is to compare every word in the list with every other word and count the differing characters; sadly, this is an O(N^2) algorithm.

Which algorithm / data structure / strategy can I use to get better performance?

Also, let's assume only Latin characters, and all the words have the same length.


4 Answers


Assuming you store your dictionary in a set(), so that the lookup is O(1) on average (O(n) in the worst case).
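
For example, the word list from the question as a set, which the generator below looks words up in:

words = {'lol', 'lot', 'bot'}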

You can generate all valid words at Hamming distance 1 from a word:

>>> import string
>>> def neighbours(word):
...     for j in range(len(word)):
...         for d in string.ascii_lowercase:
...             word1 = ''.join(d if i==j else c for i,c in enumerate(word))
...             if word1 != word and word1 in words: yield word1
...
>>> {word: list(neighbours(word)) for word in words}
{'bot': ['lot'], 'lol': ['lot'], 'lot': ['bot', 'lol']}

If M is the length of a word and L is the length of the alphabet (i.e. 26), the worst-case time complexity of finding the neighbouring words with this approach is O(L*M*N).

The time complexity of the naive approach is O(N^2).

When is this approach better? When L*M < N, i.e., considering only lowercase letters, when M < N/26. (I am only considering the worst case here.)

Note: the average length of an English word is 5.1 letters. Thus, if your dictionary contains more than 132 words, you should consider this approach.

It is probably possible to achieve better performance than this, but this was really simple to implement.

Experimental benchmark:

The naive algorithm (A1):

from itertools import zip_longest
def hammingdist(w1,w2): return sum(1 if c1!=c2 else 0 for c1,c2 in zip_longest(w1,w2))
def graph1(words): return {word: [n for n in words if hammingdist(word,n) == 1] for word in words}

This algorithm (A2):

def graph2(words): return {word: list(neighbours(word)) for word in words}

Benchmark code:

import random
import string
from timeit import Timer

for dict_size in range(100,6000,100):
    words = set([''.join(random.choice(string.ascii_lowercase) for x in range(3)) for _ in range(dict_size)])
    t1 = Timer(lambda: graph1(words)).timeit(10)
    t2 = Timer(lambda: graph2(words)).timeit(10)
    print('%d,%f,%f' % (dict_size,t1,t2))

Output:

100,0.119276,0.136940
200,0.459325,0.233766
300,0.958735,0.325848
400,1.706914,0.446965
500,2.744136,0.545569
600,3.748029,0.682245
700,5.443656,0.773449
800,6.773326,0.874296
900,8.535195,0.996929
1000,10.445875,1.126241
1100,12.510936,1.179570
...

[plot of the data]

I ran another benchmark with smaller steps of N to take a closer look:

10,0.002243,0.026343
20,0.010982,0.070572
30,0.023949,0.073169
40,0.035697,0.090908
50,0.057658,0.114725
60,0.079863,0.135462
70,0.107428,0.159410
80,0.142211,0.176512
90,0.182526,0.210243
100,0.217721,0.218544
110,0.268710,0.256711
120,0.334201,0.268040
130,0.383052,0.291999
140,0.427078,0.312975
150,0.501833,0.338531
160,0.637434,0.355136
170,0.635296,0.369626
180,0.698631,0.400146
190,0.904568,0.444710
200,1.024610,0.486549
210,1.008412,0.459280
220,1.056356,0.501408
...

[plot of the data, smaller N steps]

You can see that the break-even point is quite low (around N = 100 for a dictionary of 3-letter words). Below it, the O(N^2) algorithm performs slightly better, but as N grows it is easily beaten by the O(L*M*N) algorithm.

For dictionaries with longer words, the O(L*M*N) algorithm remains linear in N, it just has a different slope, so the break-even point moves slightly to the right (around 130 for words of length 5).

Answered 2015-06-28T15:12:12.647

There is no need to depend on the alphabet size. Given the word bot, for example, insert it into a dictionary of word lists under the keys ?ot, b?t, bo?. Then, for each word list, connect all pairs.

import collections

d = collections.defaultdict(list)
with open('/usr/share/dict/words') as f:
    for line in f:
        for word in line.split():
            if len(word) == 6:
                # bucket the word under each of its wildcard keys,
                # one per position: ' pples', 'a ples', ...
                for i in range(len(word)):
                    d[word[:i] + ' ' + word[i + 1:]].append(word)

# two distinct words in the same bucket are at Hamming distance 1
pairs = [(word1, word2) for s in d.values() for word1 in s for word2 in s if word1 < word2]
print(len(pairs))
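
If you want the adjacency-list graph from the question rather than just the pair count, here is a minimal sketch reusing the buckets in d from above (the name graph is mine, not from the answer). Two distinct words at Hamming distance 1 share exactly one bucket, so each neighbour is appended only once:

graph = collections.defaultdict(list)
for bucket in d.values():
    for word1 in bucket:
        for word2 in bucket:
            if word1 != word2:
                graph[word1].append(word2)
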
Answered 2015-06-28T15:57:48.130

Ternary search trees support near-neighbour searching well.

If your dictionary is stored as a TST then, I believe, the average complexity of the lookups while building your graph would be close to O(N*log(N)) for a real-world word dictionary.

Also check the Efficient auto-complete with a ternary search tree article.
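
This answer gives no code, so as an illustration, here is a minimal TST sketch in Python with a near-neighbour search bounded by Hamming distance. It is not the article's implementation; the class and method names are mine, and it assumes all words have the same length, as stated in the question:

class TSTNode:
    def __init__(self, ch):
        self.ch = ch
        self.lo = self.eq = self.hi = None   # left / middle / right children
        self.is_word = False                 # marks the end of a stored word

class TST:
    def __init__(self):
        self.root = None

    def insert(self, word):
        self.root = self._insert(self.root, word, 0)

    def _insert(self, node, word, i):
        ch = word[i]
        if node is None:
            node = TSTNode(ch)
        if ch < node.ch:
            node.lo = self._insert(node.lo, word, i)
        elif ch > node.ch:
            node.hi = self._insert(node.hi, word, i)
        elif i < len(word) - 1:
            node.eq = self._insert(node.eq, word, i + 1)
        else:
            node.is_word = True
        return node

    def near(self, word, dist=1):
        """Collect stored words within Hamming distance `dist` of `word`."""
        out = []
        self._near(self.root, word, 0, dist, [], out)
        return out

    def _near(self, node, word, i, budget, prefix, out):
        if node is None:
            return
        self._near(node.lo, word, i, budget, prefix, out)   # try smaller chars
        cost = 0 if word[i] == node.ch else 1               # a mismatch spends budget
        if budget - cost >= 0:
            prefix.append(node.ch)
            if i == len(word) - 1:
                if node.is_word:
                    out.append(''.join(prefix))
            else:
                self._near(node.eq, word, i + 1, budget - cost, prefix, out)
            prefix.pop()
        self._near(node.hi, word, i, budget, prefix, out)   # try larger chars

words = ['lol', 'lot', 'bot']
tst = TST()
for w in words:
    tst.insert(w)
print({w: [n for n in tst.near(w) if n != w] for w in words})
# {'lol': ['lot'], 'lot': ['bot', 'lol'], 'bot': ['lot']}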

Answered 2015-06-28T16:31:35.917

This is a linear, O(N), algorithm, but with a large constant factor (R * L * 2). R is the radix (26 for the Latin alphabet). L is the median word length. 2 is the factor for the two wildcard operations, add and replace: abc -> aac (replace) and abc -> abca (add) are two operations, each producing a neighbour at Hamming distance 1.

It is written in Ruby. For 240k words it takes ~250 MB of RAM and 136 seconds on average hardware.

Graph implementation blueprint

class Node
  attr_reader :val, :edges

  def initialize(val)
    @val = val
    @edges = {}
  end

  def <<(node)
    @edges[node.val] ||= true
  end

  def connected?(node)
    @edges[node.val]
  end

  def inspect
    "Val: #{@val}, edges: #{@edges.keys * ', '}"
  end
end

class Graph
  attr_reader :vertices
  def initialize
    @vertices = {}
  end

  def <<(val)
    @vertices[val] = Node.new(val)
  end

  def connect(node1, node2)
    # print "connecting #{size} #{node1.val}, #{node2.val}\r"
    node1 << node2
    node2 << node1
  end

  def each
    @vertices.each do |val, node|
      yield [val, node]
    end
  end

  def get(val)
    @vertices[val]
  end
end

The algorithm itself

CHARACTERS = ('a'..'z').to_a
graph = Graph.new

# ~ 240 000 words
File.read("/usr/share/dict/words").each_line.each do |word|
  word = word.chomp
  graph << word.downcase
end

graph.each do |val, node|
  CHARACTERS.each do |char|
    i = 0
    while i <= val.size
      # "add": insert char before position i (yields a word one letter longer)
      node2 = graph.get(val[0, i] + char + val[i..-1])
      graph.connect(node, node2) if node2
      if i < val.size
        # "replace": substitute char at position i;
        # skip when char == val[i], which would connect the word to itself
        node2 = graph.get(val[0, i] + char + val[i+1..-1])
        graph.connect(node, node2) if node2 && node2 != node
      end
      i += 1
    end
  end
end
Answered 2015-06-28T15:40:24.970