Following on from M. O'Neill's excellent paper, I'm trying to implement some lazy, infinite versions of the Sieve of Eratosthenes in Python. I was surprised to find that the heap-based version, which the paper claims should run faster, was actually more than twice as slow for me.
The paper contains two examples. One is based on a dict, which I have translated (from Haskell) thus:
from itertools import count

def dict_sieve():
    yield 2
    yield 3
    candidates = count(5, 2)
    composites = {9: {3}}  # map composites to their prime factors
    for candidate in candidates:
        try:
            factors = composites.pop(candidate)
        except KeyError:  # if it's not in the dict, it's prime
            yield candidate
            composites[candidate**2] = {candidate}  # Euler's optimization: start from prime**2
        else:
            for prime in factors:  # go through the prime factors and increment their keys
                try:
                    composites[candidate + prime*2].add(prime)  # use prime*2 because we are ignoring evens
                except KeyError:
                    composites[candidate + prime*2] = {prime}
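As a quick sanity check, the generator produces the expected sequence:

from itertools import islice

print(list(islice(dict_sieve(), 10)))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]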
The second example in the paper demonstrates the use of a priority queue as the data structure. It also uses lazy lists rather than a simple increment, which I have not done, in the interests of a fair test. (Besides, I did try using instances of itertools.count for the lazy lists, but found it ran slightly slower.)
from itertools import count
from heapq import heappush, heapreplace

def heap_sieve():
    yield 2
    yield 3
    candidates = count(5, 2)
    composites = [(9, 3)]  # a priority queue of composite/factor pairs, keyed by composite
    for candidate in candidates:
        prime_flag = True
        while composites[0][0] == candidate:  # loop because there may be duplicates
            prime_flag = False  # signal to the if statement below
            composite, prime = composites[0]
            heapreplace(composites, (composite + prime*2, prime))
        if prime_flag:
            yield candidate
            heappush(composites, (candidate**2, candidate))
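Both generators agree on their output; a check along these lines passes:

from itertools import islice

assert list(islice(dict_sieve(), 1000)) == list(islice(heap_sieve(), 1000))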
I timed both, along with an 'eager' version, not reproduced here, which produces a list of all the primes below a limit:
In [44]: from itertools import islice
In [45]: %timeit list(islice(dict_sieve(), 100000))
...: %timeit list(islice(heap_sieve(), 100000))
...: %timeit eager_sieve(1299710) # 1299709 is the 100,000th prime
...:
1 loops, best of 3: 2.12 s per loop
1 loops, best of 3: 4.94 s per loop
1 loops, best of 3: 677 ms per loop
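(For reference, the 'eager' version is just a conventional bounded sieve. A minimal sketch of its general shape, though not necessarily the exact code I timed, would be:)

def eager_sieve(limit):
    # plain bounded Sieve of Eratosthenes -- illustrative sketch only
    is_prime = [True] * limit
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(limit**0.5) + 1):
        if is_prime[n]:
            is_prime[n*n::n] = [False] * len(range(n*n, limit, n))
    return [n for n, flag in enumerate(is_prime) if flag]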
It's no surprise that the 'eager' version is faster: it's basically a trade-off between memory use, the inconvenience of having to specify an upper limit, and CPU time. However, I was surprised to find the heapq version so much slower, when the paper claims it should be more efficient. Is there a problem with my implementation? Or is it simply that, as we all know, dicts are super-fast (and heapq is comparatively slow)?
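To help frame the question, here is a micro-benchmark of my own (not from the paper) isolating the per-candidate operations each version performs: the dict version does a pop plus an insert, both amortized O(1), while the heap version does a heapreplace, which is O(log n) and compares tuples while sifting:

import timeit

# one million entries in each structure, mirroring the sieve's state
dict_setup = "d = dict((i, {3}) for i in range(0, 2000000, 2))"
dict_stmt = "d[1000000] = d.pop(1000000)"  # pop + re-insert, amortized O(1)

heap_setup = "import heapq\nh = [(i, 3) for i in range(0, 2000000, 2)]"
heap_stmt = ("composite, prime = h[0]\n"
             "heapq.heapreplace(h, (composite + 6, prime))")  # O(log n) sift

print(timeit.timeit(dict_stmt, dict_setup, number=100000))
print(timeit.timeit(heap_stmt, heap_setup, number=100000))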