
I have these two implementations for computing the length of a finite generator, while also keeping the generator's data for further processing:

def count_generator1(generator):
    '''- build a list with the generator data
       - get the length of the data
       - return both the length and the original data (in a list)
       WARNING: the memory use is unbounded, and infinite generators will block this'''
    l = list(generator)
    return len(l), l

import itertools

def count_generator2(generator):
    '''- get two generators from the original generator
       - get the length of the data from one of them
       - return both the length and the original data, as returned by tee
       WARNING: tee can use up an unbounded amount of memory, and infinite generators will block this'''
    for_length, saved = itertools.tee(generator, 2)
    return sum(1 for _ in for_length), saved
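
For reference, here is how both helpers behave (a quick sketch); note that saved from count_generator2 is a tee object rather than a list, so it can only be traversed once:

gen = (x * x for x in range(5))
print(count_generator1(gen))   # (5, [0, 1, 4, 9, 16])

gen = (x * x for x in range(5))
n, saved = count_generator2(gen)
print(n, list(saved))          # 5 [0, 1, 4, 9, 16]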

Both have drawbacks, and both get the job done. Could someone comment on them, or even suggest a better alternative?


3 Answers


If you have to do this, the first method is much better: since you are consuming all of the values anyway, itertools.tee() would have to store all of them too, which means a list will be more efficient.

To quote the documentation:

This itertool may require significant auxiliary storage (depending on how much temporary data needs to be stored). In general, if one iterator uses most or all of the data before another iterator starts, it is faster to use list() instead of tee().
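
A tiny illustration of the docs' point (my sketch): consuming one tee branch completely before touching the other forces tee to buffer every item, so you pay for the whole list's worth of storage plus tee's own overhead.

import itertools

gen = iter(range(5))
a, b = itertools.tee(gen)
length = sum(1 for _ in a)  # exhausts a; tee now buffers all items for b
print(length, list(b))      # 5 [0, 1, 2, 3, 4] -- replayed from tee's buffer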

Answered 2013-08-02T10:18:01.183

I ran timeit on a few approaches I could think of, on 64-bit Windows with Python 3.4.3:

>>> from timeit import timeit
>>> from textwrap import dedent as d
>>> timeit(
...     d("""
...     count = -1
...     for _ in s:
...         count += 1
...     count += 1
...     """),
...     "s = range(1000)",
... )
50.70772041983173
>>> timeit(
...     d("""
...     count = -1
...     for count, _ in enumerate(s):
...         pass
...     count += 1
...     """),
...     "s = range(1000)",
... )
42.636973504498656
>>> timeit(
...     d("""
...     count, _ = reduce(f, enumerate(range(1000)), (-1, -1))
...     count += 1
...     """),
...     d("""
...     from functools import reduce
...     def f(_, count):
...         return count
...     s = range(1000)
...     """),
... )
121.15513102540672
>>> timeit("count = sum(1 for _ in s)", "s = range(1000)")
58.179126025925825
>>> timeit("count = len(tuple(s))", "s = range(1000)")
19.777029680237774
>>> timeit("count = len(list(s))", "s = range(1000)")
18.145157531932
>>> timeit("count = len(list(1 for _ in s))", "s = range(1000)")
57.41422175998332

Shockingly, the fastest approach is to use a list (not even a tuple) to exhaust the iterator and get the length from there:

>>> timeit("count = len(list(s))", "s = range(1000)")
18.145157531932

Of course, that carries the risk of memory problems. The best low-memory alternative is to use enumerate on a no-op for loop:

>>> timeit(
...     d("""
...     count = -1
...     for count, _ in enumerate(s):
...         pass
...     count += 1
...     """),
...     "s = range(1000)",
... )
42.636973504498656
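
Wrapped up as a reusable helper (a sketch; the name ilen_enumerate is mine), the low-memory approach looks like this:

def ilen_enumerate(iterable):
    """Count items without materializing them."""
    count = -1
    for count, _ in enumerate(iterable):
        pass
    return count + 1

print(ilen_enumerate(x for x in range(1000)))  # 1000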

Cheers!

Answered 2015-07-10T21:17:45.317

If you don't need the length of the iterator before processing the data, you can use this helper method, which uses a future to bolt the counting onto the processing of the iterator/stream:

import asyncio

def ilen(iterator):
    """
    Get a future with the length of the iterator.
    The future will hold the length once the iterator is exhausted.
    @returns: <iterator, cnt-future>
    """
    def ilen_inner(iterator, future):
        cnt = 0
        for row in iterator:
            cnt += 1
            yield row
        future.set_result(cnt)
    # Note: creating a Future with no running event loop is deprecated
    # in newer Python versions.
    cnt_future = asyncio.Future()
    return ilen_inner(iterator, cnt_future), cnt_future

Usage looks like this:

data = db_connection.execute(query)
data, cnt = ilen(data)
solve_world_hunger(data)  # must fully consume data before reading the count
print(f"Processed {cnt.result()} items")
Answered 2020-10-27T12:30:27.537