For a batch of embarrassingly parallel jobs, the conversion from a for loop to a parallel map is pretty natural.
>>> import multiprocess as mp
>>> # build a target function
>>> def doit(x):
...     return x**2 - 1
...
>>> x = range(10)
>>> # the for loop
>>> y = []
>>> for i in x:
...     y.append(doit(i))
...
>>> y
[-1, 0, 3, 8, 15, 24, 35, 48, 63, 80]
So how do you parallelize this function?
>>> # convert the for loop to a map (still serial)
>>> y = map(doit, x)
>>> y
[-1, 0, 3, 8, 15, 24, 35, 48, 63, 80]
>>>
>>> # build a worker pool for parallel tasks
>>> p = mp.Pool()
>>> # do blocking parallel
>>> y = p.map(doit, x)
>>> y
[-1, 0, 3, 8, 15, 24, 35, 48, 63, 80]
>>>
>>> # use an iterator (non-blocking)
>>> y = p.imap(doit, x)
>>> y
<multiprocess.pool.IMapIterator object at 0x10358d150>
>>> print list(y)
[-1, 0, 3, 8, 15, 24, 35, 48, 63, 80]
>>> # do asynchronous parallel
>>> y = p.map_async(doit, x)
>>> y
<multiprocess.pool.MapResult object at 0x10358d1d0>
>>> print y.get()
[-1, 0, 3, 8, 15, 24, 35, 48, 63, 80]
>>>
>>> # or if you like for loops, there's always this…
>>> y = p.imap_unordered(doit, x)
>>> z = []
>>> for i in iter(y):
...     z.append(i)
...
>>> z
[-1, 0, 3, 8, 15, 24, 35, 48, 63, 80]
This last form is an unordered iterator, and it tends to be the fastest... but then you can't care about which order the results come back in: they are unordered, and are not guaranteed to arrive in the same order they were submitted.
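If you do need to match results back to their inputs while keeping the unordered iterator, one minimal sketch (not part of the session above; doit_indexed is a helper name introduced here) is to carry the index along with each value:

>>> # sketch: tag each input with its index so unordered results can be matched up
>>> # (doit_indexed is a hypothetical helper, not from the original session)
>>> def doit_indexed(pair):
...     i, xi = pair
...     return i, doit(xi)
...
>>> results = dict(p.imap_unordered(doit_indexed, enumerate(x)))
>>> [results[i] for i in sorted(results)]
[-1, 0, 3, 8, 15, 24, 35, 48, 63, 80]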
Also note that I'm using multiprocess (a fork) instead of multiprocessing... purely because multiprocess does a better job with interactively defined functions. Otherwise, the code above is the same for multiprocessing.
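For completeness, here is a minimal sketch of the same map as a standalone script with the standard-library multiprocessing; the file layout is my own, and the point is just that the target function sits at module level (so it pickles) and the pool is created under the __main__ guard:

# sketch: the same computation with the standard library, as a script
from multiprocessing import Pool

def doit(x):                        # module-level, so multiprocessing can pickle it
    return x**2 - 1

if __name__ == '__main__':          # guard needed where workers are spawned (e.g. Windows)
    p = Pool()                      # defaults to one worker per CPU
    print(p.map(doit, range(10)))   # [-1, 0, 3, 8, 15, 24, 35, 48, 63, 80]
    p.close()
    p.join()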