
This code shows the structure of what I am trying to do.

import multiprocessing
from foo import really_expensive_to_compute_object

## Create a really complicated object that is *hard* to initialise.
T = really_expensive_to_compute_object(10) 

def f(x):
  return T.cheap_calculation(x)

P = multiprocessing.Pool(processes=64)
results = P.map(f, range(1000000))

print results

The problem is that each process starts by spending a lot of time recomputing T, instead of using the original T that was computed once. Is there a way to prevent this? T has a fast (deep) copy method, so can I make Python use that instead of recomputing?


2 Answers


The multiprocessing documentation suggests:

"Explicitly pass resources to child processes"

So your code could be rewritten like this:

import multiprocessing
import time
import functools

class really_expensive_to_compute_object(object):
    def __init__(self, arg):
        print 'expensive creation'
        time.sleep(3)

    def cheap_calculation(self, x):
        return x * 2

def f(T, x):
    return T.cheap_calculation(x)

if __name__ == '__main__':
    ## Create a really complicated object that is *hard* to initialise.
    T = really_expensive_to_compute_object(10)
    ## helper, to pass expensive object to function
    f_helper = functools.partial(f, T)
    # reduced the iteration count for testing
    P = multiprocessing.Pool(processes=4)
    results = P.map(f_helper, range(100))

    print results
Answered 2012-04-07T14:40:51.760

Why not have f take T as a parameter instead of referencing a global, and make the copies yourself?

import multiprocessing, copy
from foo import really_expensive_to_compute_object

## Create a really complicated object that is *hard* to initialise.
T = really_expensive_to_compute_object(10) 

def f(args):
  # Pool.map only accepts a single iterable, so unpack a (t, x) pair
  t, x = args
  return t.cheap_calculation(x)

P = multiprocessing.Pool(processes=64)
results = P.map(f, ((copy.deepcopy(T), x) for x in range(1000000)))

print results
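Since the question mentions that T already has a fast copy method, copy.deepcopy can be routed through it by defining __deepcopy__ on the class. A hypothetical sketch, with fast_copy standing in for whatever the real fast copier is called:

```python
import copy

class really_expensive_to_compute_object(object):
    def __init__(self, arg):
        self.arg = arg  # stand-in for the expensive state

    def fast_copy(self):
        # hypothetical fast copier from the question:
        # build a new instance without re-running __init__
        new = object.__new__(type(self))
        new.arg = self.arg
        return new

    def __deepcopy__(self, memo):
        # copy.deepcopy(T) now delegates to the fast copier
        return self.fast_copy()

    def cheap_calculation(self, x):
        return x * 2

T = really_expensive_to_compute_object(10)
clone = copy.deepcopy(T)  # uses fast_copy, not a recursive deep copy
```

That way the generator expression above keeps calling copy.deepcopy, but each copy is as cheap as the fast copier allows.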
Answered 2012-04-07T14:39:56.130