In every case I have to create and fill a huge array (e.g. 96 GB, 72000 rows * 72000 columns) with floats coming from a mathematical formula. The array will be used in computations afterwards.
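For scale, a quick footprint check (a rough sketch only; the float64 dtype is an assumption, since the element type is not stated above):

    import numpy

    # Back-of-the-envelope footprint of a dense 72000 x 72000 array of float64
    n = 72000
    print(n * n * numpy.dtype(numpy.float64).itemsize / 1e9)  # ~41.5 GB for the raw data alone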
    import itertools, operator, time, copy, os, sys
    import numpy
    from multiprocessing import Pool

    def f2(x):  # stands in for more complex mathematical formulas that change according to the values in *i* and *x*
        temp = []
        for i in combine:
            temp.append(0.2 * x[1] * i[1] / 64.23)
        return temp

    def combinations_with_replacement_counts(n, r):  # provide all combinations of r balls in n boxes
        size = n + r - 1
        for indices in itertools.combinations(range(size), n - 1):
            starts = [0] + [index + 1 for index in indices]
            stops = indices + (size,)
            yield tuple(map(operator.sub, stops, starts))

    global combine
    combine = list(combinations_with_replacement_counts(3, 60))  # 60 is used here, but 350 is needed instead
    print len(combine)

    if __name__ == '__main__':
        t1 = time.time()
        pool = Pool()  # start worker processes
        results = [pool.apply_async(f2, (x,)) for x in combine]  # one task per element of combine
        roots = [r.get() for r in results]
        print roots[0:3]
        pool.close()
        pool.join()
        print time.time() - t1
- What is the fastest way to create and fill such a huge numpy array? Fill lists, aggregate them and then convert them to a numpy array (some alternatives are compared in the first sketch after this list)?
- Can we parallelize the computation, knowing that the cells/columns/rows of the 2D array are independent, in order to speed up the filling of the array? Any clues/hints for optimizing such a computation with multiprocessing (one possible pattern is in the second sketch below)?
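On the first question, a minimal sketch of the usual alternatives. The dimensions and the formula below are placeholders, not the real 72000 x 72000 computation; preallocating (or vectorising) avoids keeping intermediate Python lists alive.

    import numpy

    n_rows, n_cols = 1000, 1000  # small placeholder dimensions

    # Option A: build nested Python lists, then convert (keeps millions of temporary Python floats alive)
    rows = []
    for i in range(n_rows):
        rows.append([0.2 * i * j / 64.23 for j in range(n_cols)])
    a = numpy.array(rows)

    # Option B: preallocate once, then fill row by row (no per-element Python objects are retained)
    b = numpy.empty((n_rows, n_cols))
    for i in range(n_rows):
        b[i, :] = 0.2 * i * numpy.arange(n_cols) / 64.23

    # Option C: a single vectorised expression via broadcasting (usually fastest when the formula allows it)
    c = 0.2 * numpy.arange(n_rows)[:, None] * numpy.arange(n_cols)[None, :] / 64.23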
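On the second question, one possible pattern (a sketch only; the file name, sizes, chunking and the row formula are assumptions, not part of the code above) is to back the array with numpy.memmap so that each worker process fills its own block of rows directly on disk, which also keeps the whole array out of any single process's memory:

    import numpy
    from multiprocessing import Pool

    N = 4096                      # placeholder size; the real case would be 72000
    FILENAME = 'big_array.dat'    # hypothetical scratch file backing the array

    def fill_rows(bounds):
        """Fill rows [start, stop) of the on-disk array; each worker reopens the memmap."""
        start, stop = bounds
        a = numpy.memmap(FILENAME, dtype=numpy.float64, mode='r+', shape=(N, N))
        cols = numpy.arange(N)
        for i in range(start, stop):
            a[i, :] = 0.2 * i * cols / 64.23   # stand-in for the real formula on (i, j)
        a.flush()
        return stop - start

    if __name__ == '__main__':
        # Create the backing file once, in the parent, before any worker touches it
        numpy.memmap(FILENAME, dtype=numpy.float64, mode='w+', shape=(N, N)).flush()
        chunk = 512
        chunks = [(s, min(s + chunk, N)) for s in range(0, N, chunk)]
        pool = Pool()
        pool.map(fill_rows, chunks)   # each worker writes an independent block of rows
        pool.close()
        pool.join()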