
I am trying to use the code from https://stackoverflow.com/a/15390953/378594 to convert a numpy array into a shared-memory array and back. I am running the following code:

shared_array = shmarray.ndarray_to_shm(my_numpy_array)

and then passing shared_array as one of the arguments in the list of arguments for a multiprocessing pool:

pool.map(my_function, list_of_args_arrays)

where list_of_args_arrays contains my shared array and the other arguments.
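Roughly, the setup looks like this (a minimal sketch rather than my exact code; shmarray is the module from the linked answer, and my_function, the array contents and the extra arguments are placeholders):

import numpy as np
import multiprocessing as mp
import shmarray  # the module from the linked answer

def my_function(args):
    # placeholder: the real function only reads from the shared array
    shared_array, other_arg = args
    return shared_array.sum() + other_arg

my_numpy_array = np.zeros(1000)                        # placeholder data
shared_array = shmarray.ndarray_to_shm(my_numpy_array)

# each task bundles the shared array with its other arguments
list_of_args_arrays = [(shared_array, i) for i in range(8)]

pool = mp.Pool(4)
# fails: the shared buffer cannot be sent through the pool's task queue
results = pool.map(my_function, list_of_args_arrays)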

This results in the following error:

PicklingError: Can't pickle <class 'multiprocessing.sharedctypes.c_double_Array_<array size>'>: attribute lookup multiprocessing.sharedctypes.c_double_Array_<array size> failed

where <array size> is the linear (flattened) size of my numpy array.

I am guessing something has changed in numpy or ctypes since that answer was written?

Further details:

I only need read access to the shared data; the processes will not modify it.

The function that calls the pool lies within a class. The class is instantiated and the function is called from a main.py file.


2 Answers


Apparently, when using multiprocessing.Pool all the arguments are pickled, so there was no point in using multiprocessing.Array. Changing the code so that it uses a series of Processes instead did the trick.
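A minimal sketch of that kind of approach (not my actual code; the worker function, array size and chunking are just for illustration): a multiprocessing.Array is passed directly to the Process workers when they are created, instead of being sent through the Pool's pickling task queue.

import numpy as np
import multiprocessing as mp

def worker(shared_arr, start, stop, out_q):
    # view the shared buffer as a numpy array without copying
    a = np.frombuffer(shared_arr.get_obj())
    out_q.put(a[start:stop].sum())

if __name__ == '__main__':
    shared_arr = mp.Array('d', 1000)                 # shared array of doubles
    np.frombuffer(shared_arr.get_obj())[:] = np.arange(1000)

    out_q = mp.Queue()
    procs = [mp.Process(target=worker, args=(shared_arr, i * 250, (i + 1) * 250, out_q))
             for i in range(4)]
    for p in procs:
        p.start()
    results = [out_q.get() for _ in range(4)]        # collect before joining
    for p in procs:
        p.join()
    print(sum(results))                              # 499500.0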

answered 2013-05-01T02:10:13.997

I think you are overcomplicating things: there is no need to pickle the arrays (especially if they are read-only).

You just need to keep them accessible through some global variable:

(This is known to work under Linux; it may not work under Windows, I am not sure.)

import numpy as np
import multiprocessing as mp

class si:
    # module-level holder for the arrays; the workers inherit it via fork
    arrs = None

def summer(i):
    return si.arrs[i].sum()

def main():
    si.arrs = [np.zeros(100) for _ in range(1000)]   # set before the Pool is created
    pool = mp.Pool(16)
    res = pool.map(summer, range(1000))
    print(res)

if __name__ == '__main__':
    main()

If your arrays need to be read-write, you need to use something like this: Is shared readonly data copied to different processes for Python multiprocessing?
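A minimal read-write sketch along those lines (my own illustration, not code from the linked question; the names _init and fill_chunk and the chunk layout are just for the example): a RawArray is made visible to the Pool workers through an initializer, and each worker writes into its own slice, so no lock is needed.

import numpy as np
import multiprocessing as mp

_shared = None  # set in each worker by the Pool initializer

def _init(shared):
    global _shared
    _shared = shared

def fill_chunk(i):
    # view the shared buffer as a numpy array (no copy) and write one chunk
    a = np.frombuffer(_shared, dtype=np.float64)
    a[i * 100:(i + 1) * 100] = i

def main():
    shared = mp.RawArray('d', 1000)                  # unsynchronized shared doubles
    pool = mp.Pool(4, initializer=_init, initargs=(shared,))
    pool.map(fill_chunk, range(10))
    pool.close()
    pool.join()
    # the writes made by the workers are visible in the parent
    print(np.frombuffer(shared, dtype=np.float64).reshape(10, 100)[:, 0])

if __name__ == '__main__':
    main()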

answered 2013-04-30T15:27:49.013