
I have a huge dict variable of about 2 GB, and I am doing some scientific computation on it (read-only). However, reading from the shared dict is much slower than from an ordinary dict, even though it saves a lot of memory. Is there a faster way to share read-only data across multiprocessing jobs? Here is my code:

import multiprocessing as mp
import numpy as np
import time
if __name__ == "__main__":
    origin_data = {
        "data": np.random.rand(1000, 1000)
    }
    
    m1 = mp.Manager()
    # copy the dict into a Manager proxy that worker processes can share
    shm_origin_data = m1.dict(origin_data)
    
    # benchmark: repeated reads from the plain local dict
    t1 = time.time()
    for i in range(100):
        origin_data["data"] + origin_data["data"]
    t2 = time.time()
    print("local dict time is " + str(t2 - t1))
    
    # benchmark: the same reads through the Manager proxy dict
    t1 = time.time()
    for i in range(100):
        shm_origin_data["data"] + shm_origin_data["data"]
    t2 = time.time()
    print("shared dict time is " + str(t2 - t1))

The result is:

local dict time is 0.7529358863830566
shared dict time is 9.097671508789062
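
Would something along the lines of multiprocessing.shared_memory (Python 3.8+) be the right direction? Below is a rough, untested sketch of what I mean; the worker function, pool size, and array shape are just placeholders:

import multiprocessing as mp
import numpy as np
from multiprocessing import shared_memory

def worker(shm_name, shape, dtype):
    # attach to the existing block by name; the array data is not copied
    shm = shared_memory.SharedMemory(name=shm_name)
    data = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    result = (data + data).sum()  # read-only use of the shared array
    shm.close()
    return result

if __name__ == "__main__":
    arr = np.random.rand(1000, 1000)

    # copy the array into a shared memory block once
    shm = shared_memory.SharedMemory(create=True, size=arr.nbytes)
    shared = np.ndarray(arr.shape, dtype=arr.dtype, buffer=shm.buf)
    shared[:] = arr[:]

    with mp.Pool(4) as pool:
        print(pool.starmap(worker, [(shm.name, arr.shape, arr.dtype)] * 4))

    shm.close()
    shm.unlink()  # release the shared block when done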