
I need to load 4 Keras models into a Python dictionary in parallel to reduce loading time. My code is as follows:

# (model-library imports omitted in the original post)
from threading import Thread

models_out = {}

def model_loading(arg, model_num):
    ### code to fetch model_object based on model_num ###
    models_out.update({model_num: model_object})

def prediction():
    thread0 = Thread(target=model_loading, args=(arg, "model_one",))
    thread1 = Thread(target=model_loading, args=(arg, "model_two",))
    thread2 = Thread(target=model_loading, args=(arg, "model_three",))
    thread3 = Thread(target=model_loading, args=(arg, "model_four",))

    thread0.start()
    thread1.start()
    thread2.start()
    thread3.start()
    thread0.join()
    thread1.join()
    thread2.join()
    thread3.join()

if __name__ == '__main__':
    
    prediction()

My models_out variable should be:

{"model_one":model_object,"model_two":model_object,"model_three":model_object,"model_four":model_object}

It results in the following error:

TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder:0", shape=(115, 10), dtype=float32) is not an element of this graph.

1 Answer


One solution could be to restructure the code slightly:

  1. Update the model_loading function to take a file path (i.e., the model checkpoint to load) and return the loaded model (instead of updating the models_out variable)
  2. Use the built-in concurrent.futures and create one thread per model to load, as in the code below
import concurrent.futures as cf

def model_loading(file_path):
    ...  # code to load your model

    return model_loaded

def prediction():
    model_file_paths = ["model_file_path1", "model_file_path2"]

    with cf.ThreadPoolExecutor(max_workers=len(model_file_paths)) as executor:
        models_loaded = executor.map(model_loading, model_file_paths)

    models_out = dict(zip(model_file_paths, models_loaded))  # mapping from file_path -> model object

    return models_out
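
A minimal usage sketch of the result (the key "model_file_path1" and the input array some_input are placeholders for illustration, not values from the question):

models_out = prediction()

# each value is an already-loaded model, so inference can run on the main thread
model = models_out["model_file_path1"]
# preds = model.predict(some_input)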