I've run into a problem running TensorRT from a subprocess. I'm not sure whether this is a TensorRT bug or something I'm doing wrong. If it is an integration bug, I'd like to know whether it has already been fixed in releases newer than TensorFlow 1.7.
Here is a summary of the error and how to reproduce it.

Working TensorRT sample Python code, running in a single process:

import pycuda.driver as cuda
import pycuda.autoinit
import argparse
import numpy as np
import time
import tensorrt as trt
from tensorrt.parsers import uffparser

uff_model = open('resnet_v2_50_dc.uff', 'rb').read()

parser = uffparser.create_uff_parser()
parser.register_input("input", (3, 224, 224), 0)
parser.register_output("resnet_v2_50/predictions/Reshape_1")


trt_logger = trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO)

engine = trt.utils.uff_to_trt_engine(logger=trt_logger,
                                 stream=uff_model,
                                 parser=parser,
                                 max_batch_size=4,
                                 max_workspace_size= 1 << 30,
                                 datatype=trt.infer.DataType.FLOAT)


Non-working TensorRT sample Python code, with trt.utils.uff_to_trt_engine() called from a subprocess:

import pycuda.driver as cuda
import pycuda.autoinit
import argparse
import numpy as np
import time
import tensorrt as trt
from tensorrt.parsers import uffparser
import multiprocessing
from multiprocessing import sharedctypes, Queue

def inference_process():
  uff_model = open('resnet_v2_50_dc.uff', 'rb').read()

  parser = uffparser.create_uff_parser()
  parser.register_input("input", (3, 224, 224), 0)
  parser.register_output("resnet_v2_50/predictions/Reshape_1")

  trt_logger = trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO)
  engine = trt.utils.uff_to_trt_engine(logger=trt_logger,
                                     stream=uff_model,
                                     parser=parser,
                                     max_batch_size=4,
                                     max_workspace_size= 1 << 30,
                                     datatype=trt.infer.DataType.FLOAT)

inference_p = multiprocessing.Process(target=inference_process, args=())
inference_p.start()

Console error message:

[TensorRT] ERROR: cudnnLayerUtils.cpp (288) - Cuda Error in smVersion: 3
terminate called after throwing an instance of 'nvinfer1::CudaError'
what():  std::exception

1 Answer


You should import TensorRT inside the subprocess!

Perhaps something like:

import pycuda.driver as cuda
import pycuda.autoinit
import argparse
import numpy as np
import time
import multiprocessing
from multiprocessing import sharedctypes, Queue

def inference_process():
  import tensorrt as trt
  from tensorrt.parsers import uffparser

  uff_model = open('resnet_v2_50_dc.uff', 'rb').read()

  parser = uffparser.create_uff_parser()
  parser.register_input("input", (3, 224, 224), 0)
  parser.register_output("resnet_v2_50/predictions/Reshape_1")

  trt_logger = trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO)
  engine = trt.utils.uff_to_trt_engine(logger=trt_logger,
                                     stream=uff_model,
                                     parser=parser,
                                     max_batch_size=4,
                                     max_workspace_size= 1 << 30,
                                     datatype=trt.infer.DataType.FLOAT)

inference_p = multiprocessing.Process(target=inference_process, args=())
inference_p.start()
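The deferred-import trick can be illustrated with a small runnable sketch. This is my reading of why the answer works, not something stated in it: a library that sets up process-wide GPU state on import (here, the CUDA context created via pycuda.autoinit / tensorrt) misbehaves when that state is created in the parent and then used in a forked child, so the import is delayed until the child is running. The stdlib json module stands in for the heavy library so the sketch runs anywhere:

```python
import multiprocessing

def inference_process(q):
    # Deferred import: the library is loaded for the first time inside the
    # child, so any process-wide state it initializes on import (for
    # tensorrt / pycuda.autoinit, the CUDA context) belongs to this process
    # rather than being a stale copy inherited from the parent.
    import json  # stand-in for `import tensorrt as trt`
    q.put(json.dumps({"status": "engine built"}))

ctx = multiprocessing.get_context("fork")  # same start method the snippets above rely on
q = ctx.Queue()
p = ctx.Process(target=inference_process, args=(q,))
p.start()
result = q.get()
p.join()
```

An alternative worth trying (again my assumption, not part of the original answer) is `multiprocessing.get_context("spawn")`, which starts the child from a fresh interpreter so nothing CUDA-related is inherited at all.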
Answered 2019-07-10T02:05:58.997