
I ran into a problem with TensorRT and TensorFlow. I am using an NVIDIA Jetson Nano and am trying to convert a simple TensorFlow model into a TensorRT-optimized model. I am using TensorFlow 2.1.0 and Python 3.6.9. I tried this code example from the NVIDIA guide:

from tensorflow.python.compiler.tensorrt import trt_convert as trt
converter = trt.TrtGraphConverterV2(input_saved_model_dir=input_saved_model_dir)
converter.convert()
converter.save(output_saved_model_dir)

To test this, I took a simple example from the TensorFlow website. To convert it to a TensorRT model, I saved the model as a SavedModel and loaded it into the trt.TrtGraphConverterV2 function:

#https://www.tensorflow.org/tutorials/quickstart/beginner

import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt
import os

#mnist = tf.keras.datasets.mnist

#(x_train, y_train), (x_test, y_test) = mnist.load_data()
#x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(128, activation='relu'),
  #tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10)
])

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

model.compile(optimizer='adam', loss=loss_fn, metrics=['accuracy'])


# create paths to save models
model_name = "simpleModel"
pb_model  = os.path.join(os.path.dirname(os.path.abspath(__file__)),(model_name+"_pb")) 
trt_model = os.path.join(os.path.dirname(os.path.abspath(__file__)),(model_name+"_trt")) 

if not os.path.exists(pb_model):
    os.mkdir(pb_model)

if not os.path.exists(trt_model):
    os.mkdir(trt_model)

tf.saved_model.save(model, pb_model)


# https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#usage-example
print("\nconverting to trt-model")
converter = trt.TrtGraphConverterV2(input_saved_model_dir=pb_model )
print("\nconverter.convert")
converter.convert()
print("\nconverter.save")
converter.save(trt_model)

print("trt-model saved under: ",trt_model)

When I run this code, it saves the TRT-optimized model, but I cannot use the model afterwards. For example, when I load the model and try model.summary(), it tells me:

Traceback (most recent call last):
  File "/home/al/Code/Benchmark_70x70/test-load-pb.py", line 45, in <module>
    model.summary()
AttributeError: '_UserObject' object has no attribute 'summary'
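
For reference, the loading code is roughly the following (a sketch; the original test-load-pb.py is not shown and the path is an assumption):

import tensorflow as tf

trt_model_dir = "./simpleModel_trt"   # assumed path to the converted model

model = tf.saved_model.load(trt_model_dir)   # loads a generic SavedModel object
model.summary()   # fails: the loaded object has no attribute 'summary'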

Here is the complete output of the converter script:

2020-04-01 20:38:07.395780: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-04-01 20:38:11.837436: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
2020-04-01 20:38:11.879775: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.6
2020-04-01 20:38:17.015440: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-04-01 20:38:17.054065: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:17.061718: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: 
pciBusID: 0000:00:00.0 name: NVIDIA Tegra X1 computeCapability: 5.3
coreClock: 0.9216GHz coreCount: 1 deviceMemorySize: 3.87GiB deviceMemoryBandwidth: 23.84GiB/s
2020-04-01 20:38:17.061853: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-04-01 20:38:17.061989: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-04-01 20:38:17.145546: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-04-01 20:38:17.252192: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-04-01 20:38:17.368195: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-04-01 20:38:17.433245: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-04-01 20:38:17.433451: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-04-01 20:38:17.433761: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:17.434112: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:17.434418: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-04-01 20:38:17.483529: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2020-04-01 20:38:17.504302: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x13e7b0f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-04-01 20:38:17.504407: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-04-01 20:38:17.713898: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:17.714293: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x13de1210 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-04-01 20:38:17.714758: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): NVIDIA Tegra X1, Compute Capability 5.3
2020-04-01 20:38:17.715405: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:17.715650: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: 
pciBusID: 0000:00:00.0 name: NVIDIA Tegra X1 computeCapability: 5.3
coreClock: 0.9216GHz coreCount: 1 deviceMemorySize: 3.87GiB deviceMemoryBandwidth: 23.84GiB/s
2020-04-01 20:38:17.715796: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-04-01 20:38:17.715941: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-04-01 20:38:17.716057: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-04-01 20:38:17.716174: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-04-01 20:38:17.716252: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-04-01 20:38:17.716311: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-04-01 20:38:17.716418: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-04-01 20:38:17.716687: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:17.716994: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:17.717111: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-04-01 20:38:17.736625: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-04-01 20:38:30.190208: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-04-01 20:38:30.315240: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102]      0 
2020-04-01 20:38:30.315482: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0:   N 
2020-04-01 20:38:30.832895: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:31.002925: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:31.005861: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 32 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
2020-04-01 20:38:34.803674: W tensorflow/python/util/util.cc:319] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1786: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.

converting to trt-model
2020-04-01 20:38:37.808143: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6

converter.convert
2020-04-01 20:38:39.618691: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:39.618842: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2020-04-01 20:38:39.619224: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-04-01 20:38:39.712117: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:39.712437: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: 
pciBusID: 0000:00:00.0 name: NVIDIA Tegra X1 computeCapability: 5.3
coreClock: 0.9216GHz coreCount: 1 deviceMemorySize: 3.87GiB deviceMemoryBandwidth: 23.84GiB/s
2020-04-01 20:38:39.712594: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-04-01 20:38:39.744930: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-04-01 20:38:40.056630: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-04-01 20:38:40.153461: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-04-01 20:38:40.176047: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-04-01 20:38:40.214052: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-04-01 20:38:40.231552: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-04-01 20:38:40.231927: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:40.232253: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:40.232388: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-04-01 20:38:40.232538: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-04-01 20:38:40.232587: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102]      0 
2020-04-01 20:38:40.232618: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0:   N 
2020-04-01 20:38:40.232890: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:40.233546: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:40.233761: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 32 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
2020-04-01 20:38:40.579950: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:841] Optimization results for grappler item: graph_to_optimize
2020-04-01 20:38:40.580104: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   function_optimizer: Graph size after: 26 nodes (19), 43 edges (36), time = 179.825ms.
2020-04-01 20:38:40.580157: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   function_optimizer: function_optimizer did nothing. time = 0.152ms.
2020-04-01 20:38:40.941994: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:40.942217: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2020-04-01 20:38:40.942412: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-04-01 20:38:40.943756: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:40.943916: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: 
pciBusID: 0000:00:00.0 name: NVIDIA Tegra X1 computeCapability: 5.3
coreClock: 0.9216GHz coreCount: 1 deviceMemorySize: 3.87GiB deviceMemoryBandwidth: 23.84GiB/s
2020-04-01 20:38:40.944010: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-04-01 20:38:40.944073: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-04-01 20:38:40.944148: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-04-01 20:38:40.944209: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-04-01 20:38:40.944266: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-04-01 20:38:40.944320: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-04-01 20:38:40.944372: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-04-01 20:38:40.944572: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:40.944816: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:40.944911: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-04-01 20:38:40.944993: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-04-01 20:38:40.945031: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102]      0 
2020-04-01 20:38:40.945059: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0:   N 
2020-04-01 20:38:40.945283: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:40.945569: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:40.945714: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 32 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
2020-04-01 20:38:41.037807: I tensorflow/compiler/tf2tensorrt/segment/segment.cc:460] There are 6 ops of 3 different types in the graph that are not converted to TensorRT: Identity, NoOp, Placeholder, (For more information see https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#supported-ops).
2020-04-01 20:38:41.043736: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:636] Number of TensorRT candidate segments: 1
2020-04-01 20:38:41.046312: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:737] Replaced segment 0 consisting of 12 nodes by TRTEngineOp_0.
2020-04-01 20:38:41.073078: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:841] Optimization results for grappler item: tf_graph
2020-04-01 20:38:41.073159: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   constant_folding: Graph size after: 22 nodes (-4), 35 edges (-8), time = 14.454ms.
2020-04-01 20:38:41.073188: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   layout: Graph size after: 22 nodes (0), 35 edges (0), time = 20.565ms.
2020-04-01 20:38:41.073214: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   constant_folding: Graph size after: 22 nodes (0), 35 edges (0), time = 5.644ms.
2020-04-01 20:38:41.073238: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   TensorRTOptimizer: Graph size after: 11 nodes (-11), 14 edges (-21), time = 28.58ms.
2020-04-01 20:38:41.073265: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   constant_folding: Graph size after: 11 nodes (0), 14 edges (0), time = 2.904ms.
2020-04-01 20:38:41.073289: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:841] Optimization results for grappler item: TRTEngineOp_0_native_segment
2020-04-01 20:38:41.073312: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   constant_folding: Graph size after: 14 nodes (0), 15 edges (0), time = 2.875ms.
2020-04-01 20:38:41.073335: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   layout: Graph size after: 14 nodes (0), 15 edges (0), time = 2.389ms.
2020-04-01 20:38:41.073358: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   constant_folding: Graph size after: 14 nodes (0), 15 edges (0), time = 2.834ms.
2020-04-01 20:38:41.073382: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   TensorRTOptimizer: Graph size after: 14 nodes (0), 15 edges (0), time = 0.218ms.
2020-04-01 20:38:41.073405: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   constant_folding: Graph size after: 14 nodes (0), 15 edges (0), time = 5.268ms.

converter.save
2020-04-01 20:38:46.730260: W tensorflow/core/framework/op_kernel.cc:1655] OP_REQUIRES failed at trt_engine_resource_ops.cc:183 : Not found: Container TF-TRT does not exist. (Could not find resource: TF-TRT/TRTEngineOp_0)
trt-model saved under:  /home/al/Code/Benchmark_70x70/simpleModel_trt


3 Answers



Steps to convert a TensorFlow model to a TensorRT model:

  1. Load the model (.h5 or .hdf5) with model.load_weights(.h5_file_dir).
  2. Save the model with tf.saved_model.save(your_model, destn_dir). This saves the model in .pb format together with the assets and variables folders; keep them as they are.

Convert the .pb model to TensorRT on a Linux machine. While converting, remember to pass only the path of the folder in which the .pb file and the other folders (assets and variables) are located, then start the conversion (a sketch of these steps follows).
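
A minimal sketch of these steps (the model architecture, weight file, and directory names below are placeholders, not part of the original answer):

import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# 1. Rebuild the architecture and load the trained weights (.h5 / .hdf5)
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10)
])
model.load_weights("weights.h5")               # placeholder weight file

# 2. Save as a SavedModel (.pb plus the assets and variables folders)
tf.saved_model.save(model, "saved_model_dir")

# 3. On the Linux machine, point the converter at the folder that contains
#    saved_model.pb, assets/ and variables/, then convert and save
converter = trt.TrtGraphConverterV2(input_saved_model_dir="saved_model_dir")
converter.convert()
converter.save("trt_model_dir")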


Answered 2021-07-16T07:53:29.220

Thank you very much for your reply; it contains everything I need. To test the converter script, I ran the code in Colab and it worked fine, so I think I need to check my environment for errors. Regarding the model.summary() issue:
As you correctly pointed out, the Keras API methods seem to be removed when the model is converted. I specifically need the model.predict() method to run predictions with the new model. Fortunately, there are other ways to run inference. Besides the one you posted, I also found the approach described in this tutorial and used it. I summarized the whole example with explanations in this notebook:

import tensorflow as tf
import matplotlib.pyplot as plt

# train_images and class_names are defined earlier in the notebook (image-classification tutorial data)
loaded = tf.saved_model.load('./model_trt')  # loading the converted model

print("The signature keys are: ", list(loaded.signatures.keys()))
infer = loaded.signatures["serving_default"]  # concrete function used for inference

im_select = 0  # choose the train image you want to classify
# Keep the batch dimension the serving signature expects; the output key is the
# name of the last layer we defined in the beginning ('LastLayer' here).
labeling = infer(tf.constant(train_images[im_select:im_select + 1], dtype=float))['LastLayer']


# Display result
print("Image ", im_select, " is classified as a ", class_names[int(tf.argmax(labeling, axis=1))])
plt.imshow(train_images[im_select])
Answered 2020-04-07T21:55:16.757

It looks like the conversion was actually successful.
I tried it with the .pb files from both Keras and TensorRT.

Below is the sample code:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

saved_model_loaded = tf.saved_model.load(
    'path to trt converted model') # path to keras .pb or TensorRT .pb
#for layer in saved_model_loaded.keras_api.layers:

# get the serving signature and freeze its variables into constants
graph_func = saved_model_loaded.signatures['serving_default']
frozen_func = convert_variables_to_constants_v2(
    graph_func)

# load and normalize the MNIST data
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

#convert to tensors
input_tensors = tf.cast(x_test, dtype=tf.float32)

# run inference on the first test image
output = frozen_func(input_tensors[:1])[0].numpy()
print(output)

Note: I have tried both models, the one from Keras and the one from TensorRT, and the result is the same.

Regarding the model.summary() error: it seems that once the model is converted, some methods such as .summary() are removed. However, if you want to inspect the graph of the TensorRT-converted model, you can use TensorBoard as an alternative.
Below is the sample code:

import tensorflow as tf
from tensorflow.python.summary import summary

# %load_ext / %tensorboard are Jupyter/Colab magics; run this code in a notebook
%load_ext tensorboard

def import_to_tensorboard(model_dir, log_dir):
  """View an imported protobuf model (`.pb` file) as a graph in Tensorboard.

  Args:
    model_dir: The location of the protobuf (`pb`) model to visualize
    log_dir: The location for the Tensorboard log to begin visualization from.

  Usage:
    Call this function with your model location and desired log directory.
    Launch Tensorboard by pointing it to the log directory.
    View your imported `.pb` model as a graph.
  """

  with tf.compat.v1.Session(graph=tf.Graph()) as sess:
    tf.compat.v1.saved_model.loader.load(
        sess, [tf.compat.v1.saved_model.tag_constants.SERVING], model_dir)

    pb_visual_writer = summary.FileWriter(log_dir)
    pb_visual_writer.add_graph(sess.graph)
    print("Model Imported. Visualize by running: "
          "tensorboard --logdir={}".format(log_dir))

Call the function:

import_to_tensorboard('path to trt model', '/logs/')

Launch TensorBoard:

%tensorboard --logdir='path to logs'

Let me know if this helps.

Answered 2020-04-06T10:22:31.473