
I have a question about a Tensorflow model converted via Pytorch -> Onnx -> Tensorflow. The problem is that the converted Tensorflow model expects its input in the Pytorch layout (batch size, number of channels, height, width) rather than the Tensorflow layout (batch size, height, width, number of channels). Because of this, I cannot use the model for further processing with Vitis AI.

So I would like to ask: is there any way to convert this Pytorch input layout to the Tensorflow layout, using Onnx, Tensorflow 1, or some other tool?

My code is below:

Pytorch -> Onnx

from hardnet import hardnet
import torch
import onnx

ckpt = torch.load('../hardnet.pth')
model_state_dict = ckpt['model_state_dict']
optimizer_state_dict = ckpt['optimizer_state_dict']

model = hardnet(11)
model.load_state_dict(model_state_dict)
model.eval()     

dummy_input = torch.randn(1, 3, 1080, 1920)
input_names = ['input0']
output_names = ['output0']

output_file = 'hardnet.onnx'
torch.onnx.export(model, dummy_input, output_file, verbose=True,
    input_names=input_names, output_names=output_names,
    opset_version=11, keep_initializers_as_inputs=True)

onnx_model = onnx.load(output_file)
onnx.checker.check_model(onnx_model)
print('Passed Onnx')
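To confirm which layout the exported graph expects, you can print the shapes of its (non-initializer) inputs from the onnx_model object loaded above; a minimal sketch:

# Initializers also appear as graph inputs because of
# keep_initializers_as_inputs=True, so skip them and only print real inputs.
init_names = {init.name for init in onnx_model.graph.initializer}
for inp in onnx_model.graph.input:
    if inp.name in init_names:
        continue
    dims = [d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)   # expected: input0 [1, 3, 1080, 1920], i.e. NCHW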

Onnx -> Tensorflow 1 (using Tensorflow 1.15)

import cv2
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import onnx
from onnx_tf.backend import prepare

output_file = 'hardnet.onnx'
onnx_model = onnx.load(output_file)
output = prepare(onnx_model)
output.export_graph('hardnet.pb')
tf.compat.v1.disable_eager_execution()

def load_pb(path_to_pb: str):
    """From: https://stackoverflow.com/questions/51278213/what-is-the-use-of-a-pb-file-in-tensorflow-and-how-does-it-work
    """
    with tf.gfile.GFile(path_to_pb, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')
        return graph


graph = load_pb('hardnet.pb')
input = graph.get_tensor_by_name('input0:0')
output = graph.get_tensor_by_name('output0:0')
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
img = cv2.imread('train_0.jpg', cv2.IMREAD_COLOR)
img = cv2.resize(img, (1920,  1080))

img = img/255
img = img - mean
img = img/std
img = np.expand_dims(img, -1)
# To Pytorch format: (H, W, C, 1) -> (1, C, H, W).
img = np.transpose(img, (3, 2, 0, 1))

with tf.Session(graph=graph) as sess:
    pred = sess.run(output, {input: img})
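As a sanity check of the conversion itself (separate from the layout issue), you could run the same NCHW array through the original Pytorch model and compare the outputs; a rough sketch, assuming the hardnet model from the first snippet is loaded in the same Python session and returns a single tensor:

import torch

with torch.no_grad():
    torch_pred = model(torch.from_numpy(img).float()).numpy()
print('max abs diff:', np.abs(pred - torch_pred).max())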

1 Answer


You can wrap your Pytorch model in another model that performs the transpose you want to have in TensorFlow. See the following example.

Suppose you have the following toy NN:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.rnn = nn.LSTM(10, 20, 2)

    def forward(self, x):
        h0 = torch.zeros(2, 3, 20)
        c0 = torch.zeros(2, 3, 20)
        return self.rnn(x, (h0, c0))

Example pytorch/tensorflow input shapes would be:

>> pytorch_input  = torch.randn(5, 3, 10)
>> tf_input  = torch.transpose(pytorch_input, 1, 2)

>> print("PyTorch input shape: ", pytorch_input.shape)
>> print("TensorFlow input shape: ", tf_input.shape)

PyTorch input shape:  torch.Size([5, 3, 10])
TensorFlow input shape:  torch.Size([5, 10, 3])

Now, the wrapper will first transpose the input and then pass the transposed input on to the wrapped model:

class NetTensorFlowWrapper(nn.Module):
    def __init__(self, main_module: nn.Module):
        super(NetTensorFlowWrapper, self).__init__()
        self.main_module = main_module
        
    def forward(self, x):
        x = torch.transpose(x, 1, 2)
        return self.main_module(x)

Then, this is possible:

net = Net()
net_wrapper = NetTensorFlowWrapper(net)

net(pytorch_input)
net_wrapper(tf_input)

Then, when you finally save the models with torch.onnx.export as before and read their graphs via the onnx package (not torch.onnx), you will have...

  • for Net - input 5x3x10 and no Transpose layer:
graph torch-jit-export (
  %input0[FLOAT, 5x3x10]
 {
  %76 = Shape(%input0)
  %77 = Constant[value = <Scalar Tensor []>]()
  • for NetTensorFlowWrapper - input 5x10x3 and a Transpose layer:
graph torch-jit-export (
  %input0[FLOAT, 5x10x3]
{
  %9 = Transpose[perm = [0, 2, 1]](%input0)
  %77 = Shape(%9)
  %78 = Constant[value = <Scalar Tensor []>]()
...
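Applied to the model from the question, the same idea would look roughly like this (a sketch; HardnetNHWCWrapper is a hypothetical name, and the hardnet model and export settings are taken from the question):

class HardnetNHWCWrapper(nn.Module):
    def __init__(self, main_module: nn.Module):
        super(HardnetNHWCWrapper, self).__init__()
        self.main_module = main_module

    def forward(self, x):
        # TensorFlow layout (N, H, W, C) -> Pytorch layout (N, C, H, W)
        x = x.permute(0, 3, 1, 2)
        return self.main_module(x)

model_nhwc = HardnetNHWCWrapper(model)
dummy_input = torch.randn(1, 1080, 1920, 3)
torch.onnx.export(model_nhwc, dummy_input, 'hardnet_nhwc.onnx',
                  input_names=['input0'], output_names=['output0'],
                  opset_version=11, keep_initializers_as_inputs=True)

After converting hardnet_nhwc.onnx with onnx-tf as before, the resulting .pb graph should take a (batch size, height, width, channels) input, so the np.transpose step in your inference script can be replaced by a simple np.expand_dims(img, 0).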
answered 2020-12-18T10:11:04