
I built and trained a PyTorch v1.4 model that predicts sin() values (based on an example I found online). Inference works. I then tried to compile it with TVM v0.8dev0 and LLVM 10 on Ubuntu with an x86 CPU. I followed the TVM setup guide and ran a few of the ONNX tutorials, which worked. I mostly pieced the process below together from the existing TVM tutorials. Please note that I am neither an ML nor a data-science engineer. These are my steps:

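First, for reference: tst.Net is a small fully connected network, one scalar in and one scalar out. A simplified sketch of it (not my exact code, the real layer sizes may differ):

    import torch
    import torch.nn as nn

    class Net(nn.Module):
        # Sketch only -- the real tst.Net may differ.
        # Maps a single scalar to a single scalar so it can approximate sin(x).
        def __init__(self):
            super().__init__()
            self.fc1 = nn.Linear(1, 16)
            self.fc2 = nn.Linear(16, 1)

        def forward(self, x):
            x = torch.relu(self.fc1(x))
            return self.fc2(x)
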
import tvm, torch, os
from tvm import relay
state = torch.load("/home/dude/tvm/tst_state.pt") # load the trained pytorch state
import tst
m = tst.Net()
m.load_state_dict(state) # init the model with its trained state
m.eval()

sm = torch.jit.trace(m, torch.tensor([3.1415 / 4]))  # convert to a TorchScript model by tracing with a sample input

# the model only takes 1 input for inference hence [("input0", (1,))]
mod, params = tvm.relay.frontend.from_pytorch(sm, [("input0", (1,))])
print(mod.astext())  # prints what looks like a small Relay module as text

with tvm.transform.PassContext(opt_level=1):
  lib = relay.build(mod, target="llvm", target_host="llvm", params=params)

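For completeness, once the build works I was planning to load and run the compiled module roughly like this, following the PyTorch tutorial on the TVM site (I have not gotten this far yet, so this part is untested):

    import numpy as np
    from tvm.contrib import graph_executor  # called graph_runtime in older 0.8dev snapshots

    dev = tvm.cpu(0)
    module = graph_executor.GraphModule(lib["default"](dev))
    module.set_input("input0", tvm.nd.array(np.array([3.1415 / 4], dtype="float32")))
    module.run()
    print(module.get_output(0))  # should be close to sin(pi / 4)
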
The relay.build call fails with the error below, and I don't know how to fix it or where I went wrong. I hope someone can point out my mistake...

    ... removed some lines here ...
  [bt] (3) /home/dude/tvm/build/libtvm.so(TVMFuncCall+0x5f) [0x7f5cd65660af]
  [bt] (2) /home/dude/tvm/build/libtvm.so(+0xb4f8a7) [0x7f5cd5f318a7]
  [bt] (1) /home/dude/tvm/build/libtvm.so(tvm::GenericFunc::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const+0x1ab) [0x7f5cd5f315cb]
  [bt] (0) /home/tvm/build/libtvm.so(+0x1180cab) [0x7f5cd6562cab]
  File "/home/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 81, in cfun
    rv = local_pyfunc(*pyargs)
  File "/home/tvm/python/tvm/relay/op/strategy/x86.py", line 311, in dense_strategy_cpu
    m, _ = inputs[0].shape
ValueError: not enough values to unpack (expected 2, got 1)
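My guess (I may be wrong) is that the dense/Linear strategy expects a 2-D (batch, features) input shape, so the 1-D shape (1,) I passed cannot be unpacked into two values. The unpack in isolation, outside of TVM:

    shape_1d = (1,)    # the input shape I passed to from_pytorch
    shape_2d = (1, 1)  # a (batch, features) shape a dense op would expect

    m, k = shape_2d    # works: two values
    m, k = shape_1d    # ValueError: not enough values to unpack (expected 2, got 1)

So should I be tracing and compiling with a 2-D input instead, e.g. torch.tensor([[3.1415 / 4]]) and [("input0", (1, 1))], or is there another way to tell TVM the model takes a single scalar input?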
