
I am trying to run inference on an Intel Compute Stick 2 (MyriadX chip) attached to a Raspberry Pi 4B, using OnnxRuntime with OpenVINO. I have everything set up: the OpenVINO provider is recognized by onnxruntime, and the Myriad shows up in the list of available devices.

However, when I try to run inference on the Myriad, I always run into some kind of memory corruption, and I am not sure where it comes from. If I use the default CPU inference instead of OpenVINO, everything works fine. Maybe the way I create the Ort::MemoryInfo object is incorrect; a sketch of the public-API alternative I am aware of follows the code listing below.

Output:

Available execution providers:
        CPUExecutionProvider
        OpenVINOExecutionProvider
Available OpenVINO devices:
        MYRIAD
Starting Session
[...]
2020-12-11 13:43:13.962093843 [I:onnxruntime:, openvino_execution_provider.h:124 OpenVINOExecutionProviderInfo] [OpenVINO-EP]Choosing Device: MYRIAD , Precision: FP16
[...]
2020-12-11 13:43:13.972813082 [I:onnxruntime:, capability_2021_1.cc:854 GetCapability_2021_1] [OpenVINO-EP] Model is fully supported by OpenVINO
[...]
Loading data
Running Inference
2020-12-11 13:43:21.838737814 [I:onnxruntime:, sequential_executor.cc:157 Execute] Begin execution
2020-12-11 13:43:21.838892108 [I:onnxruntime:, backend_manager.cc:253 Compute] [OpenVINO-EP] Creating concrete backend for key: MYRIAD|50,28,28,1,|10,|84,10,|84,|120,84,|6,1,5,5,|16,|6,|400,120,|16,6,5,5,|120,|
2020-12-11 13:43:21.838926959 [I:onnxruntime:, backend_manager.cc:255 Compute] [OpenVINO-EP] Backend created for graph OpenVINOExecutionProvider_OpenVINO-EP-subgraph_1_0
2020-12-11 13:43:21.845913973 [I:onnxruntime:, backend_utils.cc:65 CreateCNNNetwork] ONNX Import Done
malloc(): unsorted double linked list corrupted
Aborted

Here is the code I am using:

#include <iostream>
#include <iomanip>
#include <chrono>
#include <array>
#include <cmath>
#include <MNIST-Loader/MNIST.h>
#include <onnxruntime_cxx_api.h>
#include <core/framework/allocator.h>
#include <ie_core.hpp> //openvino inference_engine

int main()
{
        constexpr const char* modelPath = "/home/pi/data/lenet_mnist.onnx";
        constexpr const char* mnistPath = "/home/pi/data/mnist/";
        constexpr size_t batchSize = 50;

        std::cout << "Available execution providers:\n";
        for(const auto& s : Ort::GetAvailableProviders()) std::cout << '\t' << s << '\n';

        std::cout << "Available OpenVINO devices:\n";
        { //new scope so the core gets destroyed when leaving
                InferenceEngine::Core ieCore;
                for(const auto& d : ieCore.GetAvailableDevices()) std::cout << '\t' << d << '\n';
        }

        // ----------- create session -----------
        std::cout << "Starting Session\n";
        Ort::Env env(ORT_LOGGING_LEVEL_INFO);
        OrtOpenVINOProviderOptions ovOptions;
        ovOptions.device_type = "MYRIAD_FP16";
        Ort::SessionOptions sessionOptions;
        sessionOptions.SetExecutionMode(ORT_SEQUENTIAL);
        sessionOptions.SetGraphOptimizationLevel(ORT_DISABLE_ALL);
        sessionOptions.AppendExecutionProvider_OpenVINO(ovOptions);
        Ort::Session session(env, modelPath, sessionOptions);

        // ----------- load data -----------
        std::cout << "Loading data\n";
        MNIST data(mnistPath);
        const std::array<int64_t, 4> inputShape{batchSize, 28, 28, 1};
        //const auto memoryInfo = Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPUInput);
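        // NOTE: the two lines below construct onnxruntime's internal OrtMemoryInfo
        // type directly (its definition is only visible because of the
        // core/framework/allocator.h include); this is the part I suspect is wrong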
        auto openvinoMemInfo = new OrtMemoryInfo("OpenVINO", OrtDeviceAllocator);
        const Ort::MemoryInfo memoryInfo(openvinoMemInfo);
        std::array<float, batchSize*28*28> batch;
        for(size_t i = 0; i < batchSize; ++i)
        {
                const auto pixels = data.trainingData.at(i).pixelData;
                for(size_t k = 0; k < 28*28; ++k)
                {
                        batch[k + (i*28*28)] = (pixels[k] == 0) ? 0.f : 1.f;
                }
        }
        const Ort::Value inputValues[] = {Ort::Value::CreateTensor<float>(memoryInfo, batch.data(), batch.size(), inputShape.data(), inputShape.size())};
        
        // ----------- run inference -----------
        std::cout << "Running Inference\n";
        Ort::RunOptions runOptions;
        Ort::AllocatorWithDefaultOptions alloc;
        const char* inputNames [] = { session.GetInputName (0, alloc) };
        const char* outputNames[] = { session.GetOutputName(0, alloc) };
        const auto start = std::chrono::steady_clock::now();
        auto results = session.Run(runOptions, inputNames, inputValues, 1, outputNames, 1);
        const auto end = std::chrono::steady_clock::now();
        std::cout << "\nRuntime: " << std::chrono::duration_cast<std::chrono::milliseconds>(end-start).count() << "ms\n";

        // ----------- print results -----------
        std::cout << "Results:" << std::endl;
        for(Ort::Value& r : results)
        {
                const auto dims = r.GetTensorTypeAndShapeInfo().GetShape();
                for(int64_t i = 0; i < dims[0]; ++i)
                {
                        std::cout << "Label: " << data.trainingData.at(i).label << "\tprediction: [ " << std::fixed << std::setprecision(3);
                        for(int64_t k = 0; k < dims[1]; ++k) std::cout << r.At<float>({i, k}) << ' ';
                        std::cout << "]\n";
                }
        }
        std::cout.flush();
}
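
For comparison, the commented-out line in the listing is how I would otherwise create the MemoryInfo through the public C++ API; as far as I know, the OpenVINO execution provider, like the CPU provider, reads its input tensors from host memory. A minimal sketch of that variant, using only public headers (I have not confirmed whether it avoids the corruption on the Myriad):

// Public-API variant: CreateCpu builds the OrtMemoryInfo through the ORT C API,
// and the Ort::MemoryInfo wrapper releases it again on destruction, so no raw
// new/ownership mismatch is involved.
const auto memoryInfo = Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPUInput);

// The input tensor wraps the same host-side batch buffer as in the listing above.
const Ort::Value inputValues[] = {
        Ort::Value::CreateTensor<float>(memoryInfo, batch.data(), batch.size(),
                                        inputShape.data(), inputShape.size())
};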

1 Answer


This component (the OpenVINO Execution Provider) is not part of the OpenVINO toolkit itself, so we ask that you post your issue on the ONNX Runtime GitHub; that will help us identify issues in the OpenVINO Execution Provider separately from the main OpenVINO toolkit.

We have opened a case on GitHub on your behalf and should get a reply in that thread soon - https://github.com/microsoft/onnxruntime/issues/6304

answered 2021-01-12T14:40:51.637