
I am trying to run the FP16 person-detection-retail-0013 and person-reidentification-retail-0079 models on an Intel Neural Compute Stick, but as soon as my application loads the network onto the device I get this exception:

[INFERENCE ENGINE EXCEPTION] Dynamic batch is not supported

I already load the network with the maximum batch size set to 1, and I started my project from the pedestrian tracker demo in the OpenVINO toolkit:

main.cpp --> CreatePedestrianTracker

    CnnConfig reid_config(reid_model, reid_weights);
    reid_config.max_batch_size = 16;

    try {
        if (ie.GetConfig(deviceName, CONFIG_KEY(DYN_BATCH_ENABLED)).as<std::string>() != 
            PluginConfigParams::YES) {
            reid_config.max_batch_size = 1;
            std::cerr << "[DEBUG] Dynamic batch is not supported for " << deviceName
                      << ". Fall back to batch 1." << std::endl;
        }
    }
    catch (const InferenceEngine::details::InferenceEngineException& e) {
        reid_config.max_batch_size = 1;
        std::cerr << e.what() << " for " << deviceName << ". Fall back to batch 1." << std::endl;
    }

Cnn.cpp --> void CnnBase::InferBatch

void CnnBase::InferBatch(
    const std::vector<cv::Mat>& frames,
    std::function<void(const InferenceEngine::BlobMap&, size_t)> fetch_results) const {
    const size_t batch_size = input_blob_->getTensorDesc().getDims()[0];

    size_t num_imgs = frames.size();
    for (size_t batch_i = 0; batch_i < num_imgs; batch_i += batch_size) {
        const size_t current_batch_size = std::min(batch_size, num_imgs - batch_i);

        // Copy each frame of the current chunk into the input blob.
        for (size_t b = 0; b < current_batch_size; b++) {
            matU8ToBlob<uint8_t>(frames[batch_i + b], input_blob_, b);
        }

        // SetBatch() requires dynamic-batch support, which MYRIAD/HDDL lack.
        if ((deviceName_.find("MYRIAD") == std::string::npos) &&
            (deviceName_.find("HDDL") == std::string::npos)) {
            infer_request_.SetBatch(current_batch_size);
        }

        infer_request_.Infer();

        fetch_results(outputs_, current_batch_size);
    }
}

I suspect the problem may be the topology of the detection network, but I am asking in case anyone has run into the same issue and solved it.
Thanks.


1 Answer


I'm afraid the MYRIAD plugin does not support dynamic batching. Please try a newer version of the demo; you can find it here: https://github.com/opencv/open_model_zoo/tree/master/demos/pedestrian_tracker_demo. That demo has been updated so that it does not use dynamic batching at all.

answered 2020-02-27T04:55:41.887