I am using OpenVINO 2019, and I need to detect the CPU and VPU for my project. In the 2018 version I used a few APIs for this, but they are missing from the new release.
So what is the correct way to detect OpenVINO devices in C++ code?
There is a Hello Query Device sample at the following path:
C:\Program Files (x86)\IntelSWTools\openvino_\inference_engine\samples\hello_query_device
It queries Inference Engine devices and prints their metrics and the default values of their configuration parameters. The sample shows how to use the Query Device API feature.
NOTE: This topic describes usage of the C++ implementation of the Query Device sample. For the Python* implementation, see the Hello Query Device Python* Sample.
To see the required information, run the following command:
./hello_query_device
The application prints all available devices with their supported metrics and the default values of their configuration parameters:
Available devices:
Device: CPU
Metrics:
AVAILABLE_DEVICES : [ 0 ]
SUPPORTED_METRICS : [ AVAILABLE_DEVICES SUPPORTED_METRICS FULL_DEVICE_NAME OPTIMIZATION_CAPABILITIES SUPPORTED_CONFIG_KEYS RANGE_FOR_ASYNC_INFER_REQUESTS RANGE_FOR_STREAMS ]
FULL_DEVICE_NAME : Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
OPTIMIZATION_CAPABILITIES : [ WINOGRAD FP32 INT8 BIN ]
SUPPORTED_CONFIG_KEYS : [ CPU_BIND_THREAD CPU_THREADS_NUM CPU_THROUGHPUT_STREAMS DUMP_EXEC_GRAPH_AS_DOT DYN_BATCH_ENABLED DYN_BATCH_LIMIT EXCLUSIVE_ASYNC_REQUESTS PERF_COUNT ]
...
Default values for device configuration keys:
CPU_BIND_THREAD : YES
CPU_THREADS_NUM : 0
CPU_THROUGHPUT_STREAMS : 1
DUMP_EXEC_GRAPH_AS_DOT : ""
DYN_BATCH_ENABLED : NO
DYN_BATCH_LIMIT : 0
EXCLUSIVE_ASYNC_REQUESTS : NO
PERF_COUNT : NO
Device: FPGA
Metrics:
AVAILABLE_DEVICES : [ 0 ]
SUPPORTED_METRICS : [ AVAILABLE_DEVICES SUPPORTED_METRICS SUPPORTED_CONFIG_KEYS FULL_DEVICE_NAME OPTIMIZATION_CAPABILITIES RANGE_FOR_ASYNC_INFER_REQUESTS ]
SUPPORTED_CONFIG_KEYS : [ DEVICE_ID PERF_COUNT EXCLUSIVE_ASYNC_REQUESTS DLIA_IO_TRANSFORMATIONS_NATIVE DLIA_ARCH_ROOT_DIR DLIA_PERF_ESTIMATION ]
FULL_DEVICE_NAME : a10gx_2ddr : Intel Vision Accelerator Design with Intel Arria 10 FPGA (acla10_1150_sg10)
OPTIMIZATION_CAPABILITIES : [ FP16 ]
RANGE_FOR_ASYNC_INFER_REQUESTS : { 2, 5, 1 }
Default values for device configuration keys:
DEVICE_ID : [ 0 ]
PERF_COUNT : true
EXCLUSIVE_ASYNC_REQUESTS : false
DLIA_IO_TRANSFORMATIONS_NATIVE : false
DLIA_PERF_ESTIMATION : true
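If you only need this information from your own C++ code, the same Query Device API is exposed through the InferenceEngine::Core class that ships with the 2019 release. Below is a minimal sketch, assuming the standard <inference_engine.hpp> header and the metric keys shown in the output above; it is an illustration of the API rather than the exact sample code. The Intel VPU (Neural Compute Stick) is typically reported under the name MYRIAD.

#include <inference_engine.hpp>
#include <iostream>
#include <string>
#include <vector>

int main() {
    InferenceEngine::Core core;

    // List every device the Inference Engine can see (e.g. CPU, GPU, MYRIAD, FPGA).
    std::vector<std::string> devices = core.GetAvailableDevices();

    for (const std::string &device : devices) {
        // FULL_DEVICE_NAME is one of the metrics printed by the sample above.
        std::string fullName =
            core.GetMetric(device, METRIC_KEY(FULL_DEVICE_NAME)).as<std::string>();
        std::cout << device << " : " << fullName << std::endl;
    }

    return 0;
}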
// Excerpt from the clDNN (GPU) plugin: the old, pre-2019 device check built on
// ICNNNetwork::getTargetDevice() and the TargetDevice enum, i.e. the API that is
// no longer available in OpenVINO 2019.
ExecutableNetworkInternal::Ptr clDNNEngine::LoadExeNetworkImpl(InferenceEngine::ICNNNetwork &network,
                                                               const std::map<std::string, std::string> &config) {
    auto specifiedDevice = network.getTargetDevice();
    auto supportedDevice = InferenceEngine::TargetDevice::eGPU;
    // Reject networks that explicitly target anything other than the GPU.
    if (specifiedDevice != InferenceEngine::TargetDevice::eDefault && specifiedDevice != supportedDevice) {
        THROW_IE_EXCEPTION << "The plugin doesn't support target device: " << getDeviceName(specifiedDevice) << ".\n" <<
                              "Supported target device: " << getDeviceName(supportedDevice);
    }
    // ... (rest of the plugin implementation)
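In OpenVINO 2019 the TargetDevice enum used in that excerpt is gone; the device is selected by passing its name as a string (for example to Core::LoadNetwork), and detecting a CPU or VPU beforehand comes down to checking whether "CPU" or "MYRIAD" appears in the list returned by GetAvailableDevices. A hedged sketch follows; isDeviceAvailable is a hypothetical helper, not part of the Inference Engine API.

#include <inference_engine.hpp>
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical helper (not part of the Inference Engine API): true when a device
// with the given name, e.g. "CPU" or "MYRIAD" (the VPU), is reported on this machine.
static bool isDeviceAvailable(InferenceEngine::Core &core, const std::string &name) {
    std::vector<std::string> devices = core.GetAvailableDevices();
    return std::find(devices.begin(), devices.end(), name) != devices.end();
}

int main() {
    InferenceEngine::Core core;

    // Prefer the VPU when present, otherwise fall back to the CPU.
    std::string device = isDeviceAvailable(core, "MYRIAD") ? "MYRIAD" : "CPU";
    std::cout << "Selected device: " << device << std::endl;

    // The chosen name is then passed as a string to Core::LoadNetwork(network, device)
    // instead of the removed TargetDevice enum.
    return 0;
}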