I have built a Keras model with two inputs and I want to run inference on my phone using SNPE. I have converted the model successfully; it is only the C++ code I am stuck on now. I am able to run inference on a model that takes a single input (a one-dimensional array of any shape), but I now have a model that takes two one-dimensional arrays of size 1.

So in Keras, prediction looks like this: model.predict([np.array([.4]), np.array([.6])])

The SNPE code I use to run inference:

void init_model(){
  zdl::DlSystem::Runtime_t runt=checkRuntime();
  initializeSNPE(runt);
}

float run_model(float a, float b){
  std::vector<float> inputVec;
  std::vector<float> inputVec2;
  inputVec.push_back(a);
  inputVec2.push_back(b);
  std::unique_ptr<zdl::DlSystem::ITensor> inputTensor = loadInputTensor(snpe, inputVec);
  std::unique_ptr<zdl::DlSystem::ITensor> inputTensor2 = loadInputTensor(snpe, inputVec2);  // what do I do with this?
  zdl::DlSystem::ITensor* oTensor = executeNetwork(snpe, inputTensor);
  return returnOutput(oTensor);
}

The functions I am using are adapted from SNPE's website. They worked fine for my earlier use case of predicting on a single array:

zdl::DlSystem::Runtime_t checkRuntime()
{
    static zdl::DlSystem::Version_t Version = zdl::SNPE::SNPEFactory::getLibraryVersion();
    static zdl::DlSystem::Runtime_t Runtime;
    std::cout << "SNPE Version: " << Version.asString().c_str() << std::endl; //Print Version number
    std::cout << "\ntest";
    if (zdl::SNPE::SNPEFactory::isRuntimeAvailable(zdl::DlSystem::Runtime_t::GPU)) {
        Runtime = zdl::DlSystem::Runtime_t::GPU;
    } else {
        Runtime = zdl::DlSystem::Runtime_t::CPU;
    }

    return Runtime;
}

void initializeSNPE(zdl::DlSystem::Runtime_t runtime) {
  std::unique_ptr<zdl::DlContainer::IDlContainer> container;
  container = zdl::DlContainer::IDlContainer::open("/path/to/model.dlc");
  //printf("loaded model\n");
  int counter = 0;
  zdl::SNPE::SNPEBuilder snpeBuilder(container.get());
  snpe = snpeBuilder.setOutputLayers({})
                      .setRuntimeProcessor(runtime)
                      .setUseUserSuppliedBuffers(false)
                      .setPerformanceProfile(zdl::DlSystem::PerformanceProfile_t::HIGH_PERFORMANCE)
                      .build();
}

std::unique_ptr<zdl::DlSystem::ITensor> loadInputTensor(std::unique_ptr<zdl::SNPE::SNPE> &snpe, std::vector<float> inputVec) {
  std::unique_ptr<zdl::DlSystem::ITensor> input;
  const auto &strList_opt = snpe->getInputTensorNames();
  if (!strList_opt) throw std::runtime_error("Error obtaining Input tensor names");
  const auto &strList = *strList_opt;

  const auto &inputDims_opt = snpe->getInputDimensions(strList.at(0));
  const auto &inputShape = *inputDims_opt;

  input = zdl::SNPE::SNPEFactory::getTensorFactory().createTensor(inputShape);
  std::copy(inputVec.begin(), inputVec.end(), input->begin());

  return input;
}
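
As an aside: the helper above always builds the tensor from the first input's dimensions (strList.at(0)). For a model with several inputs, a per-input variant that selects the input by name might look like the following minimal sketch (loadInputTensorByName is a hypothetical helper, not part of the SNPE examples; it assumes getInputDimensions accepts any name returned by getInputTensorNames):

std::unique_ptr<zdl::DlSystem::ITensor> loadInputTensorByName(std::unique_ptr<zdl::SNPE::SNPE> &snpe,
                                                              const std::vector<float> &inputVec,
                                                              const char *inputName) {
  // Look up the dimensions of the named input and create a matching tensor
  const auto &inputDims_opt = snpe->getInputDimensions(inputName);
  if (!inputDims_opt) throw std::runtime_error("Error obtaining dimensions for input");
  const auto &inputShape = *inputDims_opt;

  auto input = zdl::SNPE::SNPEFactory::getTensorFactory().createTensor(inputShape);
  std::copy(inputVec.begin(), inputVec.end(), input->begin());
  return input;
}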

float returnOutput(const zdl::DlSystem::ITensor* tensor) {
  float op = *tensor->cbegin();
  return op;
}

zdl::DlSystem::ITensor* executeNetwork(std::unique_ptr<zdl::SNPE::SNPE>& snpe,
                    std::unique_ptr<zdl::DlSystem::ITensor>& input) {
  static zdl::DlSystem::TensorMap outputTensorMap;
  snpe->execute(input.get(), outputTensorMap);
  zdl::DlSystem::StringList tensorNames = outputTensorMap.getTensorNames();

  const char* name = tensorNames.at(0);  // only take the first output tensor
  auto tensorPtr = outputTensorMap.getTensor(name);
  return tensorPtr;
}

But I don't know how to combine the two input tensors I already have with the executeNetwork function. Any help would be appreciated.


1 Answer


You can use a zdl::DlSystem::TensorMap and pass it to the execute function.

zdl::DlSystem::TensorMap inputTensorMap;
zdl::DlSystem::TensorMap outputTensorMap;
zdl::DlSystem::ITensor *inputTensor1;  // e.g. the tensors you already build in run_model
zdl::DlSystem::ITensor *inputTensor2;
inputTensorMap.add("input_1", inputTensor1);  // names must match the model's input tensor names
inputTensorMap.add("input_2", inputTensor2);
model->execute(inputTensorMap, outputTensorMap);  // "model" is the same SNPE instance ("snpe" in your code)

Note that afterwards you have to iterate over the inputTensorMap and delete the ITensor objects yourself with delete.
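
Putting it together with the helpers from the question, a minimal sketch of run_model for two inputs could look like the following. It assumes the Keras input layers ended up named "input_1" and "input_2" in the converted DLC (check the actual names via snpe->getInputTensorNames()), and it uses the hypothetical loadInputTensorByName sketch from above (the original loadInputTensor would also work here, since both inputs have the same shape). The tensors stay owned by unique_ptrs and only raw pointers go into the map via .get(), so in this variant there is nothing to delete by hand afterwards:

float run_model(float a, float b) {
  std::vector<float> inputVec  = {a};
  std::vector<float> inputVec2 = {b};

  // Build one tensor per model input; the unique_ptrs keep ownership
  auto inputTensor1 = loadInputTensorByName(snpe, inputVec,  "input_1");  // assumed input names
  auto inputTensor2 = loadInputTensorByName(snpe, inputVec2, "input_2");

  zdl::DlSystem::TensorMap inputTensorMap;
  zdl::DlSystem::TensorMap outputTensorMap;
  inputTensorMap.add("input_1", inputTensor1.get());
  inputTensorMap.add("input_2", inputTensor2.get());

  // execute() also accepts a TensorMap of named inputs
  snpe->execute(inputTensorMap, outputTensorMap);

  // Read back the first value of the first output tensor, as in returnOutput()
  const char *outName = outputTensorMap.getTensorNames().at(0);
  return *outputTensorMap.getTensor(outName)->cbegin();
}

If you instead hand raw pointers over with release(), as in the snippet above, the note about manual deletion applies and you have to delete the tensors yourself after execute().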

Answered 2020-02-21T10:24:02.173