I trained and quantized a TensorFlow model on an Ubuntu 18.04 machine and converted it to tflite format. I then deployed it on a Linux Yocto board equipped with an NPU accelerator, tflite_runtime, and NNAPI. I noticed that the same tflite model outputs different predictions when inference runs on my PC's CPU versus the NPU+NNAPI on the board. The predictions are often similar, but in some cases they are completely different. I tried disabling NNAPI on the board and running inference on its CPU, and the results matched those from the PC CPU, so I believe the problem lies with NNAPI. However, I don't know why this happens. Is there a way to prevent it, or to make the network more robust during training?
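One common source of small backend-to-backend drift in quantized models is rounding: different kernels may round half-way values differently (round-half-to-even vs. round-half-away-from-zero) when computing the affine quantization q = round(x/scale) + zero_point. The sketch below is purely illustrative (the scale and zero-point values are made up, not taken from any real model) and shows how the same input can quantize to different int8 values under the two rounding rules:

```python
import math

def quantize(x, scale, zero_point, round_fn):
    """Affine int8 quantization: q = clamp(round(x/scale) + zero_point, -128, 127)."""
    q = round_fn(x / scale) + zero_point
    return max(-128, min(127, q))

# Python's built-in round() uses round-half-to-even ("banker's rounding").
half_even = round
# Round-half-away-from-zero for non-negative values.
half_away = lambda v: int(math.floor(v + 0.5))

# A value that lands exactly on a .5 boundary after scaling (illustrative only):
x, scale, zp = 2.5, 1.0, 0
q1 = quantize(x, scale, zp, half_even)  # rounds 2.5 -> 2
q2 = quantize(x, scale, zp, half_away)  # rounds 2.5 -> 3
print(q1, q2)  # 2 3
```

A one-unit difference like this in an early layer can propagate and grow through a deep network, which may explain small output variations, though it would not by itself explain completely different predictions.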

1 Answer

费里昂,

The NNAPI team would also be interested in learning more about this.

Small variations are to be expected, but completely different results should not happen.

Have you tried different devices? Do you see the same variation on them?

You mentioned a Linux Yocto build. Are you running the tests with Android on Yocto, or with a Linux build of NNAPI?

Answered 2021-06-03T04:51:40.347