I am trying to deploy an object detection model on a Google Coral. I trained the model with the following config file, which I tried to match as closely as possible to the demo config file from the Docker image described here.
My model trained successfully, and I then ran ./convert_checkpoint_to_edgetpu_tflite.sh.
The script seemed to run successfully, with the following output:
WARNING:tensorflow:From /media/wwang/WorkDir/projects/SANATA/.venv/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /media/wwang/WorkDir/projects/SANATA/models/research/object_detection/anchor_generators/multiple_grid_anchor_generator.py:183: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
2019-09-12 11:15:11.539092: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-09-12 11:15:11.707588: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x68382b0 executing computations on platform CUDA. Devices:
2019-09-12 11:15:11.707625: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): GeForce RTX 2080 Ti, Compute Capability 7.5
2019-09-12 11:15:11.728473: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3298290000 Hz
2019-09-12 11:15:11.729431: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x68a1b90 executing computations on platform Host. Devices:
2019-09-12 11:15:11.729473: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined>
2019-09-12 11:15:11.729783: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.635
pciBusID: 0000:05:00.0
totalMemory: 10.73GiB freeMemory: 10.34GiB
2019-09-12 11:15:11.729823: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-09-12 11:15:11.732474: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-09-12 11:15:11.732509: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-09-12 11:15:11.732523: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-09-12 11:15:11.732730: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10057 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:05:00.0, compute capability: 7.5)
WARNING:tensorflow:From /media/wwang/WorkDir/projects/SANATA/.venv/lib/python3.5/site-packages/tensorflow/python/tools/freeze_graph.py:127: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
2019-09-12 11:15:15.451695: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-09-12 11:15:15.451741: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-09-12 11:15:15.451748: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-09-12 11:15:15.451753: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-09-12 11:15:15.451857: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10057 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:05:00.0, compute capability: 7.5)
WARNING:tensorflow:From /media/wwang/WorkDir/projects/SANATA/.venv/lib/python3.5/site-packages/tensorflow/python/tools/freeze_graph.py:232: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.convert_variables_to_constants
WARNING:tensorflow:From /media/wwang/WorkDir/projects/SANATA/.venv/lib/python3.5/site-packages/tensorflow/python/framework/graph_util_impl.py:245: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.extract_sub_graph
2019-09-12 11:15:17.880135: I tensorflow/tools/graph_transforms/transform_graph.cc:317] Applying strip_unused_nodes
CONVERTING frozen graph to TF Lite file...
2019-09-12 11:15:19.959403: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-09-12 11:15:20.105331: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x3f28f50 executing computations on platform CUDA. Devices:
2019-09-12 11:15:20.105370: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): GeForce RTX 2080 Ti, Compute Capability 7.5
2019-09-12 11:15:20.124476: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3298290000 Hz
2019-09-12 11:15:20.125267: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x3f92630 executing computations on platform Host. Devices:
2019-09-12 11:15:20.125297: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined>
2019-09-12 11:15:20.125542: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.635
pciBusID: 0000:05:00.0
totalMemory: 10.73GiB freeMemory: 10.34GiB
2019-09-12 11:15:20.125569: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-09-12 11:15:20.127390: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-09-12 11:15:20.127411: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-09-12 11:15:20.127420: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-09-12 11:15:20.127553: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10057 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:05:00.0, compute capability: 7.5)
TFLite graph generated at model_exported/output_tflite_graph.tflite
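For context, as far as I can tell the script freezes the checkpoint and then converts the frozen graph to TFLite with settings along these lines (a rough sketch using the TF 1.x Python API; the tensor names, the 300x300 input shape, and the file paths are my assumptions based on the standard SSD export, not taken from the script itself):

# Rough reconstruction of the frozen-graph -> TFLite step (TF 1.x API).
# Tensor names, input shape, and paths are assumptions on my part.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='model_exported/tflite_graph.pb',   # frozen SSD graph (assumed path)
    input_arrays=['normalized_input_image_tensor'],
    output_arrays=['TFLite_Detection_PostProcess',
                   'TFLite_Detection_PostProcess:1',
                   'TFLite_Detection_PostProcess:2',
                   'TFLite_Detection_PostProcess:3'],
    input_shapes={'normalized_input_image_tensor': [1, 300, 300, 3]})

converter.inference_type = tf.lite.constants.QUANTIZED_UINT8       # quantization-aware training assumed
converter.quantized_input_stats = {'normalized_input_image_tensor': (128., 128.)}
converter.allow_custom_ops = True                                   # needed for the detection post-process op

with open('model_exported/output_tflite_graph.tflite', 'wb') as f:
    f.write(converter.convert())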
I then ran edgetpu_compiler output_tflite_graph.tflite, which also seemed to run successfully, with the following output:
Edge TPU Compiler version 2.0.258810407
INFO: Initialized TensorFlow Lite runtime.
Model compiled successfully in 383 ms.
Input model: model_exported/output_tflite_graph.tflite
Input size: 1.65MiB
Output model: output_tflite_graph_edgetpu.tflite
Output size: 2.33MiB
On-chip memory available for caching model parameters: 7.00MiB
On-chip memory used for caching model parameters: 2.11MiB
Off-chip memory used for streaming uncached model parameters: 0.00B
Number of Edge TPU subgraphs: 1
Total number of operations: 115
Operation log: output_tflite_graph_edgetpu.log
Model successfully compiled but not all operations are supported by the Edge TPU. A percentage of the model will instead run on the CPU, which is slower. If possible, consider updating your model to use only operations supported by the Edge TPU. For details, visit g.co/coral/model-reqs.
Number of operations that will run on Edge TPU: 114
Number of operations that will run on CPU: 1
See the operation log file for individual operation details.
and produced the following output_tflite_graph_edgetpu.log file:
Edge TPU Compiler version 2.0.258810407
Input: output_tflite_graph.tflite
Output: output_tflite_graph_edgetpu.tflite
Operator               Count      Status

DEPTHWISE_CONV_2D      33         Mapped to Edge TPU
RESHAPE                13         Mapped to Edge TPU
LOGISTIC               1          Mapped to Edge TPU
CUSTOM                 1          Operation is working on an unsupported data type
ADD                    10         Mapped to Edge TPU
CONCATENATION          2          Mapped to Edge TPU
CONV_2D                55         Mapped to Edge TPU
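If I read the log correctly, the single operation left on the CPU is the CUSTOM one, which for an SSD model should be the TFLite_Detection_PostProcess op; as far as I know that op is always expected to stay on the CPU. As a host-side sanity check (a minimal sketch, with the path assumed from the output above), the un-compiled model can at least be checked for allocation with the stock TFLite interpreter:

# Host-side sanity check: the un-compiled model should load and allocate
# with the stock TFLite interpreter before trying the Edge TPU version.
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='model_exported/output_tflite_graph.tflite')
interpreter.allocate_tensors()               # raises if the graph itself is malformed

print(interpreter.get_input_details())       # expecting a single uint8 image input (shape assumed)
print(interpreter.get_output_details())      # expecting the four detection post-process outputs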
Finally, I copied the converted output_tflite_graph_edgetpu.tflite onto the Coral and got the following error:
Traceback (most recent call last):
  File "main.py", line 224, in <module>
    main()
  File "main.py", line 221, in main
    run_app(add_render_gen_args, render_gen)
  File "/home/mendel/projects/DARTS/object_detection/edge_tpu_vision/edgetpuvision/apps.py", line 75, in run_app
    display=args.displaymode):
  File "/home/mendel/projects/DARTS/object_detection/edge_tpu_vision/edgetpuvision/gstreamer.py", line 243, in run_gen
    inference_size = render_overlay_gen.send(None) # Initialize.
  File "main.py", line 154, in render_gen
    engines, titles = utils.make_engines(args.model, DetectionEngine)
  File "/home/mendel/projects/DARTS/object_detection/edge_tpu_vision/edgetpuvision/utils.py", line 53, in make_engines
    engine = engine_class(model_path)
  File "/usr/lib/python3/dist-packages/edgetpu/detection/engine.py", line 55, in __init__
    super().__init__(model_path)
  File "/usr/lib/python3/dist-packages/edgetpu/swig/edgetpu_cpp_wrapper.py", line 300, in __init__
    this = _edgetpu_cpp_wrapper.new_BasicEngine(*args)
RuntimeError: Failed to allocate tensors.
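For reference, I believe the smallest way to reproduce this outside the edgetpuvision demo would be to construct DetectionEngine directly, since that is what the traceback shows the demo doing under the hood (a sketch; the model path is mine):

# Minimal on-device reproduction outside the demo app (sketch).
from edgetpu.detection.engine import DetectionEngine

# I expect this to hit the same "Failed to allocate tensors" error,
# since DetectionEngine(model_path) is what the demo constructs above.
engine = DetectionEngine('output_tflite_graph_edgetpu.tflite')
print(engine.get_input_tensor_shape())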
What am I doing wrong?
Thanks!
PS: I realize this might be a better fit for a GitHub issue, but I wasn't sure where google-coral issues should be filed on GitHub...