I have a custom YOLO model in the form of a cfg file and weights. I converted it to .pb and .meta files with darkflow (https://github.com/thtrieu/darkflow):
sudo ./flow --model cfg/license.cfg --load bin/yololp1_420000.weights --savepb --verbalise
Inspecting the generated .pb (/ license.pb) gives:
>>> import tensorflow as tf
>>> gf = tf.GraphDef()
>>> gf.ParseFromString(open('/darkflow/built_graph/license.pb','rb').read())
202339124
>>> [n.name + '=>' + n.op for n in gf.node if n.op in ( 'Softmax','Placeholder')]
[u'input=>Placeholder']
>>> [n.name + '=>' + n.op for n in gf.node if n.op in ( 'Softmax','Mul')]
[u'mul=>Mul', u'mul_1=>Mul', u'mul_2=>Mul', u'mul_3=>Mul', u'mul_4=>Mul', u'mul_5=>Mul', u'mul_6=>Mul', ...]
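There is no Softmax in the graph because YOLO is a detection network, not a classifier; in my case the final region-layer tensor is presumably what the demo should read as `output`. One generic way to locate candidate output layers in any GraphDef is to find nodes that no other node consumes. A minimal sketch, using plain `(name, input_names)` pairs as a stand-in for GraphDef nodes (a real graph can be converted with `[(n.name, list(n.input)) for n in gf.node]`):

```python
def find_terminal_nodes(nodes):
    """Return names of nodes that no other node consumes.

    `nodes` is a sequence of (name, input_names) pairs. In a GraphDef,
    input names may carry a control-dependency prefix ("^node") or an
    output index suffix ("node:1"), so both are stripped first.
    """
    consumed = set()
    for _, inputs in nodes:
        for inp in inputs:
            consumed.add(inp.lstrip('^').split(':')[0])
    return [name for name, _ in nodes if name not in consumed]


# Hypothetical miniature graph for illustration:
nodes = [("input", []), ("mul", ["input"]), ("output", ["mul"])]
print(find_terminal_nodes(nodes))  # ['output']
```

Nodes returned by this are the graph's sinks, which is usually where the output tensor to feed into `TensorFlowInferenceInterface` lives.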
The graph has an "input" layer, but no obvious "output" layer. I tried porting the model into the TensorFlow Android camera detection demo (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android). The camera preview stops after about a second, with the following Android exception:
04-27 15:06:32.727 21721 21737 D gralloc : gralloc_lock_ycbcr success. format : 11, usage: 3, ycbcr.y: 0xc07cf000, .cb: 0xc081a001, .cr: 0xc081a000, .ystride: 640 , .cstride: 640, .chroma_step: 2
04-27 15:06:32.735 21721 21736 E TensorFlowInferenceInterface: Failed to run TensorFlow inference with inputs:[input], outputs:[output]
04-27 15:06:32.736 21721 21736 E AndroidRuntime: FATAL EXCEPTION: inference
04-27 15:06:32.736 21721 21736 E AndroidRuntime: Process: org.tensorflow.demo, PID: 21721
04-27 15:06:32.736 21721 21736 E AndroidRuntime: java.lang.IllegalArgumentException: No OpKernel was registered to support Op 'ExtractImagePatches' with these attrs. Registered devices: [CPU], Registered kernels:
04-27 15:06:32.736 21721 21736 E AndroidRuntime: <no registered kernels>
04-27 15:06:32.736 21721 21736 E AndroidRuntime: [[Node: ExtractImagePatches = ExtractImagePatches[T=DT_FLOAT, ksizes=[1, 2, 2, 1], padding="VALID", rates=[1, 1, 1, 1], strides=[1, 2, 2, 1]](47-leaky)]]
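For context on what is failing: the `ExtractImagePatches` node comes from YOLOv2's "reorg" layer, which with `ksizes=[1,2,2,1]` and `strides=[1,2,2,1]` rearranges each 2x2 spatial block into the channel dimension (a space-to-depth transform). The mobile TF build I'm linking apparently ships no kernel for that op. A minimal NumPy sketch of the equivalent rearrangement, to show what the op computes (my understanding of the layer, not darkflow's actual implementation):

```python
import numpy as np

def space_to_depth(x, block_size):
    """Rearrange block_size x block_size spatial blocks of an NHWC
    tensor into the channel dimension, mirroring tf.space_to_depth."""
    n, h, w, c = x.shape
    bs = block_size
    assert h % bs == 0 and w % bs == 0, "H and W must be divisible by block_size"
    # Split H and W into (blocks, within-block) axes, then move the
    # within-block axes next to the channel axis and flatten them into it.
    x = x.reshape(n, h // bs, bs, w // bs, bs, c)
    x = x.transpose(0, 1, 3, 2, 4, 5)
    return x.reshape(n, h // bs, w // bs, bs * bs * c)


x = np.arange(16).reshape(1, 4, 4, 1)
out = space_to_depth(x, 2)
print(out.shape)      # (1, 2, 2, 4)
print(out[0, 0, 0])   # [0 1 4 5] -- the top-left 2x2 block
```

If the op really has no registered mobile kernel, the usual options seem to be building libtensorflow with the missing kernel compiled in, or rewriting the reorg layer in terms of an op the mobile build does support.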
How can I fix this? I also tried converting the .pb to a mobile-optimized .pb with "optimize_for_inference.py", but that did not help. Given this, how do I correctly identify the input and output tensors/layers in the converted .pb file? Or how do I correctly port the generated .pb into the TF camera detection demo?