I am currently reproducing YOLOv2 (not the tiny version) on iOS (Swift 4) using MPS.
One problem is that I am having a hard time implementing the space_to_depth function (https://www.tensorflow.org/api_docs/python/tf/space_to_depth) and the concatenation of two convolution outputs (13x13x256 + 13x13x1024 -> 13x13x1280). Could you give me some advice on how to implement these parts? My code is below.
...
let conv19 = MPSCNNConvolutionNode(source: conv18.resultImage,
                                   weights: DataSource("conv19", 3, 3, 1024, 1024))
let conv20 = MPSCNNConvolutionNode(source: conv19.resultImage,
                                   weights: DataSource("conv20", 3, 3, 1024, 1024))
let conv21 = MPSCNNConvolutionNode(source: conv13.resultImage,
                                   weights: DataSource("conv21", 1, 1, 512, 64))
/*****
 1. space_to_depth on the result of conv21
 2. concatenate the result of conv20 (13x13x1024) with the result of step 1 (13x13x256)
 I need your help to implement this part!
******/
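Here is what I have been considering so far; I am not sure it is the right approach.

For step 1, I could not find a built-in MPS kernel for space_to_depth, so my idea is to express it as a fixed-weight convolution: a 2x2 convolution with stride 2, 64 input channels, and 256 output channels whose weights are a one-hot permutation reorders pixels exactly like tf.space_to_depth with block size 2, and it can then sit in the MPSNNGraph as a normal MPSCNNConvolutionNode. This is only a sketch: SpaceToDepthDataSource is a name I made up, it implements the same MPSCNNConvolutionDataSource protocol as my DataSource class (newer SDKs may additionally require copy(with:)), and it assumes TensorFlow's channel ordering (the block offset becomes the high-order part of the output channel index); if the converted Darknet weights expect the reorg layer's ordering instead, the permutation would have to be changed.

import MetalPerformanceShaders

// Hypothetical data source (not an MPS class): a 2x2 / stride-2 convolution whose
// weights are a one-hot permutation, reproducing tf.space_to_depth with block
// size 2, i.e. 26x26x64 -> 13x13x256, inside the MPSNNGraph.
class SpaceToDepthDataSource: NSObject, MPSCNNConvolutionDataSource {
    private let blockSize = 2
    private let inputChannels = 64
    private let outputChannels = 256                     // 2 * 2 * 64
    private var oneHotWeights: UnsafeMutablePointer<Float>?

    func dataType() -> MPSDataType { return .float32 }

    func descriptor() -> MPSCNNConvolutionDescriptor {
        let desc = MPSCNNConvolutionDescriptor(kernelWidth: blockSize,
                                               kernelHeight: blockSize,
                                               inputFeatureChannels: inputChannels,
                                               outputFeatureChannels: outputChannels)
        // Stride 2 halves the spatial size: 26x26 -> 13x13.
        desc.strideInPixelsX = blockSize
        desc.strideInPixelsY = blockSize
        return desc
    }

    func load() -> Bool {
        // MPS expects weights as [outputChannel][kernelY][kernelX][inputChannel].
        let count = outputChannels * blockSize * blockSize * inputChannels
        let buffer = UnsafeMutablePointer<Float>.allocate(capacity: count)
        for o in 0..<outputChannels {
            for dy in 0..<blockSize {
                for dx in 0..<blockSize {
                    for c in 0..<inputChannels {
                        // TensorFlow ordering: output channel (dy*2 + dx)*64 + c
                        // copies input channel c at block offset (dy, dx).
                        let index = ((o * blockSize + dy) * blockSize + dx) * inputChannels + c
                        buffer[index] = (o == (dy * blockSize + dx) * inputChannels + c) ? 1 : 0
                    }
                }
            }
        }
        oneHotWeights = buffer
        return true
    }

    // MPS only asks for the weights after load() has returned true.
    func weights() -> UnsafeMutableRawPointer { return UnsafeMutableRawPointer(oneHotWeights!) }
    func biasTerms() -> UnsafeMutablePointer<Float>? { return nil }

    func purge() {
        oneHotWeights?.deallocate()
        oneHotWeights = nil
    }

    func label() -> String? { return "space_to_depth" }
}

// 1. space_to_depth on conv21's output: 26x26x64 -> 13x13x256
let spaceToDepth = MPSCNNConvolutionNode(source: conv21.resultImage,
                                         weights: SpaceToDepthDataSource())

For step 2, as far as I understand, MPSNNConcatenationNode (iOS 11) concatenates its sources along the feature-channel dimension, which looks like exactly what the route layer needs; since 256 and 1024 are both multiples of 4, the channel offsets should line up without padding tricks. The conv22 line is only my guess at how the following 3x3 layer would consume the 1280-channel result.

// 2. Concatenate along feature channels: 13x13x256 + 13x13x1024 -> 13x13x1280.
//    The order [spaceToDepth, conv20] follows the route layer in yolov2.cfg
//    (passthrough branch first) and has to match what the next layer's
//    converted weights expect.
let concat = MPSNNConcatenationNode(sources: [spaceToDepth.resultImage,
                                              conv20.resultImage])

// Hypothetical continuation: a 3x3 convolution over the 1280-channel concat result.
let conv22 = MPSCNNConvolutionNode(source: concat.resultImage,
                                   weights: DataSource("conv22", 3, 3, 1280, 1024))

Is the one-hot convolution actually equivalent to space_to_depth here, or is there a cheaper way to do the reorg and the concatenation with MPS directly (for example by writing into the destination image with destinationFeatureChannelOffset)?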