9

I would like to have a 2D convolution with a filter which depends on the sample in the mini-batch in TensorFlow. Any ideas how one could do that, especially if the number of samples per mini-batch is not known?

Concretely, I have input data inp of the form MB x H x W x Channels, and I have filters F of the form MB x fh x fw x Channels x OutChannels.

Assume

inp = tf.placeholder('float', [None, H, W, channels_img], name='img_input').

I would like to do tf.nn.conv2d(inp, F, strides = [1,1,1,1]), but this is not allowed because F cannot have a mini-batch dimension. Any idea how to work around this?


4 Answers

5

I think the proposed trick is actually not right. What happens with a tf.conv3d() layer is that the input gets convolved on the depth (= actual batch) dimension and then summed along the resulting feature maps. With padding='SAME' the resulting number of outputs then happens to be the same as the batch size, so one gets fooled!

EDIT: I think a possible way to do a convolution with different filters for the different mini-batch elements involves "hacking" a depthwise convolution. Assuming the batch size MB is known:

inp = tf.placeholder(tf.float32, [MB, H, W, channels_img])

# F has shape (MB, fh, fw, channels, out_channels)
# REM: with the notation in the question, we need: channels_img==channels

F = tf.transpose(F, [1, 2, 0, 3, 4])
F = tf.reshape(F, [fh, fw, channels*MB, out_channels])

inp_r = tf.transpose(inp, [1, 2, 0, 3]) # shape (H, W, MB, channels_img)
inp_r = tf.reshape(inp_r, [1, H, W, MB*channels_img])

out = tf.nn.depthwise_conv2d(
          inp_r,
          filter=F,
          strides=[1, 1, 1, 1],
          padding='VALID') # here no requirement about padding being 'VALID', use whatever you want. 
# Now out shape is (1, H, W, MB*channels*out_channels)

out = tf.reshape(out, [H, W, MB, channels, out_channels]) # careful about the order of depthwise conv out_channels!
out = tf.transpose(out, [2, 0, 1, 3, 4])
out = tf.reduce_sum(out, axis=3)

# out shape is now (MB, H, W, out_channels)

In case MB is not known, it should be possible to determine it dynamically using tf.shape() (I think).
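For reference, here is a minimal sketch of what that dynamic variant might look like. It is untested, the _dyn names are just for illustration, and it assumes depthwise_conv2d tolerates the channel dimension not being statically known; if the op insists on a static channel size, this will not work as written.

MB = tf.shape(inp)[0]  # dynamic batch size as a scalar int32 tensor

# same transposes/reshapes as above, but with MB taken from the graph
F_dyn = tf.transpose(F, [1, 2, 0, 3, 4])
F_dyn = tf.reshape(F_dyn, [fh, fw, MB * channels, out_channels])

inp_dyn = tf.transpose(inp, [1, 2, 0, 3])
inp_dyn = tf.reshape(inp_dyn, [1, H, W, MB * channels_img])

# the remaining depthwise_conv2d / reshape / transpose / reduce_sum steps are unchanged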

Answered 2017-09-18T18:39:58.333
4

You can use tf.map_fn as follows:

inp = tf.placeholder(tf.float32, [None, h, w, c_in]) 
def single_conv(tupl):
    x, kernel = tupl
    return tf.nn.conv2d(x, kernel, strides=(1, 1, 1, 1), padding='VALID')
# Assume kernels shape is [tf.shape(inp)[0], fh, fw, c_in, c_out]
batch_wise_conv = tf.squeeze(tf.map_fn(
    single_conv, (tf.expand_dims(inp, 1), kernels), dtype=tf.float32),
    axis=1
)

It is important to specify dtype for map_fn. Basically, this solution defines batch_dim_size separate 2D convolution operations, one per sample.
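As a hypothetical end-to-end usage sketch (TF 1.x graph mode; the kernels placeholder and all concrete sizes below are invented for illustration, only the map_fn pattern is taken from the answer):

import numpy as np
import tensorflow as tf

mb, h, w, c_in, c_out, fh, fw = 4, 32, 32, 3, 8, 5, 5

inp = tf.placeholder(tf.float32, [None, h, w, c_in])
kernels = tf.placeholder(tf.float32, [None, fh, fw, c_in, c_out])  # one filter per sample

def single_conv(tupl):
    x, kernel = tupl
    return tf.nn.conv2d(x, kernel, strides=(1, 1, 1, 1), padding='VALID')

batch_wise_conv = tf.squeeze(tf.map_fn(
    single_conv, (tf.expand_dims(inp, 1), kernels), dtype=tf.float32),
    axis=1)

with tf.Session() as sess:
    out = sess.run(batch_wise_conv, feed_dict={
        inp: np.random.rand(mb, h, w, c_in).astype(np.float32),
        kernels: np.random.rand(mb, fh, fw, c_in, c_out).astype(np.float32)})
    print(out.shape)  # (4, 28, 28, 8): with 'VALID' padding, 32 shrinks to 32 - 5 + 1 = 28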

Answered 2018-02-28T13:40:56.330
4

The accepted answer is slightly wrong in how it handles the dimensions, since they change when padding = "VALID" (it treats them as if padding were "SAME"). Hence in the general case the code crashes because of this mismatch; for example, with H = W = 28 and fh = fw = 5, "SAME" keeps the 28 x 28 spatial output while "VALID" shrinks it to 24 x 24, so a single reshape to [H, W, ...] fails. I attach his corrected code, with both cases handled properly.

inp = tf.placeholder(tf.float32, [MB, H, W, channels_img])

# F has shape (MB, fh, fw, channels, out_channels)
# REM: with the notation in the question, we need: channels_img==channels

F = tf.transpose(F, [1, 2, 0, 3, 4])
F = tf.reshape(F, [fh, fw, channels*MB, out_channels])

inp_r = tf.transpose(inp, [1, 2, 0, 3]) # shape (H, W, MB, channels_img)
inp_r = tf.reshape(inp_r, [1, H, W, MB*channels_img])

padding = "VALID" #or "SAME"
out = tf.nn.depthwise_conv2d(
          inp_r,
          filter=F,
          strides=[1, 1, 1, 1],
          padding=padding) # here no requirement about padding being 'VALID', use whatever you want. 
# Now out shape is (1, H-fh+1, W-fw+1, MB*channels*out_channels), because we used "VALID"

if padding == "SAME":
    out = tf.reshape(out, [H, W, MB, channels, out_channels])
if padding == "VALID":
    out = tf.reshape(out, [H-fh+1, W-fw+1, MB, channels, out_channels])
out = tf.transpose(out, [2, 0, 1, 3, 4])
out = tf.reduce_sum(out, axis=3)

# out shape is now (MB, H-fh+1, W-fw+1, out_channels)
Answered 2018-05-07T11:40:01.877
3

The way to get around it is to add an extra dimension using

tf.expand_dims(inp, 0)

to create a "fake" batch size. Then use the

tf.nn.conv3d()

operation where the filter depth matches the batch size. This will result in each filter convolving with only one sample in each batch.

Sadly, you will not solve the variable batch size problem this way, only the convolutions.
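A minimal sketch of that trick, with invented sizes (TF 1.x style). It also illustrates what the top answer warns about: with padding='SAME' the output depth happens to equal MB, but each depth slice sums contributions from all samples rather than using one filter per sample.

import tensorflow as tf

MB, H, W, C, c_out, fh, fw = 4, 16, 16, 3, 8, 3, 3

inp = tf.placeholder(tf.float32, [MB, H, W, C])
F = tf.placeholder(tf.float32, [MB, fh, fw, C, c_out])  # filter depth == batch size

inp5d = tf.expand_dims(inp, 0)  # (1, MB, H, W, C): the batch axis becomes the "depth" axis
out = tf.nn.conv3d(inp5d, F, strides=[1, 1, 1, 1, 1], padding='SAME')
# out has shape (1, MB, H, W, c_out); the depth axis still has size MB,
# which looks like a per-sample result but is not.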

Answered 2017-02-07T09:56:09.920