
Consider this simple neural network defined in Python:

import tensorflow as tf

input_layer = tf.placeholder(tf.float32, shape=[1, 1, 8000, 1], name='input_layer')

# Convolutional Layer
conv = tf.layers.conv2d(
        inputs=input_layer,
        filters=32,
        kernel_size=[1,11],
        padding="same",
        strides=1,
        activation=tf.nn.relu)

# Output layer
logits = tf.layers.dense(inputs=conv, units=5, name='logit')

It produces the following graph topology:

0 input_layer Placeholder
1 conv2d/kernel/Initializer/random_uniform/shape Const
2 conv2d/kernel/Initializer/random_uniform/min Const
3 conv2d/kernel/Initializer/random_uniform/max Const
4 conv2d/kernel/Initializer/random_uniform/RandomUniform RandomUniform
└─── Input0 ─ conv2d/kernel/Initializer/random_uniform/shape
5 conv2d/kernel/Initializer/random_uniform/sub Sub
└─── Input0 ─ conv2d/kernel/Initializer/random_uniform/max
└─── Input1 ─ conv2d/kernel/Initializer/random_uniform/min
6 conv2d/kernel/Initializer/random_uniform/mul Mul
└─── Input0 ─ conv2d/kernel/Initializer/random_uniform/RandomUniform
└─── Input1 ─ conv2d/kernel/Initializer/random_uniform/sub
7 conv2d/kernel/Initializer/random_uniform Add
└─── Input0 ─ conv2d/kernel/Initializer/random_uniform/mul
└─── Input1 ─ conv2d/kernel/Initializer/random_uniform/min
8 conv2d/kernel VariableV2
9 conv2d/kernel/Assign Assign
└─── Input0 ─ conv2d/kernel
└─── Input1 ─ conv2d/kernel/Initializer/random_uniform
10 conv2d/kernel/read Identity
└─── Input0 ─ conv2d/kernel
11 conv2d/bias/Initializer/zeros Const
12 conv2d/bias VariableV2
13 conv2d/bias/Assign Assign
└─── Input0 ─ conv2d/bias
└─── Input1 ─ conv2d/bias/Initializer/zeros
14 conv2d/bias/read Identity
└─── Input0 ─ conv2d/bias
15 conv2d/convolution/Shape Const
16 conv2d/convolution/dilation_rate Const
17 conv2d/convolution Conv2D
└─── Input0 ─ input_layer
└─── Input1 ─ conv2d/kernel/read
18 conv2d/BiasAdd BiasAdd
└─── Input0 ─ conv2d/convolution
└─── Input1 ─ conv2d/bias/read
19 conv2d/Relu Relu
└─── Input0 ─ conv2d/BiasAdd
20 logit/kernel/Initializer/random_uniform/shape Const
21 logit/kernel/Initializer/random_uniform/min Const
22 logit/kernel/Initializer/random_uniform/max Const
23 logit/kernel/Initializer/random_uniform/RandomUniform RandomUniform
└─── Input0 ─ logit/kernel/Initializer/random_uniform/shape
24 logit/kernel/Initializer/random_uniform/sub Sub
└─── Input0 ─ logit/kernel/Initializer/random_uniform/max
└─── Input1 ─ logit/kernel/Initializer/random_uniform/min
25 logit/kernel/Initializer/random_uniform/mul Mul
└─── Input0 ─ logit/kernel/Initializer/random_uniform/RandomUniform
└─── Input1 ─ logit/kernel/Initializer/random_uniform/sub
26 logit/kernel/Initializer/random_uniform Add
└─── Input0 ─ logit/kernel/Initializer/random_uniform/mul
└─── Input1 ─ logit/kernel/Initializer/random_uniform/min
27 logit/kernel VariableV2
28 logit/kernel/Assign Assign
└─── Input0 ─ logit/kernel
└─── Input1 ─ logit/kernel/Initializer/random_uniform
29 logit/kernel/read Identity
└─── Input0 ─ logit/kernel
30 logit/bias/Initializer/zeros Const
31 logit/bias VariableV2
32 logit/bias/Assign Assign
└─── Input0 ─ logit/bias
└─── Input1 ─ logit/bias/Initializer/zeros
33 logit/bias/read Identity
└─── Input0 ─ logit/bias
34 logit/Tensordot/transpose/perm Const
35 logit/Tensordot/transpose Transpose
└─── Input0 ─ conv2d/Relu
└─── Input1 ─ logit/Tensordot/transpose/perm
36 logit/Tensordot/Reshape/shape Const
37 logit/Tensordot/Reshape Reshape
└─── Input0 ─ logit/Tensordot/transpose
└─── Input1 ─ logit/Tensordot/Reshape/shape
38 logit/Tensordot/transpose_1/perm Const
39 logit/Tensordot/transpose_1 Transpose
└─── Input0 ─ logit/kernel/read
└─── Input1 ─ logit/Tensordot/transpose_1/perm
40 logit/Tensordot/Reshape_1/shape Const
41 logit/Tensordot/Reshape_1 Reshape
└─── Input0 ─ logit/Tensordot/transpose_1
└─── Input1 ─ logit/Tensordot/Reshape_1/shape
42 logit/Tensordot/MatMul MatMul
└─── Input0 ─ logit/Tensordot/Reshape
└─── Input1 ─ logit/Tensordot/Reshape_1
43 logit/Tensordot/shape Const
44 logit/Tensordot Reshape
└─── Input0 ─ logit/Tensordot/MatMul
└─── Input1 ─ logit/Tensordot/shape
45 logit/BiasAdd BiasAdd
└─── Input0 ─ logit/Tensordot
└─── Input1 ─ logit/bias/read

What is the purpose of the reshape operations between nodes 36 and 44? I am using the Snapdragon Neural Processing Engine (SNPE), which does not allow reshape operations. Is there a way to express this model without the reshapes?


2 Answers


All of these reshape operations are added by tf.tensordot, which tf.layers.dense uses for high-dimensional inputs (4-D in your case). From its documentation:

Note: if the rank of the input tensor is greater than 2, it is flattened prior to the initial matrix multiply by the kernel.

Link to the source code.
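In plain NumPy terms (shapes taken from the question; this is an illustration of the graph pattern, not TensorFlow's actual implementation), the Reshape → MatMul → Reshape sequence between nodes 36 and 44 does the following:

```python
import numpy as np

# Shapes from the question: conv2d/Relu output is [1, 1, 8000, 32],
# the dense kernel is [32, 5].
x = np.random.rand(1, 1, 8000, 32).astype(np.float32)
kernel = np.random.rand(32, 5).astype(np.float32)

# tf.tensordot flattens the leading axes, performs one 2-D MatMul,
# then restores the leading axes -- hence the Reshape/MatMul/Reshape nodes.
flat = x.reshape(-1, 32)                        # [8000, 32]
out = (flat @ kernel).reshape(1, 1, 8000, 5)    # back to rank 4
print(out.shape)                                # (1, 1, 8000, 5)
```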

If reshapes are undesirable in your environment, try defining the weights and biases manually and computing the dot product via tf.matmul.
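A minimal NumPy sketch of that workaround (hypothetical shapes from the question; whether tf.matmul itself accepts a rank-4 left operand against a rank-2 kernel depends on the TensorFlow version, so tf.einsum may be needed instead):

```python
import numpy as np

x = np.random.rand(1, 1, 8000, 32).astype(np.float32)
kernel = np.random.rand(32, 5).astype(np.float32)
bias = np.zeros(5, dtype=np.float32)

# Reference result: the Reshape -> MatMul -> Reshape path built by tf.tensordot.
ref = (x.reshape(-1, 32) @ kernel).reshape(1, 1, 8000, 5) + bias

# Workaround: contract only the last axis. matmul broadcasts the leading
# axes of a rank-4 operand against a rank-2 kernel, so no explicit
# reshape node is required.
direct = x @ kernel + bias

print(np.allclose(ref, direct))  # True
```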

Answered 2017-11-06T14:47:04.970

Reshape is supported in SNPE as of v1.8.0.

Answered 2017-12-06T08:51:50.613