I have written a convolutional network in TensorFlow with ReLU as the activation function, but it is not learning (the loss stays constant for both the eval and train datasets). With a different activation function everything works fine.
Here is the code that creates the nn:
def _create_nn(self):
    current = tf.layers.conv2d(self.input, 20, 3, activation=self.activation)
    current = tf.layers.max_pooling2d(current, 2, 2)
    current = tf.layers.conv2d(current, 24, 3, activation=self.activation)
    current = tf.layers.conv2d(current, 24, 3, activation=self.activation)
    current = tf.layers.max_pooling2d(current, 2, 2)
    self.descriptor = current = tf.layers.conv2d(current, 32, 5, activation=self.activation)
    if not self.drop_conv:
        current = tf.layers.conv2d(current, self.layer_7_filters_n, 3, activation=self.activation)
    if self.add_conv:
        current = tf.layers.conv2d(current, 48, 2, activation=self.activation)
    self.descriptor = current
    last_conv_output_shape = current.get_shape().as_list()
    self.descr_size = last_conv_output_shape[1] * last_conv_output_shape[2] * last_conv_output_shape[3]
    current = tf.layers.dense(tf.reshape(current, [-1, self.descr_size]), 100, activation=self.activation)
    current = tf.layers.dense(current, 50, activation=self.last_activation)
    return current
self.activation is set to tf.nn.relu and self.last_activation is set to tf.nn.softmax.
The loss function and optimizer are created here:
self._nn = self._create_nn()
self._loss_function = tf.reduce_sum(tf.squared_difference(self._nn, self.Y), 1)
optimizer = tf.train.AdamOptimizer()
self._train_op = optimizer.minimize(self._loss_function)
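To check whether the problem is vanishing gradients, one can inspect the gradients Adam would apply. This is only a diagnostic sketch, not part of the original training code; the names grads and grad_norms are made up:

# Hypothetical diagnostic: gradients of the loss w.r.t. all trainable variables
grads = tf.gradients(self._loss_function, tf.trainable_variables())
# Norms at or near zero after a few training steps would point to dead ReLU units
grad_norms = [tf.norm(g) for g in grads if g is not None]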
I tried changing the variable initialization by passing tf.random_normal_initializer(0.1, 0.1) as the initializer, but it did not cause any change in the loss.
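For clarity, the initializer was passed roughly like this (a sketch of one call site; kernel_initializer is the relevant tf.layers argument):

init = tf.random_normal_initializer(0.1, 0.1)
current = tf.layers.conv2d(self.input, 20, 3, activation=self.activation,
                           kernel_initializer=init)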
I would appreciate any help in getting this neural network to work with ReLU.
Edit: Leaky ReLU has the same problem.
Edit: I managed to reproduce the same error in a small example:
x = tf.constant([[3., 211., 123., 78.]])
v = tf.Variable([0.5, 0.5, 0.5, 0.5])
h_d = tf.layers.Dense(4, activation=tf.nn.leaky_relu)
h = h_d(x)
y_d = tf.layers.Dense(4, activation=tf.nn.softmax)
y = y_d(h)
d = tf.constant([[.5, .5, 0, 0]])
The gradients of the h_d and y_d kernels and biases (computed with tf.gradients) are equal to or very close to 0.
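For completeness, here is a runnable version of the snippet above. The squared-difference loss is an assumption carried over from the main code, and the unused v variable is dropped:

import tensorflow as tf

x = tf.constant([[3., 211., 123., 78.]])
h_d = tf.layers.Dense(4, activation=tf.nn.leaky_relu)
h = h_d(x)
y_d = tf.layers.Dense(4, activation=tf.nn.softmax)
y = y_d(h)
d = tf.constant([[.5, .5, 0, 0]])

# Assumed loss: the same squared-difference form as in the main network
loss = tf.reduce_sum(tf.squared_difference(y, d), 1)
grads = tf.gradients(loss, tf.trainable_variables())

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grads))  # kernel and bias gradients come out at or near 0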