
I want to use l2 regularization together with dynamic_rnn in tensorflow, but this does not currently seem to be handled well; the while loop is the source of the error. Below is a sample code snippet that reproduces the problem:

import numpy as np
import tensorflow as tf
tf.reset_default_graph()
batch = 2
dim = 3
hidden = 4

with tf.variable_scope('test', regularizer=tf.contrib.layers.l2_regularizer(0.001)):
    lengths = tf.placeholder(dtype=tf.int32, shape=[batch])
    inputs = tf.placeholder(dtype=tf.float32, shape=[batch, None, dim])
    cell = tf.nn.rnn_cell.GRUCell(hidden)
    cell_state = cell.zero_state(batch, tf.float32)
    output, _ = tf.nn.dynamic_rnn(cell, inputs, lengths, initial_state=cell_state)
    inputs_ = np.asarray([[[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3]],
                        [[6, 6, 6], [7, 7, 7], [8, 8, 8], [9, 9, 9]]],
                        dtype=np.float32)  # match the float32 placeholder
    lengths_ = np.asarray([3, 1], dtype=np.int32)
this_throws_error = tf.losses.get_regularization_loss()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    output_ = sess.run(output, {inputs: inputs_, lengths: lengths_})
    print(output_)

INFO:tensorflow:Cannot use 'test/rnn/gru_cell/gates/kernel/Regularizer/l2_regularizer' as input to 'total_regularization_loss' because 'test/rnn/gru_cell/gates/kernel/Regularizer/l2_regularizer' is in a while loop.

total_regularization_loss while context: None
test/rnn/gru_cell/gates/kernel/Regularizer/l2_regularizer while context: test/rnn/while/while_context

If my network contains a dynamic_rnn, how do I add l2 regularization? For now I could fetch the trainable-variables collection in the loss computation and add the l2 loss there, but I also have word vectors as trainable parameters, and I do not want to regularize those.
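The fallback the question mentions, summing an l2 term over the trainable variables by hand while skipping the word vectors, comes down to filtering variables by name before accumulating. A minimal numpy sketch of that filtering idea (the variable names, shapes, and values here are hypothetical, not taken from the graph above):

```python
import numpy as np

# Hypothetical trainable variables keyed by name; the word-vector
# embeddings should be excluded from regularization.
weights = {
    "test/embeddings/word_vectors": np.ones((5, 3), dtype=np.float32),
    "test/rnn/gru_cell/gates/kernel": np.full((7, 8), 0.5, dtype=np.float32),
    "test/rnn/gru_cell/candidate/kernel": np.full((7, 4), 0.5, dtype=np.float32),
}

def l2_regularization(variables, scale=0.001, exclude=("embeddings",)):
    """Sum scale * 0.5 * ||w||^2 over every variable whose name contains
    none of the excluded substrings (0.5 * ||w||^2 mirrors tf.nn.l2_loss)."""
    total = 0.0
    for name, w in variables.items():
        if any(token in name for token in exclude):
            continue  # skip variables we do not want to regularize
        total += scale * 0.5 * float(np.sum(np.square(w)))
    return total

loss = l2_regularization(weights)
```

In a TF 1.x graph the same filter would run over `tf.trainable_variables()` using each variable's `.name`, with `tf.nn.l2_loss` in place of the numpy expression.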


1 Answer


I ran into the same problem. I first tried it with tensorflow==1.9.0.

Code:

import numpy as np
import tensorflow as tf
tf.reset_default_graph()
batch = 2
dim = 3
hidden = 4

with tf.variable_scope('test', regularizer=tf.contrib.layers.l2_regularizer(0.001)):
    lengths = tf.placeholder(dtype=tf.int32, shape=[batch])
    inputs = tf.placeholder(dtype=tf.float32, shape=[batch, None, dim])
    cell = tf.nn.rnn_cell.GRUCell(hidden)
    cell_state = cell.zero_state(batch, tf.float32)
    output, _ = tf.nn.dynamic_rnn(cell, inputs, lengths, initial_state=cell_state)
inputs_ = np.asarray([[[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3]],
                        [[6, 6, 6], [7, 7, 7], [8, 8, 8], [9, 9, 9]]],
                        dtype=np.float32)  # match the float32 placeholder
lengths_ = np.asarray([3, 1], dtype=np.int32)
this_throws_error = tf.losses.get_regularization_loss()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    output_ = sess.run(output, {inputs: inputs_, lengths: lengths_})
    print(output_)
    print(sess.run(this_throws_error))

This is the result of running that code:

...
File "/Users/piero/Development/mlenv3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_util.py", line 314, in CheckInputFromValidContext
    raise ValueError(error_msg + " See info log for more details.")
ValueError: Cannot use 'test/rnn/gru_cell/gates/kernel/Regularizer/l2_regularizer' as input to 'total_regularization_loss' because 'test/rnn/gru_cell/gates/kernel/Regularizer/l2_regularizer' is in a while loop. See info log for more details.

I then tried moving the dynamic_rnn call outside the variable scope:

import numpy as np
import tensorflow as tf
tf.reset_default_graph()
batch = 2
dim = 3
hidden = 4

with tf.variable_scope('test', regularizer=tf.contrib.layers.l2_regularizer(0.001)):
    lengths = tf.placeholder(dtype=tf.int32, shape=[batch])
    inputs = tf.placeholder(dtype=tf.float32, shape=[batch, None, dim])
    cell = tf.nn.rnn_cell.GRUCell(hidden)
    cell_state = cell.zero_state(batch, tf.float32)
output, _ = tf.nn.dynamic_rnn(cell, inputs, lengths, initial_state=cell_state)
inputs_ = np.asarray([[[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3]],
                        [[6, 6, 6], [7, 7, 7], [8, 8, 8], [9, 9, 9]]],
                        dtype=np.float32)  # match the float32 placeholder
lengths_ = np.asarray([3, 1], dtype=np.int32)
this_throws_error = tf.losses.get_regularization_loss()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    output_ = sess.run(output, {inputs: inputs_, lengths: lengths_})
    print(output_)
    print(sess.run(this_throws_error))

In theory this should be fine, since the regularization applies to the weights of the rnn, and those weights should be among the variables initialized when the rnn cell is created.

This is the output:

[[[ 0.          0.          0.          0.        ]
  [ 0.1526176   0.33048663 -0.02288104 -0.1016309 ]
  [ 0.24402776  0.68280864 -0.04888818 -0.26671126]
  [ 0.          0.          0.          0.        ]]

 [[ 0.01998052  0.82368904 -0.00891946 -0.38874635]
  [ 0.          0.          0.          0.        ]
  [ 0.          0.          0.          0.        ]
  [ 0.          0.          0.          0.        ]]]
0.0

So placing the dynamic_rnn call outside the variable scope works in the sense that no error is raised, but the loss comes out as 0, which suggests that no rnn weights are actually being taken into account when computing the l2 loss (presumably because the scope's regularizer only attaches to variables created inside the scope, and dynamic_rnn creates the cell's variables lazily when it is called).

I then tried with tensorflow==1.12.0. This is the output of the first script, with dynamic_rnn inside the scope:

[[[ 0.          0.          0.          0.        ]
  [-0.17653276  0.06490126  0.02065791 -0.05175343]
  [-0.413078    0.14486027  0.03922977 -0.1465032 ]
  [ 0.          0.          0.          0.        ]]

 [[-0.5176822   0.03947531  0.00206934 -0.5542746 ]
  [ 0.          0.          0.          0.        ]
  [ 0.          0.          0.          0.        ]
  [ 0.          0.          0.          0.        ]]]
0.010403235

And this is the output with dynamic_rnn outside the scope:

[[[ 0.          0.          0.          0.        ]
  [ 0.04208181  0.03031874 -0.1749279   0.04617848]
  [ 0.12169671  0.09322995 -0.29029205  0.08247502]
  [ 0.          0.          0.          0.        ]]

 [[ 0.09673716  0.13300316 -0.02427006  0.00156245]
  [ 0.          0.          0.          0.        ]
  [ 0.          0.          0.          0.        ]
  [ 0.          0.          0.          0.        ]]]
0.0

The fact that the version with dynamic_rnn inside the scope returns a non-zero value suggests it is working correctly, while the value of 0 in the other case shows it is not behaving as expected. So the bottom line is: this was a bug in tensorflow, and it was fixed between version 1.9.0 and version 1.12.0.

answered 2018-11-11T01:19:18.713