
I am trying to create a MultiRNNCell of LSTM cells wrapped with DropoutWrapper and ResidualWrapper. To use variational_recurrent=True, we must provide the input_size parameter to DropoutWrapper. I can't figure out what input_size should be passed to each LSTM layer, since ResidualWrapper also adds skip connections, which increases the input to each layer.

I am using the following utility function to create an LSTM layer:

import tensorflow as tf  # TF 1.x API (tf.nn.rnn_cell)

def create_cell(units, residual_connections, keep_prob, input_size):
    lstm_cell = tf.nn.rnn_cell.LSTMCell(units,
                                        activation=tf.nn.tanh,
                                        initializer=tf.truncated_normal_initializer(),
                                        cell_clip=5.)
    lstm_cell = tf.nn.rnn_cell.DropoutWrapper(lstm_cell,
                                              dtype=tf.float32,
                                              input_keep_prob=keep_prob,
                                              output_keep_prob=keep_prob,
                                              state_keep_prob=keep_prob,
                                              variational_recurrent=True,
                                              input_size=input_size)

    if residual_connections:
        lstm_cell = tf.nn.rnn_cell.ResidualWrapper(lstm_cell)
    return lstm_cell

The following code is used to build the complete cell:

net = tf.layers.dense(inputs,
                      128,
                      activation=tf.nn.relu, 
                      kernel_initializer=tf.variance_scaling_initializer())
net = tf.layers.batch_normalization(net, training=training)    
cells = [create_cell(64, False, keep_prob, ??)]
for _ in range(5):
    cells.append(create_cell(64, True, keep_prob, ??))
multirnn_cell = tf.nn.rnn_cell.MultiRNNCell(cells)
net, rnn_s1 = tf.nn.dynamic_rnn(cell=multirnn_cell, inputs=net, initial_state=rnn_s0, dtype=tf.float32)

What values should be passed as input_size for the first LSTM layer and for the subsequent layers?
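For context on why the skip connections may or may not change the required input_size: ResidualWrapper passes the raw layer input through to the inner (dropout-wrapped) cell unchanged and only adds it to the cell's output afterwards, so no concatenation happens on the input side. Below is a minimal pure-Python sketch of that behavior (the toy cell and helper names are illustrative, not TensorFlow API):

```python
# Sketch: how a residual wrapper combines cell output with its input.
# The inner cell still receives the raw input tensor unchanged, so the
# depth the dropout wrapper sees is the previous layer's output depth,
# not "input + skip connection".

def residual_wrapper_call(cell_fn, inputs):
    # cell_fn stands in for the wrapped cell's __call__; it gets
    # `inputs` as-is -- the skip connection does not widen the input.
    outputs = cell_fn(inputs)
    assert len(outputs) == len(inputs), "residual add needs matching depth"
    # The residual add happens only on the way out.
    return [o + i for o, i in zip(outputs, inputs)]

# Toy depth-preserving "cell": maps a depth-64 vector to depth 64.
toy_cell = lambda xs: [0.5 * x for x in xs]

x = [1.0] * 64                       # previous layer's output, depth 64
y = residual_wrapper_call(toy_cell, x)
print(len(y))   # depth is still 64 after the residual add
```

Under this reading, each wrapped cell's effective input depth is just the depth of whatever tensor feeds it, independent of the residual connection.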

