
I'm building a reinforcement learning algorithm in TensorFlow, and I would like to be able to do the whole update in a single call to session.run().

Rationale: I need to (1) do a forward pass with dropout off to compute the targets, and (2) do a training step on the generated targets. If I perform these two steps in separate calls to session.run(), everything works fine. But I would like to do it with a single call to session.run() (using tf.stop_gradient(targets)).
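For reference, here is a sketch of the two-call version that works (targets_op, train_op, inputs_ph and targets_ph are placeholder names for illustration, not my actual ops):

# Call 1: forward pass with dropout off, to compute the targets
targets = sess.run(targets_op, feed_dict={inputs_ph: batch,
                                          K.learning_phase(): 0})
# Call 2: training step on those targets, with dropout on
sess.run(train_op, feed_dict={inputs_ph: batch,
                              targets_ph: targets,
                              K.learning_phase(): 1})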

After trying several solutions without success, I landed on one where I replace the learning_phase placeholder used by Keras with a variable (since placeholders are tensors and do not allow assignment) and use a custom layer to set that variable to True or False as needed. This solution is shown in the code below. Getting the value of either m1 or m2 on its own (e.g., running sess.run(m1, feed_dict={input_ph: np.ones((1,1))})) works as expected with no errors. However, getting the value of m3, or getting the values of m1 and m2 simultaneously, sometimes works and sometimes doesn't (and the error message is uninformative).

Do you know what I'm doing wrong, or a better way to do what I want?

EDIT: The code shows a toy example. In reality I have a single model, and I need to run two forward passes (one with dropout off and the other with dropout on) and one backward pass. And I would like to do all of this without returning to Python.

from tensorflow.keras.layers import Dropout, Dense, Input, Layer
from tensorflow.python.keras import backend as K
from tensorflow.keras import Model
import tensorflow as tf
import numpy as np

class DropoutSwitchLayer(Layer):
  def __init__(self, stateful=True, **kwargs):
    self.stateful = stateful
    self.supports_masking = True
    super(DropoutSwitchLayer, self).__init__(**kwargs)

  def build(self, input_shape):
    self.lph = tf.Variable(True, dtype=tf.bool, name="lph", trainable=False)
    # Replace Keras's learning-phase placeholder for this graph with the
    # assignable variable, so dropout layers read it instead
    K._GRAPH_LEARNING_PHASES[tf.get_default_graph()] = self.lph
    super(DropoutSwitchLayer, self).build(input_shape)

  def call(self, inputs, mask=None):
    data_input, training = inputs
    op = self.lph.assign(training[0], use_locking=True)
    # ugly trick: create a data dependency on the assign op so it runs
    # whenever this layer's output is evaluated
    data_input = data_input + tf.multiply(tf.cast(op, dtype=tf.float32), 0.0)
    return data_input

  def compute_output_shape(self, input_shape):
    return input_shape[0]


dropout_on = np.array([True], dtype=np.bool)
dropout_off = np.array([False], dtype=np.bool)
input_ph = tf.placeholder(tf.float32, shape=(None, 1))

drop = Input(shape=(), dtype=tf.bool)
input = Input(shape=(1,))
h = DropoutSwitchLayer()([input, drop])
h = Dense(1)(h)
h = Dropout(0.5)(h)
o = Dense(1)(h)
m = Model(inputs=[input, drop], outputs=o)

m1 = m([input_ph, dropout_on])
m2 = m([input_ph, dropout_off])
m3 = m([m2, dropout_on])

sess = tf.Session()
K.set_session(sess)
sess.run(tf.global_variables_initializer())

EDIT 2: Daniel Möller's solution below works when using a Dropout layer, but what about dropout used inside an LSTM layer?

from tensorflow.keras.layers import RepeatVector, LSTM

input = Input(shape=(1,))
h = Dense(1)(input)
h = RepeatVector(2)(h)
h = LSTM(1, dropout=0.5, recurrent_dropout=0.5)(h)
o = Dense(1)(h)

3 Answers


It turns out Keras supports what I wanted to do out of the box: using the training argument in the call to the Dropout/LSTM layer, in combination with Daniel Möller's way of building the model (thanks!), does the trick.

In the code below (just a toy example), o1 and o3 should be equal to each other and different from o2:

from tensorflow.keras.layers import Dropout, Dense, Input, Lambda, Layer, Add, RepeatVector, LSTM
from tensorflow.python.keras import backend as K
from tensorflow.keras import Model
import tensorflow as tf
import numpy as np

repeat = RepeatVector(2)
lstm = LSTM(1, dropout=0.5, recurrent_dropout=0.5)

#Forward pass with dropout disabled
next_state = tf.placeholder(tf.float32, shape=(None, 1), name='next_state')
h = repeat(next_state)
# Use training to disable dropout
o1 = lstm(h, training=False)
target1 = tf.stop_gradient(o1)

#Forward pass with dropout enabled
state = tf.placeholder(tf.float32, shape=(None, 1), name='state')
h = repeat(state)
o2 = lstm(h, training=True)
target2 = tf.stop_gradient(o2)

#Forward pass with dropout disabled
ph3 = tf.placeholder(tf.float32, shape=(None, 1), name='ph3')
h = repeat(ph3)
o3 = lstm(h, training=False)

loss = target1 + target2 - o3
opt = tf.train.GradientDescentOptimizer(0.1)
train = opt.minimize(loss)

sess = tf.Session()
K.set_session(sess)
sess.run(tf.global_variables_initializer())

data = np.ones((1,1))
sess.run([o1, o2, o3], feed_dict={next_state:data, state:data, ph3:data})
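Since the targets are wired into the loss through stop_gradient, the whole update (both dropout-off forward passes, the dropout-on forward pass, and the backward pass) then runs in a single call:

# One call to session.run() performs all three forward passes and the
# gradient update inside the graph, without returning to Python
sess.run(train, feed_dict={next_state: data, state: data, ph3: data})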
answered 2018-12-16T20:09:54.517

Why not make a single continuous model?

from tensorflow.keras.layers import Input, Dense, Dropout, Lambda
from tensorflow.keras import Model
from tensorflow.python.keras import backend as K

#layers
inputs = Input(shape=(1,))
dense1 = Dense(1)
dense2 = Dense(1)

#no drop pass:
h = dense1(inputs)
o = dense2(h)
#optionally:
o = Lambda(lambda x: K.stop_gradient(x))(o)

#drop pass:
h = dense1(o)
h = Dropout(.5)(h)
h = dense2(h)

modelOnlyFinalOutput = Model(inputs,h)
modelOnlyNonDrop = Model(inputs,o)
modelBothOutputs = Model(inputs, [o,h])

Pick one of them for training:

model.fit(x_train,y_train) #where y_train = [targets1, targets2] if using both outputs
answered 2018-12-14T17:02:33.743

How about this:

class CustomDropout(tf.keras.layers.Layer):
    def __init__(self):
        super(CustomDropout, self).__init__()
        self.dropout1 = Dropout(0.5)
        self.dropout2 = Dropout(0.1)

    def call(self, inputs):
        # `xxx` is left unspecified here: some condition that selects
        # which dropout rate to apply
        if xxx:
            return self.dropout1(inputs)
        else:
            return self.dropout2(inputs)
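One way to make the branch concrete (my assumption, not part of the answer above) is to key it to the Keras learning phase with K.in_train_phase, which accepts both plain booleans and the symbolic learning-phase tensor:

import tensorflow as tf
from tensorflow.keras.layers import Dropout
from tensorflow.python.keras import backend as K

class CustomDropout(tf.keras.layers.Layer):
    def __init__(self):
        super(CustomDropout, self).__init__()
        self.dropout1 = Dropout(0.5)
        self.dropout2 = Dropout(0.1)

    def call(self, inputs, training=None):
        # Force both inner Dropouts active and let in_train_phase choose:
        # heavy dropout in training mode, light dropout otherwise
        return K.in_train_phase(
            lambda: self.dropout1(inputs, training=True),
            lambda: self.dropout2(inputs, training=True),
            training=training)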
answered 2021-05-24T07:51:12.200