After creating a model in Keras, I want to get the gradients and apply them directly in TensorFlow using the tf.train.AdamOptimizer class. However, since I am using a Dropout layer, I don't know how to tell the model whether it is in training mode or not. The training keyword is not accepted. This is the code:

    import tensorflow as tf
    from tensorflow.keras.layers import Input, Dense, ReLU, Dropout
    from tensorflow.keras import Model

    net_input = Input(shape=(1,))
    net_1 = Dense(50)
    net_2 = ReLU()
    net_3 = Dropout(0.5)
    net = Model(net_input, net_3(net_2(net_1(net_input))))

    #mycost = ...

    optimizer = tf.train.AdamOptimizer()
    gradients = optimizer.compute_gradients(mycost, var_list=[net.trainable_weights])
    # perform some operations on the gradients
    # gradients = ...
    trainstep = optimizer.apply_gradients(gradients)

Even with dropout rate=1, I get the same behavior with and without the Dropout layer. How can I solve this?

2 Answers

As @Sharky already said, you can use the training argument when invoking the call() method of the Dropout class. However, if you want to train in TensorFlow graph mode, you need to pass a placeholder for it and feed it a boolean value during training. Here is an example of fitting Gaussian blobs, adapted to your case:

import tensorflow as tf
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import ReLU
from tensorflow.keras.layers import Input
from tensorflow.keras import Model

x_train, y_train = make_blobs(n_samples=10,
                              n_features=2,
                              centers=[[1, 1], [-1, -1]],
                              cluster_std=1)

x_train, x_test, y_train, y_test = train_test_split(
    x_train, y_train, test_size=0.2)

# `istrain` indicates whether it is inference or training
istrain = tf.placeholder(tf.bool, shape=()) 
y = tf.placeholder(tf.int32, shape=(None))
net_input = Input(shape=(2,))
net_1 = Dense(2)
net_2 = Dense(2)
net_3 = Dropout(0.5)
net = Model(net_input, net_3(net_2(net_1(net_input)), training=istrain))

xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=y, logits=net.output)
loss_fn = tf.reduce_mean(xentropy)

optimizer = tf.train.AdamOptimizer(0.01)
grads_and_vars = optimizer.compute_gradients(loss_fn,
                                             var_list=[net.trainable_variables])
trainstep = optimizer.apply_gradients(grads_and_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    l1 = loss_fn.eval({net_input:x_train,
                       y:y_train,
                       istrain:True}) # apply dropout
    print(l1) # 1.6264652
    l2 = loss_fn.eval({net_input:x_train,
                       y:y_train,
                       istrain:False}) # no dropout
    print(l2) # 1.5676715
    sess.run(trainstep, feed_dict={net_input:x_train,
                                   y:y_train, 
                                   istrain:True}) # train with dropout
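As an aside, the `# perform some operations on the gradients` step in the question sits between compute_gradients and apply_gradients; a common such operation is global-norm clipping. Here is a framework-agnostic numpy sketch of its semantics (a hand-rolled illustration of what tf.clip_by_global_norm computes, not TF code):

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    # Joint L2 norm across all gradient tensors.
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    # Rescale everything only if the joint norm exceeds clip_norm;
    # otherwise scale == 1 and the gradients pass through unchanged.
    scale = clip_norm / max(global_norm, clip_norm)
    return [g * scale for g in grads], global_norm

grads = [np.array([3.0, 4.0]), np.array([0.0, 12.0])]  # global norm = 13
clipped, norm = clip_by_global_norm(grads, clip_norm=5.0)
print(norm)  # 13.0
```

After clipping, the list can be zipped back with the variables and handed to apply_gradients as in the code above.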

answered 2019-04-01T14:56:43.613

Keras layers inherit from the tf.keras.layers.Layer class. The Keras API handles this internally via model.fit. When Keras Dropout is used with a pure TensorFlow training loop, it supports a training argument in its call function.

So you can control it with:

dropout = tf.keras.layers.Dropout(rate, noise_shape, seed)(prev_layer, training=is_training)

From the official TF documentation:

Note: the following optional keyword arguments are reserved for specific uses:

* training: boolean scalar tensor or Python boolean indicating whether the call is meant for training or inference.
* mask: boolean input mask.

If the layer's call() method takes a mask argument (as some Keras layers do), its default value will be set to the mask generated for inputs by the previous layer (if input did come from a layer that generated a corresponding mask, i.e. if it came from a Keras layer with masking support).

https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dropout#call
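The reason the flag matters numerically: with training=True, Keras's Dropout applies inverted dropout, rescaling the surviving units by 1/(1 - rate) so that the expected activation matches the identity path used at inference. A quick numpy check of that property (an illustration of the semantics, not the actual TF implementation):

```python
import numpy as np

rng = np.random.default_rng(42)
rate = 0.5
x = np.ones(100_000)

# Training path: drop each unit with probability `rate`,
# rescale survivors by 1/(1 - rate) (inverted dropout).
keep = rng.random(x.size) >= rate
train_out = np.where(keep, x / (1.0 - rate), 0.0)

# Inference path: identity.
infer_out = x

print(train_out.mean())  # close to infer_out.mean() == 1.0
```

Because of this rescaling, no extra scaling is needed when you switch the flag off at inference time.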

answered 2019-04-01T14:43:57.930