
To understand how to implement a neural network with an exponentially decaying learning rate versus one with a constant learning rate, I looked it up here: https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/exponential_decay

I have a few questions:

...
global_step = tf.Variable(0, trainable=False)
starter_learning_rate = 0.1
learning_rate = tf.compat.v1.train.exponential_decay(
    starter_learning_rate, global_step, 100000, 0.96, staircase=True)
# Passing global_step to minimize() will increment it at each step.
learning_step = (
    tf.compat.v1.train.GradientDescentOptimizer(learning_rate)
    .minimize(...my loss..., global_step=global_step)
)

When global_step is set to a variable whose value is 0, doesn't that mean there is no decay at all, since

decayed_learning_rate = learning_rate *
                        decay_rate ^ (global_step / decay_steps)
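Plugging the snippet's values into that formula:

decayed_learning_rate = 0.1 * 0.96 ^ (0 / 100000)
                      = 0.1 * 0.96 ^ 0
                      = 0.1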

So with global_step = 0 it follows that decayed_learning_rate = learning_rate. Is that right, or am I making a mistake here?

Also, I'm somewhat confused about what exactly the 100,000 steps refer to. What exactly is a step? Is it each time an input is passed fully through the network and backpropagated?


1 Answer


I hope this example clears up your doubts.
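Assume a setup along these lines (not shown in the original snippet; a toy model and dataset chosen so that 1200 samples in batches of 100 give the 12 steps per epoch seen in the output below, and defining the names model, loss_fn, train_dataset, and optimizer that the loop expects):

import tensorflow as tf
import numpy as np

# Toy scaffold (assumed, not part of the original snippet): 1200 random
# samples in batches of 100, so each epoch runs 12 optimizer steps.
x = np.random.random((1200, 5)).astype("float32")
y = np.random.randint(0, 2, size=(1200,))
train_dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(100)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(5,)),
    tf.keras.layers.Dense(6, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.SGD  # the class; instantiated per step below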

epochs = 10
global_step = tf.Variable(0, trainable=False, dtype=tf.int32)
starter_learning_rate = 1.0

for epoch in range(epochs):
    print("Starting Epoch {}/{}".format(epoch + 1, epochs))
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):

        with tf.GradientTape() as tape:
            logits = model(x_batch_train, training=True)
            loss_value = loss_fn(y_batch_train, logits)

        grads = tape.gradient(loss_value, model.trainable_weights)

        # In eager mode, exponential_decay returns a no-argument callable
        # that computes the decayed rate from the current global_step.
        learning_rate = tf.compat.v1.train.exponential_decay(
            starter_learning_rate,
            global_step,
            100000,
            0.96
        )

        # A fresh SGD instance is created each step so it picks up the
        # newly built schedule; global_step is incremented by hand.
        optimizer(learning_rate=learning_rate).apply_gradients(
            zip(grads, model.trainable_weights))
        print("Global Step: {}  Learning Rate: {}  Examples Processed: {}".format(
            global_step.numpy(), learning_rate(), (step + 1) * 100))
        global_step.assign_add(1)

Output:

Starting Epoch 1/10
Global Step: 0  Learning Rate: 1.0  Examples Processed: 100
Global Step: 1  Learning Rate: 0.9999996423721313  Examples Processed: 200
Global Step: 2  Learning Rate: 0.9999992251396179  Examples Processed: 300
Global Step: 3  Learning Rate: 0.9999988079071045  Examples Processed: 400
Global Step: 4  Learning Rate: 0.9999983906745911  Examples Processed: 500
Global Step: 5  Learning Rate: 0.9999979734420776  Examples Processed: 600
Global Step: 6  Learning Rate: 0.9999975562095642  Examples Processed: 700
Global Step: 7  Learning Rate: 0.9999971389770508  Examples Processed: 800
Global Step: 8  Learning Rate: 0.9999967217445374  Examples Processed: 900
Global Step: 9  Learning Rate: 0.9999963045120239  Examples Processed: 1000
Global Step: 10  Learning Rate: 0.9999958872795105  Examples Processed: 1100
Global Step: 11  Learning Rate: 0.9999954700469971  Examples Processed: 1200
Starting Epoch 2/10
Global Step: 12  Learning Rate: 0.9999950528144836  Examples Processed: 100
Global Step: 13  Learning Rate: 0.9999946355819702  Examples Processed: 200
Global Step: 14  Learning Rate: 0.9999942183494568  Examples Processed: 300
Global Step: 15  Learning Rate: 0.9999938607215881  Examples Processed: 400
Global Step: 16  Learning Rate: 0.9999934434890747  Examples Processed: 500
Global Step: 17  Learning Rate: 0.999993085861206  Examples Processed: 600
Global Step: 18  Learning Rate: 0.9999926686286926  Examples Processed: 700
Global Step: 19  Learning Rate: 0.9999922513961792  Examples Processed: 800
Global Step: 20  Learning Rate: 0.9999918341636658  Examples Processed: 900
Global Step: 21  Learning Rate: 0.9999914169311523  Examples Processed: 1000
Global Step: 22  Learning Rate: 0.9999909996986389  Examples Processed: 1100
Global Step: 23  Learning Rate: 0.9999905824661255  Examples Processed: 1200
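These numbers line up with the formula: with staircase left at its default of False the exponent is fractional, so the rate shrinks by a factor of 0.96 ^ (1 / 100000) ≈ 0.9999996 per step. At global step 12, for instance, 1.0 * 0.96 ^ (12 / 100000) ≈ 0.9999951, matching the printed value up to float32 rounding.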

Now, if you keep the global step at 0, i.e. remove the increment operation (global_step.assign_add(1)) from the code above, the exponent global_step / decay_steps stays at 0, so the learning rate never moves off its starting value. Output:

Starting Epoch 1/10

Global Step: 0  Learning Rate: 1.0  Examples Processed: 100
Global Step: 0  Learning Rate: 1.0  Examples Processed: 200
Global Step: 0  Learning Rate: 1.0  Examples Processed: 300
Global Step: 0  Learning Rate: 1.0  Examples Processed: 400
Global Step: 0  Learning Rate: 1.0  Examples Processed: 500
Global Step: 0  Learning Rate: 1.0  Examples Processed: 600
Global Step: 0  Learning Rate: 1.0  Examples Processed: 700
Global Step: 0  Learning Rate: 1.0  Examples Processed: 800
Global Step: 0  Learning Rate: 1.0  Examples Processed: 900
Global Step: 0  Learning Rate: 1.0  Examples Processed: 1000
Global Step: 0  Learning Rate: 1.0  Examples Processed: 1100
Global Step: 0  Learning Rate: 1.0  Examples Processed: 1200
Starting Epoch 2/10
Global Step: 0  Learning Rate: 1.0  Examples Processed: 100
Global Step: 0  Learning Rate: 1.0  Examples Processed: 200
Global Step: 0  Learning Rate: 1.0  Examples Processed: 300
Global Step: 0  Learning Rate: 1.0  Examples Processed: 400
Global Step: 0  Learning Rate: 1.0  Examples Processed: 500
Global Step: 0  Learning Rate: 1.0  Examples Processed: 600
Global Step: 0  Learning Rate: 1.0  Examples Processed: 700
Global Step: 0  Learning Rate: 1.0  Examples Processed: 800
Global Step: 0  Learning Rate: 1.0  Examples Processed: 900
Global Step: 0  Learning Rate: 1.0  Examples Processed: 1000
Global Step: 0  Learning Rate: 1.0  Examples Processed: 1100
Global Step: 0  Learning Rate: 1.0  Examples Processed: 1200

Suggestion: instead of tf.compat.v1.train.exponential_decay, use tf.keras.optimizers.schedules.ExponentialDecay. This is what the simplest example looks like.

def create_model1():
    initial_learning_rate = 0.01
    lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate,
        decay_steps=100000,
        decay_rate=0.96,
        staircase=True)

    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(5,)))
    model.add(tf.keras.layers.Dense(units=6, activation='relu', name='d1'))
    model.add(tf.keras.layers.Dense(units=2, activation='softmax', name='O2'))

    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr_schedule),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model


model = create_model1()
model.fit(x, y, batch_size=100, epochs=100)
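With this approach the schedule object takes over the global_step bookkeeping: the optimizer calls the schedule with its own iteration counter, which increases by one on every gradient update, i.e. on every batch. That is also the answer to "what is a step": one forward/backward pass and weight update on a single batch, not a full pass over the dataset (that is an epoch).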

You can also implement decay using a callback such as tf.keras.callbacks.LearningRateScheduler.
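A minimal sketch of that callback route (the particular schedule is illustrative, not from the answer, and it adjusts the rate once per epoch rather than per batch; the model is recompiled with a plain float learning rate since the callback supplies the decay instead):

def scheduler(epoch, lr):
    # Illustrative rule (an assumption): hold the rate for the first
    # 10 epochs, then shrink it exponentially once per epoch.
    if epoch < 10:
        return lr
    return float(lr * tf.math.exp(-0.1))

model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x, y, batch_size=100, epochs=100,
          callbacks=[tf.keras.callbacks.LearningRateScheduler(scheduler)])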
