
I'm trying to write a custom learning rate scheduler for SGD in Keras that changes the learning rate per iteration. However, the LearningRateScheduler callback only accepts a function of the epoch. My learning rate function looks like this:

learning_rate = base_learning_rate * (1 + gamma * iteration)^(-power)
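(For reference, a plain Python version of this formula, useful just for sanity-checking values; it is not part of any Keras API:

def decayed_lr(iteration, base_learning_rate, gamma, power):
    # inverse-time style decay evaluated per iteration (batch), not per epoch
    return base_learning_rate * (1 + gamma * iteration) ** (-power)
)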


2 Answers


This can be done by defining your own tf.keras.optimizers.schedules.LearningRateSchedule and passing it to the optimizer.

import tensorflow as tf

class Example(tf.keras.optimizers.schedules.LearningRateSchedule):

  def __init__(self, initial_learning_rate, gamma, power):
    super().__init__()
    self.initial_learning_rate = initial_learning_rate
    self.gamma = gamma
    self.power = power

  def __call__(self, step):
    # step is the optimizer's iteration counter (an integer tensor),
    # so cast it to float before computing the decayed rate
    step = tf.cast(step, tf.float32)
    return self.initial_learning_rate * tf.pow(1.0 + self.gamma * step, -1.0 * self.power)

optimizer = tf.keras.optimizers.SGD(learning_rate=Example(0.1, 0.001, 2))
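As a minimal usage sketch (the toy model and random data below are only assumptions for illustration):

# toy model just to show the schedule being picked up by the optimizer
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
model.compile(optimizer=optimizer, loss="mse")
# the schedule is called with the optimizer's iteration counter, i.e. once per batch
model.fit(tf.random.normal((64, 10)), tf.random.normal((64, 1)), epochs=2)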

Reference: https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/schedules/LearningRateSchedule

Answered 2021-06-12T11:06:52.863

When you say "change the learning rate based on the iteration", do you mean you want to change it at the end of each batch? If so, you can do that with a custom callback. I haven't tested this, but the code would be something like:

import tensorflow as tf
from tensorflow import keras

class LRA(keras.callbacks.Callback):
    def __init__(self, model, initial_learning_rate, gamma, power):
        super(LRA, self).__init__()
        self.initial_learning_rate = initial_learning_rate
        self.gamma = gamma
        self.power = power
        self.model = model  # model is your compiled model

    def on_train_begin(self, logs=None):
        tf.keras.backend.set_value(self.model.optimizer.lr,
                                   self.initial_learning_rate)

    def on_train_batch_end(self, batch, logs=None):
        # note: batch is the batch index within the current epoch
        lr = self.initial_learning_rate * ((batch + 1) * self.gamma + 1) ** (-self.power)
        tf.keras.backend.set_value(self.model.optimizer.lr, lr)
        # print('for ', batch, ' lr set to ', lr)  # remove comment if you want to see lr change

Let me know if this works; I have not tested it.

Before you run model.fit, include this code:

initial_learning_rate = .001  # set to desired value
gamma =   # set to desired value
power =   # set to desired value
callbacks = [LRA(model=model, initial_learning_rate=initial_learning_rate, gamma=gamma, power=power)]
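Then pass that callbacks list to fit; a minimal sketch, where x_train, y_train, batch_size, and epochs stand in for your own data and settings:

history = model.fit(x_train, y_train,
                    batch_size=32, epochs=10,
                    callbacks=callbacks)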
   
        
   
Answered 2021-06-12T19:28:56.467