
I am trying to use batch normalization in an LSTM using keras in R. In my dataset the target/output variable is the Sales column, and every row in the dataset records the Sales for each day of the years 2008-2017. The dataset looks like below:

[image: sales data]

My objective is to build an LSTM model based on such a dataset, which should be able to provide a prediction at the end of training. I am training this model on the data from 2008-2016, using half of the 2017 data as validation and the rest as the test set.
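For concreteness, here is a minimal sketch of that split, assuming the raw data sits in a data frame dt with a Date column (dt, Date, and the *.raw names are assumptions, not from the post):

dt.tr.raw  <- dt[dt$Date <  as.Date("2017-01-01"), ]   # 2008-2016: training
dt.2017    <- dt[dt$Date >= as.Date("2017-01-01"), ]
half       <- floor(nrow(dt.2017) / 2)
dt.val.raw <- dt.2017[seq_len(half), ]                  # first half of 2017: validation
dt.te.raw  <- dt.2017[-seq_len(half), ]                 # second half of 2017: test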

Previously, I tried creating a model using dropout and early stopping, as shown below:

mdl1 <- keras_model_sequential()
mdl1 %>%
  layer_lstm(units = 512, input_shape = c(1, 3), return_sequences = TRUE) %>%
  layer_dropout(rate = 0.3) %>%
  layer_lstm(units = 512, return_sequences = FALSE) %>%
  layer_dropout(rate = 0.2) %>%
  layer_dense(units = 1, activation = "linear")

mdl1 %>% compile(loss = 'mse', optimizer = 'rmsprop')

The model looks as follows:

___________________________________________________________
Layer (type)               Output Shape         Param #    
===========================================================
lstm_25 (LSTM)             (None, 1, 512)       1056768    
___________________________________________________________
dropout_25 (Dropout)       (None, 1, 512)       0          
___________________________________________________________
lstm_26 (LSTM)             (None, 512)          2099200    
___________________________________________________________
dropout_26 (Dropout)       (None, 512)          0          
___________________________________________________________
dense_13 (Dense)           (None, 1)            513        
===========================================================
Total params: 3,156,481
Trainable params: 3,156,481
Non-trainable params: 0
___________________________________________________________

For training the model, early stopping is used together with the validation set.

mdl1.history <- mdl1 %>% 
  fit(dt.tr, dt.tr.out, epochs = 500, shuffle = FALSE,
      validation_data = list(dt.val, dt.val.out),
      callbacks = list(
        callback_early_stopping(min_delta = 0.000001,  patience = 10, verbose = 1)
      ))
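After early stopping fires, the held-out test half of 2017 can be scored in the usual way. A minimal sketch, assuming dt.te and dt.te.out hold the test inputs and targets (those names are not from the post):

mdl1 %>% evaluate(dt.te, dt.te.out)   # MSE on the test set
preds <- mdl1 %>% predict(dt.te)      # point forecasts for the test period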

On top of this, I wanted to use batch normalization to speed up the training. To my understanding, to use batch normalization I need to divide the data into batches and apply layer_batch_normalization to the input of each hidden layer. The model layers then look like below:

batch_size <- 32
mdl2 <- keras_model_sequential()
mdl2 %>%
  layer_batch_normalization(input_shape = c(1, 3), batch_size = batch_size) %>%

  layer_lstm(units = 512, return_sequences = TRUE) %>%
  layer_dropout(rate = 0.3) %>%
  layer_batch_normalization(batch_size = batch_size) %>%

  layer_lstm(units = 512, return_sequences = FALSE) %>%
  layer_dropout(rate = 0.2) %>%
  layer_batch_normalization(batch_size = batch_size) %>%

  layer_dense(units = 1, activation = "linear")

mdl2 %>% compile(loss = 'mse', optimizer = 'rmsprop')

This model looks as follows:

______________________________________________________________________________
Layer (type)                                    Output Shape       Param #    
==============================================================================
batch_normalization_34 (BatchNormalization)     (32, 1, 3)         12         
______________________________________________________________________________
lstm_27 (LSTM)                                  (32, 1, 512)       1056768    
______________________________________________________________________________
dropout_27 (Dropout)                            (32, 1, 512)       0          
______________________________________________________________________________
batch_normalization_35 (BatchNormalization)     (32, 1, 512)       2048       
______________________________________________________________________________
lstm_28 (LSTM)                                  (32, 1, 512)       2099200    
______________________________________________________________________________
dropout_28 (Dropout)                            (32, 1, 512)       0          
______________________________________________________________________________
batch_normalization_36 (BatchNormalization)     (32, 1, 512)       2048       
______________________________________________________________________________
dense_14 (Dense)                                (32, 1, 1)         513        
==============================================================================
Total params: 3,160,589
Trainable params: 3,158,535
Non-trainable params: 2,054
______________________________________________________________________________

Training the model looks the same as before. The only difference lies in the training and validation datasets, whose sizes are made a multiple of batch_size (32 here) by resampling data from the second batch through the last batch.
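One simple way to enforce that shape constraint is to trim each set to the nearest multiple (a sketch; the post resamples extra rows instead of trimming, but the resulting shape is the same). dt.tr is assumed to be the (samples, 1, 3) input array and dt.tr.out the matching target vector:

batch_size <- 32
n.keep     <- (dim(dt.tr)[1] %/% batch_size) * batch_size
dt.tr      <- dt.tr[seq_len(n.keep), , , drop = FALSE]
dt.tr.out  <- dt.tr.out[seq_len(n.keep)]
# repeat for dt.val / dt.val.out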

However, the performance of mdl1 is much better than that of mdl2, as can be seen below.

[image: performance of mdl1 vs mdl2]

I am not sure what exactly I am doing wrong, as I am just getting started with keras (and practical neural networks in general). Also, the performance of the first model is not great either; any suggestion on how to improve that would also be appreciated.


2 Answers


Batch normalization in an LSTM is not straightforward to apply. Some papers show impressive results with what is called Recurrent Batch Normalization (https://arxiv.org/pdf/1603.09025.pdf). The authors apply the following equations:

[image: batch-normalized LSTM equations]
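For reference, the core BN-LSTM equations from that paper, reproduced here in LaTeX (worth double-checking against the paper itself):

\begin{pmatrix} \tilde f_t \\ \tilde i_t \\ \tilde o_t \\ \tilde g_t \end{pmatrix} = \mathrm{BN}(W_h h_{t-1}; \gamma_h, \beta_h) + \mathrm{BN}(W_x x_t; \gamma_x, \beta_x) + b

c_t = \sigma(\tilde i_t) \odot \tanh(\tilde g_t) + \sigma(\tilde f_t) \odot c_{t-1}

h_t = \sigma(\tilde o_t) \odot \tanh(\mathrm{BN}(c_t; \gamma_c, \beta_c))

where \mathrm{BN}(x; \gamma, \beta) standardizes x over the batch and rescales it with the learned parameters \gamma and \beta.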

Unfortunately, this model has not been implemented in keras yet, only in tensorflow: https://github.com/OlavHN/bnlstm

However, I was able to get good results using (default) batch normalization after the activation function, without centering and shifting. This differs from the paper above, which applies BN after c_t and h_t; it may be worth a try.

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, BatchNormalization, Dense

# neurons1, neurons2, timesteps, data_dim and the BN momentum m
# are hyperparameters to be set by the user
model = Sequential()
model.add(LSTM(neurons1,
               activation=tf.nn.relu,
               return_sequences=True,
               input_shape=(timesteps, data_dim)))
model.add(BatchNormalization(momentum=m, scale=False, center=False))
model.add(LSTM(neurons2,
               activation=tf.nn.relu))
model.add(BatchNormalization(momentum=m, scale=False, center=False))
model.add(Dense(1))
Answered 2019-08-07T00:55:29.967

I use Keras with Python, but I can give R a try. The documentation of the fit method says that batch_size defaults to 32 if omitted; this is no longer the case in the current version, as can be seen in the source code. I think you should try it like this; at least this is how it works in Python:

mdl2 <- keras_model_sequential()
mdl2 %>%
  # input_shape goes on the first layer; batch_size is passed to fit() instead
  layer_batch_normalization(input_shape = c(1, 3)) %>%
  layer_lstm(units = 512, return_sequences = TRUE, dropout = 0.3) %>%

  layer_batch_normalization() %>%
  layer_lstm(units = 512, return_sequences = FALSE, dropout = 0.2) %>%

  layer_batch_normalization() %>%
  layer_dense(units = 1, activation = "linear")

mdl2 %>% compile(loss = 'mse', optimizer = 'rmsprop')
mdl2.history <- mdl2 %>% 
  fit(dt.tr, dt.tr.out, epochs = 500, shuffle = FALSE,
      validation_data = list(dt.val, dt.val.out),
      batch_size=32,
      callbacks = list(
        callback_early_stopping(min_delta = 0.000001,  patience = 10, verbose = 1)
      ))
Answered 2018-01-31T17:50:42.873