
I am trying to train a convolutional network, but no matter what I do, the loss will not change. I'd like to know where I'm going wrong, and I'd also appreciate any friendly advice, since this is my first time working with data this large.

I have tried many combinations of optimizers (Adam, SGD, Adadelta...), loss functions (mean squared error, binary cross-entropy...) and activations (ReLU, ELU, SELU...), but the problem persists.

The nature of my project: this is my attempt at training a simple self-driving car in a simulation.

Training data: the training data is split across roughly 4000 .h5 files. Each file contains exactly 200 images, each with its associated data such as speed, acceleration, etc.

Given the nature of the data, I decided to train in mini-batches of 200 and loop through all of the files.

# model (I am a beginner so forgive my sloppy code)
from keras.layers import Input, Conv2D, Flatten, Dense, concatenate
from keras.models import Model

rgb_in = Input(batch_shape=(200, 88, 200, 3), name='rgb_in')
conv_1 = Conv2D(filters=10, kernel_size=5, activation="elu",
                data_format="channels_last", kernel_initializer="he_normal")(rgb_in)
conv_2 = Conv2D(filters=16, kernel_size=5, activation="elu",
                data_format="channels_last", kernel_initializer="he_normal")(conv_1)
conv_3 = Conv2D(filters=24, kernel_size=5, activation="elu",
                data_format="channels_last", kernel_initializer="he_normal")(conv_2)
conv_4 = Conv2D(filters=32, kernel_size=3, activation="elu",
                data_format="channels_last", kernel_initializer="he_normal")(conv_3)
conv_5 = Conv2D(filters=32, kernel_size=3, activation="elu",
                data_format="channels_last", kernel_initializer="he_normal")(conv_4)
flat = Flatten(data_format="channels_last")(conv_5)

t_in = Input(batch_shape=(200, 14), name='t_in')
x = concatenate([flat, t_in])
dense_1 = Dense(100, activation="elu", kernel_initializer="he_normal")(x)
dense_2 = Dense(50, activation="elu", kernel_initializer="he_normal")(dense_1)
dense_3 = Dense(25, activation="elu", kernel_initializer="he_normal")(dense_2)
out = Dense(5, activation="elu", kernel_initializer="he_normal")(dense_3)

model = Model(inputs=[rgb_in, t_in], outputs=[out])
model.compile(optimizer='Adadelta', loss='binary_crossentropy')



import h5py
import numpy as np

# preallocate buffers matching the model's second input (200, 14) and output (200, 5)
input_target = np.zeros((200, 14))
output = np.zeros((200, 5))

for i in range(3663, 6951):
    filename = 'data_0' + str(i) + '.h5'
    with h5py.File(filename, 'r') as f:
        rgb = f["rgb"][:]
        targets = f["targets"][:]
    rgb = (rgb - rgb.mean()) / rgb.std()
    input_target[:, 0] = targets[:, 10]
    input_target[:, 1] = targets[:, 11]
    input_target[:, 2] = targets[:, 12]
    input_target[:, 3] = targets[:, 13]
    input_target[:, 4] = targets[:, 16]
    input_target[:, 5] = targets[:, 17]
    input_target[:, 6] = targets[:, 18]
    input_target[:, 7] = targets[:, 21]
    input_target[:, 8] = targets[:, 22]
    input_target[:, 9] = targets[:, 23]
    a = one_hot(targets[:, 24].astype(int), 6)  # one_hot: my own helper (not shown)
    input_target[:, 10] = a[:, 2]
    input_target[:, 11] = a[:, 3]
    input_target[:, 12] = a[:, 4]
    input_target[:, 13] = a[:, 5]
    output[:, 0] = targets[:, 0]
    output[:, 1] = targets[:, 1]
    output[:, 2] = targets[:, 2]
    output[:, 3] = targets[:, 4]
    output[:, 4] = targets[:, 5]
    model.fit([rgb, input_target], output, epochs=10, batch_size=200)

Results:

Epoch 1/10
200/200 [==============================] - 7s 35ms/step - loss: 6.1657
Epoch 2/10
200/200 [==============================] - 0s 2ms/step - loss: 2.3812
Epoch 3/10
200/200 [==============================] - 0s 2ms/step - loss: 2.2955
Epoch 4/10
200/200 [==============================] - 0s 2ms/step - loss: 2.2778
Epoch 5/10
200/200 [==============================] - 0s 2ms/step - loss: 2.2778
Epoch 6/10
200/200 [==============================] - 0s 2ms/step - loss: 2.2778
Epoch 7/10
200/200 [==============================] - 0s 2ms/step - loss: 2.2778
Epoch 8/10
200/200 [==============================] - 0s 2ms/step - loss: 2.2778
Epoch 9/10
200/200 [==============================] - 0s 2ms/step - loss: 2.2778
Epoch 10/10
200/200 [==============================] - 0s 2ms/step - loss: 2.2778
Epoch 1/10
200/200 [==============================] - 0s 2ms/step - loss: 1.9241
Epoch 2/10
200/200 [==============================] - 0s 2ms/step - loss: 1.9241
Epoch 3/10
200/200 [==============================] - 0s 2ms/step - loss: 1.9241
Epoch 4/10
200/200 [==============================] - 0s 2ms/step - loss: 1.9241
Epoch 5/10
200/200 [==============================] - 0s 2ms/step - loss: 1.9241
Epoch 6/10
200/200 [==============================] - 0s 2ms/step - loss: 1.9241
Epoch 7/10
200/200 [==============================] - 0s 2ms/step - loss: 1.9241
Epoch 8/10
200/200 [==============================] - 0s 2ms/step - loss: 1.9241
Epoch 9/10
200/200 [==============================] - 0s 2ms/step - loss: 1.9241
Epoch 10/10
200/200 [==============================] - 0s 2ms/step - loss: 1.9241

Finally, I would be grateful for any suggestions about my project.


2 Answers


What about using the ReduceLROnPlateau callback?

from keras.callbacks import ReduceLROnPlateau

reduce_lr = ReduceLROnPlateau(monitor='loss', patience=6)

model.fit(X, y, epochs=666, callbacks=[reduce_lr])
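Roughly, the callback watches the monitored quantity and multiplies the learning rate by `factor` (Keras defaults: `factor=0.1`, `min_delta=1e-4`) once it has stopped improving for `patience` epochs. A minimal sketch of that plateau logic, with a hypothetical name `reduce_on_plateau`, applied to a precomputed loss history:

```python
def reduce_on_plateau(losses, lr=1.0, factor=0.1, patience=6, min_delta=1e-4):
    """Sketch of ReduceLROnPlateau's core logic: if the best loss has not
    improved by at least min_delta for `patience` epochs in a row,
    multiply the learning rate by `factor`."""
    best = float('inf')
    wait = 0
    for loss in losses:
        if loss < best - min_delta:
            best = loss      # new best: reset the patience counter
            wait = 0
        else:
            wait += 1        # no improvement this epoch
            if wait >= patience:
                lr *= factor
                wait = 0
    return lr
```

On a loss history like yours (a few improving epochs, then a flat 2.2778), this would cut the learning rate by 10x after six flat epochs.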
answered 2019-04-17T03:09:00.033

I used a cyclical learning rate and it solved the problem. For anyone who runs into a similar issue, here is a link:

https://github.com/bckenstler/CLR
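For reference, the triangular policy that repo implements (from Smith's "Cyclical Learning Rates" paper) can be sketched as a standalone function; the `base_lr`, `max_lr`, and `step_size` defaults here are just illustrative values, not the repo's:

```python
import math

def triangular_clr(iteration, base_lr=0.001, max_lr=0.006, step_size=2000):
    """Triangular cyclical learning rate: the LR ramps linearly from
    base_lr up to max_lr over step_size iterations, then back down,
    repeating forever."""
    cycle = math.floor(1 + iteration / (2 * step_size))
    x = abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)
```

The linked repo wraps this in a ready-made Keras callback (`CyclicLR`), so you can pass it to `model.fit(..., callbacks=[...])` instead of hand-rolling a scheduler.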

answered 2019-04-17T22:37:02.203