
I'm training a VGG-like convnet (as in the example at http://keras.io/examples/ ) on a set of images. I convert the images to arrays and resize them using scipy:

import numpy as np
import scipy as sp
import scipy.misc
from keras.preprocessing.image import load_img, img_to_array

mapper = []  # list of photo ids
data = np.empty((NB_FILES, 3, 100, 100)).astype('float32')
i = 0
for f in onlyfiles[:NB_FILES]:
    img = load_img(mypath + f)
    a = img_to_array(img)  # (channels, height, width) with Theano dim ordering

    # resize each channel to 100x100 and rescale pixel values to [0, 1]
    a_resize = np.empty((3, 100, 100))
    a_resize[0, :, :] = sp.misc.imresize(a[0, :, :], (100, 100)) / 255.0  # - 0.5
    a_resize[1, :, :] = sp.misc.imresize(a[1, :, :], (100, 100)) / 255.0  # - 0.5
    a_resize[2, :, :] = sp.misc.imresize(a[2, :, :], (100, 100)) / 255.0  # - 0.5

    photo_id = int(f.split('.')[0])  # file names are '<photo_id>.<ext>'
    mapper.append(photo_id)
    data[i, :, :, :] = a_resize
    i += 1

In the last dense layer I have 2 neurons with a softmax activation. These are the last lines:

model.add(Dense(2))
model.add(Activation('softmax'))

from keras.optimizers import SGD

sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd)

model.fit(data, target_matrix, batch_size=32, nb_epoch=2, verbose=1, show_accuracy=True, validation_split=0.2)

I can't get the loss to improve: every epoch ends with the same loss and the same accuracy as the one before. The loss actually goes up between epoch 1 and epoch 2:

Train on 1600 samples, validate on 400 samples
Epoch 1/5
1600/1600 [==============================] - 23s - loss: 3.4371 - acc: 0.7744 - val_loss: 3.8280 - val_acc: 0.7625
Epoch 2/5
1600/1600 [==============================] - 23s - loss: 3.4855 - acc: 0.7837 - val_loss: 3.8280 - val_acc: 0.7625
Epoch 3/5
1600/1600 [==============================] - 23s - loss: 3.4855 - acc: 0.7837 - val_loss: 3.8280 - val_acc: 0.7625
Epoch 4/5
1600/1600 [==============================] - 23s - loss: 3.4855 - acc: 0.7837 - val_loss: 3.8280 - val_acc: 0.7625
Epoch 5/5
1600/1600 [==============================] - 23s - loss: 3.4855 - acc: 0.7837 - val_loss: 3.8280 - val_acc: 0.7625

What am I doing wrong?


2 Answers


In my experience, this often happens when the learning rate is too high. The optimization becomes unable to find a minimum and just "turns around".

The ideal rate depends on your data and on your network architecture.

(For reference: I'm currently running an 8-layer convnet with a sample size similar to yours, and I could observe the same lack of convergence until I reduced the learning rate to 0.001.)
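
As a minimal sketch of that fix applied to the compile step from your question (0.001 is the value that worked for me; the ideal rate for your setup may differ):

from keras.optimizers import SGD

# same optimizer settings as in the question, but with a 100x smaller learning rate
sgd = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd)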

Answered on 2016-05-27T06:39:27.913

My suggestion is to lower the learning rate and to try data augmentation.
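
One way to lower the rate over the course of training is Keras's LearningRateScheduler callback. The sketch below is only illustrative (the 0.01 starting rate and the halving interval are made-up values, not ones from this answer):

from keras.callbacks import LearningRateScheduler

initial_lr = 0.01  # illustrative starting rate

def step_decay(epoch):
    # halve the learning rate every 10 epochs
    return initial_lr * (0.5 ** (epoch // 10))

# pass callbacks=[LearningRateScheduler(step_decay)] to fit() or fit_generator()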

Data augmentation code:

from keras.preprocessing.image import ImageDataGenerator

print('Using real-time data augmentation.')

# this will do preprocessing and realtime data augmentation
datagen = ImageDataGenerator(
    featurewise_center=False,  # set input mean to 0 over the dataset
    samplewise_center=False,  # set each sample mean to 0
    featurewise_std_normalization=False,  # divide inputs by std of the dataset
    samplewise_std_normalization=False,  # divide each input by its std
    zca_whitening=True,  # apply ZCA whitening
    rotation_range=90,  # randomly rotate images in the range (degrees, 0 to 180)
    width_shift_range=0.1,  # randomly shift images horizontally (fraction of total width)
    height_shift_range=0.1,  # randomly shift images vertically (fraction of total height)
    horizontal_flip=True,  # randomly flip images horizontally
    vertical_flip=False)  # don't flip images vertically

# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(X_train)

# fit the model on the batches generated by datagen.flow()
model.fit_generator(datagen.flow(X_train, Y_train, batch_size=batch_size),
                    samples_per_epoch=X_train.shape[0],
                    nb_epoch=nb_epoch)
Answered on 2016-05-27T11:30:34.090