
Here is my code. I tried to build a VGG 11-layer network, mixing ReLU and ELU activations together with a lot of kernel and activity regularization. The result is genuinely confusing: at the 10th epoch, my loss on both train and val has dropped from 2000 down to 1.5, yet my accuracy on both train and val sits unchanged at 50%. Can someone explain this to me?

# VGG 11
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, Activation
from keras.regularizers import l2
from keras.layers.advanced_activations import ELU
from keras.optimizers import Adam
model = Sequential()

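# conv block 1: two 64-filter 3x3 convolutions (ReLU) + 2x2 max-pooling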
model.add(Conv2D(64, (3, 3), kernel_initializer='he_normal', 
          kernel_regularizer=l2(0.0001), activity_regularizer=l2(0.0001), 
          input_shape=(1, 96, 96), activation='relu'))
model.add(Conv2D(64, (3, 3), kernel_initializer='he_normal', 
          kernel_regularizer=l2(0.0001), activity_regularizer=l2(0.0001), 
          activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

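# conv block 2: two 128-filter 3x3 convolutions (ReLU) + 2x2 max-pooling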
model.add(Conv2D(128, (3, 3), kernel_initializer='he_normal', 
          kernel_regularizer=l2(0.0001),activity_regularizer=l2(0.0001), 
          activation='relu'))
model.add(Conv2D(128, (3, 3), kernel_initializer='he_normal',     
          kernel_regularizer=l2(0.0001), activity_regularizer=l2(0.0001), 
          activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

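# conv block 3: two 256-filter 3x3 convolutions (ReLU) + 2x2 max-pooling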
model.add(Conv2D(256, (3, 3), kernel_initializer='he_normal',     
          kernel_regularizer=l2(0.0001), activity_regularizer=l2(0.0001), 
          activation='relu'))
model.add(Conv2D(256, (3, 3), kernel_initializer='he_normal',     
          kernel_regularizer=l2(0.0001), activity_regularizer=l2(0.0001), 
          activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

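# conv block 4: three 512-filter 3x3 convolutions (ReLU) + 2x2 max-pooling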
model.add(Conv2D(512, (3, 3), kernel_initializer='he_normal', 
          kernel_regularizer=l2(0.0001), activity_regularizer=l2(0.0001), 
          activation='relu'))
model.add(Conv2D(512, (3, 3), kernel_initializer='he_normal', 
          kernel_regularizer=l2(0.0001), activity_regularizer=l2(0.0001), 
          activation='relu'))
model.add(Conv2D(512, (3, 3), kernel_initializer='he_normal', 
          kernel_regularizer=l2(0.0001), activity_regularizer=l2(0.0001),     
          activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

# flatten the convolutional feature maps so they can be fed to the fully connected layers
model.add(Flatten())

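# fully connected head: ELU activations with heavy activity regularization and dropout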
model.add(Dense(2048, kernel_initializer='he_normal',
               kernel_regularizer=l2(0.0001), activity_regularizer=l2(0.01)))
model.add(ELU(alpha=1.0))
model.add(Dropout(0.5))

model.add(Dense(1024, kernel_initializer='he_normal',
               kernel_regularizer=l2(0.0001), activity_regularizer=l2(0.01)))
model.add(ELU(alpha=1.0))
model.add(Dropout(0.5))

model.add(Dense(2))
model.add(Activation('softmax'))

adammo = Adam(lr=0.0008, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
model.compile(loss='categorical_crossentropy', optimizer=adammo, metrics=['accuracy'])
hist = model.fit(X_train, y_train, batch_size=48, epochs=20, verbose=1, validation_data=(X_val, y_val))

1 Answer


This is not a defect; it is in fact entirely possible!

Categorical cross-entropy loss does not require accuracy to go up as the loss goes down. That is not a bug in keras or theano, but rather a network or data problem.
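
To see why, here is a small illustrative numpy example (mine, not from the original answer): a classifier that goes from confidently wrong to hesitantly wrong cuts its loss dramatically while its accuracy never moves.

import numpy as np

# one two-class sample whose true label is class 0
y_true = np.array([1.0, 0.0])

# early in training: confidently wrong; later: less confidently wrong.
# the argmax prediction is class 1 (wrong) in both cases, so accuracy
# stays at 0%, yet the cross-entropy loss falls from ~3.0 to ~0.8
p_early = np.array([0.05, 0.95])
p_late  = np.array([0.45, 0.55])

def cross_entropy(y, p):
    return -np.sum(y * np.log(p))

print(cross_entropy(y_true, p_early))  # ~3.00
print(cross_entropy(y_true, p_late))   # ~0.80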

This network structure is probably far more complicated than what you are trying to do calls for. You should remove some of the regularization, use only ReLU, use fewer layers, use the standard adam optimizer, a larger batch size, and so on. First, try one of keras' default models, such as VGG16.
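
As a rough sketch of that suggestion (the input_shape and classes arguments are my guesses matching the question's data, not part of the original answer, and argument validation for single-channel inputs varies by keras version):

from keras.applications.vgg16 import VGG16
from keras.optimizers import Adam

# randomly initialised VGG16 (weights=None); assumes a channels-first
# image_data_format with 1x96x96 inputs and two classes, as in the question
model = VGG16(weights=None, input_shape=(1, 96, 96), classes=2)

# plain Adam with its default settings, as the answer suggests
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(),
              metrics=['accuracy'])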

If you want to look at their implementation so you can edit it into a different VGG11 structure, here it is:

from keras.models import Sequential
from keras.layers import (ZeroPadding2D, Convolution2D, MaxPooling2D,
                          Flatten, Dense, Dropout)

def VGG_16(weights_path=None):
    model = Sequential()
    model.add(ZeroPadding2D((1,1),input_shape=(3,224,224)))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))

    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(128, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(128, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))

    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))

    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))

    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))

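    # classifier head: two 4096-unit fully connected layers and a 1000-way softmax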
    model.add(Flatten())
    model.add(Dense(4096, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(4096, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1000, activation='softmax'))

    if weights_path:
        model.load_weights(weights_path)

    return model

You can see that it is much simpler. It relies only on dropout (which has recently become popular), uses no regularization, has a different convolution structure, and so on. Modify it to your needs!
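
For concreteness, here is a minimal sketch of what that VGG11 edit might look like, using the same old-style Keras API as the snippet above; the VGG_11 name, the single-channel 96x96 input, and the two output classes are my assumptions, not part of the original answer:

from keras.models import Sequential
from keras.layers import (ZeroPadding2D, Convolution2D, MaxPooling2D,
                          Flatten, Dense, Dropout)

def VGG_11(weights_path=None, input_shape=(1, 96, 96), n_classes=2):
    # VGG configuration "A": one conv in each of the first two blocks,
    # two convs in each of the last three, max-pooling after every block
    cfg = [(64,), (128,), (256, 256), (512, 512), (512, 512)]

    model = Sequential()
    first = True
    for block in cfg:
        for n_filters in block:
            if first:
                model.add(ZeroPadding2D((1, 1), input_shape=input_shape))
                first = False
            else:
                model.add(ZeroPadding2D((1, 1)))
            model.add(Convolution2D(n_filters, 3, 3, activation='relu'))
        model.add(MaxPooling2D((2, 2), strides=(2, 2)))

    model.add(Flatten())
    model.add(Dense(4096, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(4096, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(n_classes, activation='softmax'))

    if weights_path:
        model.load_weights(weights_path)

    return model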

Answered 2017-08-03T18:20:26.567