
I am working on a multi-class classification project with a CNN. My problem is that I get good training accuracy, but the model does not predict the validation data well. I have introduced L2 regularization, but it still does not generalize well; I also tried different L2 values (1e-3, 1e-4). Here are my accuracy graph and loss graph. Topology:

import tensorflow
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, Dense, Flatten
from tensorflow.keras.regularizers import l2
from tensorflow.keras.models import Model

inputs = keras.Input(shape=(512, 512, 3), name="img")
x = Conv2D(32, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(inputs)
x = BatchNormalization()(x)
x1 = Activation('relu')(x)
x2 = Conv2D(32, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x1)
x = BatchNormalization()(x2)
x = Activation('relu')(x)
x3 = Conv2D(32, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x3)
x = tensorflow.keras.layers.add([x, x1]) # ==> Shortcut
x = Activation('relu')(x)

x4 = Conv2D(64, kernel_size=(3,3), strides=(2,2), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x4)
x = Activation('relu')(x)
x5 = Conv2D(64, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x5)
x = Activation('relu')(x)
x6 = Conv2D(64, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x6)
x = tensorflow.keras.layers.add([x, x4]) # ==> Shortcut
x = Activation('relu')(x)

x7 = Conv2D(128, kernel_size=(3,3), strides=(2,2), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x7)
x = Activation('relu')(x)
x8 = Conv2D(128, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x8)
x = Activation('relu')(x)
x9 = Conv2D(128, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x9)
x = tensorflow.keras.layers.add([x, x7]) # ==> Shortcut
x = Activation('relu')(x)

x10 = Conv2D(256, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x10)
x = Activation('relu')(x)
x11 = Conv2D(256, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x11)
x = Activation('relu')(x)
x12 = Conv2D(256, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x12)
x = tensorflow.keras.layers.add([x, x10]) # ==> Shortcut
x = Activation('relu')(x)

x13 = Conv2D(512, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x13)
x = Activation('relu')(x)
x14 = Conv2D(512, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x14)
x = Activation('relu')(x)
x15 = Conv2D(512, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x15)
x = tensorflow.keras.layers.add([x, x13]) # ==> Shortcut
x = Activation('relu')(x)

x = Flatten()(Conv2D(1, kernel_size=1, strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x))
x = layers.Dropout(0.3)(x)
outputs = Dense(4, activation='softmax', kernel_initializer='he_normal')(x)
model = Model(inputs, outputs)
model.summary()

I have tried different numbers of filters and reduced/increased the number of layers. Is this problem caused by overfitting? Any suggestions on what I can improve so that I get smoother curves and good predictions?


1 Answer

  • You could also try placing dropout between the Conv2D layers; that should help with some of the overfitting.
  • Lower alpha (the optimizer's learning rate) so the optimizer does not overshoot the optimum.

That should help :)
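As a minimal sketch of both suggestions (not your full model): `SpatialDropout2D` after a conv block, which drops whole feature maps and tends to suit convolutional layers better than plain `Dropout`, plus an `Adam` optimizer with a reduced learning rate. The specific values (0.2 dropout, 1e-4 learning rate) are illustrative starting points, not tuned for your data.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = tf.keras.Input(shape=(512, 512, 3), name="img")
x = layers.Conv2D(32, kernel_size=(3, 3), padding='same')(inputs)
x = layers.BatchNormalization()(x)
x = layers.Activation('relu')(x)
# Spatial dropout zeroes entire feature maps, regularizing conv layers
x = layers.SpatialDropout2D(0.2)(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(4, activation='softmax')(x)
model = Model(inputs, outputs)

# A lower learning rate (default Adam is 1e-3) helps avoid overshooting
# the optimum and usually gives smoother loss/accuracy curves.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

You would insert the `SpatialDropout2D` call after each of your residual blocks in the same way.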

Answered 2021-03-16T17:50:23.210