
I've been trying to build a CNN to classify MFCC data, but the model overfits almost immediately.

Data:

  • 18 000 files (80% train, 20% test)
  • 5 labels

The 5 classes in the data are balanced. The model is eventually meant to handle far more than 18k files, so I was told to shrink the network as much as I can, which might also help here.

I've reduced the filter (kernel) size from (3,3) to (1,1), tried reducing the number of hidden neurons and even the number of layers. I'm just stuck; does anyone have any ideas?

Whatever I do, the accuracy measured on the test data never gets above 60-65%.

Model code:

import time

# Imports assume tf.keras; adjust to standalone `keras` if that is what is installed.
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Nadam

time_start_train = time.time()

# Per the summary below: feature_count = 192, d = (1, 1) (kernel size), out_dim = 5 classes.
i = Input(shape=(feature_count, feature_count, 1))
m = Conv2D(16, d, activation='elu', padding='same')(i)
m = MaxPooling2D()(m)
m = Conv2D(32, d, activation='elu', padding='same')(m)
m = MaxPooling2D()(m)
m = Conv2D(64, d, activation='elu', padding='same')(m)
m = MaxPooling2D()(m)
m = Conv2D(128, d, activation='elu', padding='same')(m)
m = MaxPooling2D()(m)
m = Conv2D(256, d, activation='elu', padding='same')(m)
m = MaxPooling2D()(m)
m = Flatten()(m)
m = Dense(512, activation='elu')(m)
m = Dropout(0.2)(m)
o = Dense(out_dim, activation='softmax')(m)

model = Model(inputs=i, outputs=o)

model.compile(loss='categorical_crossentropy', optimizer=Nadam(learning_rate=1e-3), metrics=['accuracy'])

history = model.fit(data_train[0], data_train[1], epochs=10, verbose=1, validation_split=0.1, shuffle=True)

Model summary:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 192, 192, 1)       0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 192, 192, 16)      32        
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 96, 96, 16)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 96, 96, 32)        544       
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 48, 48, 32)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 48, 48, 64)        2112      
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 24, 24, 64)        0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 24, 24, 128)       8320      
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 12, 12, 128)       0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 12, 12, 256)       33024     
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 6, 6, 256)         0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 9216)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 512)               4719104   
_________________________________________________________________
dropout_1 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 5)                 2565      
=================================================================
Total params: 4,765,701
Trainable params: 4,765,701
Non-trainable params: 0

MFCC example (192x192)

Model accuracy

Model loss


2 Answers


If you don't have in-depth knowledge of ML/DL models, use AutoML instead of Keras. With AutoML you don't need to think as much about the different parameters.
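
One concrete possibility (an assumption; the answer does not name a specific framework) is the AutoKeras library, which searches over architectures and hyperparameters automatically. A minimal sketch, assuming the MFCCs are stacked into an array x_train of shape (N, 192, 192, 1) with integer labels y_train, and x_test/y_test holding the 20% test split (all hypothetical variable names):

import autokeras as ak

# Let AutoKeras search a small number of candidate models; max_trials=5 is illustrative.
clf = ak.ImageClassifier(max_trials=5, overwrite=True)
clf.fit(x_train, y_train, epochs=10)

# Evaluate the best found model on the held-out test split.
print(clf.evaluate(x_test, y_test))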

Answered 2020-12-05T15:28:31.413

Try applying L1/L2 regularization.
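
In tf.keras that means passing a kernel_regularizer to the Conv2D/Dense layers. A minimal sketch, reusing the 192x192 input and 5 classes from the question; the l2(1e-4) penalty weight and the single conv block are illustrative assumptions, not tuned values:

from tensorflow.keras import regularizers
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.models import Model

# Shared weight penalty; regularizers.l1_l2(l1=1e-5, l2=1e-4) would combine both penalties.
reg = regularizers.l2(1e-4)

i = Input(shape=(192, 192, 1))
m = Conv2D(16, (1, 1), activation='elu', padding='same', kernel_regularizer=reg)(i)
m = MaxPooling2D()(m)
m = Flatten()(m)
m = Dense(512, activation='elu', kernel_regularizer=reg)(m)
m = Dropout(0.2)(m)
o = Dense(5, activation='softmax')(m)

model = Model(inputs=i, outputs=o)
model.compile(loss='categorical_crossentropy', optimizer='nadam', metrics=['accuracy'])

The same kernel_regularizer argument can be added to each Conv2D/Dense layer of the original model.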

Answered 2020-12-05T12:35:19.433