I am trying to reproduce this paper using the same dataset, but somehow they trained to a better accuracy (80%) than I typically reach (65%). My data is imbalanced at 300:500 (filler words : non-filler words), and the model tends to overfit within the first few epochs.
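(For scale: expressed as Keras class weights, that 300:500 split works out to roughly 1.33 vs. 0.80. Below is a minimal sketch of passing it to training; the fit call is not shown in this post, so the generator names and label order are assumptions.)

n_filler, n_non_filler = 300, 500
total = n_filler + n_non_filler

# Weight each class inversely to its frequency. Which index is the filler
# class depends on how the labels were encoded (assumed to be 0 here).
class_weight = {
    0: total / (2.0 * n_filler),      # ~1.33 for filler words
    1: total / (2.0 * n_non_filler),  # 0.80 for non-filler words
}

model.fit_generator(train_gen, steps_per_epoch=50, epochs=30,
                    validation_data=val_gen, validation_steps=10,
                    class_weight=class_weight)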

The model (imports added for completeness):

from keras.layers import (Input, Conv2D, BatchNormalization, Activation, add,
                          Flatten, Reshape, Bidirectional, CuDNNLSTM,
                          Dropout, Dense)
from keras.models import Model
from keras import optimizers
from keras.metrics import categorical_accuracy

inputs = Input(shape=(256, 256*4, 1))
bnEps = 2e-5
bnMom = 0.9


c1 = Conv2D(64, (7,7), strides=2, padding='same', activation='relu', use_bias=False, kernel_initializer='glorot_uniform')(inputs)
b1 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c1)  # note: b1 is never used; the raw c1 output feeds the residual add below

c2 = Conv2D(32, (3,3), strides=2, padding='same', use_bias=False, kernel_initializer='glorot_uniform')(inputs)
b2 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c2)
a2 = Activation('relu')(b2)

c3 = Conv2D(64, (3,3), padding='same', use_bias=False, kernel_initializer='glorot_uniform')(a2)
b3 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c3)
a3 = Activation('relu')(b3)

c4 = Conv2D(64, (3,3), padding='same', use_bias=False, kernel_initializer='glorot_uniform')(a3)
b4 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c4)

m1  = add([c1, b4])  # residual add: both branches are 64 channels at stride 2
a4 = Activation('relu')(m1)


#-----------------------------------------------layer 2----------------------------------------------------------------------------

c1 = Conv2D(128, (3,3),strides=2, padding='same',activation='relu', use_bias=False,kernel_initializer='glorot_uniform')(a4)
b1 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c1)

c2 = Conv2D(64, (3,3), strides=2, padding='same', use_bias=False, kernel_initializer='glorot_uniform')(a4)
b2 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c2)
a2 = Activation('relu')(b2)

c3 = Conv2D(128, (3,3), padding='same', use_bias=False, kernel_initializer='glorot_uniform')(a2)
b3 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c3)
a3 = Activation('relu')(b3)


c4 = Conv2D(128, (3,3), padding='same', use_bias=False, kernel_initializer='glorot_uniform')(a3)
b4 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c4)

m1  = add([c1, b4])
a4 = Activation('relu')(m1)

#----------------------------------------------layer 3------------------------------------------------------------------------------

c1 = Conv2D(128, (3,3),strides = (1,2) ,padding='same',activation='relu', use_bias=False,kernel_initializer='glorot_uniform')(a4)
b1 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c1)

c2 = Conv2D(128, (3,3), strides=(1,2), padding='same', use_bias=False, kernel_initializer='glorot_uniform')(a4)
b2 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c2)
a2 = Activation('relu')(b2)

c3 = Conv2D(128, (3,3), padding='same', use_bias=False, kernel_initializer='glorot_uniform')(a2)
b3 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c3)
a3 = Activation('relu')(b3)


c4 = Conv2D(128, (3,3), padding='same', use_bias=False, kernel_initializer='glorot_uniform')(a3)
b4 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c4)

m1  = add([c1, b4])
a4 = Activation('relu')(m1)

#-------------------------------------------layer 4---------------------------------------------------------------------------------

c1 = Conv2D(64, (3,3),strides = (2,2) ,padding='same',activation='relu', use_bias=False,kernel_initializer='glorot_uniform')(a4)
b1 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c1)

c2 = Conv2D(128, (3,3), strides=(2,2), padding='same', use_bias=False, kernel_initializer='glorot_uniform')(a4)
b2 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c2)
a2 = Activation('relu')(b2)

c3 = Conv2D(64, (3,3), padding='same', use_bias=False, kernel_initializer='glorot_uniform')(a2)
b3 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c3)
a3 = Activation('relu')(b3)


c4 = Conv2D(64, (3,3), padding='same', use_bias=False, kernel_initializer='glorot_uniform')(a3)
b4 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c4)

m1  = add([c1, b4])
a4 = Activation('relu')(m1)

#-------------------------------------------layer 5-----------------------------------------------------------------------------------
c1 = Conv2D(32, (3,3),strides = (2,2) ,padding='same',activation='relu', use_bias=False,kernel_initializer='glorot_uniform')(a4)
b1 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c1)

c2 = Conv2D(64, (3,3), strides=(2,2), padding='same', use_bias=False, kernel_initializer='glorot_uniform')(a4)
b2 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c2)
a2 = Activation('relu')(b2)

c3 = Conv2D(64, (3,3), padding='same', use_bias=False, kernel_initializer='glorot_uniform')(a2)
b3 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c3)
a3 = Activation('relu')(b3)


c4 = Conv2D(32, (3,3), padding='same', use_bias=False, kernel_initializer='glorot_uniform')(a3)
b4 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c4)

m1  = add([c1, b4])
a4 = Activation('relu')(m1)

#-----------------------------------------layer 6-------------------------------------------------------------------------
c1 = Conv2D(16, (3,3),strides = (2,2) ,padding='same',activation='relu', use_bias=False,kernel_initializer='glorot_uniform')(a4)
b1 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c1)

c2 = Conv2D(32, (3,3), strides=(2,2), padding='same', use_bias=False, kernel_initializer='glorot_uniform')(a4)
b2 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c2)
a2 = Activation('relu')(b2)

c3 = Conv2D(32, (3,3), padding='same', use_bias=False, kernel_initializer='glorot_uniform')(a2)
b3 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c3)
a3 = Activation('relu')(b3)


c4 = Conv2D(16, (3,3), padding='same', use_bias=False, kernel_initializer='glorot_uniform')(a3)
b4 = BatchNormalization(epsilon=bnEps, momentum=bnMom)(c4)

m1  = add([c1, b4])
a4 = Activation('relu')(m1)

# Flatten the final 8x16x16 feature map (2048 values) and feed it to the
# BiLSTM as a 2048-step sequence of scalars
f = Flatten()(a4)
f = Reshape((int(8192/4), 1))(f)

#-----------------------------------------layer 7---------------------------------------------------------------------------
bi1 = Bidirectional(CuDNNLSTM(512, return_sequences=True))(f)
d1  = Dropout(0.2)(bi1)

bi2 = Bidirectional(CuDNNLSTM(512))(d1)
d2 = Dropout(0.4)(bi2)

out = Dense(2,activation='softmax')(d2)

# create model
model = Model(inputs=inputs, outputs=out)
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=[categorical_accuracy])

Updates I have tried:

Changing the optimizer from RMSprop to Adam and to SGD did not help (see the sketch after this list).

Changing the learning rate did not help.
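
A minimal sketch of those compile-time swaps, assuming everything except the optimizer stays the same (the exact learning rates tried are not in the post, so the values below are placeholders):

from keras import optimizers

# tried in place of optimizers.RMSprop(lr=1e-4); lr values are placeholders
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.Adam(lr=1e-4),
              metrics=[categorical_accuracy])

model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.SGD(lr=1e-3, momentum=0.9),
              metrics=[categorical_accuracy])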

The training log:

WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.

Epoch 1/30
50/50 [==============================] - 103s 2s/step - loss: 0.6552 - categorical_accuracy: 0.6461 - val_loss: 0.6419 - val_categorical_accuracy: 0.6562

Epoch 00001: val_loss improved from inf to 0.64193, saving model to model.01-0.64.h5
Epoch 2/30
50/50 [==============================] - 89s 2s/step - loss: 0.5127 - categorical_accuracy: 0.7280 - val_loss: 0.6183 - val_categorical_accuracy: 0.6345

Epoch 00002: val_loss improved from 0.64193 to 0.61830, saving model to model.02-0.62.h5
Epoch 3/30
50/50 [==============================] - 88s 2s/step - loss: 0.2025 - categorical_accuracy: 0.9213 - val_loss: 0.9228 - val_categorical_accuracy: 0.6276

Epoch 00003: val_loss did not improve from 0.61830
Epoch 4/30
50/50 [==============================] - 88s 2s/step - loss: 0.0572 - categorical_accuracy: 0.9786 - val_loss: 0.6971 - val_categorical_accuracy: 0.5724

Epoch 00004: val_loss did not improve from 0.61830
Epoch 5/30
50/50 [==============================] - 89s 2s/step - loss: 0.0769 - categorical_accuracy: 0.9736 - val_loss: 1.0118 - val_categorical_accuracy: 0.5172

Epoch 00005: val_loss did not improve from 0.61830
Epoch 6/30
50/50 [==============================] - 88s 2s/step - loss: 0.0492 - categorical_accuracy: 0.9849 - val_loss: 1.1499 - val_categorical_accuracy: 0.6000

Epoch 00006: val_loss did not improve from 0.61830
Epoch 00006: early stopping
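
The checkpoint and early-stopping messages in this log imply callbacks roughly like the reconstruction below (the patience value and the generator names are assumptions inferred from the output, not code from the post):

from keras.callbacks import ModelCheckpoint, EarlyStopping

callbacks = [
    # produces the "saving model to model.01-0.64.h5" lines above
    ModelCheckpoint('model.{epoch:02d}-{val_loss:.2f}.h5',
                    monitor='val_loss', save_best_only=True, verbose=1),
    # val_loss last improved at epoch 2 and training stopped after epoch 6,
    # consistent with patience=4 (an assumption)
    EarlyStopping(monitor='val_loss', patience=4, verbose=1),
]

model.fit_generator(train_gen, steps_per_epoch=50, epochs=30,
                    validation_data=val_gen, validation_steps=10,
                    callbacks=callbacks)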