I'm building a Keras CNN model with transfer learning using ResNet50. For some reason, my accuracy and loss are exactly the same for every epoch. Strangely, I see the same behavior with similar code that uses VGG19 instead. This leads me to believe the problem isn't in the actual model code but somewhere in the preprocessing. I've tried adjusting the learning rate, changing the optimizer, changing the image resolution, freezing layers, etc., but the scores never change. I went into my image directory to check whether my two classes were mixed together, and they are not. What is the problem? Thanks in advance.
PS: I'm training on ~2000 images and have two classes.
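For reference, the layer-freezing / optimizer-swapping attempts looked roughly like the sketch below. This is a reconstruction rather than the exact code I ran, and it assumes the `base_model` and `model` objects defined in the full script further down:

# Rough sketch of one attempt (reconstructed, not the exact code):
# freeze the ResNet50 base so only the new Dense head trains,
# then recompile with a different optimizer / learning rate.
for layer in base_model.layers:
    layer.trainable = False

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.Adam(lr=1e-4),
              metrics=['accuracy'])

The full script I'm actually running is below.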
import numpy as np
import pandas as pd
import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from keras.models import Sequential, Model, load_model
from keras.layers import Conv2D, GlobalAveragePooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import applications
from keras import optimizers
img_height, img_width, img_channel = 400, 400, 3  # change channel to 1 instead of 3 since the images are black and white
base_model = applications.ResNet50(weights='imagenet', include_top=False, input_shape=(img_height, img_width, img_channel))
# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(512, activation='relu',name='fc-1')(x)
#x = Dropout(0.5)(x)
x = Dense(256, activation='relu',name='fc-2')(x)
#x = Dropout(0.5)(x)
# and a logistic layer -- let's say we have 2 classes
predictions = Dense(1, activation='softmax', name='output_layer')(x)
model = Model(inputs=base_model.input, outputs=predictions)
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.SGD(lr=0.1),
              metrics=['accuracy'])
model.summary()
batch_size = 6
# prepare data augmentation configuration
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,
    vertical_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
# possibly resize the image
train_generator = train_datagen.flow_from_directory(
    "../Train/",
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary',
    shuffle=True)
validation_generator = test_datagen.flow_from_directory(
    "../Test/",
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary',
    shuffle=True)
epochs = 10
history = model.fit_generator(
    train_generator,
    steps_per_epoch=2046 // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=512 // batch_size,
    callbacks=[ModelCheckpoint('snapshots/ResNet50-transferlearning.model',
                               monitor='val_acc',
                               save_best_only=True)])
Here is the output Keras gives:
Epoch 1/10
341/341 [==============================] - 59s 172ms/step - loss: 7.0517 - acc: 0.5577 - val_loss: 7.0334 - val_acc: 0.5588
Epoch 2/10
341/341 [==============================] - 57s 168ms/step - loss: 7.0517 - acc: 0.5577 - val_loss: 7.0334 - val_acc: 0.5588
Epoch 3/10
341/341 [==============================] - 56s 165ms/step - loss: 7.0517 - acc: 0.5577 - val_loss: 7.0334 - val_acc: 0.5588
Epoch 4/10
341/341 [==============================] - 57s 168ms/step - loss: 7.0517 - acc: 0.5577 - val_loss: 7.0334 - val_acc: 0.5588
Epoch 5/10
341/341 [==============================] - 57s 167ms/step - loss: 7.0517 - acc: 0.5577 - val_loss: 7.0334 - val_acc: 0.5588