
I am working with two different datasets, each containing 1200 images. The first dataset has 4 classes and the second has 6 classes.

It is a simple image classification problem, but during training the validation accuracy comes out to the same value at every epoch, on both datasets.

I have resized all images in both datasets to 100x100 using ImageMagick.
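
For context, image-data.npy and image-class.npy (loaded in the code below) hold the resized images in channels-first order, shape (N, 3, 100, 100), plus integer class labels. The snippet below is only a rough sketch of how such arrays can be built; the dataset/ folder layout and the Pillow-based loading are illustrative assumptions, not my exact preprocessing script:

import os
import numpy as np
from PIL import Image

def build_arrays(root='dataset'):
    # assumed layout: dataset/<class_id>/<image files>, with class_id = 0, 1, 2, ...
    images, labels = [], []
    for class_id in sorted(os.listdir(root)):
        class_dir = os.path.join(root, class_id)
        for fname in sorted(os.listdir(class_dir)):
            img = Image.open(os.path.join(class_dir, fname)).convert('RGB')
            img = img.resize((100, 100))              # same target size as the ImageMagick step
            arr = np.asarray(img, dtype='uint8')      # shape (100, 100, 3)
            images.append(arr.transpose(2, 0, 1))     # -> (3, 100, 100), channels first for Theano
            labels.append(int(class_id))
    return np.array(images), np.array(labels)

X, y = build_arrays()
np.save('image-data.npy', X)
np.save('image-class.npy', y)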

I can't see where I am going wrong. Thanks in advance.

Terminal output:

Using Theano backend.
Couldn't import dot_parser, loading of dot files will not be possible.
X_train shape: (880, 3, 100, 100)
880 train samples
220 test samples
train:
0 418
3 179
2 174
1 109
dtype: int64
test:
0 98
3 55
2 43
1 24
dtype: int64
Train on 880 samples, validate on 220 samples
Epoch 1/5
880/880 [==============================] - 582s - loss: 1.3444 - acc: 0.4500 - val_loss: 1.2752 - val_acc: 0.4455
Epoch 2/5
880/880 [==============================] - 540s - loss: 1.2624 - acc: 0.4750 - val_loss: 1.2802 - val_acc: 0.4455
Epoch 3/5
880/880 [==============================] - 540s - loss: 1.2637 - acc: 0.4750 - val_loss: 1.2712 - val_acc: 0.4455
Epoch 4/5
880/880 [==============================] - 538s - loss: 1.2484 - acc: 0.4750 - val_loss: 1.2623 - val_acc: 0.4455
Epoch 5/5
880/880 [==============================] - 537s - loss: 1.2375 - acc: 0.4750 - val_loss: 1.2486 - val_acc: 0.4455

Prediction on the test data:
In [26]: model.predict_classes(X_test)
220/220 [==============================] - 37s

Out[26]:
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
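
One thing I noticed: every test sample is predicted as class 0, and the counts printed above show 98 class-0 samples out of 220 in the test split, so the constant val_acc of 0.4455 is exactly the majority-class fraction (98 / 220 ≈ 0.4455). A quick check along those lines (a sketch, not copied from my session):

preds = model.predict_classes(X_test)
print((preds == 0).mean())        # fraction of test samples predicted as class 0 (here 1.0)
print((preds == y_test).mean())   # matches the reported val_acc of 0.4455
print(98.0 / 220)                 # majority-class baseline, ~0.4455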

Code:

from __future__ import print_function
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten, Reshape
from keras.layers.convolutional import Convolution2D, MaxPooling2D, Convolution1D, MaxPooling1D
from keras.optimizers import SGD
from keras.utils import np_utils, generic_utils
import numpy as np
from sklearn.cross_validation import train_test_split
import pandas as pd

batch_size = 30
nb_classes = 4 
nb_epoch = 10

img_rows, img_cols = 100, 100
img_channels = 3
X = np.load( 'image-data.npy' )
y = np.load( 'image-class.npy' )

# the data, shuffled and split between train and test sets
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=100 ) 
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
print("train:\n ",pd.value_counts(y_train))
print("test:\n",pd.value_counts(y_test))


Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

model = Sequential()

model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(img_channels, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(32, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(1,1) ))
model.add(Dropout(0.25))

model.add(Convolution2D(64, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(Convolution2D(64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(1,1) ))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dropout(0.25))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd)


model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, shuffle=True, show_accuracy=True, validation_data=(X_test, Y_test))
out = model.predict_classes(X_test)