
I am currently working on Analytics Vidhya's digit recognition challenge, at https://datahack.analyticsvidhya.com/contest/practice-problem-identify-the-digits/. The images in the dataset for this challenge are 28*28*4 (28 = height = width, 4 = number of channels). The code I implemented is:

from keras.models import Sequential
from keras.layers import Dense,Dropout,Flatten,Activation
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
K.set_image_dim_ordering('th')
import numpy as np
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
# define the larger model
def larger_model():
  # create model
  model = Sequential()
  model.add(Conv2D(32, (3, 3), input_shape=(4, 28, 28),activation='relu',padding='same'))
  model.add(MaxPooling2D(pool_size=(2, 2)))
  model.add(Dropout(0.25))
  model.add(Conv2D(15, (3, 3), activation='relu',padding='same'))
  model.add(MaxPooling2D(pool_size=(2, 2)))
  model.add(Dropout(0.2))
  model.add(Flatten())
  model.add(Dense(200, activation='relu'))
  model.add(Dense(num_classes, activation='softmax'))
  # Compile model
  model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
  return model
from os import listdir
from skimage import io

def loadImages(path):
  # return a list of image arrays loaded from the given directory
  imagesList = listdir(path)
  loadedImages = []
  for image in imagesList:
    img = io.imread(path + "/" + image, as_grey=False)
    loadedImages.append(np.array(img))
  return loadedImages
path = "C:/Users/Farz Jamal/Downloads/mnist/Train/Images/train" #path_to_train_dataset
imgs = loadImages(path) # load the training images into a list
import pandas as pd
df = pd.read_csv("C:/Users/Farz Jamal/Downloads/mnist/Train/train.csv") #path_to_class_labels
y = np.array(df['label'])
from sklearn.cross_validation import train_test_split as ttt
x_train,x_val,y_train,y_val = ttt(imgs,y,test_size = 0.2)

The code continues:

x_vall,x_test,y_vall,y_test = ttt(x_val,y_val,test_size = 0.4)

x_train,x_vall,x_test = np.array(x_train).astype('float32'),np.array(x_vall).astype('float32'),np.array(x_test).astype('float32')
# normalize inputs from 0-255 to 0-1
x_train = x_train / 255.0
x_vall = x_vall / 255.0
x_test = x_test / 255.0
y_train = np_utils.to_categorical(y_train)
y_vall = np_utils.to_categorical(y_vall)
y_test = np_utils.to_categorical(y_test)
num_classes = y_vall.shape[1] #10

#fitting_and_evaluating
model = larger_model()
# Fit the model
model.fit(x_train, y_train, validation_data=(x_vall, y_vall), epochs=50,   batch_size=200)
# Final evaluation of the model
scores = model.evaluate(x_test, y_test, verbose=0)

The output is as follows (from epoch 16 to epoch 37):

Epoch 16/50
39200/39200 [==============================] - 271s 7ms/step - loss: 2.3013 -    acc: 0.1135 - val_loss: 2.3015 - val_acc: 0.1095
Epoch 17/50
39200/39200 [==============================] - 275s 7ms/step - loss: 2.3011 -    acc: 0.1128 - val_loss: 2.3014 - val_acc: 0.1095
Epoch 18/50
39200/39200 [==============================] - 270s 7ms/step - loss: 2.3011 -    acc: 0.1124 - val_loss: 2.3015 - val_acc: 0.1095
Epoch 19/50
39200/39200 [==============================] - 273s 7ms/step - loss: 2.3012 -    acc: 0.1131 - val_loss: 2.3017 - val_acc: 0.1095
Epoch 20/50
39200/39200 [==============================] - 273s 7ms/step - loss: 2.3011 -    acc: 0.1130 - val_loss: 2.3018 - val_acc: 0.1111
Epoch 21/50
39200/39200 [==============================] - 272s 7ms/step - loss: 2.3010 -    acc: 0.1127 - val_loss: 2.3013 - val_acc: 0.1095
Epoch 22/50
39200/39200 [==============================] - 281s 7ms/step - loss: 2.3006 -    acc: 0.1133 - val_loss: 2.3015 - val_acc: 0.1097
Epoch 23/50
39200/39200 [==============================] - 273s 7ms/step - loss: 2.3005 -    acc: 0.1136 - val_loss: 2.3018 - val_acc: 0.1099
Epoch 24/50
39200/39200 [==============================] - 276s 7ms/step - loss: 2.3005 -    acc: 0.1135 - val_loss: 2.3022 - val_acc: 0.1116
Epoch 25/50
39200/39200 [==============================] - 271s 7ms/step - loss: 2.2998 -    acc: 0.1155 - val_loss: 2.3025 - val_acc: 0.1071
Epoch 26/50
39200/39200 [==============================] - 271s 7ms/step - loss: 2.2996 -    acc: 0.1156 - val_loss: 2.3021 - val_acc: 0.1100
Epoch 27/50
39200/39200 [==============================] - 272s 7ms/step - loss: 2.2981 -    acc: 0.1168 - val_loss: 2.3024 - val_acc: 0.1078
Epoch 28/50
39200/39200 [==============================] - 270s 7ms/step - loss: 2.2970 -    acc: 0.1187 - val_loss: 2.3035 - val_acc: 0.1065
Epoch 29/50
39200/39200 [==============================] - 271s 7ms/step - loss: 2.2945 -    acc: 0.1218 - val_loss: 2.3061 - val_acc: 0.1041
Epoch 30/50
39200/39200 [==============================] - 270s 7ms/step - loss: 2.2935 -    acc: 0.1223 - val_loss: 2.3059 - val_acc: 0.1003
Epoch 31/50
39200/39200 [==============================] - 274s 7ms/step - loss: 2.2906 -    acc: 0.1268 - val_loss: 2.3067 - val_acc: 0.1014
Epoch 32/50
39200/39200 [==============================] - 276s 7ms/step - loss: 2.2873 -    acc: 0.1278 - val_loss: 2.3078 - val_acc: 0.1073
Epoch 33/50
39200/39200 [==============================] - 292s 7ms/step - loss: 2.2806 -    acc: 0.1368 - val_loss: 2.3118 - val_acc: 0.1034
Epoch 34/50
39200/39200 [==============================] - 301s 8ms/step - loss: 2.2744 -    acc: 0.1404 - val_loss: 2.3160 - val_acc: 0.1022
Epoch 35/50
39200/39200 [==============================] - 289s 7ms/step - loss: 2.2662 -    acc: 0.1486 - val_loss: 2.3172 - val_acc: 0.1029
Epoch 36/50
39200/39200 [==============================] - 295s 8ms/step - loss: 2.2557 -    acc: 0.1543 - val_loss: 2.3162 - val_acc: 0.1087
Epoch 37/50
39200/39200 [==============================] - 308s 8ms/step - loss: 2.2459 -    acc: 0.1632 - val_loss: 2.3275 - val_acc: 0.1083

As can be seen, both training and validation accuracy are very low, staying barely above the 10% chance level for a ten-class problem.

I have tried reducing the Dropout (previously one of the layers used 0.5), but it had no effect. I doubled the number of neurons in the last hidden layer (previously 100), still with no effect. It looks as if the problem lies in the preprocessing of the images and in the input parameters for the images. What can be done?
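Since the suspicion is about preprocessing and input shape, one thing that may be worth checking (a minimal sketch, assuming the imgs list and the x_* arrays built above): the model is compiled for channels-first input, input_shape=(4, 28, 28), while skimage's io.imread normally returns a multi-channel image as (height, width, channels), i.e. (28, 28, 4), and the code above never reorders the axes. The transpose lines below are only an illustration of how the axes could be moved to channels-first if that turns out to be the case:

import numpy as np

# Inspect the shapes actually fed to the network.
print(np.array(imgs[0]).shape)  # a single loaded image, e.g. (28, 28, 4)
print(x_train.shape)            # the model expects (num_samples, 4, 28, 28)

# Illustrative reordering to channels-first, if the arrays are channels-last:
x_train = np.transpose(x_train, (0, 3, 1, 2))
x_vall = np.transpose(x_vall, (0, 3, 1, 2))
x_test = np.transpose(x_test, (0, 3, 1, 2))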


1 Answer


Copied from the comments as an answer:

The fact that your model isn't learning anything usually points to a bug, but I don't see anything obvious here. A common mistake is accidentally feeding garbage into the network. Take the first few images you provide to the network, display them in a debugger before your fit step, print out their labels, and make sure they match. Sanity-check your input.
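As a rough illustration of that sanity check (a minimal sketch, assuming the x_train and y_train arrays built in the question, with matplotlib as an extra dependency that is not imported in the original code):

import numpy as np
import matplotlib.pyplot as plt

# Display the first few training images together with their labels
# before calling model.fit, to confirm that images and labels match.
for i in range(5):
    img = x_train[i]
    if img.shape[0] == 4:                    # channels-first: move channels last for imshow
        img = np.transpose(img, (1, 2, 0))
    plt.imshow(img)
    plt.title("label: %d" % int(np.argmax(y_train[i])))  # y_train is one-hot encoded here
    plt.show()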

answered 2018-04-10T13:20:21.027