
I am new to CNNs, and I am experimenting with deconvolution (generating feature maps) on the MNIST database, since it is the easiest dataset for a beginner to learn with. I want my model to generate feature maps at the end. The idea is to implement, to some extent, the paper Saliency Detection Via Dense Convolution Network.

Here is the full code I am trying to run:

import keras
from keras.datasets import mnist
import keras.backend as K
from keras.models import Model, Sequential
from keras.layers import Input, Dense, Flatten, Dropout, Activation, Reshape
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.pooling import MaxPooling2D, GlobalAveragePooling2D
from keras.layers.normalization import BatchNormalization
from keras.layers.convolutional import Conv2D, Conv2DTranspose, UpSampling2D
from keras.initializers import RandomNormal

init = RandomNormal(mean = 0., stddev = 0.02)

def GeneratorDeconv(image_size = 28): 

    L = int(image_size)

    inputs = Input(shape = (100, ))
    x = Dense(512*int(L/16)**2)(inputs) #shape(512*(L/16)**2,)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Reshape((int(L/16), int(L/16), 512))(x) # shape(L/16, L/16, 512)
    x = Conv2DTranspose(256, (4, 4), strides = (2, 2),
                        kernel_initializer = init,
                        padding = 'same')(x) # shape(L/8, L/8, 256)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2DTranspose(128, (4, 4), strides = (2, 2),
                        kernel_initializer = init,
                        padding = 'same')(x) # shape(L/4, L/4, 128)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2DTranspose(64, (4, 4), strides = (2, 2),
                        kernel_initializer = init,
                        padding = 'same')(x) # shape(L/2, L/2, 64)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2DTranspose(3, (4, 4), strides= (2, 2),
                        kernel_initializer = init,
                        padding = 'same')(x) # shape(L, L, 3)
    images = Activation('tanh')(x)

    model = Model(inputs = inputs, outputs = images)
    model.summary()
    return model

batch_size = 128
num_classes = 10
epochs = 1

# input image dimensions
img_rows, img_cols = 28, 28

# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)


model = GeneratorDeconv()

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])

model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))

score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

I picked the function def GeneratorDeconv(image_size = 28): from ProgramCreek Python.

Now, I am confused about how to embed it into my own custom model. The program runs fine up to model.compile(...), but at model.fit(...) it raises this error:

ValueError: Error when checking input: expected input_2 to have 2 dimensions, but got array with shape (60000, 28, 28, 1)

I don't know how to fix this. Please help.


1 Answer


The input to the model is:

    inputs = Input(shape = (100, ))

This takes vectors of shape (samples, 100), so it expects a 2D input.

However:

print('x_train shape:', x_train.shape)
>>>x_train shape: (60000, 28, 28, 1)

You are feeding in a 4D array where the input layer was declared to take a 2D array. That is what causes the error.
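The mismatch can be checked concretely with plain NumPy, without building the model at all (a minimal sketch; the variable names are mine):

```python
import numpy as np

# The generator declares Input(shape=(100,)), so it expects batches of
# 100-dimensional latent vectors: a 2D array of shape (samples, 100).
latent_batch = np.random.normal(size=(128, 100))

# x_train, however, is a batch of single-channel images: a 4D array.
image_batch = np.zeros((60000, 28, 28, 1), dtype='float32')

print(latent_batch.ndim)  # 2 -> matches Input(shape=(100,))
print(image_batch.ndim)   # 4 -> triggers "expected input_2 to have 2 dimensions"
```

Keras compares the number of dimensions of the array you pass to fit() against the declared Input shape (plus the batch axis), which is exactly the check that fails here.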

I made a few edits to your architecture so that the shapes match and it actually trains:

def GeneratorDeconv(image_size = 28):

    L = int(image_size)

    inputs = Input(shape = (28, 28, 1))
    x = Dense(512*int(L/16)**2)(inputs) # Dense acts on the last axis: shape (28, 28, 512)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2DTranspose(256, (4, 4), strides = (2, 2),
                    kernel_initializer = init,
                    padding = 'same')(x) # shape (56, 56, 256)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2DTranspose(128, (4, 4), strides = (2, 2),
                    kernel_initializer = init,
                    padding = 'same')(x) # shape (112, 112, 128)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2DTranspose(64, (4, 4), strides = (2, 2),
                    kernel_initializer = init,
                    padding = 'same')(x) # shape (224, 224, 64)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2DTranspose(3, (4, 4), strides= (2, 2),
                    kernel_initializer = init,
                    padding = 'same')(x) # shape (448, 448, 3)
    x = Flatten()(x) # shape (602112,)
    x = Dense(10)(x) # one output per MNIST class
    images = Activation('tanh')(x)

    model = Model(inputs = inputs, outputs = images)
    model.summary()
    return model
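As a quick arithmetic check on the edited network (not part of the original answer): with padding='same', each stride-2 Conv2DTranspose doubles the spatial size, so starting from the 28x28 input the feature maps grow to 448x448 before Flatten:

```python
# With padding='same', Conv2DTranspose output size = input size * stride.
side = 28
for _ in range(4):        # four stride-2 transposed convolutions
    side *= 2
print(side)               # 448: spatial size of the last feature map
print(side * side * 3)    # 602112 values entering the final Dense(10)
```

This is why the Flatten layer is so large; the model trains, but it is memory-hungry for a 10-class classifier.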
Answered 2018-04-13T20:55:51.773