
I trained a Siamese network on an image dataset and got an error, as shown below.

from keras.models import Sequential, Model
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, Lambda
from keras import backend as K
import keras

# two image inputs for the Siamese network
a = Input(shape=(256,256,3))
b = Input(shape=(256,256,3))

# create the shared convolutional encoder
base_model = Sequential()
# add model layers
base_model.add(Conv2D(64, kernel_size=10, activation='relu', input_shape=(256,256,3), strides=(1,1)))
base_model.add(MaxPooling2D(2,2))
base_model.add(Conv2D(128, kernel_size=7, activation='relu', strides=(1,1)))
base_model.add(MaxPooling2D(2,2))
base_model.add(Conv2D(128, kernel_size=4, activation='relu', strides=(1,1)))
base_model.add(MaxPooling2D(2,2))
base_model.add(Conv2D(256, kernel_size=4, activation='relu', strides=(1,1)))
base_model.add(Flatten())
base_model.add(Dense(7, activation='sigmoid'))

# encode both inputs with the same shared-weight encoder
encoded_l = base_model(a)
encoded_r = base_model(b)

# L1 distance between the two encodings
L1_layer = Lambda(lambda tensors: K.abs(tensors[0] - tensors[1]))
L1_distance = L1_layer([encoded_l, encoded_r])
prediction = Dense(4096, activation='sigmoid')(L1_distance)

# Connect the inputs with the outputs
model = Model(inputs=[a,b], outputs=prediction)
# plot graph
keras.utils.plot_model(model, show_shapes=True)

I then trained the model on the image dataset:

from keras.preprocessing.image import ImageDataGenerator

train_data_path = '/content/drive/My Drive/jaffe augmented/train'
validation_data_path = '/content/drive/My Drive/jaffe augmented/validation'
test_data_path = '/content/drive/My Drive/jaffe augmented/test'
img_rows = 256
img_cols = 256
epochs = 2
batch_size = 32
num_of_train_samples = 1026
num_of_validation_samples = 126
num_of_test_samples = 21

# rescale pixel values to [0, 1]
train_datagen = ImageDataGenerator(rescale=1. / 255)
validation_datagen = ImageDataGenerator(rescale=1. / 255)
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(train_data_path,
                                                    target_size=(img_rows, img_cols),
                                                    batch_size=batch_size)
validation_generator = validation_datagen.flow_from_directory(validation_data_path,
                                                              target_size=(img_rows, img_cols),
                                                              batch_size=batch_size)
test_generator = test_datagen.flow_from_directory(test_data_path,
                                                  target_size=(img_rows, img_cols),
                                                  batch_size=batch_size)

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit_generator(train_generator,
                              steps_per_epoch=num_of_train_samples // batch_size,
                              epochs=epochs,
                              validation_data=validation_generator,
                              validation_steps=num_of_validation_samples // batch_size)

I get the following error:

ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[[[1., 1., 1.], [1., 1., 1.], [1., 1., 1.], ..., [1., 1., 1. ...


2 Answers


By default, ImageDataGenerator yields only a single image array per batch, but your model expects two image inputs (model = Model(inputs=[a,b],outputs=prediction)).

You can refer to the link on building a custom data generator that yields two images; a minimal sketch is shown below.
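As an illustration only (not taken from the linked guide), here is a minimal sketch of such a generator, assuming two flow_from_directory iterators over the same dataset. The name siamese_pair_generator and the pairing rule (label 1 when the two images share a class, 0 otherwise) are assumptions; adapt them to however you build your Siamese pairs, and make sure the labels you yield match the shape of your final Dense output.

import numpy as np

def siamese_pair_generator(gen_a, gen_b):
    # Hypothetical helper: wraps two Keras directory iterators and yields
    # ([image_batch_a, image_batch_b], pair_labels) as the two-input model expects.
    while True:
        x_a, y_a = next(gen_a)
        x_b, y_b = next(gen_b)
        n = min(len(x_a), len(x_b))  # guard against unequal final batches
        same_class = np.argmax(y_a[:n], axis=1) == np.argmax(y_b[:n], axis=1)
        yield [x_a[:n], x_b[:n]], same_class.astype('float32')

Training would then use the wrapper instead of the raw generator, e.g. model.fit_generator(siamese_pair_generator(train_generator, train_generator), ...); passing the same iterator twice simply pairs consecutive shuffled batches.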

Answered 2020-07-24T19:12:58.470

If both of your inputs come from the same dataset, you can use the following code. Since fit_generator expects a single generator, the generator is wrapped so that each batch is yielded once per input:

def two_input_generator(gen):
    while True:
        x, y = next(gen)
        yield [x, x], y  # same batch for both inputs

history = model.fit_generator(two_input_generator(train_generator),
                steps_per_epoch=num_of_train_samples // batch_size,
                epochs=epochs,
                validation_data=two_input_generator(validation_generator),
                validation_steps=num_of_validation_samples // batch_size)
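Note that this wrapper feeds the identical batch to both branches of the network; if you need genuine positive/negative pairs of different images, a pairing generator like the sketch in the first answer is still required.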
Answered 2020-07-24T20:31:03.703