I am building a model with a single image input (130, 130, 1) and 3 outputs, each producing a (10, 1) vector to which softmax is applied separately.
(Inspired by Ian J. Goodfellow, Yaroslav Bulatov, Julian Ibarz, Sacha Arnoud, and Vinay D. Shet, "Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks", CoRR, abs/1312.6082, 2013. URL http://arxiv.org/abs/1312.6082 — unfortunately they did not publish their network.)
input = keras.layers.Input(shape=(130, 130, 1))
l0 = keras.layers.Conv2D(32, (5, 5), padding="same")(input)
[conv-blocks etc]
l12 = keras.layers.Flatten()(l11)
l13 = keras.layers.Dense(4096, activation="relu")(l12)
l14 = keras.layers.Dense(4096, activation="relu")(l13)
output1 = keras.layers.Dense(10, activation="softmax")(l14)
output2 = keras.layers.Dense(10, activation="softmax")(l14)
output3 = keras.layers.Dense(10, activation="softmax")(l14)
model = keras.models.Model(inputs=input, outputs=[output1, output2, output3])
model.compile(loss=['categorical_crossentropy', 'categorical_crossentropy',
'categorical_crossentropy'],
loss_weights=[1., 1., 1.],
optimizer=optimizer,
metrics=['accuracy'])
train_generator = train_datagen.flow(x_train,
                                     [y_train[:, 0, :], y_train[:, 1, :], y_train[:, 2, :]],
                                     batch_size=batch_size)
But then I get:
ValueError: x (images tensor) and y (labels) should have the same length. Found: x.shape = (1000, 130, 130, 1), y.shape = (3, 1000, 10)
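To illustrate where that shape comes from (a minimal sketch with zero-filled dummy arrays matching the shapes in the question): passing the three label slices as a list makes the generator stack them into one array whose first axis is 3, which no longer matches len(x_train).

```python
import numpy as np

# Dummy data with the shapes from the question: 1000 images, 3 digit outputs.
x_train = np.zeros((1000, 130, 130, 1))
y_train = np.zeros((1000, 3, 10))

# A list of three (1000, 10) arrays stacks into shape (3, 1000, 10), so the
# generator's length check compares 3 against len(x_train) == 1000 and fails.
y_list = [y_train[:, 0, :], y_train[:, 1, :], y_train[:, 2, :]]
print(np.asarray(y_list).shape)  # (3, 1000, 10)
```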
However, if I change it to:
[same as before]
train_generator = train_datagen.flow(x_train,
y_train,
batch_size=batch_size)
Then I get:
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 3 array(s)
- dimension(x_train) = (1000, 130, 130, 1)
- where each single image is (130, 130, 1) and there are 1000 images
- dimension(y_train) = (1000, 3, 10)
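For reference, a minimal sketch of the shape mismatch (using a zero-filled dummy array with the y_train shape above): a model with three outputs wants a list of three (1000, 10) targets, one per softmax head, whereas a single (1000, 3, 10) array reads as one target.

```python
import numpy as np

# Dummy labels with the shape from the question.
y_train = np.zeros((1000, 3, 10))

# What a 3-output model expects: a list of three (1000, 10) arrays.
targets = [y_train[:, i, :] for i in range(3)]
print(len(targets))              # 3
print([t.shape for t in targets])  # [(1000, 10), (1000, 10), (1000, 10)]
```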
In the documentation it is stated that it should be like this:
model = Model(inputs=[main_input, auxiliary_input],
              outputs=[main_output, auxiliary_output])
However, I don't see how inputs and outputs are supposed to have the same length — in my case there is one input and three outputs.