
I am trying to pass RGB images from a simulator into my custom neural network. At the source (the simulator), each RGB image has the dimensions (3, 144, 256).

This is how I build the network:

rgb_model = Sequential()
rgb = env.shape()  # this is (3, 144, 256)
rgb_shape = (1,) + rgb
rgb_model.add(Conv2D(96, (11, 11), strides=(3, 3), padding='valid', activation='relu', input_shape=rgb_shape, data_format="channels_first"))

Now, my rgb_shape is (1, 3, 144, 256).

This is the error I get:

rgb_model.add(Conv2D(96, (11, 11), strides=(3, 3), padding='valid', activation='relu', input_shape=rgb_kshape, data_format = "channels_first"))
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/sequential.py", line 166, in add
    layer(x)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/base_layer.py", line 414, in call
    self.assert_input_compatibility(inputs)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/base_layer.py", line 311, in assert_input_compatibility
    str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=5

The error says Keras expected ndim=4 — why is it finding ndim=5 when my input shape, (1, 3, 144, 256), has 4 dimensions?

PS: I have the same problem as this question. Ideally I would comment on that post, but I don't have enough reputation.

Edit:

Here is the code that produces the error:

rgb_shape = env.rgb.shape
rgb_model = Sequential()
rgb_model.add(Conv2D(96, (11, 11), strides=(3, 3), padding='valid', activation='relu', input_shape=rgb_shape, data_format="channels_first"))
rgb_model.add(Conv2D(128, (3, 3), strides=(2, 2), padding='valid', activation='relu', data_format="channels_first"))
rgb_model.add(Conv2D(384, (3, 3), strides=(1, 1), padding='valid', activation='relu', data_format="channels_first"))
rgb_model.add(Conv2D(384, (3, 3), strides=(1, 1), padding='valid', activation='relu', data_format="channels_first"))
rgb_model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='valid', activation='relu', data_format="channels_first"))
rgb_model.add(Flatten())
rgb_input = Input(shape=rgb_shape)
rgb = rgb_model(rgb_input)

This is the error I get when I pass env.rgb.shape as the input_shape to Conv2D:

dqn.fit(env, callbacks=callbacks, nb_steps=250000, visualize=False, verbose=0, log_interval=100)
  File "/usr/local/lib/python2.7/dist-packages/rl/core.py", line 169, in fit
    action = self.forward(observation)
  File "/usr/local/lib/python2.7/dist-packages/rl/agents/dqn.py", line 228, in forward
    q_values = self.compute_q_values(state)
  File "/usr/local/lib/python2.7/dist-packages/rl/agents/dqn.py", line 69, in compute_q_values
    q_values = self.compute_batch_q_values([state]).flatten()
  File "/usr/local/lib/python2.7/dist-packages/rl/agents/dqn.py", line 64, in compute_batch_q_values
    q_values = self.model.predict_on_batch(batch)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1276, in predict_on_batch
    x, _, _ = self._standardize_user_data(x)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 754, in _standardize_user_data
    exception_prefix='input')
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training_utils.py", line 126, in standardize_input_data
    'with shape ' + str(data_shape))
ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (1, 1, 3, 144, 256)

1 Answer


The input shape of a Conv2D layer (with data_format="channels_first") is (num_channels, width, height). So you should not add another dimension. (The full input shape is actually (batch_size, num_channels, width, height), but you don't need to specify batch_size here; it is set in the fit method.) Just pass input_shape=env.shape to Conv2D and it will work fine.
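The shape bookkeeping behind the error can be sketched without Keras at all. Keras prepends the batch axis (shown as `None` here) to whatever you pass as `input_shape`, so adding your own leading `1` produces a 5-dimensional expectation:

```python
env_shape = (3, 144, 256)        # what the simulator produces

# Correct: pass env_shape directly; Keras adds the batch axis itself.
ok = (None,) + env_shape         # (None, 3, 144, 256) -> ndim 4
assert len(ok) == 4

# Wrong (what the question does): prepend an extra 1 first.
rgb_shape = (1,) + env_shape     # (1, 3, 144, 256)
bad = (None,) + rgb_shape        # (None, 1, 3, 144, 256) -> ndim 5
assert len(bad) == 5             # hence "expected ndim=4, found ndim=5"
```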

Edit:

Why are you defining an Input layer and passing it to the model? That is not how it works. First you need to compile the model with the compile method, then train it on your training data with the fit method, and then make predictions with the predict method. I strongly recommend reading the official guides to understand how these things work.
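A minimal sketch of that compile → fit → predict workflow (the tiny Dense model and the random toy data are assumptions for illustration only; the question's Conv2D stack would slot in the same way):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# toy data: 16 samples with 3 features each, one regression target
x = np.random.rand(16, 3).astype('float32')
y = np.random.rand(16, 1).astype('float32')

model = Sequential()
model.add(Dense(8, activation='relu', input_shape=(3,)))
model.add(Dense(1))

model.compile(optimizer='adam', loss='mse')   # 1. compile: pick optimizer and loss
model.fit(x, y, epochs=1, verbose=0)          # 2. fit: train on data
preds = model.predict(x, verbose=0)           # 3. predict: run inference
print(preds.shape)                            # one prediction per sample
```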

Answered 2018-08-14T13:17:35.920