
I would like to create a 3-layer GRU model with 32, 16, and 8 units in the respective layers. The model takes analog quantities as input and produces analog values as output.

I wrote the following code:

def getAModelGRU(neuron=(10), look_back=1, numInputs = 1, numOutputs = 1):
    model = Sequential()
    if len(neuron) > 1:
        model.add(GRU(units=neuron[0], input_shape=(look_back,numInputs)))
        for i in range(1,len(neuron)-1):
            model.add(GRU(units=neuron[i]))
        model.add(GRU(units=neuron[-1], input_shape=(look_back,numInputs)))
    else:
    model.add(GRU(units=neuron, input_shape=(look_back,numInputs)))
    model.add(Dense(numOutputs))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

And I call this function as:

chkEKF = getAModelGRU(neuron=(32,16,8), look_back=1, numInputs=10, numOutputs=6)

And I get the following traceback:

Traceback (most recent call last):
  File "/home/momtaz/Dropbox/QuadCopter/quad_simHierErrorCorrectionEstimator.py", line 695, in <module>
    Single_Point2Point()
  File "/home/momtaz/Dropbox/QuadCopter/quad_simHierErrorCorrectionEstimator.py", line 74, in Single_Point2Point
    chkEKF = getAModelGRU(neuron=(32,16,8), look_back=1, numInputs=10, numOutputs=6)
  File "/home/momtaz/Dropbox/QuadCopter/rnnUtilQuad.py", line 72, in getAModelGRU
    model.add(GRU(units=neuron[i]))
  File "/home/momtaz/PycharmProjects/venv/lib/python3.6/site-packages/keras/engine/sequential.py", line 181, in add
    output_tensor = layer(self.outputs[0])
  File "/home/momtaz/PycharmProjects/venv/lib/python3.6/site-packages/keras/layers/recurrent.py", line 532, in __call__
    return super(RNN, self).__call__(inputs, **kwargs)
  File "/home/momtaz/PycharmProjects/venv/lib/python3.6/site-packages/keras/engine/base_layer.py", line 414, in __call__
    self.assert_input_compatibility(inputs)
  File "/home/momtaz/PycharmProjects/venv/lib/python3.6/site-packages/keras/engine/base_layer.py", line 311, in assert_input_compatibility
    str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer gru_2: expected ndim=3, found ndim=2

I searched online but did not find any solution for this 'ndim'-related problem.

Please let me know what I am doing wrong here.


1 Answer


You need to make sure that the input_shape argument is defined only on the first layer, and that every layer has return_sequences=True except possibly the last one (depending on your model).

The code below works for the common case where you want to stack several layers and only the number of units changes from layer to layer.

import tensorflow as tf

n_timesteps, n_inputs = 1, 10   # look_back and numInputs from the question

model = tf.keras.Sequential()

gru_options = [dict(units=units,
                    time_major=False,
                    # regularizers must be regularizer objects, not plain floats
                    kernel_regularizer=tf.keras.regularizers.l2(0.01),
                    # ... potentially more options
                    return_sequences=True) for units in [32, 16, 8]]
gru_options[0]['input_shape'] = (n_timesteps, n_inputs)
gru_options[-1]['return_sequences'] = False  # optionally disable sequences in the last layer.
                                             # If you want to return sequences in your last
                                             # layer, delete this line; however, it is necessary
                                             # if you want to connect this to a Dense layer,
                                             # for example.
for opts in gru_options:
    model.add(tf.keras.layers.GRU(**opts))

model.add(tf.keras.layers.Dense(6))
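
To sanity-check the stacked model, a quick shape test could look like the sketch below (assuming the snippet above has already been run; the 1 time step, 10 inputs, and 6 outputs are taken from the question):

import numpy as np

# Dummy batch: 4 samples, 1 time step, 10 features, matching input_shape=(1, 10).
x = np.random.random((4, 1, 10)).astype('float32')
print(model.predict(x).shape)   # expected: (4, 6)
model.summary()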

By the way, there is a bug in your code: the line after the else clause is not indented. Also note that Python classes implementing the Iterable protocol (such as lists and tuples) can be iterated directly with the for-in syntax, so you do not need C-style index-based iteration (the syntax above is more idiomatic, i.e. Pythonic).
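
For reference, here is a minimal sketch of the original getAModelGRU rewritten along these lines; it assumes the standalone keras imports visible in the question's traceback and that neuron is always passed as a tuple or list (note that (10) is just the integer 10 in Python, while (10,) is a one-element tuple):

from keras.models import Sequential
from keras.layers import GRU, Dense

def getAModelGRU(neuron=(10,), look_back=1, numInputs=1, numOutputs=1):
    model = Sequential()
    # Only the first layer gets input_shape; every GRU except the last returns
    # sequences so the next GRU still receives a 3-D (batch, time, features) input.
    for i, units in enumerate(neuron):
        kwargs = {'units': units, 'return_sequences': i < len(neuron) - 1}
        if i == 0:
            kwargs['input_shape'] = (look_back, numInputs)
        model.add(GRU(**kwargs))
    model.add(Dense(numOutputs))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

With this version, getAModelGRU(neuron=(32, 16, 8), look_back=1, numInputs=10, numOutputs=6) builds without the ndim error.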

answered 2020-09-14T18:42:49.923