
I am trying to feed TensorFlow datasets (read from .csv files) into a multi-input tf.keras model defined with the functional API. Training works fine when I pass these datasets zipped together with the labels. But when I call predict() (presumably on some different, unlabeled dataset), it raises an error, in both eager and non-eager execution.

Here is my current code:

import tensorflow as tf
import numpy as np

tf.enable_eager_execution()

# Define model.
input_A = tf.keras.layers.Input(shape=(None, 5), name='sensor_A_input')
x_1 = tf.keras.layers.LSTM(5, return_sequences=False, recurrent_initializer='glorot_uniform')(input_A)

input_B = tf.keras.layers.Input(shape=(None, 4), name='sensor_B_input')
x_2 = tf.keras.layers.LSTM(5, return_sequences=False, recurrent_initializer='glorot_uniform')(input_B)

x = tf.keras.layers.concatenate([x_1, x_2], name='concat_test')
output = tf.keras.layers.Dense(1, activation='sigmoid', name='output')(x)

model = tf.keras.Model(inputs=[input_A, input_B], outputs=output)

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.run_eagerly = tf.executing_eagerly()

# Define input data.
# Dataset 1 is read from 10 .csv files where one file is one timeseries observation sequence of length 100 and 5 dimensions.
dataset_1 = tf.data.Dataset.from_tensor_slices(np.random.rand(10, 100, 5))
# Dataset 2 is read from 10 .csv files where one file is one timeseries observation sequence of length 300 and 4 dimensions.
dataset_2 = tf.data.Dataset.from_tensor_slices(np.random.rand(10, 300, 4))

# Define labels.
labels = tf.data.Dataset.from_tensor_slices(np.random.randint(0, 2, (10, 1)))

# Zip inputs and output into one dataset.
input_with_labels = tf.data.Dataset.zip(((dataset_1, dataset_2), labels)).batch(10)
model.fit(input_with_labels)

# Here's the problem - how should the input be arranged?
zipped_input = tf.data.Dataset.zip((dataset_1, dataset_2)).batch(10)
predictions = model.predict(zipped_input)
print(predictions)

Here is the error:

ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [<tf.Tensor: id=71049, shape=(10, 100, 5), dtype=float64, numpy=
array([[[0.54049765, 0.64218937, 0.31734092, 0.81307839, 0.75465237],
        [0.32371089, 0.85923477, 0.60619924, 0.68692891, 0.186234...

Full traceback:

Traceback (most recent call last):
  File "C:/xxx/debug_multiple_input_model.py", line 39, in <module>
    model.predict(zipped_input)
  File "C:\env_path\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1054, in predict
    callbacks=callbacks)
  File "C:\env_path\lib\site-packages\tensorflow\python\keras\engine\training_generator.py", line 264, in model_iteration
    batch_outs = batch_function(*batch_data)
  File "C:\env_path\lib\site-packages\tensorflow\python\keras\engine\training_generator.py", line 536, in predict_on_batch
    return model.predict_on_batch(x)
  File "C:\env_path\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1281, in predict_on_batch
    x, extract_tensors_from_dataset=True)
  File "C:\env_path\lib\site-packages\tensorflow\python\keras\engine\training.py", line 2651, in _standardize_user_data
    exception_prefix='input')
  File "C:\env_path\lib\site-packages\tensorflow\python\keras\engine\training_utils.py", line 346, in standardize_input_data
    str(len(data)) + ' arrays: ' + str(data)[:200] + '...')
ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [<tf.Tensor: id=71049, shape=(10, 100, 5), dtype=float64, numpy=
array([[[0.54049765, 0.64218937, 0.31734092, 0.81307839, 0.75465237],
        [0.32371089, 0.85923477, 0.60619924, 0.68692891, 0.186234...

I have also tried calling predict in these ways:

1:

model.predict_generator(zipped_input)

which results in the same error.

2:

model.predict((dataset_1, dataset_2))

which throws this error:

AttributeError: 'DatasetV1Adapter' object has no attribute 'shape'

2 Answers

It turns out that a 1-element tuple should be passed as the input to the tf.data.Dataset.zip() function:

zipped_input = tf.data.Dataset.zip(((dataset_1, dataset_2), )).batch(10)

This produces the expected result.
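For reference, the fixed prediction step from the question then looks like this (the trailing comma inside zip() is the only change):

# Wrapping (dataset_1, dataset_2) in a 1-element tuple makes Keras treat
# the pair as the single multi-input structure of each batch element,
# rather than as two separate top-level dataset components.
zipped_input = tf.data.Dataset.zip(((dataset_1, dataset_2), )).batch(10)
predictions = model.predict_generator(zipped_input)
print(predictions.shape)  # one sigmoid output per observation: (10, 1)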

Answered 2019-07-26T22:43:37.413

Another possible workaround:

# Pull next-element tensors from each dataset and feed them directly.
ds1_iter = dataset_1.make_one_shot_iterator()
ds1_next = ds1_iter.get_next()

ds2_iter = dataset_2.make_one_shot_iterator()
ds2_next = ds2_iter.get_next()

prediction = model.predict(x={'first_input_name': ds1_next,
                              'second_input_name': ds2_next},
                           steps=N)

where N is the length of the dataset. Note that the dictionary keys must match the name arguments of the corresponding Input layers ('sensor_A_input' and 'sensor_B_input' in the question's model).
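Applied to the question's model, a rough sketch of the same idea (assuming graph mode, where make_one_shot_iterator() is available, and batching the datasets first so each get_next() yields the 3-D (batch, timesteps, features) tensors the LSTM inputs expect):

# Batch first so get_next() returns (batch, timesteps, features) tensors.
ds1_next = dataset_1.batch(10).make_one_shot_iterator().get_next()
ds2_next = dataset_2.batch(10).make_one_shot_iterator().get_next()

# Keys must match the name= arguments of the Input layers.
prediction = model.predict(x={'sensor_A_input': ds1_next,
                              'sensor_B_input': ds2_next},
                           steps=1)  # one step covers the whole 10-element batch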

Answered 2019-08-27T21:34:59.677