
I am trying to predict images with a Siamese neural network, and I have model weights saved in .hdf5 format. First I load the image I want to predict, then I load the model weights, and finally I call .predict on that image. This is the code I tried:

img = cv2.imread('/Users/tania/Desktop/TEST/Pa/Pu/Pu - Copy (3).PNG')
siamese_model1.load_weights("/Users/tania/Desktop/weights/siamese_n1.hdf5")
siamese_model1.predict(img)

This is the error I got:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-65-789026f30db8> in <module>
      1 img = cv2.imread('/Users/tania/Desktop/TEST/Pa/Pu/Pu - Copy (3).PNG')
      2 siamese_model1.load_weights("/Users/tania/Desktop/weights/siamese_n1.hdf5")
----> 3 siamese_model1.predict(img)

/opt/miniconda3/envs/tensorflow/lib/python3.7/site-packages/keras/engine/training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
   1439 
   1440         # Case 2: Symbolic tensors or Numpy array-like.
-> 1441         x, _, _ = self._standardize_user_data(x)
   1442         if self.stateful:
   1443             if x[0].shape[0] > batch_size and x[0].shape[0] % batch_size != 0:

/opt/miniconda3/envs/tensorflow/lib/python3.7/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
    577             feed_input_shapes,
    578             check_batch_axis=False,  # Don't enforce the batch size.
--> 579             exception_prefix='input')
    580 
    581         if y is not None:

/opt/miniconda3/envs/tensorflow/lib/python3.7/site-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    107                 'Expected to see ' + str(len(names)) + ' array(s), '
    108                 'but instead got the following list of ' +
--> 109                 str(len(data)) + ' arrays: ' + str(data)[:200] + '...')
    110         elif len(names) > 1:
    111             raise ValueError(

ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        ...,
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]],

       [[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        ...,
...

How can I fix this? Is there any way to solve it?

The model summary is:

Model: "model_2"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, 105, 105, 1)  0                                            
__________________________________________________________________________________________________
input_2 (InputLayer)            (None, 105, 105, 1)  0                                            
__________________________________________________________________________________________________
model_1 (Model)                 (None, 4096)         38947648    input_1[0][0]                    
                                                                 input_2[0][0]                    
__________________________________________________________________________________________________
lambda_1 (Lambda)               (None, 4096)         0           model_1[1][0]                    
                                                                 model_1[2][0]                    
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 1)            4097        lambda_1[0][0]                   
==================================================================================================
Total params: 38,951,745
Trainable params: 38,951,745
Non-trainable params: 0
__________________________________________________________________________________________________

and the Siamese network is built like this:

import keras
from keras import backend as K
from keras.layers import Input, Dense
from keras.models import Model

# Siamese Network
def build_network(conv_model):
    # Two input branches, one per image of the pair
    input_shape = (105, 105, 1)
    input1 = Input(input_shape)
    input2 = Input(input_shape)

    # Shared convolutional encoder applied to both inputs
    model = conv_model(input_shape)
    model_output_left = model(input1)
    model_output_right = model(input2)

    # Element-wise L1 distance between the two embeddings
    def l1_distance(x):
        return K.abs(x[0] - x[1])

    def l1_distance_shape(x):
        print(x)
        return x[0]

    merged_model = keras.layers.Lambda(l1_distance)([model_output_left, model_output_right])
    #merged_model = merge([model_output_left, model_output_right], mode=l1_distance, output_shape=l1_distance_shape)

    # Sigmoid head: outputs a similarity score in [0, 1]
    output = Dense(1, activation='sigmoid')(merged_model)
    siamese_model = Model([input1, input2], output)
    return siamese_model

1 Answer


My guess is that your input shape does not match what the model expects. Re-check the input you are feeding the model by running img.shape, and make sure the image ends up with shape (105, 105, 1), which is what the model's input layers expect.
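A minimal sketch of that check, assuming the image should be loaded in grayscale with OpenCV and resized to 105x105 (the pixel scaling to [0, 1] is an assumption about how the model was trained; the path is the one from the question):

import cv2

# Load in grayscale so the image has a single channel, as in the (105, 105, 1) input
img = cv2.imread('/Users/tania/Desktop/TEST/Pa/Pu/Pu - Copy (3).PNG',
                 cv2.IMREAD_GRAYSCALE)
print(img.shape)                     # whatever size the file actually is, e.g. (H, W)

img = cv2.resize(img, (105, 105))    # match the model's spatial size
img = img.astype('float32') / 255.0  # assumption: pixels scaled to [0, 1] during training
img = img.reshape(105, 105, 1)       # add the channel dimension
print(img.shape)                     # (105, 105, 1)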

Also, since siamese_model.predict() expects its input as a batch, a single image of shape (105, 105, 1) is not compatible on its own. Make sure you reshape the image to (1, 105, 105, 1), which is equivalent to predicting with a batch size of 1.

TL;DR run the following code: img = img.reshape(1, 105, 105, 1)
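Note also that the traceback complains that the model expects 2 arrays, and the summary shows two input layers (input_1 and input_2), so predict() needs a list of two such batches, one per branch. A minimal sketch of a full call, assuming the test image is compared against a second reference image (reference.PNG is a hypothetical placeholder, not a path from the question):

import cv2

def load_image(path):
    # Read a grayscale image and shape it as a batch of one: (1, 105, 105, 1)
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (105, 105))
    img = img.astype('float32') / 255.0  # assumption: pixels scaled to [0, 1] during training
    return img.reshape(1, 105, 105, 1)

left = load_image('/Users/tania/Desktop/TEST/Pa/Pu/Pu - Copy (3).PNG')
right = load_image('/Users/tania/Desktop/TEST/Pa/Pu/reference.PNG')  # hypothetical second image

siamese_model1.load_weights("/Users/tania/Desktop/weights/siamese_n1.hdf5")
# The model takes a pair of inputs and outputs one similarity score in [0, 1]
score = siamese_model1.predict([left, right])
print(score)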

Answered 2020-03-18T08:21:28.397