
In the TensorFlow transfer-learning retrain.py example, they fetch the bottleneck values for each image one at a time:

image_data = tf.gfile.FastGFile(image_path, 'rb').read()
...
bottleneck_values = run_bottleneck_on_image(sess, image_data,
                                            jpeg_data_tensor,
                                            decoded_image_tensor,
                                            resized_input_tensor,
                                            bottleneck_tensor)

Inside run_bottleneck_on_image, for each image_data they do:

# First decode the JPEG image, resize it, and rescale the pixel values.
resized_input_values = sess.run(decoded_image_tensor,
                                {image_data_tensor: image_data})
# Then run it through the recognition network.
bottleneck_values = sess.run(bottleneck_tensor,
                             {resized_input_tensor: resized_input_values})
bottleneck_values = np.squeeze(bottleneck_values)
return bottleneck_values

Is there a way to get the bottleneck values for a BATCH of images at once, instead of running them one by one, which is much slower and less efficient on a GPU?
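One idea (a sketch, not something retrain.py itself does): since resized_input_tensor is normally defined with a None batch dimension, the per-image resized arrays can be stacked into a single (batch, height, width, channels) array and pushed through the network in one sess.run call. The image size and batch size below are illustrative assumptions:

```python
import numpy as np

# Sketch: assume each decoded/resized image is a (299, 299, 3) float32
# array, as produced by the decode-and-resize step above (sizes are
# illustrative and depend on the chosen architecture).
resized_values = [np.random.rand(299, 299, 3).astype(np.float32)
                  for _ in range(8)]

# Stack into one (batch, height, width, channels) array. If
# resized_input_tensor has a None batch dimension, the whole batch can
# then be fed in a single sess.run call instead of one call per image:
#
#   bottleneck_batch = sess.run(bottleneck_tensor,
#                               {resized_input_tensor: batch})
#
batch = np.stack(resized_values)
print(batch.shape)  # (8, 299, 299, 3)
```

The decode step itself (decoded_image_tensor) still operates on one JPEG string at a time in this graph, so the decoding loop would remain per-image; only the expensive forward pass through the recognition network is batched.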

