
I am using TensorFlow to run a convolutional neural network on the MNIST database, but I am getting the following error:

tensorflow.python.framework.errors.InvalidArgumentError: You must feed a value for placeholder tensor 'x' with dtype float [[Node: x = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

x = tf.placeholder(tf.float32, [None, 784], name='x') # mnist data image of shape 28*28=784

I thought I was correctly updating the value of x using feed_dict, but it says I haven't fed a value for the placeholder x.

Also, is there any other logical flaw in my code?

Any help would be greatly appreciated. Thanks.

import tensorflow as tf
import numpy
from tensorflow.examples.tutorials.mnist import input_data

def conv2d(x, W):
  return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
  return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                        strides=[1, 2, 2, 1], padding='SAME')

def weight_variable(shape):
  initial = tf.truncated_normal(shape, stddev=0.1)
  return tf.Variable(initial)

def bias_variable(shape):
  initial = tf.constant(0.1, shape=shape)
  return tf.Variable(initial)


mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Parameters
learning_rate = 0.01
training_epochs = 10
batch_size = 100
display_step = 1

# tf Graph Input
#x = tf.placeholder(tf.float32, [50, 784], name='x') # mnist data image of shape 28*28=784
#y = tf.placeholder(tf.float32, [50, 10], name='y') # 0-9 digits recognition => 10 classes

# Set model weights
W = tf.Variable(tf.zeros([784, 10]), name="weights")
b = tf.Variable(tf.zeros([10]), name="bias")

W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])


W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])


W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])

W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])

# Initializing the variables
init = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init)


    # Training cycle
    for i in range(1000):
        print i
        batch_xs, batch_ys = mnist.train.next_batch(50)

        x_image = tf.reshape(x, [-1,28,28,1])

        h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
        h_pool1 = max_pool_2x2(h_conv1)

        h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
        h_pool2 = max_pool_2x2(h_conv2)

        h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
        h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)


        y_conv=tf.nn.softmax(tf.matmul(h_fc1, W_fc2) + b_fc2)

        cross_entropy = tf.reduce_mean(-tf.reduce_sum(y * tf.log(y_conv), reduction_indices=[1]))
        sess.run(
          [cross_entropy, y_conv],
          feed_dict={x: batch_xs, y: batch_ys})

        correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y,1))
        print correct_prediction.eval()
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

3 Answers


You are getting that error because you are trying to eval() correct_prediction. That tensor needs the batch inputs (x and y) in order to be evaluated. You can fix the error by changing it to:

print correct_prediction.eval(feed_dict={x: batch_xs, y: batch_ys})

But as Benoit Steiner mentioned, you can easily pull it into your model.

More generally, you are not doing any optimization here, but maybe you just haven't gotten around to that yet. As it stands, it will just print out bad predictions for a while. :)
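For illustration, here is a rough sketch of what that training loop might look like with the graph built once and an optimizer added. It assumes the commented-out x and y placeholders are restored; the choice of GradientDescentOptimizer and its 0.5 learning rate are arbitrary placeholders of mine, not from the original post.

# Sketch only: x, y, cross_entropy and accuracy are assumed to be built
# once, outside the loop, exactly as in the question's model code.
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    for i in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(50)
        # Every op that depends on the placeholders needs the same feed_dict.
        _, acc = sess.run([train_step, accuracy],
                          feed_dict={x: batch_xs, y: batch_ys})
        if i % 100 == 0:
            print i, acc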

Answered 2016-04-26T01:00:01.357

Why are you creating the placeholder variables at all? You should be able to use the outputs generated by mnist.train.next_batch(50) directly, provided that you move the computation of correct_prediction and accuracy inside the model itself:

batch_xs, batch_ys = mnist.train.next_batch(50)
x_image = tf.reshape(batch_xs, [-1,28,28,1])
...
cross_entropy = tf.reduce_mean(-tf.reduce_sum(batch_ys * tf.log(y_conv), reduction_indices=[1]))
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(batch_ys,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
_, _, predictions_correct, acc = sess.run([cross_entropy, y_conv, correct_prediction, accuracy])
print predictions_correct, acc
Answered 2016-04-26T00:45:03.090

First off, your x and y are commented out; if that is the case in your actual code, it is most likely the problem.

correct_prediction.eval() is equivalent to tf.Session.run(correct_prediction) (or in your case sess.run()), and therefore requires the same syntax. So it needs to be correct_prediction.eval(feed_dict={x: batch_xs, y: batch_ys}) in order to run, but be aware that this is usually RAM-intensive and may cause your system to hang. Pulling the accuracy function into the model may be a good idea because of the RAM usage.
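For reference, a minimal sketch of the pieces that error message is asking for, using only the commented-out placeholder lines from the question (shape [None, ...] so any batch size works):

# Sketch: restore the commented-out placeholders so feed_dict has names to bind to.
x = tf.placeholder(tf.float32, [None, 784], name='x')  # 28*28 flattened images
y = tf.placeholder(tf.float32, [None, 10], name='y')   # one-hot digit labels

# ... build y_conv and correct_prediction as in the question ...

# eval() needs the same feed that sess.run() would:
print correct_prediction.eval(feed_dict={x: batch_xs, y: batch_ys})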

I don't see an optimization function making use of your cross entropy, but I have never tried not using one, so if it works, don't fix it. But if it ends up throwing an error, you might want to try:

optimizer = tf.train.AdamOptimizer().minimize(cross_entropy)

and replace 'cross_entropy' in

sess.run([cross_entropy, y_conv],feed_dict={x: batch_xs, y: batch_ys})

with 'optimizer'.
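That is, the training call would look roughly like this (a sketch; keeping cross_entropy in the fetch list is optional and only there so you can still watch the loss):

# Sketch: the optimizer op performs the weight update; cross_entropy is
# fetched alongside it purely for monitoring.
optimizer = tf.train.AdamOptimizer().minimize(cross_entropy)

_, loss = sess.run([optimizer, cross_entropy],
                   feed_dict={x: batch_xs, y: batch_ys})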

https://pythonprogramming.net/tensorflow-neural-network-session-machine-learning-tutorial/

Check the accuracy evaluation section of that script.

Answered 2017-01-19T06:12:15.083