
My network starts learning and looks fine on the first batch, then suddenly stops with a TypeError on the second batch! Why is the first batch fine? Or why does it break after the first one? A baffling error... here are the details:

I have built a CNN that tries to predict 124 features for each image. The images are 61 x 72 pixels, and the output is a 124 x 1 vector of numbers (a quick sanity check of these sizes appears right after the code below). The images are matrices of floating-point numbers between -1 and 1. The information I am trying to predict is in a CSV file, where each row describes one image. When I load the data for the training process, I process every row and reshape it, and I also get back the pictures the network is learning from. However, when I run my program, the following error appears on the second batch:

"TypeError: Fetch argument 2.7674865e+09 has invalid type <class 'numpy.float32'>, must be a string or Tensor. (Can not convert a float32 into a Tensor or Operation.)"

Can you help me find what is wrong? Here is my code:

import tensorflow as tf
import numpy as np
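
keep_rate = 0.8  # dropout keep probability: used by tf.nn.dropout below but missing from the posted code (0.8 is an assumed value)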

data_in=np.loadtxt(open("images.csv"), delimiter=',',dtype=np.float32);
data_out=np.loadtxt(open("outputmix-124.csv"), 
          delimiter=',',dtype=np.float32);

x_train = data_in[0:6000, :]
x_test = data_in[6000:10000,:]
y_train = data_out[0:6000, :]
y_test = data_out[6000:10000, :]

batch=600
epochs=10

n = x_test.shape[1] #4392
m = x_train.shape[0] #6000
d = y_test.shape[1]  #124
l = y_test.shape[0]     #4000

trainX = tf.placeholder(tf.float32, [batch, n], name="X")
trainY = tf.placeholder(tf.float32, [batch, d])

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')


def maxpool2d(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], 
           padding='SAME')


def convolutional_neural_network(x):
    weights = {'W_c1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
               'W_c2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
               'W_fc': tf.Variable(tf.random_normal([18 * 16 * 64, 1024])),
               'out': tf.Variable(tf.random_normal([1024, d]))}

    biases = {'b_c1': tf.Variable(tf.random_normal([32])),
              'b_c2': tf.Variable(tf.random_normal([64])),
              'b_fc': tf.Variable(tf.random_normal([1024])),
              'out': tf.Variable(tf.random_normal([d]))}

    x = tf.reshape(x, shape=[-1, 61, 72, 1])

    conv1 = tf.nn.relu(conv2d(x, weights['W_c1']) + biases['b_c1'])
    conv1 = maxpool2d(conv1)

    conv2 = tf.nn.relu(conv2d(conv1, weights['W_c2']) + biases['b_c2'])
    conv2 = maxpool2d(conv2)

    fc = tf.reshape(conv2, [-1, 18 * 16 * 64])
    fc = tf.nn.relu(tf.matmul(fc, weights['W_fc']) + biases['b_fc'])
    fc = tf.nn.dropout(fc, keep_rate)

    output = tf.matmul(fc, weights['out']) + biases['out']

    return output

def train_neural_network(x):
    prediction = convolutional_neural_network(x)
    cost = tf.reduce_mean(tf.pow(prediction - trainY, 2))
    optimizer = tf.train.AdamOptimizer().minimize(cost)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        for epoch in range(epochs):
            epoch_loss = 0
            for i in np.linspace(0, m - batch, m // batch, dtype=np.int32):
                x = x_train[i:i + batch, :]
                y = y_train[i:i + batch, :]
                sess.run(optimizer, feed_dict={trainX: x, trainY: y})
                cost = sess.run(cost, feed_dict={trainX: x, trainY: y})
                print("Epoch=", '%04d' % (epoch + 1), "loss=", " 
                      {:.9f}".format(cost))
                epoch_loss += cost

            print('Epoch', epoch, 'completed out of', epochs, 'loss:', 
                 epoch_loss)

train_neural_network(trainX)
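
For reference, a quick sanity check of the fixed sizes used above (61 x 72 inputs; each 2 x 2 'SAME' max-pool rounds the spatial size up when halving, which is where the 18 * 16 * 64 in W_fc comes from):

import math
assert 61 * 72 == 4392                # flattened input length n
h = math.ceil(math.ceil(61 / 2) / 2)  # 61 -> 31 -> 16
w = math.ceil(math.ceil(72 / 2) / 2)  # 72 -> 36 -> 18
assert h * w * 64 == 18 * 16 * 64     # flattened size fed into W_fc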

1 Answer


This is a fairly typical error. The problem is the variable cost. First, in the second line of train_neural_network(), you assign the loss-calculation tensor to it:

cost = tf.reduce_mean(tf.pow(prediction - trainY, 2))

Then, when you run the training and the cost calculation, you do the following, and this is where things go wrong:

cost = sess.run(cost, feed_dict={trainX: x, trainY: y})

Because you assign the value of the loss to cost, it is now a plain float rather than a tensor. The next sess.run() therefore receives a float instead of a tensor as its first argument and prints the error above.
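
The failure mode is easy to reproduce in isolation, which also shows why the first batch succeeds and only the second one fails (a minimal sketch, independent of the model above):

import tensorflow as tf

loss = tf.constant(2.0)    # 'loss' refers to a Tensor
with tf.Session() as sess:
    loss = sess.run(loss)  # first call works, but 'loss' is now a numpy float
    loss = sess.run(loss)  # second call raises the TypeError above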

Use something like cost_val to store the value of the loss, and leave cost holding the reference to the tensor. You of course also need to update the line that prints the value, so I changed these three lines:

cost_val = sess.run(cost, feed_dict={trainX: x, trainY: y})
print("Epoch=", '%04d' % (epoch + 1), "loss=", " {:.9f}".format(cost_val))
epoch_loss += cost_val
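
As a side note, the two fetches can also be combined into a single sess.run() call, which avoids running the forward pass twice per batch (the underscore discards the optimizer's return value):

_, cost_val = sess.run([optimizer, cost], feed_dict={trainX: x, trainY: y})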

Here is the complete revised version (tested code; note that I generate random data instead of loading it, so this is a runnable, testable example for anyone, but you will need to change it back to load your actual data):

import tensorflow as tf
import numpy as np

keep_rate = 0.8

#data_in=np.loadtxt(open("images.csv"), delimiter=',',dtype=np.float32);
#data_out=np.loadtxt(open("outputmix-124.csv"), 
#          delimiter=',',dtype=np.float32);

data_in = np.random.normal( size = ( 10000, 4392 ) )
data_out = np.random.normal( size = ( 10000, 124 ) )

x_train = data_in[0:6000, :]
x_test = data_in[6000:10000,:]
y_train = data_out[0:6000, :]
y_test = data_out[6000:10000, :]

batch=600
epochs=10

n = x_test.shape[1] #4392
m = x_train.shape[0] #6000
d = y_test.shape[1]  #124
l = y_test.shape[0]     #4000

trainX = tf.placeholder(tf.float32, [batch, n], name="X")
trainY = tf.placeholder(tf.float32, [batch, d])

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')


def maxpool2d(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], 
           padding='SAME')


def convolutional_neural_network(x):
    weights = {'W_c1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
               'W_c2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
               'W_fc': tf.Variable(tf.random_normal([18 * 16 * 64, 1024])),
               'out': tf.Variable(tf.random_normal([1024, d]))}

    biases = {'b_c1': tf.Variable(tf.random_normal([32])),
              'b_c2': tf.Variable(tf.random_normal([64])),
              'b_fc': tf.Variable(tf.random_normal([1024])),
              'out': tf.Variable(tf.random_normal([d]))}

    x = tf.reshape(x, shape=[-1, 61, 72, 1])

    conv1 = tf.nn.relu(conv2d(x, weights['W_c1']) + biases['b_c1'])
    conv1 = maxpool2d(conv1)

    conv2 = tf.nn.relu(conv2d(conv1, weights['W_c2']) + biases['b_c2'])
    conv2 = maxpool2d(conv2)

    fc = tf.reshape(conv2, [-1, 18 * 16 * 64])
    fc = tf.nn.relu(tf.matmul(fc, weights['W_fc']) + biases['b_fc'])
    fc = tf.nn.dropout(fc, keep_rate)

    output = tf.matmul(fc, weights['out']) + biases['out']

    return output

def train_neural_network(x):
    prediction = convolutional_neural_network(x)
    cost = tf.reduce_mean(tf.pow(prediction - trainY, 2))
    optimizer = tf.train.AdamOptimizer().minimize(cost)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        for epoch in range(epochs):
            epoch_loss = 0
            for i in np.linspace(0, m - batch, m // batch, dtype=np.int32):
                x = x_train[i:i + batch, :]
                y = y_train[i:i + batch, :]
                sess.run(optimizer, feed_dict={trainX: x, trainY: y})
                cost_val = sess.run(cost, feed_dict={trainX: x, trainY: y})
                print("Epoch=", '%04d' % (epoch + 1), "loss=", " {:.9f}".format(cost_val))
                epoch_loss += cost_val

            print('Epoch', epoch, 'completed out of', epochs, 'loss:', 
                 epoch_loss)

train_neural_network(trainX)
answered Apr 15, 2018 at 22:57