
My input to softmax in y = tf.nn.softmax(tf.matmul(x, W) + b) is a matrix of values:

tf.matmul(x, W) + b =
[[  9.77206726e+02]
 [  5.72391296e+02]
 [  3.53560760e+02]
 [  4.75727379e-01]
 [  6.58911804e+02]]

But when it is fed through softmax, I get:

tf.nn.softmax(tf.matmul(x, W) + b) =
[[ 1.]
 [ 1.]
 [ 1.]
 [ 1.]
 [ 1.]]

This makes my training output an array of 1s, which means that neither the weights W nor the bias b is updated on any batch of training data. It also makes my accuracy come out as 1 on a set of random test data.
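For reference, the same all-1s behaviour shows up with a plain numpy version of softmax over the last axis (a quick sketch; the softmax helper below is my own re-implementation, with the logits copied from the output above):

import numpy as np

# re-implementation of what tf.nn.softmax computes along the last axis
def softmax(z, axis=-1):
    e = np.exp(z - np.max(z, axis=axis, keepdims=True))  # shift by the row max for stability
    return e / np.sum(e, axis=axis, keepdims=True)

logits = np.array([[977.2], [572.4], [353.6], [0.476], [658.9]])
print(softmax(logits))  # each row normalizes over a single value, so every entry is 1.0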

Here is my code:

import numpy as np
import tensorflow as tf

## training_inputs and training_outputs are assumed to be loaded elsewhere
x = tf.placeholder(tf.float32, [None, 2])

W = tf.Variable(tf.random_normal([2, 1]))

b = tf.Variable(tf.random_normal([1]))

y = tf.nn.softmax(tf.matmul(x, W) + b)

## placeholder for the true labels used by the cross-entropy
y_ = tf.placeholder(tf.float32, [None, 1])

## cross-entropy function
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))

## backpropagation & gradient descent
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

## initialize variables
init = tf.initialize_all_variables()

sess = tf.Session()
sess.run(init)

ITER_RANGE = 10
EVAL_BATCH_SIZE = ( len(training_outputs)/ITER_RANGE )
training_outputs = np.reshape(training_outputs, (300, 1))
## training
for i in range(ITER_RANGE):
  print 'iterator:'
  print i

  ## batch out training data
  BEGIN = ( i*EVAL_BATCH_SIZE )
  END = ( (i*EVAL_BATCH_SIZE) + EVAL_BATCH_SIZE )

  batch_ys = training_outputs[BEGIN:END]
  batch_xs = training_inputs[BEGIN:END]

  print 'batch_xs'
  print batch_xs

  print 'batch_ys'
  print batch_ys

  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

  # y = tf.nn.softmax(tf.matmul(x, W) + b)
  print 'y'
  print (sess.run(y, feed_dict={x: batch_xs, y_: batch_ys}))

  #print 'x'
  #print sess.run(x)

  print 'W'
  print sess.run(W)

  print 'b'
  print sess.run(b)

  print 'tf.matmul(x, W) + b'
  print sess.run(tf.matmul(x, W) + b, feed_dict={x: batch_xs, y_: batch_ys})

  print 'tf.nn.softmax(tf.matmul(x, W) + b)'
  print sess.run((tf.nn.softmax(tf.matmul(x, W) + b)), feed_dict={x: batch_xs, y_: batch_ys})

correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))

accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

test_outputs = np.random.rand(300, 1)

## the following prints 1
print(sess.run(accuracy, feed_dict={x: test_inputs, y_: test_outputs}))

4 Answers


It looks like you have only two classes, {yes, no}, and tf.matmul(x, W) + b represents the logit for {yes}. In that case you should use tf.nn.sigmoid_cross_entropy_with_logits instead of softmax. Something like:

y_pred = tf.matmul(x, W) + b
loss = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(labels=y_, logits=y_pred))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
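If you then need probabilities or an accuracy, a minimal sketch (reusing y_pred and the y_ placeholder from the question, and assuming 0/1 labels) is to pass the logit through a sigmoid and threshold it:

## probability of {yes} comes from the sigmoid of the logit, not softmax
y_prob = tf.sigmoid(y_pred)
## hard prediction: threshold at 0.5 and compare with the 0/1 labels
correct_prediction = tf.equal(tf.round(y_prob), y_)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))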
answered 2017-10-19T03:26:15.493

It seems your softmax function is being applied to each value in the output vector individually. Try transposing your output, i.e. change tf.nn.softmax(tf.matmul(x, W) + b) to tf.nn.softmax(tf.transpose(tf.matmul(x, W) + b)).
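A quick way to check that claim (reusing sess, x and batch_xs from the question):

logits = tf.matmul(x, W) + b                      # shape [batch_size, 1]
per_row = tf.nn.softmax(logits)                   # one value per row -> all 1s
transposed = tf.nn.softmax(tf.transpose(logits))  # shape [1, batch_size]
print(sess.run(transposed, feed_dict={x: batch_xs}))

Be aware that the transposed version normalizes across the examples in the batch rather than across classes, so this only makes sense if the batch dimension really is your class dimension.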

answered 2016-09-08T06:02:49.643

The cross-entropy loss is incomplete. Use cross-entropy with logits.
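A minimal sketch of that suggestion, assuming the model is widened to two output columns so softmax has something to normalize over (W shaped [2, 2], b shaped [2], and y_ holding one-hot {yes, no} labels of shape [None, 2]):

logits = tf.matmul(x, W) + b
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

The with-logits version also avoids the numerical problems of computing -y_ * tf.log(y) by hand when y reaches 0.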

answered 2017-10-19T06:28:08.867

By the definition of Softmax, it "squashes a K-dimensional vector of arbitrary real values to a K-dimensional vector of real values in the range (0, 1) that add up to 1".

If there is only 1 output value, then the categorical probability distribution Softmax produces is just [1], rather than a set of values that add up to 1.
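Concretely, softmax(z)_i = exp(z_i) / sum_j exp(z_j), so for K = 1 the single output is exp(z_1) / exp(z_1) = 1 no matter what z_1 is. A two-line check (shifting the logit by itself first, since exp(977.2) on its own would overflow a float):

import math

z = 977.2  # the first logit from the question's output
print(math.exp(z - z) / math.exp(z - z))  # exp(z)/exp(z) -> 1.0 for any z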

answered 2016-09-08T02:07:49.357