
I have defined an unsupervised problem in TensorFlow. I need to update both my B and my tfZ on every iteration, but I don't know how to update tfZ using a TensorFlow session.

import tensorflow as tf  # TensorFlow 1.x

# Z is a NumPy array of shape (15, 2) and Y of shape (15, 15), defined elsewhere
tfY = tf.placeholder(shape=(15, 15), dtype=tf.float32)

with tf.variable_scope('test'):
    B = tf.Variable(tf.zeros([]))
    tfZ = tf.convert_to_tensor(Z, dtype=tf.float32)

def loss(tfY):
    r = tf.reduce_sum(tfZ*tfZ, 1)
    r = tf.reshape(r, [-1, 1])
    D = tf.sqrt(r - 2*tf.matmul(tfZ, tf.transpose(tfZ)) + tf.transpose(r) + 1e-9)
    return tf.reduce_sum(tfY*tf.log(tf.sigmoid(D+B))+(1-tfY)*tf.log(1-tf.sigmoid(D+B)))

LOSS = loss(tfY)
GRADIENT = tf.gradients(LOSS, [B, tfZ])

sess = tf.Session()
sess.run(tf.global_variables_initializer())

tot_loss = sess.run(LOSS, feed_dict={tfY: Y})

loss_grad = sess.run(GRADIENT, feed_dict={tfY: Y})

learning_rate = 1e-4
for i in range(1000):
    sess.run(B.assign(B - learning_rate * loss_grad[0]))
    print(tfZ)
    sess.run(tfZ.assign(tfZ - learning_rate * loss_grad[1]))

    tot_loss = sess.run(LOSS, feed_dict={tfY: Y})
    if i%10==0:
        print(tot_loss)

This code prints the following:

Tensor("test_18/Const:0", shape=(15, 2), dtype=float32)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-35-74ddafc0bf3a> in <module>()
     25     sess.run(B.assign(B - learning_rate * loss_grad[0]))
     26     print(tfZ)
---> 27     sess.run(tfZ.assign(tfZ - learning_rate * loss_grad[1]))
     28 
     29     tot_loss = sess.run(LOSS, feed_dict={tfY: Y})

AttributeError: 'Tensor' object has no attribute 'assign'

The Tensor object indeed has no assign attribute, but I cannot find any other function attached to the object that would do this. How do I update my tensor correctly?


1 Answer


Unlike tf.Variable, tf.Tensor does not provide an assign method; if the tensor is mutable, you have to call the tf.assign function explicitly:

tf.assign(tfZ, tfZ - learning_rate * loss_grad[1])

Update: not all tensors are mutable, and your tfZ, for example, is not. As of now, the only mutable tensors are those that correspond to variables, as explained in this answer (at least in TensorFlow 1.x; this may be extended in the future). Ordinary tensors are handles to the result of an op, i.e. they are bound to that operation and its inputs. To change an immutable tensor's value, you have to change the source tensor (a placeholder or a variable). In your particular case, it would also be easier to make tfZ a variable.
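For illustration, here is a minimal sketch of how tfZ could be declared as a variable instead of being converted to a plain tensor; it assumes Z is the same NumPy array of shape (15, 2) as in your snippet:

with tf.variable_scope('test'):
    B = tf.Variable(tf.zeros([]))
    # Wrapping Z in a Variable (instead of tf.convert_to_tensor) gives the
    # resulting tfZ an assign() method and makes it updatable in-place.
    tfZ = tf.Variable(Z, dtype=tf.float32)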

By the way, tf.Variable.assign() is just a wrapper around tf.assign, and in either case the resulting op has to be run within a session for the assignment to actually take effect.
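A small sketch of that point, reusing the names from your code (B and loss_grad[0]): the assign call only builds an op, and nothing changes until that op is run.

assign_B = B.assign(B - learning_rate * loss_grad[0])  # builds an op, no side effect yet
sess.run(assign_B)   # the assignment actually happens here
print(sess.run(B))   # now reflects the updated value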

Note that in both cases a new node is added to the graph. If you call it in a loop (as in your snippet), the graph will be inflated by thousands of nodes. Doing this in real production code is bad practice, because it can easily cause OOM.
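To avoid that, one possible restructuring (a sketch, assuming tfZ has been made a tf.Variable as suggested above) builds the gradient and update ops once, before the loop, and only runs them inside it:

grad_B, grad_Z = tf.gradients(LOSS, [B, tfZ])
update_B = B.assign_sub(learning_rate * grad_B)    # ops are built once, outside the loop
update_Z = tfZ.assign_sub(learning_rate * grad_Z)

sess.run(tf.global_variables_initializer())
for i in range(1000):
    sess.run([update_B, update_Z], feed_dict={tfY: Y})
    if i % 10 == 0:
        print(sess.run(LOSS, feed_dict={tfY: Y}))

A more idiomatic alternative is to let an optimizer such as tf.train.GradientDescentOptimizer build these update ops for you via minimize().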

Answered 2018-03-07 10:33