In non-eager (graph) mode, I can run this without any problem:
import tensorflow as tf

s = tf.complex(tf.Variable(1.0), tf.Variable(1.0))
train_op = tf.train.AdamOptimizer(0.01).minimize(tf.abs(s))
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(5):
        _, s_ = sess.run([train_op, s])
        print(s_)
(1+1j)
(0.99+0.99j)
(0.98+0.98j)
(0.9700001+0.9700001j)
(0.9600001+0.9600001j)
But I can't seem to find an equivalent in eager mode. I tried the following, but TF complains:
tf.enable_eager_execution()
tfe = tf.contrib.eager

s = tf.complex(tfe.Variable(1.0), tfe.Variable(1.0))
optimizer = tf.train.AdamOptimizer(0.01)

def obj(s):
    return tf.abs(s)

with tf.GradientTape() as tape:
    loss = obj(s)
grads = tape.gradient(loss, [s])
optimizer.apply_gradients(zip(grads, [s]))
The dtype of the source tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.complex64
and
No gradients provided for any variable:
['tf.Tensor((1+1j), shape=(), dtype=complex64)']
How do I train a complex variable in eager mode?
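
For reference, the only approach that looks plausible to me is to keep the real and imaginary parts as separate float variables and take the gradient with respect to those rather than the complex tensor, roughly as sketched below (this just mirrors what the graph-mode version effectively differentiates). I'm not sure whether this is the intended idiom, or whether there is a way to differentiate with respect to s directly:

# Possible workaround (sketch, not verified as the intended idiom):
# differentiate w.r.t. the underlying float variables instead of the complex tensor.
import tensorflow as tf

tf.enable_eager_execution()
tfe = tf.contrib.eager

real = tfe.Variable(1.0)   # real part, float32 -> valid gradient source
imag = tfe.Variable(1.0)   # imaginary part, float32 -> valid gradient source
optimizer = tf.train.AdamOptimizer(0.01)

for i in range(5):
    with tf.GradientTape() as tape:
        s = tf.complex(real, imag)  # rebuild the complex tensor inside the tape
        loss = tf.abs(s)
    grads = tape.gradient(loss, [real, imag])
    optimizer.apply_gradients(zip(grads, [real, imag]))
    print(tf.complex(real, imag).numpy())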