As stated at https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Optimizer?hl=en#minimize, the first argument of minimize() must be:
"Tensor or callable. If a callable, loss should take no arguments and return the value to minimize. If a Tensor, the tape argument must be passed."
The first snippet below passes a Tensor to minimize(), so it needs a gradient tape, but I don't know how to supply one.
The second snippet passes a callable to minimize(), which is straightforward.
import numpy as np
import tensorflow as tf
from tensorflow import keras

x_train = [1, 2, 3]
y_train = [1, 2, 3]

W = tf.Variable(tf.random.normal([1]), name='weight')
b = tf.Variable(tf.random.normal([1]), name='bias')
hypothesis = W * x_train + b

@tf.function
def cost():
    y_model = W * x_train + b
    error = tf.reduce_mean(tf.square(y_train - y_model))
    return error

optimizer = tf.optimizers.SGD(learning_rate=0.01)
cost_value = cost()
# cost_value is a Tensor, so this call fails: minimize() also needs the tape argument
train = tf.keras.optimizers.Adam().minimize(cost_value, var_list=[W, b])
tf.print(W)
tf.print(b)
How do I add the gradient tape? I know the code below definitely works.
import numpy as np
import tensorflow as tf
from tensorflow import keras

x_train = [1, 2, 3]
y_train = [1, 2, 3]

W = tf.Variable(tf.random.normal([1]), name='weight')
b = tf.Variable(tf.random.normal([1]), name='bias')
hypothesis = W * x_train + b

@tf.function
def cost():
    y_model = W * x_train + b
    error = tf.reduce_mean(tf.square(y_train - y_model))
    return error

optimizer = tf.optimizers.SGD(learning_rate=0.01)
cost_value = cost()
train = tf.keras.optimizers.Adam().minimize(cost, var_list=[W, b])
tf.print(W)
tf.print(b)
Please help me modify the first snippet so that it runs, thanks!
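Based on the doc sentence quoted above, my guess (just a sketch, I am not sure this is the intended use of the tape parameter) is that the loss Tensor has to be computed while a tf.GradientTape is recording, and that the same tape then gets passed to minimize() through its tape argument, roughly like this:

import tensorflow as tf

x_train = [1, 2, 3]
y_train = [1, 2, 3]

W = tf.Variable(tf.random.normal([1]), name='weight')
b = tf.Variable(tf.random.normal([1]), name='bias')

@tf.function
def cost():
    y_model = W * x_train + b
    return tf.reduce_mean(tf.square(y_train - y_model))

# Guess: compute the loss Tensor while the tape is recording, then hand that
# same tape to minimize() so it can take gradients of the Tensor w.r.t. [W, b].
with tf.GradientTape() as tape:
    cost_value = cost()

tf.keras.optimizers.Adam().minimize(cost_value, var_list=[W, b], tape=tape)

tf.print(W)
tf.print(b)

If this is wrong, what is the correct way to pass the tape?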