I'm looking at the Mechanics section of the TensorFlow documentation, specifically the part on shared variables. In the "Problem" section, they work with a convolutional neural network and give the following code (which runs an image through the model):
# First call creates one set of variables.
result1 = my_image_filter(image1)
# Another set is created in the second call.
result2 = my_image_filter(image2)
If the model is implemented this way, is it then impossible to learn/update the parameters, since a new set of parameters is created for every image in my training set?
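To make the concern above concrete, here is a plain-Python sketch (no TensorFlow; the filter body and the `created_params` tracker are made up for illustration) of what happens when a model function allocates its own parameters on every call, the way `tf.Variable` inside `my_image_filter` does:

```python
import numpy as np

created_params = []  # track every parameter set the "graph" ends up with

def my_image_filter(image):
    # fresh weights on every call -- analogous to tf.Variable inside the function
    weights = np.random.randn(*image.shape)
    created_params.append(weights)
    return image * weights

image1 = np.ones((2, 2))
image2 = np.ones((2, 2))

result1 = my_image_filter(image1)  # first call creates one set of variables
result2 = my_image_filter(image2)  # second call creates another, independent set

print(len(created_params))                      # 2
print(created_params[0] is created_params[1])   # False
```

The two calls produce two unrelated weight arrays, so a gradient step on one set would not affect the other; this is exactly the duplication the docs' "Problem" section warns about.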
Edit: I also tried the "problem" approach on a simple linear regression example, and this way of implementing it doesn't seem to cause any trouble. Training works, as the last line of the code shows. So I'm wondering whether there is a subtle difference between what the tensorflow documentation describes and what I'm doing:
import tensorflow as tf
import numpy as np
trX = np.linspace(-1, 1, 101)
trY = 2 * trX + np.random.randn(*trX.shape) * 0.33 # create a y value which is approximately linear but with some random noise
X = tf.placeholder("float") # create symbolic variables
Y = tf.placeholder("float")
def model(X):
    with tf.variable_scope("param"):
        w = tf.Variable(0.0, name="weights")  # create a shared variable (like theano.shared) for the weight matrix
    return tf.mul(X, w)  # lr is just X*w so this model line is pretty simple
y_model = model(X)
cost = (tf.pow(Y-y_model, 2)) # use sqr error for cost function
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(cost) # construct an optimizer to minimize cost and fit line to my data
sess = tf.Session()
init = tf.initialize_all_variables()  # you need to initialize variables (in this case just variable w)
sess.run(init)
with tf.variable_scope("train"):
    for i in range(100):
        for (x, y) in zip(trX, trY):
            sess.run(train_op, feed_dict={X: x, Y: y})
print(sess.run(y_model, feed_dict={X: np.array([1,2,3])}))
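One way to see why this regression trains fine is that `model(X)` is called only once, so only one `w` ever exists; the training loop just feeds different `(x, y)` pairs into that same variable. A plain-Python sketch of that build-once, feed-many pattern (the `Model` class and its hand-derived SGD step are illustrative stand-ins, not TensorFlow API):

```python
import numpy as np

class Model:
    def __init__(self):
        self.w = 0.0  # created once, like the single tf.Variable above

    def predict(self, x):
        return self.w * x

    def sgd_step(self, x, y, lr=0.01):
        # gradient of (y - w*x)**2 w.r.t. w is -2*x*(y - w*x)
        self.w += lr * 2 * x * (y - self.w * x)

rng = np.random.RandomState(0)
trX = np.linspace(-1, 1, 101)
trY = 2 * trX + rng.randn(*trX.shape) * 0.33  # same setup as above

m = Model()                      # one parameter set for the whole run
for _ in range(100):
    for x, y in zip(trX, trY):
        m.sgd_step(x, y)         # every step updates the SAME w

print(m.w)  # close to 2.0
```

Had `model(X)` been called once per training example, each call would have created its own `w` and no single parameter would accumulate all the updates, which is the situation the docs' code snippet illustrates.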