I want to print the value of the MSE for every epoch/batch combination. The code below prints, on every iteration, the tensor object that represents mse rather than its value:
print("Epoch", epoch, "Batch_Index", batch_index, "MSE:", mse)
Example output line:
Epoch 0 Batch_Index 0 MSE: Tensor("mse_2:0", shape=(), dtype=float32)
I understand this happens because mse refers to tf.placeholder nodes, which hold no data by themselves. But once I run the line below:
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
the data should already be available, so the values of all nodes that depend on it should be accessible as well. However, requesting an evaluation of MSE inside the print statement results in an error:
print("Epoch", epoch, "Batch_Index", batch_index, "MSE:", mse.eval())
Output 2:
InvalidArgumentError: You must feed a value for placeholder tensor 'X_2' with dtype float and shape [?,9] ...
This tells me that mse.eval() does not see the data that was fed in sess.run().
Why do we get this behavior, and how should the code be changed so that it reports the MSE at every specified iteration?
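For what it is worth, the kind of change I have in mind is just a sketch, assuming a Tensor's eval() accepts its own feed_dict inside the with tf.Session() block the way sess.run() does; it would re-feed the current batch when printing:

    if epoch % 50 == 0 and batch_index % 100 == 0:
        # re-feed the current batch so the placeholders have data when mse is evaluated
        print("Epoch", epoch, "Batch_Index", batch_index,
              "MSE:", mse.eval(feed_dict={X: X_batch, y: y_batch}))

The full code I am running is below.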
import numpy as np
import tensorflow as tf
from sklearn.datasets import fetch_california_housing
from sklearn.preprocessing import StandardScaler

housing = fetch_california_housing()
m, n = housing.data.shape
housing_data_plus_bias = np.c_[np.ones((m, 1)), housing.data]  # ADD COLUMN OF 1s for BIAS!

scaler = StandardScaler()
scaled_housing_data = scaler.fit_transform(housing.data)
scaled_housing_data_plus_bias = np.c_[np.ones((m, 1)), scaled_housing_data]

# hyperparameters (learning_rate must be defined before the optimizer uses it)
n_epochs = 100
batch_size = 100
n_batches = int(np.ceil(m / batch_size))
learning_rate = 0.01

X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()

def fetch_batch(epoch, batch_index, batch_size):
    np.random.seed(epoch * n_batches + batch_index)  # not shown in the book
    indices = np.random.randint(m, size=batch_size)  # not shown
    X_batch = scaled_housing_data_plus_bias[indices]  # not shown
    y_batch = housing.target.reshape(-1, 1)[indices]  # not shown
    return X_batch, y_batch

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(n_epochs):
        for batch_index in range(n_batches):
            X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
            if (epoch % 50 == 0 and batch_index % 100 == 0):
                print("Epoch", epoch, "Batch_Index", batch_index, "MSE:", mse)
    best_theta = theta.eval()

best_theta
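Alternatively, if the idiomatic fix is to fetch mse together with training_op in a single sess.run() call, I imagine the inner loop inside the with block above would become something like the sketch below (mse_val is a name I am introducing here; I have not verified which of the two approaches is preferred):

        for batch_index in range(n_batches):
            X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
            # fetch the training op and the mse tensor in one call;
            # sess.run then returns the numeric MSE for this batch
            _, mse_val = sess.run([training_op, mse],
                                  feed_dict={X: X_batch, y: y_batch})
            if epoch % 50 == 0 and batch_index % 100 == 0:
                print("Epoch", epoch, "Batch_Index", batch_index, "MSE:", mse_val)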