I'm having trouble with TensorBoard. My code runs fine, but when I try to visualize the graph with tensorboard --logdir = logs/log1 and then open localhost:6006 in the browser, the page is empty (only the TensorBoard logo and the tabs such as Events, Graphs, ...). Any help is much appreciated; I don't know how to fix this. (I'm working in a Jupyter notebook.)
This is the error message I get:
WARNING:tensorflow:IOError [Errno 2] No such file or directory: '/home/tiger/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/tensorboard/TAG' on path /home/tiger/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/tensorboard/TAG
WARNING:tensorflow:Unable to read TensorBoard tag
Starting TensorBoard on port 6006
(You can navigate to http://0.0.0.0:6006)
127.0.0.1 - - [03/Jun/2016 21:20:49] "GET / HTTP/1.1" 200 -
WARNING:tensorflow:IOError [Errno 2] No such file or directory: '/home/tiger/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/tensorboard/lib/css/global.css' on path /home/tiger/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/tensorboard/lib/css/global.css
127.0.0.1 - - [03/Jun/2016 21:20:49] code 404, message Not Found
127.0.0.1 - - [03/Jun/2016 21:20:49] "GET /lib/css/global.css HTTP/1.1" 404 -
127.0.0.1 - - [03/Jun/2016 21:20:50] "GET /external/lodash/lodash.min.js HTTP/1.1" 200 -
.......
WARNING:tensorflow:IOError [Errno 2] No such file or directory: '/home/tiger/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/tensorboard/favicon.ico' on path /home/tiger/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/tensorboard/favicon.ico
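A quick way to check whether anything is being written into logs/log1 at all (standard library only; this is the same path as in the SummaryWriter call below):

import os

log_dir = 'logs/log1'
for fname in os.listdir(log_dir):
    full_path = os.path.join(log_dir, fname)
    # Print each file in the log directory together with its size.
    print(fname, os.path.getsize(full_path), 'bytes')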
My code is as follows:
import numpy as np
import tensorflow as tf

n_features = x_train.shape[1]
n_samples = x_train.shape[0]
n_labels = 10
n_hidden = 200
epoch_train = 200
learning_rate = 0.01
batch_size = 20

x_tr = tf.placeholder(tf.float32, shape=(None, n_features), name='x')
y_tr = tf.placeholder(tf.float32, shape=(None, n_labels), name='y')

w1 = tf.Variable(tf.truncated_normal([n_features, n_hidden]), name='weight1')
b1 = tf.Variable(tf.zeros([n_hidden]), name='bias1')
w2 = tf.Variable(tf.truncated_normal([n_hidden, n_labels]), name='weight2')
b2 = tf.Variable(tf.zeros([n_labels]), name='bias2')

w1_hist = tf.histogram_summary('weight1', w1)
w2_hist = tf.histogram_summary('weight2', w2)
b1_hist = tf.histogram_summary('bias1', b1)
b2_hist = tf.histogram_summary('bias2', b2)
y_hist = tf.histogram_summary('y', y_tr)

with tf.name_scope('hidden') as scope:
    z1 = tf.matmul(x_tr, w1) + b1
    a1 = tf.nn.relu(z1)

with tf.name_scope('output') as scope:
    z2 = tf.matmul(a1, w2) + b2
    a2 = tf.nn.softmax(z2)

with tf.name_scope('cost') as scope:
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(z2, y_tr))
    cost_summ = tf.scalar_summary('cost', loss)

with tf.name_scope('train') as scope:
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

def acc(pred, y):
    return np.mean(np.argmax(pred, 1) == np.argmax(y, 1))

with tf.Session() as session:
    session.run(tf.initialize_all_variables())
    merged = tf.merge_summary([y_hist, w1_hist, w2_hist, b1_hist, b2_hist, cost_summ])
    writer = tf.train.SummaryWriter('logs/log1', session.graph)

    for epoch in range(epoch_train):
        offset = epoch * batch_size % (x_train.shape[0] - batch_size)
        x_tr_batch = x_train[offset:offset + batch_size, :]
        y_tr_batch = y_train[offset:offset + batch_size, :]
        feed_dict = {x_tr: x_tr_batch, y_tr: y_tr_batch}

        _, cost, prediction = session.run([optimizer, loss, a2], feed_dict=feed_dict)
        summary = session.run(merged, feed_dict=feed_dict)
        writer.add_summary(summary, epoch)

        if epoch % 20 == 0:
            print('training accuracy:', acc(prediction, y_tr_batch))
            print('cost at epoch {} is:'.format(epoch), cost)

    pred_ts = session.run(a2, feed_dict={x_tr: x_test})
    print('test accuracy is:', acc(pred_ts, y_test))
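As an extra sanity check, the event file can also be read back directly. I believe tf.train.summary_iterator is available in my TensorFlow version, so something along these lines should list which summary tags were recorded at which step (just a sketch):

import glob
import tensorflow as tf

# Iterate over the event file(s) written by the SummaryWriter above
# and print the step and tag of every recorded summary value.
for event_file in glob.glob('logs/log1/events.out.tfevents.*'):
    for event in tf.train.summary_iterator(event_file):
        for value in event.summary.value:
            print(event.step, value.tag)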