This is the LSTM code Udacity uses for sentiment classification.
Here is a link to the full sentiment-rnn code: udacity/sentiment-rnn
I want to know why they initialize the cell state inside the for-loop over epochs.
I think the cell state has to be zero-initialized whenever the input sentences change, so the initialization belongs inside the for-loop over minibatches.
## part of the sentiment-rnn code
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)

with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())
    iteration = 1
    for e in range(epochs):
        state = sess.run(initial_state)  ###### I think this line
        for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
            ###### should be here
            feed = {inputs_: x,
                    labels_: y[:, None],
                    keep_prob: 0.5,
                    initial_state: state}
            loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
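To make the difference concrete, here is a plain-Python sketch (no TensorFlow; the `state = state + 1` step is just a stand-in for the `final_state` returned by `sess.run`, and `train` is a helper name I made up) showing what state each minibatch starts from under the two placements:

```python
def train(num_epochs, num_batches, reset_per_batch):
    """Record the state each minibatch starts with.

    reset_per_batch=False mimics the Udacity code (zero once per epoch);
    reset_per_batch=True mimics the placement I am proposing.
    """
    starting_states = []
    for e in range(num_epochs):
        state = 0  # zero-initialize once per epoch, as in the code above
        for b in range(num_batches):
            if reset_per_batch:
                state = 0  # the placement the question is asking about
            starting_states.append(state)
            state = state + 1  # stand-in for final_state from sess.run
    return starting_states

# Per-epoch reset: every batch after the first starts from the
# previous batch's final state, so state carries across minibatches.
print(train(1, 3, reset_per_batch=False))  # → [0, 1, 2]

# Per-batch reset: every minibatch starts from zeros.
print(train(1, 3, reset_per_batch=True))   # → [0, 0, 0]
```

So with the code as written, the LSTM state from one minibatch is fed into the next one within an epoch, which is exactly the behavior I find surprising for independent sentences.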
Can anyone explain why?
Thanks!