def RNN(X, weights, biases):
    # reshape (batch_size, n_steps, n_inputs) -> (batch_size * n_steps, n_inputs) for the input projection
    X = tf.reshape(X, [-1, n_inputs])
    X_in = tf.matmul(X, weights['in']) + biases['in']
    X_in = tf.reshape(X_in, [-1, n_steps, n_hidden_units])

    lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden_units, forget_bias=0.0, state_is_tuple=True)
    init_state = lstm_cell.zero_state(batch_size, dtype=tf.float32)
    outputs, final_state = tf.nn.dynamic_rnn(lstm_cell, X_in, initial_state=init_state, time_major=False)

    # unpack along the time axis and keep only the output of the last step
    outputs = tf.unpack(tf.transpose(outputs, [1, 0, 2]))
    results = tf.matmul(outputs[-1], weights['out']) + biases['out']
    del outputs, final_state, lstm_cell, init_state, X, X_in
    return results

def while_loop(s, e, step):
    while s + batch_size < ran:
        batch_id = file_id[s:e]
        batch_col = label_matrix[s:e]

        batch_label = csc_matrix((data, (batch_row, batch_col)), shape=(batch_size, n_classes))
        batch_label = batch_label.toarray()
        batch_xs1 = tf.nn.embedding_lookup(embedding_matrix, batch_id)
        batch_xs = sess.run(batch_xs1)
        del batch_xs1
        sess.run([train_op], feed_dict={x: batch_xs,
                                        y: batch_label})

        print(step, ':',
              sess.run(accuracy, feed_dict={x: batch_xs, y: batch_label}),
              sess.run(cost, feed_dict={x: batch_xs, y: batch_label}))
        if step != 0 and step % 20 == 0:
            save_path = saver.save(sess, './model/lstm_classification.ckpt', write_meta_graph=False)
            print('Save to path', save_path)

        step += 1
        s += batch_size
        e += batch_size
        del batch_label, batch_xs, batch_id, batch_col
        print(hp.heap())
        print(hp.heap().more)

This is my code. It keeps failing with the error 'ResourceExhaustedError: OOM when allocating tensor with shape'. I used guppy to profile memory (the hp.heap() prints at the end of each iteration) and got its heap report.

Why do TensorFlow's variables take up so much memory?
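(The hp object used in the loop above is not defined in the posted code; a minimal guppy setup, assumed here rather than taken from the question, would look like this:)

    # Assumed setup for the hp heap profiler used in the question's loop (not shown in the post).
    from guppy import hpy

    hp = hpy()          # heap inspector for the current Python process
    print(hp.heap())    # summary of live Python objects, grouped by type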


2 Answers


The problem is caused by this line in your training loop:

while s + batch_size < ran:
    # ...
    batch_xs1 = tf.nn.embedding_lookup(embedding_matrix, batch_id)

Each call to tf.nn.embedding_lookup() adds nodes to the TensorFlow graph, and because those nodes are never garbage collected, calling it inside the loop causes a memory leak.

The actual cause of the memory leak is probably the NumPy array passed as the embedding_matrix argument to tf.nn.embedding_lookup(). TensorFlow tries to be helpful and converts every NumPy array in a function's arguments into a tf.constant() node in the TensorFlow graph. In a loop, however, this ends up creating multiple separate copies of embedding_matrix, first in the graph and then in scarce GPU memory.
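A quick way to see this effect (a minimal sketch, not part of the original answer) is to count the operations in the default graph while calling tf.nn.embedding_lookup() with a NumPy array in a loop; the count grows on every iteration:

    # Sketch only: demonstrates the default graph growing on each call (TF 1.x API).
    import numpy as np
    import tensorflow as tf

    embedding_matrix = np.random.rand(10000, 128).astype(np.float32)

    for i in range(3):
        ids = np.array([1, 2, 3])
        lookup = tf.nn.embedding_lookup(embedding_matrix, ids)   # adds new Const/Gather nodes every time
        print('ops in graph:', len(tf.get_default_graph().get_operations()))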

The simplest fix is to move the tf.nn.embedding_lookup() call outside the training loop. For example:

def while_loop(s, e, step):
  # Build the lookup op once, outside the loop, and feed the ids at run time.
  batch_id_placeholder = tf.placeholder(tf.int32)
  batch_xs1 = tf.nn.embedding_lookup(embedding_matrix, batch_id_placeholder)

  while s + batch_size < ran:
    batch_id = file_id[s:e]
    batch_col = label_matrix[s:e]

    batch_label = csc_matrix((data, (batch_row, batch_col)), shape=(batch_size, n_classes))
    batch_label = batch_label.toarray()

    batch_xs = sess.run(batch_xs1, feed_dict={batch_id_placeholder: batch_id})
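The rest of the loop body (running train_op, printing accuracy and cost, and the periodic saver.save() call) stays exactly as before; the only difference is that the lookup op and the constant holding embedding_matrix are created once, so the graph no longer grows on every iteration.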
Answered 2017-02-28T15:38:43.733

I recently ran into this problem with TF + Keras, and before that with Darknet and YOLOv3. My dataset consisted of very large images that were too much for my two GTX 1050s, so I had to resize the images to something smaller. On average, a 1024x1024 image needed 6GB per GPU.
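A minimal sketch of that kind of downscaling, assuming a TF 1.x input pipeline (the shapes and target size here are only illustrative, not taken from the answer above):

    # Sketch only: resize large input images before feeding them to the network
    # to reduce per-GPU memory usage (shapes and target size are illustrative).
    import tensorflow as tf

    image_batch = tf.placeholder(tf.float32, shape=[None, 1024, 1024, 3])
    smaller_batch = tf.image.resize_images(image_batch, [512, 512])   # halve each side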

Answered 2018-12-18T04:23:20.837