
I want to convert my tree neural network into a DAG neural network. The forward pass is easy, because I can cache the unique names of nodes and tensors and simply check whether I have already processed that node/subtree. However, this becomes a problem in the backward pass: even though I reuse those cached tensors, the autograd engine complains (and rightly so) that I have already backpropagated through them. Is there a simple way to modify the backward pass so that already-computed gradients are reused?
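
For concreteness, here is a minimal, self-contained sketch of the caching pattern I mean (the names `forward_node`, `node_mlp` and `cache` are made up for illustration, not from my actual code). A single `backward()` over a loss that reaches the shared subtree through two parents works and accumulates the gradients, whereas calling `backward()` once per parent (or keeping cached tensors around across iterations) triggers exactly the error below:

    import torch
    import torch.nn as nn

    node_mlp = nn.Linear(4, 4)  # hypothetical per-node network
    cache = {}                  # node name -> output tensor, rebuilt every iteration

    def forward_node(name, x):
        # reuse the output of a subtree that has already been computed
        if name not in cache:
            cache[name] = torch.relu(node_mlp(x))
        return cache[name]

    x = torch.randn(1, 4)
    shared = forward_node("shared_subtree", x)  # computed once
    parent_a = shared.sum()                     # first parent in the DAG
    parent_b = (shared * 2).sum()               # second parent reuses the cached tensor

    # OK: one backward over a loss that depends on the shared node twice;
    # autograd accumulates the gradient contributions from both parents.
    loss = parent_a + parent_b
    loss.backward()

    # Not OK (instead of the single backward above): backward per parent frees
    # the shared intermediate buffers on the first call, so the second call
    # raises "Trying to backward through the graph a second time".
    # parent_a.backward()
    # parent_b.backward()

    cache.clear()  # clear between iterations so stale graph tensors are not reused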

Or would it be easier to drop my old code and implement these models in PyTorch Geometric (PyG) or the Deep Graph Library (DGL)?
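
For reference, I imagine a PyG rewrite would look roughly like the following message-passing sketch (standard `GCNConv` layers used purely as a placeholder; my actual models are recursive/tree-structured, so this is only to gauge the effort of switching):

    import torch
    from torch_geometric.nn import GCNConv

    class DagGNN(torch.nn.Module):
        """Placeholder graph model: two rounds of message passing over the DAG."""
        def __init__(self, in_dim, hidden_dim, out_dim):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hidden_dim)
            self.conv2 = GCNConv(hidden_dim, out_dim)

        def forward(self, x, edge_index):
            # x: [num_nodes, in_dim], edge_index: [2, num_edges] COO connectivity
            h = self.conv1(x, edge_index).relu()
            return self.conv2(h, edge_index)

    # toy DAG with 3 nodes and edges 0 -> 2, 1 -> 2
    x = torch.randn(3, 8)
    edge_index = torch.tensor([[0, 1], [2, 2]])
    out = DagGNN(8, 16, 4)(x, edge_index)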

For completeness, here is the error message:

    train_loss, train_acc = agent.train(n_epoch)
  File "/home/miranda9/ML4Coq/ml4coq-proj-src/agents.py", line 102, in train
    loss.backward()  # each process synchronizes it's gradients in the backward pass
  File "/home/miranda9/miniconda3/envs/metalearningpy1.7.1c10.2/lib/python3.8/site-packages/torch/tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/miranda9/miniconda3/envs/metalearningpy1.7.1c10.2/lib/python3.8/site-packages/torch/autograd/__init__.py", line 130, in backward
    Variable._execution_engine.run_backward(
RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time.

Cross-posted:
