I am currently trying to train a neural network with PyTorch, where the targets are matched against the gradient of the model's output with respect to its input. I want to do this because it guarantees a conservative vector field (this is done when training neural networks for force matching in molecular dynamics). That means:
input = torch.rand((n, 3), requires_grad=True)
output = torch.rand((n, 3))  # target values; the target needs no grad
prediction = model(input)    # prediction has shape (n, 1)
input_grad = torch.autograd.grad(outputs=prediction, inputs=input,
                                 grad_outputs=torch.ones_like(prediction),
                                 retain_graph=True, create_graph=True)[0]
loss = loss_fn(input_grad, output)
...
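For reference, here is a minimal sketch of the pattern I am aiming for: the network predicts a scalar energy per configuration, and the forces are its negative input gradient, which is automatically curl-free and hence conservative. The layer sizes, the Tanh nonlinearity, and the particle count below are illustrative only, not my actual model:

import torch

n = 4  # hypothetical number of particles

# hypothetical scalar-energy model: maps (n, 3) coordinates to one scalar
energy_model = torch.nn.Sequential(
    torch.nn.Flatten(start_dim=0),  # (n, 3) -> (3n,)
    torch.nn.Linear(3 * n, 16),
    torch.nn.Tanh(),
    torch.nn.Linear(16, 1),
)

positions = torch.rand((n, 3), requires_grad=True)
target_forces = torch.rand((n, 3))

energy = energy_model(positions).sum()            # scalar energy E(x)
forces = -torch.autograd.grad(energy, positions,  # F = -dE/dx
                              create_graph=True)[0]
loss = torch.nn.functional.l1_loss(forces, target_forces)
loss.backward()  # second backward pass through the grad graph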
The problem is that if I try to update the parameters of the neural network this way, the gradients of all parameters are 0. I have made sure the model itself works; what I do not know is how to build the graph so that the model actually trains. In JAX-MD, such a model can be trained as shown in [Jax Glass Training][1]. I also tried
input_grad = torch.autograd.grad(outputs=prediction, inputs=input,
                                 grad_outputs=torch.ones_like(prediction),
                                 create_graph=True)[0]
but this produces a similar result and does not make sense.

[1]: https://colab.research.google.com/github/google/jax-md/blob/master/notebooks/neural_networks.ipynb#scrollTo=WNs8v2745Mc3
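One sanity check that may help localize the problem (a debugging sketch, reusing `loss` and `model` from the snippet above): ask autograd for the parameter gradients directly, so that parameters not reachable from the loss show up as None instead of silently staying at zero:

# allow_unused=True reports disconnected parameters as None
param_grads = torch.autograd.grad(loss, model.parameters(), allow_unused=True)
for (name, _), g in zip(model.named_parameters(), param_grads):
    print(name, None if g is None else g.abs().sum().item())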
Edit:
Updated with a code example that reproduces the problem, PyTorch version 1.6.0:
import torch

class Feedforward(torch.nn.Module):
    def __init__(self, input_size, hidden_size):
        super(Feedforward, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.fc1 = torch.nn.Linear(self.input_size, self.hidden_size)
        self.relu = torch.nn.ReLU()
        self.fc2 = torch.nn.Linear(self.hidden_size, 1)

    def forward(self, x):
        hidden = self.fc1(x)
        relu = self.relu(hidden)
        output = self.fc2(relu)
        output = output.sum()  # scalar "energy" over the batch
        # differentiate the scalar w.r.t. the input; create_graph=True is
        # needed so the later loss.backward() can reach the parameters
        output = torch.autograd.grad(outputs=output, inputs=x,
                                     retain_graph=True, create_graph=True)
        return output[0]

test_input = torch.rand((10, 3), requires_grad=True)
test_output = torch.rand((10, 3))
model = Feedforward(3, 10)
optim = torch.optim.Adam(model.parameters())
optim.zero_grad()
loss_fn = torch.nn.L1Loss()
model.train()
out = model(test_input)
loss = loss_fn(out, test_output)
loss.backward()
optim.step()  # if you break here and investigate the gradients
              # of the FFNN, the gradients will be 0
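One detail worth noting when inspecting those gradients: fc2.bias does not appear in d(output)/dx at all, and fc1.bias enters only through the ReLU step function, whose derivative is zero almost everywhere, so zero gradients are mathematically expected for both biases here; the weight matrices are the parameters to watch. A small inspection snippet:

# inspect per-parameter gradient magnitudes after loss.backward()
for name, p in model.named_parameters():
    grad_sum = None if p.grad is None else p.grad.abs().sum().item()
    print(f"{name}: {grad_sum}")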