I want to compute the gradient between two tensors in a network. The input tensor X is passed through a set of convolutional layers, which return the output tensor Y.
I am creating a new loss, and I would like to know the MSE between the gradient of norm(Y) with respect to each element of X and a target. Here is the code:
import torch
import torch.nn as nn

# Starting tensors
X = torch.rand(40, requires_grad=True)
Y = torch.rand(40, requires_grad=True)
# Define loss
loss_fn = nn.MSELoss()
# Make some calculations
V = Y * X + 2
# Compute the norm
V_norm = V.norm()
# Compute the gradient of the norm w.r.t. each element of X to calculate the loss
for i in range(len(V)):
    if i == 0:
        grad_tensor = torch.autograd.grad(outputs=V_norm, inputs=X[i])
    else:
        grad_tensor_ = torch.autograd.grad(outputs=V_norm, inputs=X[i])
        grad_tensor = torch.cat((grad_tensor, grad_tensor_), dim=0)
# Ground truth
gt = grad_tensor * 0 + 1
# Loss
loss_g = loss_fn(grad_tensor, gt)
print(loss_g)
Unfortunately, I have been experimenting with torch.autograd.grad(), but I could not figure out how to do it. I get the following error: RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
Setting allow_unused=True gives me back None, which is not an option. Not sure how to compute the loss between the gradient and the norm. Any ideas on how to code this loss?
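For reference, here is a minimal sketch of the result I am after, under the assumption that the gradient can be requested for the whole X tensor in a single call (rather than element by element; indexing X creates a new non-leaf tensor, which I suspect is why autograd says it was not used in the graph). create_graph=True is my guess at how to keep the gradient differentiable so it can be used inside a loss; I have not verified this in my real network.

import torch
import torch.nn as nn

X = torch.rand(40, requires_grad=True)
Y = torch.rand(40, requires_grad=True)
loss_fn = nn.MSELoss()

V = Y * X + 2
V_norm = V.norm()

# Gradient of the scalar norm w.r.t. the full X tensor:
# the result has the same shape as X, i.e. one entry per element of X.
grad_tensor, = torch.autograd.grad(outputs=V_norm, inputs=X, create_graph=True)

gt = torch.ones_like(grad_tensor)   # ground truth: all ones
loss_g = loss_fn(grad_tensor, gt)
print(loss_g)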