Let's start with a simple working example with an ordinary loss function and a regular backward pass. We will build a short computational graph and do some gradient computations on it.
Code:
import torch
from torch.autograd import grad
import torch.nn as nn
# Create some dummy data.
x = torch.ones(2, 2, requires_grad=True)
gt = torch.ones_like(x) * 16 - 0.5 # "ground-truths"
# We will use MSELoss as an example.
loss_fn = nn.MSELoss()
# Do some computations.
v = x + 2
y = v ** 2
# Compute loss.
loss = loss_fn(y, gt)
print(f'Loss: {loss}')
# Now compute gradients:
d_loss_dx = grad(outputs=loss, inputs=x)
print(f'dloss/dx:\n {d_loss_dx}')
Output:
Loss: 42.25
dloss/dx:
(tensor([[-19.5000, -19.5000], [-19.5000, -19.5000]]),)
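The number checks out analytically: with x = 1 everywhere we get v = 3 and y = 9, so dloss/dx = 2 * (9 - 15.5) / 4 * 2 * 3 = -19.5 per element. Since the loss is a scalar, the same gradient can also be obtained with backward() and read from x.grad; a minimal sketch (same setup as above, shown only for comparison):
Code:
import torch
import torch.nn as nn

x = torch.ones(2, 2, requires_grad=True)
gt = torch.ones_like(x) * 16 - 0.5
loss = nn.MSELoss()((x + 2) ** 2, gt)
loss.backward()   # equivalent to loss.backward(torch.tensor(1.)) for a scalar loss
print(x.grad)     # tensor([[-19.5000, -19.5000], [-19.5000, -19.5000]])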
OK, this works! Now let's try to reproduce the error "grad can be implicitly created only for scalar outputs". As you can see, the loss in the previous example is a scalar. backward() and grad() by default deal with a single scalar value: loss.backward(torch.tensor(1.)). If you try to pass a tensor with more values, you will get an error.
Code:
v = x + 2
y = v ** 2
try:
dy_hat_dx = grad(outputs=y, inputs=x)
except RuntimeError as err:
print(err)
Output:
grad can be implicitly created only for scalar outputs
Therefore, when using grad() you need to specify the grad_outputs argument as follows:
Code:
v = x + 2
y = v ** 2
dy_dx = grad(outputs=y, inputs=x, grad_outputs=torch.ones_like(y))
print(f'dy/dx:\n {dy_dx}')
dv_dx = grad(outputs=v, inputs=x, grad_outputs=torch.ones_like(v))
print(f'dv/dx:\n {dv_dx}')
Output:
dy/dx:
(tensor([[6., 6.],[6., 6.]]),)
dv/dx:
(tensor([[1., 1.], [1., 1.]]),)
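What grad_outputs supplies is the vector of a vector-Jacobian product, so passing something other than ones simply weights each output's contribution. A small sketch to illustrate (the weights here are arbitrary, chosen only for this example):
Code:
import torch
from torch.autograd import grad

x = torch.ones(2, 2, requires_grad=True)
y = (x + 2) ** 2                        # dy/dx = 2 * (x + 2) = 6 everywhere
w = torch.tensor([[1., 2.], [3., 4.]])  # arbitrary weighting of the outputs
vjp = grad(outputs=y, inputs=x, grad_outputs=w)
print(vjp)   # (tensor([[ 6., 12.], [18., 24.]]),) i.e. w * dy/dx element-wise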
Note: if you use backward() instead, just do y.backward(torch.ones_like(y)).
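A minimal sketch of that backward() variant (the gradient then lands in x.grad rather than being returned):
Code:
import torch

x = torch.ones(2, 2, requires_grad=True)
v = x + 2
y = v ** 2
y.backward(torch.ones_like(y))   # a gradient tensor is required because y is not a scalar
print(x.grad)                    # tensor([[6., 6.], [6., 6.]]), matching dy/dx above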
Answered 2019-02-19T00:46:52.543