x = torch.ones(1, requires_grad=True)
print(x)
y = x + 2.
print(y)
y.backward()
print(x.grad)

--> Result >>>>>

tensor([1.], requires_grad=True)
tensor([3.], grad_fn=<AddBackward0>)
tensor([1.])

There is no problem here. But if I change the dtype, I get

x = torch.ones(1, requires_grad=True)
x = x.double()
print(x)
y = x + 2.
y = y.double()
print(y)
y.backward()
print(x.grad)

--> Result >>>>>

tensor([1.], dtype=torch.float64, grad_fn=<CopyBackwards>)
tensor([3.], dtype=torch.float64, grad_fn=<AddBackward0>)
None

/usr/local/lib/python3.6/dist-packages/torch/tensor.py:746: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
  warnings.warn("The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad "

==============================

What is the difference? Does backward() have a restriction on the tensor's dtype?


2 Answers

The .grad attribute of a Tensor that is not a leaf Tensor is being accessed.

means that you are accessing the .grad field of a tensor whose .grad field PyTorch will never populate. Note that only leaf tensors get their .grad field populated. So if this warning appears, it means you consider something to be a leaf tensor when it actually is not. This usually happens when you perform operations on a tensor that requires gradients.

For example, in your case,

x = torch.ones(1, requires_grad=True) # x is a leaf tensor
x = x.double() # x is not a leaf tensor

With the second statement, x is no longer a leaf, because it is the result of the .double() operation.
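As the warning itself suggests, one option is to call .retain_grad() on the non-leaf tensor; another is to create the leaf tensor in float64 to begin with, so no conversion is needed. A minimal sketch of both (my addition for illustration, not part of your original code):

import torch

# Option 1: keep the conversion, but explicitly retain the gradient on the non-leaf
x = torch.ones(1, requires_grad=True)   # leaf tensor (float32)
x = x.double()                           # non-leaf: result of the .double() op
print(x.is_leaf)                         # False
x.retain_grad()                          # ask autograd to populate x.grad anyway
y = x + 2.
y.backward()
print(x.grad)                            # tensor([1.], dtype=torch.float64)

# Option 2: create the leaf directly with the desired dtype
x = torch.ones(1, dtype=torch.float64, requires_grad=True)  # leaf in float64
y = x + 2.
y.backward()
print(x.grad)                            # tensor([1.], dtype=torch.float64)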

answered 2020-06-23T12:53:54.203

You are looking at the wrong tensor ;)

By calling x = x.double() you create a new tensor, so when you access x.grad later, you are accessing it on the double() version of x, which is no longer your original x.
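A small sketch of that point: if you keep a separate handle on the original leaf tensor, its .grad is populated as expected, while the .double() copy has none (the name x0 below is just for illustration):

import torch

x0 = torch.ones(1, requires_grad=True)   # the original leaf tensor
x = x0.double()                           # a new, non-leaf tensor
y = x + 2.
y.backward()
print(x.grad)    # None (plus the non-leaf warning) -- grad is not kept here
print(x0.grad)   # tensor([1.]) -- the gradient lands on the original leaf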

answered 2020-06-23T09:37:31.087