import torch

x = torch.ones(1, requires_grad=True)
print(x)
y = x + 2.
print(y)
y.backward()
print(x.grad)
--> Result:
tensor([1.], requires_grad=True)
tensor([3.], grad_fn=<AddBackward0>)
tensor([1.])
This works fine. But if I change the dtype, I get:
x = torch.ones(1, requires_grad=True)
x = x.double()
print(x)
y = x + 2.
y = y.double()
print(y)
y.backward()
print(x.grad)
--> Result:
tensor([1.], dtype=torch.float64, grad_fn=<CopyBackwards>)
tensor([3.], dtype=torch.float64, grad_fn=<AddBackward0>)
None
/usr/local/lib/python3.6/dist-packages/torch/tensor.py:746: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
warnings.warn("The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad "
==============================
What is the difference? Does `backward` have some restriction on dtype?
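To illustrate what the warning seems to be pointing at, here is a minimal sketch (assuming a reasonably recent PyTorch): `x.double()` is itself an autograd operation, so it returns a new non-leaf tensor, and `.grad` is only populated on leaf tensors by default. Either `retain_grad()` on the converted tensor, or creating the tensor in `float64` from the start, appears to avoid the `None`:

```python
import torch

# A tensor created directly by the user with requires_grad=True is a leaf
x = torch.ones(1, requires_grad=True)
print(x.is_leaf)  # True

# x.double() is recorded as an op (note the CopyBackwards grad_fn),
# so the result is a *new*, non-leaf tensor
x2 = x.double()
print(x2.is_leaf)  # False

# Option 1: explicitly ask autograd to keep the gradient on the non-leaf tensor
x2.retain_grad()
y = x2 + 2.
y.backward()
print(x2.grad)  # tensor([1.], dtype=torch.float64)
print(x.grad)   # the original leaf still receives its gradient: tensor([1.])

# Option 2: create the tensor in float64 from the start, so it stays a leaf
x3 = torch.ones(1, dtype=torch.float64, requires_grad=True)
(x3 + 2.).backward()
print(x3.grad)  # tensor([1.], dtype=torch.float64)
```

So the restriction is not on dtype per se: the dtype conversion just turns `x` into a non-leaf tensor, whose `.grad` is discarded unless `retain_grad()` is called.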