Actually, I went with jodag's answer from the comments:
torch.manual_seed(0)
order = []
for i, elt in enumerate(unlabeled_dataloader):
    order.append(elt[2].item())
    print(elt)
    if i > 10:
        break

torch.manual_seed(0)
print("new dataloader")
for i, elt in enumerate(unlabeled_dataloader):
    print(elt)
    if i > 10:
        break
exit(1)
And the output:
[tensor([[-0.3583, -0.6944]]), tensor([3]), tensor([1610])]
[tensor([[-0.6623, -0.3790]]), tensor([3]), tensor([1958])]
[tensor([[-0.5046, -0.6399]]), tensor([3]), tensor([1814])]
[tensor([[-0.5349, 0.2365]]), tensor([2]), tensor([1086])]
[tensor([[-0.1310, 0.1158]]), tensor([0]), tensor([321])]
[tensor([[-0.2085, 0.0727]]), tensor([0]), tensor([422])]
[tensor([[ 0.1263, -0.1597]]), tensor([0]), tensor([142])]
[tensor([[-0.1387, 0.3769]]), tensor([1]), tensor([894])]
[tensor([[-0.0500, 0.8009]]), tensor([3]), tensor([1924])]
[tensor([[-0.6907, 0.6448]]), tensor([4]), tensor([2016])]
[tensor([[-0.2817, 0.5136]]), tensor([2]), tensor([1267])]
[tensor([[-0.4257, 0.8338]]), tensor([4]), tensor([2411])]
new dataloader
[tensor([[-0.3583, -0.6944]]), tensor([3]), tensor([1610])]
[tensor([[-0.6623, -0.3790]]), tensor([3]), tensor([1958])]
[tensor([[-0.5046, -0.6399]]), tensor([3]), tensor([1814])]
[tensor([[-0.5349, 0.2365]]), tensor([2]), tensor([1086])]
[tensor([[-0.1310, 0.1158]]), tensor([0]), tensor([321])]
[tensor([[-0.2085, 0.0727]]), tensor([0]), tensor([422])]
[tensor([[ 0.1263, -0.1597]]), tensor([0]), tensor([142])]
[tensor([[-0.1387, 0.3769]]), tensor([1]), tensor([894])]
[tensor([[-0.0500, 0.8009]]), tensor([3]), tensor([1924])]
[tensor([[-0.6907, 0.6448]]), tensor([4]), tensor([2016])]
[tensor([[-0.2817, 0.5136]]), tensor([2]), tensor([1267])]
[tensor([[-0.4257, 0.8338]]), tensor([4]), tensor([2411])]
which is exactly what was hoped for. That said, I think jodag's main answer is still the better approach; this is just a quick hack that works for now ;)
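The trick above can be reproduced in a self-contained sketch: calling `torch.manual_seed` with the same value right before each pass over a shuffled `DataLoader` makes the shuffle order repeat. The toy `TensorDataset` below is a hypothetical stand-in for the `unlabeled_dataloader` dataset in the snippet above.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset: 10 one-element samples (stand-in for the real unlabeled data).
data = torch.arange(10).float().unsqueeze(1)
loader = DataLoader(TensorDataset(data), batch_size=1, shuffle=True)

torch.manual_seed(0)  # seed the global RNG before the first pass
first_order = [batch[0].item() for batch in loader]

torch.manual_seed(0)  # re-seed with the same value before the second pass
second_order = [batch[0].item() for batch in loader]

# Same seed -> the shuffled iteration order is identical on both passes.
assert first_order == second_order
print(first_order == second_order)
```

Note this relies on the loader using the default (global) RNG; passing an explicit `generator=` to `DataLoader`, as in jodag's main answer, is the cleaner way to control this without touching global state.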