
Is the iteration order of a PyTorch DataLoader guaranteed to be the same (under mild conditions)?

For example:

from torch.utils.data import DataLoader

dataloader = DataLoader(my_dataset, batch_size=4,
                        shuffle=True, num_workers=4)
print("run 1")
for batch in dataloader:
  print(batch["index"])

print("run 2")
for batch in dataloader:
  print(batch["index"])

So far I have tried testing it, and the order does not appear to be fixed: the two runs come out in different orders. Is there a way to make the order the same? Thanks.

Edit: I have also tried doing

from torch.utils import data

unlabeled_sampler = data.sampler.SubsetRandomSampler(unlabeled_indices)
unlabeled_dataloader = data.DataLoader(train_dataset,
                                       sampler=unlabeled_sampler,
                                       batch_size=args.batch_size,
                                       drop_last=False)

and then iterating over the dataloader twice, but the same non-determinism results.


2 Answers


The short answer is no: with shuffle=True, the iteration order of a DataLoader is not stable between iterations. Each time you iterate over the loader, the internal RandomSampler creates a new random order.

One way to get a stable shuffle is to create a Subset dataset using a shuffled set of indices, then iterate over it without further shuffling:

# fix one permutation of the indices up front; shuffle must be False afterwards
shuffled_dataset = torch.utils.data.Subset(my_dataset, torch.randperm(len(my_dataset)).tolist())
dataloader = DataLoader(shuffled_dataset, batch_size=4, num_workers=4, shuffle=False)
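To see why this gives a stable order, here is a minimal self-contained sketch; the TensorDataset below is a toy stand-in for my_dataset (an assumption for illustration), and both passes over the loader visit the samples identically:

```python
import torch
from torch.utils.data import DataLoader, Subset, TensorDataset

# toy stand-in for my_dataset: ten samples whose values are their own indices
dataset = TensorDataset(torch.arange(10).unsqueeze(1))

# shuffle exactly once by fixing a permutation of the indices
perm = torch.randperm(len(dataset)).tolist()
shuffled_dataset = Subset(dataset, perm)
loader = DataLoader(shuffled_dataset, batch_size=4, shuffle=False)

# two passes over the same loader yield the identical (shuffled) order
run1 = [batch[0].flatten().tolist() for batch in loader]
run2 = [batch[0].flatten().tolist() for batch in loader]
print(run1 == run2)  # → True
```

The shuffle happens once, at Subset construction, so determinism no longer depends on the sampler.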
Answered 2019-12-13T19:38:14.560

In the end, I went with jodag's answer from the comments:

torch.manual_seed(0)

order = []
for i, elt in enumerate(unlabeled_dataloader):
    order.append(elt[2].item())
    print(elt)
    if i > 10:
        break

# re-seed so the sampler draws the same random order on the second pass
torch.manual_seed(0)

print("new dataloader")
for i, elt in enumerate(unlabeled_dataloader):
    print(elt)
    if i > 10:
        break
exit(1)

And the output:

[tensor([[-0.3583, -0.6944]]), tensor([3]), tensor([1610])]
[tensor([[-0.6623, -0.3790]]), tensor([3]), tensor([1958])]
[tensor([[-0.5046, -0.6399]]), tensor([3]), tensor([1814])]
[tensor([[-0.5349,  0.2365]]), tensor([2]), tensor([1086])]
[tensor([[-0.1310,  0.1158]]), tensor([0]), tensor([321])]
[tensor([[-0.2085,  0.0727]]), tensor([0]), tensor([422])]
[tensor([[ 0.1263, -0.1597]]), tensor([0]), tensor([142])]
[tensor([[-0.1387,  0.3769]]), tensor([1]), tensor([894])]
[tensor([[-0.0500,  0.8009]]), tensor([3]), tensor([1924])]
[tensor([[-0.6907,  0.6448]]), tensor([4]), tensor([2016])]
[tensor([[-0.2817,  0.5136]]), tensor([2]), tensor([1267])]
[tensor([[-0.4257,  0.8338]]), tensor([4]), tensor([2411])]
new dataloader
[tensor([[-0.3583, -0.6944]]), tensor([3]), tensor([1610])]
[tensor([[-0.6623, -0.3790]]), tensor([3]), tensor([1958])]
[tensor([[-0.5046, -0.6399]]), tensor([3]), tensor([1814])]
[tensor([[-0.5349,  0.2365]]), tensor([2]), tensor([1086])]
[tensor([[-0.1310,  0.1158]]), tensor([0]), tensor([321])]
[tensor([[-0.2085,  0.0727]]), tensor([0]), tensor([422])]
[tensor([[ 0.1263, -0.1597]]), tensor([0]), tensor([142])]
[tensor([[-0.1387,  0.3769]]), tensor([1]), tensor([894])]
[tensor([[-0.0500,  0.8009]]), tensor([3]), tensor([1924])]
[tensor([[-0.6907,  0.6448]]), tensor([4]), tensor([2016])]
[tensor([[-0.2817,  0.5136]]), tensor([2]), tensor([1267])]
[tensor([[-0.4257,  0.8338]]), tensor([4]), tensor([2411])]

which is what was desired. However, I think jodag's main answer is still better; this was just a quick hack that works for now ;)
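A less global variant of the same seeding trick is to hand the DataLoader its own seeded generator (the `generator` argument, available in recent PyTorch), so other uses of torch's RNG are untouched. A minimal sketch, again with a toy stand-in dataset:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# toy stand-in dataset (assumption for illustration)
dataset = TensorDataset(torch.arange(8).unsqueeze(1))

def make_loader():
    # an identically-seeded generator feeds the internal RandomSampler,
    # so every freshly built loader shuffles into the same order
    g = torch.Generator().manual_seed(0)
    return DataLoader(dataset, batch_size=2, shuffle=True, generator=g)

run1 = [batch[0].flatten().tolist() for batch in make_loader()]
run2 = [batch[0].flatten().tolist() for batch in make_loader()]
print(run1 == run2)  # → True
```

This keeps the determinism scoped to the loader itself rather than re-seeding the global RNG before each epoch.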

Answered 2019-12-16T00:18:30.727