
I am trying to train my model using 2 dataloaders from 2 different datasets.

I found how to set this up using cycle() and zip(), since my datasets have different lengths, in this question: How to iterate over two dataloaders simultaneously using pytorch?
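For context, a minimal sketch of that pattern (dataset sizes, tensor shapes, and batch size below are made up for illustration). Note that itertools.cycle() internally stores every item it yields so it can replay them, so wrapping a DataLoader in it keeps every batch tensor produced so far alive:

    from itertools import cycle
    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # toy datasets of different lengths (shapes are illustrative)
    train_loader_1 = DataLoader(TensorDataset(torch.randn(100, 3, 64, 64)),
                                batch_size=8, shuffle=True)
    train_loader_2 = DataLoader(TensorDataset(torch.randn(300, 3, 64, 64)),
                                batch_size=8, shuffle=True)

    # cycle() caches each batch from train_loader_1 so it can replay it,
    # which means those tensors are never freed
    for i, (x1, x2) in enumerate(zip(cycle(train_loader_1), train_loader_2)):
        pass  # training step would go here

During training this fails with the error below: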

  File "/home/Desktop/example/train.py", line 229, in train_2
    for i, (x1, x2) in enumerate(zip(cycle(train_loader_1), train_loader_2)):
  File "/home/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 346, in __next__
    data = self.dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/home/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/home/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 80, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "/home/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 80, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "/home/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 56, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 154140672 bytes. Error code 12 (Cannot allocate memory)

I tried to solve this by setting num_workers=0, reducing the batch size, using pin_memory=False and shuffle=False... but none of them worked... I have 256GB of RAM and 4 NVIDIA Tesla V100 GPUs.
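Those settings control worker processes, per-batch size, host-memory pinning, and sampling order, but none of them touch cycle()'s internal cache, which keeps growing regardless. For reference, a sketch of the kind of configuration tried (argument values are illustrative, not from the original post):

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    dataset_1 = TensorDataset(torch.randn(100, 3, 64, 64))  # toy stand-in

    # settings from the attempts above; they reduce per-fetch overhead
    # but do not stop cycle() from accumulating batches
    train_loader_1 = DataLoader(dataset_1, batch_size=2, shuffle=False,
                                num_workers=0, pin_memory=False)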

I tried running it by training with the 2 dataloaders separately rather than simultaneously, and that worked. However, for my project I need this parallel training on the 2 datasets...


1 Answer


Based on this discussion, instead of cycle() and zip() I avoid any errors by using:

    try:
        data, target = next(dataloader_iterator)
    except StopIteration:
        # the shorter dataloader ran out: create a fresh iterator over it
        dataloader_iterator = iter(dataloader)
        data, target = next(dataloader_iterator)
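For completeness, a minimal sketch of how this pattern can drive two loaders of different lengths in a single loop (loader and variable names are illustrative, not from the original post):

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # toy datasets of different lengths
    train_loader_1 = DataLoader(TensorDataset(torch.randn(100, 8),
                                              torch.randint(0, 2, (100,))),
                                batch_size=8, shuffle=True)
    train_loader_2 = DataLoader(TensorDataset(torch.randn(300, 8),
                                              torch.randint(0, 2, (300,))),
                                batch_size=8, shuffle=True)

    # drive the loop with the longer loader and restart the shorter one
    # whenever it runs out
    dataloader_iterator = iter(train_loader_1)
    for i, (x2, y2) in enumerate(train_loader_2):
        try:
            x1, y1 = next(dataloader_iterator)
        except StopIteration:
            dataloader_iterator = iter(train_loader_1)
            x1, y1 = next(dataloader_iterator)
        # ... forward/backward pass using both (x1, y1) and (x2, y2) ...

Unlike cycle(), a fresh iter() holds no references to old batches, so they can be garbage-collected, and it also re-shuffles the restarted loader when shuffle=True.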

Kudos to @srossi93 from this PyTorch forum post!

Answered 2019-09-11T13:39:57.050