
I want to train a VGG16 model with Horovod PyTorch on 4 GPUs. Instead of using the CIFAR10 dataset from torchvision.datasets.CIFAR10, I want to split the dataset myself, so I downloaded it from the official website and split it. This is how I split the data:

if __name__ == '__main__':
    import pickle

    import numpy as np

    train_data, train_label = [], []
    test_data, test_label = [], []
    # Each of the 5 CIFAR-10 batches holds 10000 images: the first 8000 per
    # batch go to the training set, the last 2000 to the test set.
    for i in range(1, 6):
        with open('/Users/wangqipeng/Downloads/cifar-10-batches-py/data_batch_{}'.format(i), 'rb') as f:
            b = pickle.load(f, encoding='bytes')
        train_data.extend(b[b'data'].tolist()[:8000])
        train_label.extend(b[b'labels'][:8000])
        test_data.extend(b[b'data'].tolist()[8000:])
        test_label.extend(b[b'labels'][8000:])
    num_train = len(train_data)
    num_test = len(test_data)
    print(num_train, num_test)
    train_data = np.array(train_data)
    test_data = np.array(test_data)
    # Shard the training set into 4 equal, contiguous pieces, one per GPU rank.
    for i in range(4):
        with open('/Users/wangqipeng/Downloads/train_{}'.format(i), 'wb') as f:
            d = {b'data': train_data[int(0.25 * i * num_train): int(0.25 * (i + 1) * num_train)],
                 b'labels': train_label[int(0.25 * i * num_train): int(0.25 * (i + 1) * num_train)]}
            pickle.dump(d, f)
    with open('/Users/wangqipeng/Downloads/test', 'wb') as f:
        d = {b'data': test_data,
             b'labels': test_label}
        pickle.dump(d, f)
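
One detail worth noting about these shards (a quick hedged sanity check, reusing the hypothetical paths above): the pixel values written out here are still the raw CIFAR-10 values in [0, 255], not scaled or normalized floats.

import pickle

import numpy as np

# Hypothetical path: one of the shards written by the split script above.
with open('/Users/wangqipeng/Downloads/train_0', 'rb') as f:
    d = pickle.load(f)

data = np.asarray(d[b'data'])
print(data.shape)              # (10000, 3072): 10000 flat 3x32x32 images
print(data.min(), data.max())  # 0 255: raw, un-normalized pixel intensities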

However, I find that if I use the dataset I downloaded from the official website, I get an exploding-gradient problem: after a few iterations the loss grows and then becomes "nan". This is how I read the dataset:

class DataSet(torch.utils.data.Dataset):
    def __init__(self, path):
        self.dataset = unpickle(path)  # the pickle-loading helper from the CIFAR-10 website

    def __getitem__(self, index):
        data = torch.tensor(
            self.dataset[b'data'][index], dtype=torch.float32).resize(3, 32, 32)
        return data, torch.tensor(self.dataset[b'labels'][index])

    def __len__(self):
        return len(self.dataset[b'data'])

train_dataset = DataSet("./cifar10/train_" + str(hvd.rank()))
train_loader = torch.utils.data.DataLoader(
    train_dataset, batch_size=args.batch_size, sampler=None, **kwargs)

If I print the loss at every iteration, I see something like this:

Mon Nov  9 11:28:29 2020[0]<stdout>:epoch 0 iter[ 0 / 313 ] loss 7.725658416748047 accuracy 5.46875
Mon Nov  9 11:28:29 2020[0]<stdout>:epoch 0 iter[ 1 / 313 ] loss 15.312677383422852 accuracy 8.59375
Mon Nov  9 11:28:29 2020[0]<stdout>:epoch 0 iter[ 2 / 313 ] loss 16.333066940307617 accuracy 9.375
Mon Nov  9 11:28:30 2020[0]<stdout>:epoch 0 iter[ 3 / 313 ] loss 15.549728393554688 accuracy 9.9609375
Mon Nov  9 11:28:30 2020[0]<stdout>:epoch 0 iter[ 4 / 313 ] loss 14.090616226196289 accuracy 9.843750298023224
Mon Nov  9 11:28:31 2020[0]<stdout>:epoch 0 iter[ 5 / 313 ] loss 12.310989379882812 accuracy 9.63541641831398
Mon Nov  9 11:28:31 2020[0]<stdout>:epoch 0 iter[ 6 / 313 ] loss 11.578919410705566 accuracy 9.15178582072258
Mon Nov  9 11:28:31 2020[0]<stdout>:epoch 0 iter[ 7 / 313 ] loss 13.210229873657227 accuracy 8.7890625
Mon Nov  9 11:28:32 2020[0]<stdout>:epoch 0 iter[ 8 / 313 ] loss 764.713623046875 accuracy 9.28819477558136
Mon Nov  9 11:28:32 2020[0]<stdout>:epoch 0 iter[ 9 / 313 ] loss 4.590414250749922e+20 accuracy 8.984375
Mon Nov  9 11:28:32 2020[0]<stdout>:epoch 0 iter[ 10 / 313 ] loss nan accuracy 9.446022659540176
Mon Nov  9 11:28:33 2020[0]<stdout>:epoch 0 iter[ 11 / 313 ] loss nan accuracy 10.09114608168602
Mon Nov  9 11:28:33 2020[0]<stdout>:epoch 0 iter[ 12 / 313 ] loss nan accuracy 10.39663478732109

However, if I use the dataset from torchvision, everything is fine:

train_dataset = datasets.CIFAR10(
    args.train_dir, download=True,
    transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ]))
train_sampler = torch.utils.data.distributed.DistributedSampler(
    train_dataset, num_replicas=hvd.size(), rank=hvd.rank())
train_loader = torch.utils.data.DataLoader(
    train_dataset, batch_size=args.batch_size, sampler=train_sampler, **kwargs)
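
For contrast, here is a minimal sketch of what this torchvision pipeline does to a single image (using a synthetic array as a stand-in for a CIFAR-10 image): ToTensor converts HWC uint8 in [0, 255] to a CHW float tensor in [0.0, 1.0], and Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) then maps that to roughly [-1, 1], a much friendlier input range for VGG16 than raw 0-255 values.

import numpy as np
from torchvision import transforms

# Synthetic stand-in for one CIFAR-10 image: HWC, uint8, values in [0, 255].
raw = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)

pipeline = transforms.Compose([
    transforms.ToTensor(),                                   # -> CHW float32 in [0, 1]
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),  # -> roughly [-1, 1]
])

out = pipeline(raw)
print(out.shape, out.min().item(), out.max().item())  # torch.Size([3, 32, 32]) ~-1.0 ~1.0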

The DistributedSampler might also be a suspect. But as far as I understand, DistributedSampler only decides how the data is split across workers, so I don't know whether it could be the cause here; a quick standalone check is sketched below.
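
That intuition is easy to verify in isolation, since DistributedSampler accepts explicit num_replicas and rank arguments: it only partitions indices across replicas and never touches the values themselves. A minimal standalone sketch (toy dataset, no Horovod needed):

import torch
from torch.utils.data import TensorDataset
from torch.utils.data.distributed import DistributedSampler

# A toy dataset of 12 items, split across 4 replicas.
dataset = TensorDataset(torch.arange(12))

for rank in range(4):
    sampler = DistributedSampler(dataset, num_replicas=4, rank=rank, shuffle=False)
    print(rank, list(sampler))
# rank 0 -> [0, 4, 8], rank 1 -> [1, 5, 9], ...: disjoint index subsets only.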

Is there a problem with the way I read the CIFAR10 dataset? Or with the way I "reshape" it? Thanks for your help!


1 Answer


Maybe it's because I didn't normalize the dataset. Thanks everyone for your help!
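
For the record, a minimal sketch of the fix under that diagnosis, assuming the same pickle layout and unpickle helper as in the question (this also swaps the deprecated Tensor.resize for reshape):

import torch

class DataSet(torch.utils.data.Dataset):
    def __init__(self, path):
        self.dataset = unpickle(path)  # same helper as in the question

    def __getitem__(self, index):
        data = torch.tensor(self.dataset[b'data'][index],
                            dtype=torch.float32).reshape(3, 32, 32)
        data = data / 255.0        # raw pixels [0, 255] -> floats [0, 1]
        data = (data - 0.5) / 0.5  # -> roughly [-1, 1], matching Normalize((0.5, ...), (0.5, ...))
        return data, torch.tensor(self.dataset[b'labels'][index])

    def __len__(self):
        return len(self.dataset[b'data'])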

answered 2020-11-10T11:25:08.043