
I am using a PyTorch implementation of SegNet that came with pretrained weights I found for object segmentation, and it works fine. Now I want to resume training from the weights I already have, using a new dataset of similar images. How can I do that?

I think I have to use the "train.py" file found in the repository, but I don't know what to write in place of the "fill the batch" comment. This is the relevant part of the code:

def train(epoch):
    model.train()

    # update learning rate
    lr = args.lr * (0.1 ** (epoch // 30))
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr

    # define a weighted loss (0 weight for 0 label)
    weights_list = [0]+[1 for i in range(17)]
    weights = np.asarray(weights_list)
    weigthtorch = torch.Tensor(weights_list)
    if(USE_CUDA):
        loss = nn.CrossEntropyLoss(weight=weigthtorch).cuda()
    else:
        loss = nn.CrossEntropyLoss(weight=weigthtorch)


    total_loss = 0

    # iteration over the batches
    batches = []
    for batch_idx,batch_files in enumerate(tqdm(batches)):

        # containers
        batch = np.zeros((args.batch_size,input_nbr, imsize, imsize), dtype=float)
        batch_labels = np.zeros((args.batch_size,imsize, imsize), dtype=int)

        # fill the batch
        # ... 
        # What should I write here?

        batch_th = Variable(torch.Tensor(batch))
        target_th = Variable(torch.LongTensor(batch_labels))

        if USE_CUDA:
            batch_th = batch_th.cuda()
            target_th = target_th.cuda()

        # initilize gradients
        optimizer.zero_grad()

        # predictions
        output = model(batch_th)

        # Loss
        output = output.view(output.size(0),output.size(1), -1)
        output = torch.transpose(output,1,2).contiguous()
        output = output.view(-1,output.size(2))
        target = target_th.view(-1)  # flatten the label map to match the reshaped output

        l_ = loss(output, target)
        total_loss += l_.cpu().data.numpy()
        l_.backward()
        optimizer.step()

    return total_loss/len(files)

1 Answer


If I had to guess, the author probably wrote a data feeder that extends the PyTorch Dataset class and is wrapped in a DataLoader. See https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
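For reference, a minimal sketch of what such a Dataset subclass could look like for image/mask pairs is below. The class name, directory layout and preprocessing are assumptions for illustration, not the repository's actual loader:

import os
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class SegmentationDataset(Dataset):
    """Hypothetical dataset that returns (image, label_mask) pairs."""
    def __init__(self, image_dir, mask_dir, imsize):
        self.image_dir = image_dir
        self.mask_dir = mask_dir
        self.imsize = imsize
        self.filenames = sorted(os.listdir(image_dir))

    def __len__(self):
        return len(self.filenames)

    def __getitem__(self, idx):
        name = self.filenames[idx]
        img = Image.open(os.path.join(self.image_dir, name)).convert('RGB')
        img = img.resize((self.imsize, self.imsize))
        mask = Image.open(os.path.join(self.mask_dir, name))
        mask = mask.resize((self.imsize, self.imsize), Image.NEAREST)
        # channels-first float image in [0, 1], integer label map
        img = np.asarray(img, dtype=np.float32).transpose(2, 0, 1) / 255.0
        mask = np.asarray(mask, dtype=np.int64)
        return torch.from_numpy(img), torch.from_numpy(mask)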

Near the bottom of that page you can see an example where they loop over their DataLoader:

for i_batch, sample_batched in enumerate(dataloader):

For example, for images this looks like:

trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=False, transform=transform_train)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batchSize, shuffle=True, num_workers=2)

for batch_idx, (inputs, targets) in enumerate(trainloader):
    # Using the pytorch data loader the inputs and targets are given 
    # automatically
    inputs, targets = inputs.cuda(), targets.cuda()
    optimizer.zero_grad()
    inputs, targets = Variable(inputs), Variable(targets)
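Plugged into the train() function from the question, the manual "fill the batch" step goes away because the loader already yields batched tensors. A rough sketch of that adaptation, assuming a dataset class like the SegmentationDataset sketched above (all names and paths are illustrative):

# Hypothetical: wrap a custom dataset in a DataLoader
train_set = SegmentationDataset('data/images', 'data/masks', imsize)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=args.batch_size,
                                           shuffle=True, num_workers=2)

for batch_idx, (batch_th, target_th) in enumerate(tqdm(train_loader)):
    if USE_CUDA:
        batch_th, target_th = batch_th.cuda(), target_th.cuda()

    optimizer.zero_grad()
    output = model(batch_th)
    # ... reshape output / target and compute the loss exactly as in the original loop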

I don't know exactly how the author loads his files. You can follow this tutorial to build your own Dataloader: https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
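As for resuming from the pretrained values you already have, the usual pattern is to load the saved state dict into the model before the training loop starts. A minimal sketch, assuming the weights were saved with torch.save(model.state_dict(), ...) (the file name here is a placeholder):

# Hypothetical checkpoint file; point this at your pretrained weights
state = torch.load('segnet_pretrained.pth')
model.load_state_dict(state)
if USE_CUDA:
    model = model.cuda()

num_epochs = 30  # example value
for epoch in range(num_epochs):
    train(epoch)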

Answered 2018-05-09T11:08:29.540