I am running the head2head model provided in the GitHub repository here. When I run the code with the following command:

./scripts/train/train_on_target.sh Obama head2headDataset

The contents of train_on_target.sh are:

target_name=$1
dataset_name=$2

python train.py --checkpoints_dir checkpoints/$dataset_name \
                --target_name $target_name \
                --name head2head_$target_name \
                --dataroot datasets/$dataset_name/dataset \
                --serial_batches

I then get the following error:

Traceback (most recent call last):
  File "train.py", line 108, in <module>
    flow_ref, conf_ref, t_scales, n_frames_D)
  File "/home/nitin/head2head/util/util.py", line 48, in get_skipped_flows
    flow_ref_skipped[s], conf_ref_skipped[s] = flowNet(real_B[s][:,1:], real_B[s][:,:-1])
  File "/home/nitin/anaconda3/envs/head2head/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/nitin/anaconda3/envs/head2head/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/nitin/anaconda3/envs/head2head/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/nitin/head2head/models/flownet.py", line 38, in forward
    flow, conf = self.compute_flow_and_conf(input_A, input_B)
  File "/home/nitin/head2head/models/flownet.py", line 55, in compute_flow_and_conf
    flow1 = self.flowNet(data1)
  File "/home/nitin/anaconda3/envs/head2head/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/nitin/head2head/models/flownet2_pytorch/models.py", line 156, in forward
    flownetfusion_flow = self.flownetfusion(concat3)
  File "/home/nitin/anaconda3/envs/head2head/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/nitin/head2head/models/flownet2_pytorch/networks/FlowNetFusion.py", line 62, in forward
    concat0 = torch.cat((out_conv0,out_deconv0,flow1_up),1)
RuntimeError: CUDA out of memory. Tried to allocate 82.00 MiB (GPU 0; 5.80 GiB total capacity; 4.77 GiB already allocated; 73.56 MiB free; 4.88 GiB reserved in total by PyTorch)

I have already checked the batch size in options/base_options.py, and it is already set to 1. How can I resolve the above exception? My system has a 6 GB NVIDIA GTX 1660 Super GPU.


1 Answer


Data management:

You can try training on a reduced dataset to check whether the problem is a hardware limitation. Also, if it is an image dataset, you can lower the image dimensions by reducing the resolution (dpi); see the sketch below.
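For example, a crude way to test this is to downscale the extracted frames before training. This is only a sketch: the directory and the *.png pattern are assumptions about where head2head keeps its preprocessed frames, so adjust them to your actual dataset layout.

# Minimal sketch: downscale training frames in place to cut GPU memory use.
# The path and glob pattern are assumptions -- point them at wherever your
# extracted frames actually live under datasets/<dataset_name>/dataset.
from pathlib import Path
from PIL import Image

SCALE = 0.5  # halve width and height, i.e. roughly 4x fewer pixels per frame

for img_path in Path("datasets/head2headDataset/dataset").rglob("*.png"):
    with Image.open(img_path) as img:
        new_size = (int(img.width * SCALE), int(img.height * SCALE))
        img.resize(new_size, Image.BILINEAR).save(img_path)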

Model parameter management:

Another approach is to reduce the number of parameters in the model. The first suggestion would be to shrink the dense layer sizes, and then the other network hyperparameters, as illustrated below.
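As an illustration only (this toy network is not code from the head2head repository), halving the channel/dense width cuts the parameter count roughly fourfold, which also lowers activation memory during training:

# Illustrative sketch: how layer width drives parameter count.
import torch.nn as nn

def tiny_net(width: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(3, width, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(width, width * 2, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(width * 2, 10),  # dense head; its size also scales with width
    )

for w in (64, 32):
    n_params = sum(p.numel() for p in tiny_net(w).parameters())
    print(f"width={w}: {n_params:,} parameters")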

Answered 2021-03-05T12:17:34.833