I am trying to fine-tune a pretrained model provided by TensorFlow using 2 GPUs, so I set the flags as follows:
python train.py --logtostderr \
  --train_split='train' \
  --model_variant='xception_65' \
  --atrous_rates=12 \
  --atrous_rates=24 \
  --atrous_rates=36 \
  --output_stride=8 \
  --decoder_output_stride=4 \
  --train_crop_size=513 \
  --train_crop_size=513 \
  --train_batch_size=2 \
  --training_number_of_steps=20000 \
  --fine_tune_batch_norm=false \
  --tf_initial_checkpoint="/storage/models-master/research/deeplab/datasets/pascal_voc_seg/init_models/deeplabv3_pascal_train_aug/model.ckpt" \
  --train_logdir="/storage/models-master/research/deeplab/datasets/pascal_voc_seg/exp/train_on_train_set/train/0831_2" \
  --dataset_dir="/storage/models-master/research/deeplab/datasets/pascal_voc_seg/tfrecord" \
  --image_pyramid=0.5 \
  --image_pyramid=0.25 \
  --image_pyramid=1.75 \
  --num_replicas=1 \
  --num_clones=2 \
  --num_ps_tasks=1
But I get the following error message:
Cannot assign a device for operation 'parallel_read/filenames/Greater': Operation was explicitly assigned to /job:worker/device:CPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:GPU:0, /job:localhost/replica:0/task:0/device:GPU:1 ]. Make sure the device specification refers to a valid device. [[Node: parallel_read/filenames/Greater = Greater[T=DT_INT32, _device="/job:worker/device:CPU:0"](parallel_read/filenames/Size, parallel_read/filenames/Greater/y)]]
It seems the op is being assigned to '/job:worker/device:CPU:0', but my setup is a single machine with 2 local GPUs (no distributed cluster). How can I resolve this?
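For what it's worth, here is a minimal session-config sketch (assuming TF 1.x, and not the DeepLab training code itself) of the two `tf.ConfigProto` options I understand to affect this kind of placement error: `log_device_placement` prints where every op lands, and `allow_soft_placement` lets TensorFlow fall back to an available device when the assigned one does not exist. I am not sure whether soft placement applies to an op pinned to a nonexistent job like `/job:worker`, so this is only a debugging aid, not a confirmed fix:

```python
import tensorflow as tf  # assumes TF 1.x

# Print every op's device assignment, and allow TF to fall back
# to an existing local device when the requested one is unavailable.
config = tf.ConfigProto(
    log_device_placement=True,
    allow_soft_placement=True,
)

with tf.Session(config=config) as sess:
    a = tf.constant([1.0, 2.0])  # toy op just to exercise placement
    print(sess.run(a))
```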