I tried semantic segmentation with DeepLab v3+, but the results are all black.
I deleted the original files and placed my own data into the corresponding ImageSets/, JPEGImages/, and SegmentationClass/ folders.
I prepared the SegmentationClassRaw images following the PASCAL VOC 2012 color conventions.
I edited build_voc2012_data.py and segmentation_dataset.py:
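For reference, the colormap-removal step boils down to the following (a minimal sketch, assuming PIL and NumPy; remove_colormap is a hypothetical helper of mine — the repo's remove_gt_colormap.py performs the same conversion):

```python
# Minimal sketch of building SegmentationClassRaw from VOC-style palette
# PNGs. Assumes PIL and NumPy; remove_colormap is a hypothetical helper.
import glob
import os

import numpy as np
from PIL import Image


def remove_colormap(in_dir, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    for path in glob.glob(os.path.join(in_dir, '*.png')):
        # Palette-mode PNG pixels are already class indices; converting to
        # a NumPy array drops the palette, leaving the raw label map.
        raw = np.array(Image.open(path)).astype(np.uint8)
        Image.fromarray(raw).save(os.path.join(out_dir, os.path.basename(path)))
```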
[build_voc2012_data.py]
FLAGS = tf.app.flags.FLAGS

tf.app.flags.DEFINE_string('image_folder',
                           './VOCdevkit/VOC2012/JPEGImages',
                           'Folder containing images.')

tf.app.flags.DEFINE_string(
    'semantic_segmentation_folder',
    './VOCdevkit/VOC2012/SegmentationClassRaw',
    'Folder containing semantic segmentation annotations.')

tf.app.flags.DEFINE_string(
    'list_folder',
    './VOCdevkit/VOC2012/ImageSets/Segmentation',
    'Folder containing lists for training and validation')

tf.app.flags.DEFINE_string(
    'output_dir',
    './tfrecord',
    'Path to save converted SSTable of TensorFlow examples.')

_NUM_SHARDS = 4
# add -->>
FLAGS.image_folder = "./pascal_voc_seg/VOCdevkit/VOC2012/JPEGImages"
FLAGS.semantic_segmentation_folder = "./pascal_voc_seg/VOCdevkit/VOC2012/SegmentationClassRaw"
FLAGS.list_folder = "./pascal_voc_seg/VOCdevkit/VOC2012/ImageSets/Segmentation"
FLAGS.image_format = "png"
FLAGS.output_dir = "./pascal_voc_seg/tfrecord"
# add --<<
[segmentation_dataset.py]
# add kani 20181115 -->>
_ORIGINAL_INFORMATION = DatasetDescriptor(
    splits_to_sizes={
        'train': 10,
        'trainval': 2,
        'val': 2,
    },
    num_classes=5,
    ignore_label=255,
)
# add kani 20181115 --<<
# mod kani 20181115 -->>
# _DATASETS_INFORMATION = {
#     'cityscapes': _CITYSCAPES_INFORMATION,
#     'pascal_voc_seg': _PASCAL_VOC_SEG_INFORMATION,
#     'ade20k': _ADE20K_INFORMATION,
# }
_DATASETS_INFORMATION = {
    'cityscapes': _CITYSCAPES_INFORMATION,
    'pascal_voc_seg': _PASCAL_VOC_SEG_INFORMATION,
    'ade20k': _ADE20K_INFORMATION,
    'original': _ORIGINAL_INFORMATION,
}
# mod kani 20181115 --<<
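Since the descriptor uses num_classes=5, one quick consistency check is that the raw label PNGs contain only the indices 0..4 plus the ignore label 255 (a diagnostic sketch, assuming PIL and NumPy; check_labels is a hypothetical helper of mine, not part of the repo):

```python
# Sanity-check that the raw label PNGs match the DatasetDescriptor above
# (num_classes=5, ignore_label=255). check_labels is a hypothetical helper.
import glob
import os

import numpy as np
from PIL import Image


def check_labels(label_dir, num_classes=5, ignore_label=255):
    bad = {}
    for path in glob.glob(os.path.join(label_dir, '*.png')):
        values = np.unique(np.array(Image.open(path)))
        invalid = [int(v) for v in values
                   if v >= num_classes and v != ignore_label]
        if invalid:
            bad[path] = invalid
    return bad  # empty dict means every label file is in range
```

If this returns a non-empty dict, the labels and the descriptor disagree.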
I run train.py and vis.py like this:
[train.py command]
python train.py \
  --logtostderr \
  --train_split=trainval \
  --model_variant=xception_65 \
  --atrous_rates=3 \
  --atrous_rates=6 \
  --atrous_rates=9 \
  --output_stride=32 \
  --decoder_output_stride=4 \
  --train_crop_size=512 \
  --train_crop_size=512 \
  --train_batch_size=2 \
  --training_number_of_steps=6000 \
  --fine_tune_batch_norm=false \
  --tf_initial_checkpoint="./datasets/pascal_voc_seg/init_models/deeplabv3_pascal_train_aug/model.ckpt" \
  --train_logdir="./datasets/pascal_voc_seg/exp/train_on_trainval_set/train" \
  --dataset_dir="./datasets/pascal_voc_seg/tfrecord" \
  --dataset=original
[vis.py command]
python vis.py \
  --logtostderr \
  --vis_split="val" \
  --model_variant="xception_65" \
  --atrous_rates=6 \
  --atrous_rates=12 \
  --atrous_rates=18 \
  --output_stride=16 \
  --decoder_output_stride=4 \
  --vis_crop_size=513 \
  --vis_crop_size=513 \
  --checkpoint_dir="./datasets/pascal_voc_seg/exp/train_on_trainval_set/train" \
  --vis_logdir="./datasets/pascal_voc_seg/exp/train_on_trainval_set/vis" \
  --dataset_dir="./datasets/pascal_voc_seg/tfrecord" \
  --max_number_of_iterations=1 \
  --dataset=original \
  --max_resize_value=512 \
  --min_resize_value=128
Both ran without errors, but when I checked the images under datasets/pascal_voc_seg/exp/train_on_trainval_set/vis/raw_segmentation_results/, they are all black. Why?
Is it because the training images are larger than 512x512? (The training images are very large: about 15000x13500.)
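One detail worth checking first: the PNGs under raw_segmentation_results/ store class indices (0..num_classes-1), so with only a handful of classes they look nearly black in an ordinary viewer even when the network predicts something. Counting non-zero pixels distinguishes "all background" from "just dark" (a sketch assuming PIL and NumPy; fraction_foreground is a hypothetical helper of mine):

```python
# Distinguish "all-background prediction" from "label map that merely looks
# black". fraction_foreground is a hypothetical helper, not part of the repo.
import numpy as np
from PIL import Image


def fraction_foreground(path):
    # Fraction of pixels whose predicted class index is non-zero
    # (i.e. not the background class 0).
    labels = np.array(Image.open(path))
    return float(np.count_nonzero(labels)) / labels.size
```

A result of exactly 0.0 for every file would mean the model truly predicts only background, rather than the images simply being hard to see.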
[My directory structure]
/tmp/models/research/deeplab
-README.md
-common.py
-datasets/
--__init__.py
--build_data.py
--convert_cityscapes.sh
--pascal_voc_seg/
---VOCdevkit/
----VOC2012/
-----Annotations/
-----ImageSets/
-----JPEGImages/
-----SegmentationClass/
-----SegmentationObject/
---VOCtrainval_11-May-2012.tar
---exp/
----train_on_trainval_set/
-----train/
------train.py
-----vis/
------vis.py
---init_models/
----deeplabv3_pascal_train_aug/
-----frozen_inference_graph.pb
-----model.ckpt.data-00000-of-00001
-----model.ckpt.index
---tfrecord/
----build_voc_2012.py
--__pycache__
--build_data.pyc
--download_and_convert_ade20k.sh
--remove_gt_colormap.py
--build_ade20k_data.py
--build_voc2012_data.py
--download_and_convert_voc2012.sh
--segmentation_dataset.py
--build_cityscapes_data.py
--build_voc2012_data.py.org
-export_model.py
-local_test.sh
-model_test.py
-utils/
-__init__.py
-common_test.py
-deeplab_demo.ipynb
-g3doc/
-local_test_mobilenetv2.sh
-train.py
-vis.py
-__pycache__
-core/
-eval.py
-input_preprocess.py
-model.py
-train.py.bk