I am trying to train a model for vehicle and person detection with Detectron2 and the COCO dataset, but I am running into a model loading problem.
I used posts on SO and the code from https://github.com/immersive-limit/coco-manager (the filter.py file) to filter the COCO dataset so that it only contains annotations and images for the classes "person", "car", "bike", "truck", and "bicycle". My directory structure is now:
main
- annotations:
  - instances_train2017_filtered.json
  - instances_val2017_filtered.json
- images:
  - train2017_filtered (lots of images inside)
  - val2017_filtered (lots of images inside)
Basically, the only thing I did here was remove the files and images that do not correspond to these classes and change their category IDs (so they run from 1 to 5).
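For reference, this is a minimal sketch of what that filtering/remapping step does (it is not the actual filter.py API from coco-manager; the function name, file names, and the class list are placeholders):
import json

def filter_coco(src_json, dst_json, keep_names):
    # keep only the chosen categories and renumber them to contiguous ids 1..N
    with open(src_json) as f:
        coco = json.load(f)
    keep_cats = [c for c in coco["categories"] if c["name"] in keep_names]
    id_map = {c["id"]: i + 1 for i, c in enumerate(keep_cats)}
    # drop annotations of other classes and remap the remaining category ids
    anns = [a for a in coco["annotations"] if a["category_id"] in id_map]
    for a in anns:
        a["category_id"] = id_map[a["category_id"]]
    # keep only images that still have at least one annotation
    keep_img_ids = {a["image_id"] for a in anns}
    imgs = [im for im in coco["images"] if im["id"] in keep_img_ids]
    for c in keep_cats:
        c["id"] = id_map[c["id"]]
    coco.update({"categories": keep_cats, "annotations": anns, "images": imgs})
    with open(dst_json, "w") as f:
        json.dump(coco, f)

# e.g. filter_coco("instances_train2017.json", "instances_train2017_filtered.json",
#                  ["person", "car", "bicycle", "truck"])  # plus the fifth kept class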
Then I used the code from the Detectron2 tutorial:
import random
import cv2
from detectron2.data import MetadataCatalog, DatasetCatalog
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer, DefaultPredictor
from detectron2.config import get_cfg
import os
from detectron2.model_zoo import model_zoo
from detectron2.utils.visualizer import Visualizer
register_coco_instances("train",
{},
"/home/jakub/Projects/coco/annotations/instances_train2017_filtered.json",
"/home/jakub/Projects/coco/images/train2017_filtered/")
register_coco_instances("val",
{},
"/home/jakub/Projects/coco/annotations/instances_val2017_filtered.json",
"/home/jakub/Projects/coco/images/val2017_filtered/")
metadata = MetadataCatalog.get("train")
dataset_dicts = DatasetCatalog.get("train")
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("train",)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 300
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 5
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.DATASETS.TEST = ("val", )
predictor = DefaultPredictor(cfg)
img = cv2.imread("demo/input.jpg")
outputs = predictor(img)
for d in random.sample(dataset_dicts, 1):
    im = cv2.imread(d["file_name"])
    outputs = predictor(im)
    v = Visualizer(im[:, :, ::-1],
                   metadata=metadata,
                   scale=0.8)
    out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
    cv2.imwrite('demo/output_retrained.jpg', out.get_image()[:, :, ::-1])
During training I get the following errors:
Unable to load 'roi_heads.box_predictor.cls_score.weight' to the model due to incompatible shapes: (81, 1024) in the checkpoint but (6, 1024) in the model!
Unable to load 'roi_heads.box_predictor.cls_score.bias' to the model due to incompatible shapes: (81,) in the checkpoint but (6,) in the model!
Unable to load 'roi_heads.box_predictor.bbox_pred.weight' to the model due to incompatible shapes: (320, 1024) in the checkpoint but (20, 1024) in the model!
Unable to load 'roi_heads.box_predictor.bbox_pred.bias' to the model due to incompatible shapes: (320,) in the checkpoint but (20,) in the model!
Unable to load 'roi_heads.mask_head.predictor.weight' to the model due to incompatible shapes: (80, 256, 1, 1) in the checkpoint but (5, 256, 1, 1) in the model!
Unable to load 'roi_heads.mask_head.predictor.bias' to the model due to incompatible shapes: (80,) in the checkpoint but (5,) in the model!
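The shapes in these messages line up with the class counts if I assume the usual Fast/Mask R-CNN head layout (a quick arithmetic sketch, not Detectron2 API):
coco_classes, my_classes = 80, 5
print(coco_classes + 1, my_classes + 1)   # cls_score rows: 81 vs 6 (one extra background class)
print(coco_classes * 4, my_classes * 4)   # bbox_pred rows: 320 vs 20 (4 box deltas per class)
print(coco_classes, my_classes)           # mask predictor channels: 80 vs 5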
Although the total loss decreases during training, the model cannot predict anything useful after training. I understand that I should get warnings because of the size mismatches (I reduced the number of classes), and from what I have seen on the internet this is normal, but I do not get a "Skipped" after each of these error lines. I think the model is not actually loading anything here, and I would like to know why and how to fix it.
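As a sanity check, here is a rough sketch of how I imagine one could test whether the backbone weights get loaded at all while the ROI-head weights are rejected (the parameter path backbone.bottom_up.stem.conv1 is my assumption for the R50-FPN model, and MODEL.DEVICE is set to "cpu" only so the check runs without a GPU):
import torch
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.modeling import build_model
from detectron2.checkpoint import DetectionCheckpointer

check_cfg = get_cfg()
check_cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
check_cfg.MODEL.ROI_HEADS.NUM_CLASSES = 5
check_cfg.MODEL.DEVICE = "cpu"  # assumption: run the check on CPU

model = build_model(check_cfg)  # randomly initialized weights
before = model.backbone.bottom_up.stem.conv1.weight.detach().clone()
DetectionCheckpointer(model).load(
    model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
after = model.backbone.bottom_up.stem.conv1.weight.detach()
print("backbone conv1 changed after loading:", not torch.allclose(before, after))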
EDIT
For comparison, similar behaviour in an almost identical situation was reported as an issue, but there each of these error lines ends with "Skipped", which effectively makes them warnings instead of errors: https://github.com/facebookresearch/detectron2/issues/196