
I am using a virtual machine on Windows 10 with this configuration:

Memory 7.8 GiB
Processor Intel® Core™ i5-6600K CPU @ 3.50GHz × 3
Graphics llvmpipe (LLVM 11.0.0, 256 bits)
Disk Capacity 80.5 GB
OS Ubuntu 20.10 64 Bit
Virtualization Oracle

I installed Docker for Ubuntu as described in the official documentation.
I pulled the Docker image as described in the Docker section of the YOLOv5 GitHub repo.
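
For reference, the pull was roughly this command (as given in the YOLOv5 Docker quickstart at the time; the exact tag may differ):

sudo docker pull ultralytics/yolov5:latest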
Since I don't have an NVIDIA GPU, I could not install the drivers or CUDA. I downloaded the aquarium dataset from Roboflow and put it into a folder named aquarium. I ran this command to start the image and mount my aquarium folder:

sudo docker run --ipc=host -it -v "$(pwd)"/Desktop/yolo/aquarium:/usr/src/app/aquarium ultralytics/yolov5:latest

and was greeted by this banner:

=============
== PyTorch ==
=============

NVIDIA Release 21.03 (build 21060478)
PyTorch Version 1.9.0a0+df837d0

Container image Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.

Copyright (c) 2014-2021 Facebook Inc.
Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)
Copyright (c) 2012-2014 Deepmind Technologies (Koray Kavukcuoglu)
Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
Copyright (c) 2011-2013 NYU (Clement Farabet)
Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)
Copyright (c) 2006 Idiap Research Institute (Samy Bengio)
Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)
Copyright (c) 2015 Google Inc.
Copyright (c) 2015 Yangqing Jia
Copyright (c) 2013-2016 The Caffe contributors
All rights reserved.

NVIDIA Deep Learning Profiler (dlprof) Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License. By pulling and using the container, you accept the terms and conditions of this license: https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

WARNING: The NVIDIA Driver was not detected. GPU functionality will not be available. Use 'nvidia-docker run' to start this container; see https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker

NOTE: MOFED driver for multi-node communication was not detected. Multi-node communication performance may be reduced.

So no errors there.
I installed pip and added wandb with pip install wandb. I logged in using wandb login and entered my API key.
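
Roughly, inside the container, that was:

pip install wandb
wandb login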

I ran the following command:

# python train.py --img 640 --batch 16 --epochs 10 --data ./aquarium/data.yaml --weights yolov5s.pt --project ip5 --name aquarium5 --nosave --cache

and got this output:

github: skipping check (Docker image)
YOLOv5  v5.0-14-g238583b torch 1.9.0a0+df837d0 CPU

Namespace(adam=False, artifact_alias='latest', batch_size=16, bbox_interval=-1, bucket='', cache_images=True, cfg='', data='./aquarium/data.yaml', device='', entity=None, epochs=10, evolve=False, exist_ok=False, global_rank=-1, hyp='data/hyp.scratch.yaml', image_weights=False, img_size=[640, 640], label_smoothing=0.0, linear_lr=False, local_rank=-1, multi_scale=False, name='aquarium5', noautoanchor=False, nosave=True, notest=False, project='ip5', quad=False, rect=False, resume=False, save_dir='ip5/aquarium5', save_period=-1, single_cls=False, sync_bn=False, total_batch_size=16, upload_dataset=False, weights='yolov5s.pt', workers=8, world_size=1)
tensorboard: Start with 'tensorboard --logdir ip5', view at http://localhost:6006/
hyperparameters: lr0=0.01, lrf=0.2, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0
wandb: Currently logged in as: pebs (use `wandb login --relogin` to force relogin)
wandb: Tracking run with wandb version 0.10.26
wandb: Syncing run aquarium5
wandb: ⭐️ View project at https://wandb.ai/pebs/ip5
wandb:  View run at https://wandb.ai/pebs/ip5/runs/1c2j80ii
wandb: Run data is saved locally in /usr/src/app/wandb/run-20210419_102642-1c2j80ii
wandb: Run `wandb offline` to turn off syncing.

Overriding model.yaml nc=80 with nc=7

                 from  n    params  module                                  arguments                     
  0                -1  1      3520  models.common.Focus                     [3, 32, 3]                    
  1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]                
  2                -1  1     18816  models.common.C3                        [64, 64, 1]                   
  3                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]               
  4                -1  1    156928  models.common.C3                        [128, 128, 3]                 
  5                -1  1    295424  models.common.Conv                      [128, 256, 3, 2]              
  6                -1  1    625152  models.common.C3                        [256, 256, 3]                 
  7                -1  1   1180672  models.common.Conv                      [256, 512, 3, 2]              
  8                -1  1    656896  models.common.SPP                       [512, 512, [5, 9, 13]]        
  9                -1  1   1182720  models.common.C3                        [512, 512, 1, False]          
 10                -1  1    131584  models.common.Conv                      [512, 256, 1, 1]              
 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']          
 12           [-1, 6]  1         0  models.common.Concat                    [1]                           
 13                -1  1    361984  models.common.C3                        [512, 256, 1, False]          
 14                -1  1     33024  models.common.Conv                      [256, 128, 1, 1]              
 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']          
 16           [-1, 4]  1         0  models.common.Concat                    [1]                           
 17                -1  1     90880  models.common.C3                        [256, 128, 1, False]          
 18                -1  1    147712  models.common.Conv                      [128, 128, 3, 2]              
 19          [-1, 14]  1         0  models.common.Concat                    [1]                           
 20                -1  1    296448  models.common.C3                        [256, 256, 1, False]          
 21                -1  1    590336  models.common.Conv                      [256, 256, 3, 2]              
 22          [-1, 10]  1         0  models.common.Concat                    [1]                           
 23                -1  1   1182720  models.common.C3                        [512, 512, 1, False]          
 24      [17, 20, 23]  1     32364  models.yolo.Detect                      [7, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
[W NNPACK.cpp:80] Could not initialize NNPACK! Reason: Unsupported hardware.
Model Summary: 283 layers, 7079724 parameters, 7079724 gradients, 16.4 GFLOPS

Transferred 356/362 items from yolov5s.pt
Scaled weight_decay = 0.0005
Optimizer groups: 62 .bias, 62 conv.weight, 59 other
train: Scanning '/usr/src/app/aquarium/train/labels.cache' images and labels... 448 found, 0 missing, 1 empty, 0 corrupted: 100%|█| 448/448 [00:00<?, ?
train: Caching images (0.4GB): 100%|████████████████████████████████████████████████████████████████████████████████| 448/448 [00:01<00:00, 313.77it/s]
val: Scanning '/usr/src/app/aquarium/valid/labels.cache' images and labels... 127 found, 0 missing, 0 empty, 0 corrupted: 100%|█| 127/127 [00:00<?, ?it
val: Caching images (0.1GB): 100%|██████████████████████████████████████████████████████████████████████████████████| 127/127 [00:00<00:00, 141.31it/s]
Plotting labels... 

autoanchor: Analyzing anchors... anchors/target = 5.17, Best Possible Recall (BPR) = 0.9997
Image sizes 640 train, 640 test
Using 3 dataloader workers
Logging results to ip5/aquarium5
Starting training for 10 epochs...

     Epoch   gpu_mem       box       obj       cls     total    labels  img_size
  0%|                                                                                                                           | 0/28 [00:00<?, ?it/s]Killed
root@cf40a6498016:~# /opt/conda/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 6 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

From this output I assume that 0 epochs were completed.
My data.yaml contains the following:

train: /usr/src/app/aquarium/train/images
val: /usr/src/app/aquarium/valid/images

nc: 7
names: ['fish', 'jellyfish', 'penguin', 'puffin', 'shark', 'starfish', 'stingray']

wandb.ai does not show any metrics, but I do have the files config.yaml, requirements.txt, wandb-metadata.json and wandb-summary.json.

Why am I not getting any output?
Was there actually no training at all?
If there was training, how can I use my model?


1 Answer


The problem was that the VM was running out of memory. The solution was to create 16 GB of swap space so the machine can use the virtual hard disk as RAM.
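
A minimal sketch of one common way to add 16 GB of swap on Ubuntu (assuming the virtual disk has enough free space; /swapfile is just a conventional path):

sudo fallocate -l 16G /swapfile                              # reserve 16 GB on the virtual disk
sudo chmod 600 /swapfile                                     # restrict access to root
sudo mkswap /swapfile                                        # format the file as swap
sudo swapon /swapfile                                        # enable it immediately
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # keep it across reboots

Afterwards, free -h should show the additional swap.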

answered 2021-05-03T14:19:07.170