
Here is the Ceph status:

# ceph status
  cluster:
    id:     b683c5f1-fd15-4805-83c0-add6fbb7faae
    health: HEALTH_ERR
            1 backfillfull osd(s)
            8 pool(s) backfillfull
            50873/1090116 objects misplaced (4.667%)
            Degraded data redundancy: 34149/1090116 objects degraded (3.133%), 3 pgs degraded, 3 pgs undersized
            Degraded data redundancy (low space): 6 pgs backfill_toofull

  services:
    mon: 3 daemons, quorum tb-ceph-2-prod,tb-ceph-4-prod,tb-ceph-3-prod
    mgr: tb-ceph-1-prod(active)
    osd: 6 osds: 6 up, 6 in; 6 remapped pgs
    rgw: 4 daemons active

  data:
    pools:   8 pools, 232 pgs
    objects: 545.1 k objects, 153 GiB
    usage:   728 GiB used, 507 GiB / 1.2 TiB avail
    pgs:     34149/1090116 objects degraded (3.133%)
             50873/1090116 objects misplaced (4.667%)
             226 active+clean
             3   active+undersized+degraded+remapped+backfill_toofull
             3   active+remapped+backfill_toofull

  io:
    client:   286 KiB/s rd, 2 op/s rd, 0 op/s wr

Here is the OSD status:

# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS
 2   hdd 0.09769  1.00000 100 GiB  32 GiB  68 GiB 32.38 0.55  30
 5   hdd 0.32230  1.00000 330 GiB 220 GiB 110 GiB 66.71 1.13 122
 0   hdd 0.32230  1.00000 330 GiB 194 GiB 136 GiB 58.90 1.00 125
 1   hdd 0.04390  0.95001  45 GiB  43 GiB 2.5 GiB 94.53 1.60  11
 3   hdd 0.09769  1.00000 100 GiB  42 GiB  58 GiB 42.37 0.72  44
 4   hdd 0.32230  0.95001 330 GiB 196 GiB 134 GiB 59.43 1.01 129
                    TOTAL 1.2 TiB 728 GiB 507 GiB 58.94
MIN/MAX VAR: 0.55/1.60  STDDEV: 19.50

I have tried these commands:

 ceph osd pool set default.rgw.buckets.data pg_num 32
 ceph osd pool set default.rgw.buckets.data pgp_num 32

But that didn't help either. I think a pg_num of 32 is too small for my OSD count, but I'm not sure whether it's safe to increase it while the cluster health is HEALTH_ERR.
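
For reference, the pool's current values can be read back with the standard pool-get commands (a generic check, nothing cluster-specific assumed):

 ceph osd pool get default.rgw.buckets.data pg_num
 ceph osd pool get default.rgw.buckets.data pgp_num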


1 Answer


Your OSD #1 is full. Its disk is quite small; you should probably replace it with a 100G drive like the other two you are already using. To correct the situation, take a look at the Ceph control commands.
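
If replacing the drive isn't an immediate option, one stopgap sketch is to lower osd.1's reweight manually so CRUSH migrates PGs off it (the value 0.8 below is illustrative, not tuned for this cluster):

 ceph osd reweight 1 0.8    # reweight ranges 0.0-1.0; lower means fewer PGs on osd.1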

The command ceph osd reweight-by-utilization will adjust the weight of over-utilized OSDs and trigger a rebalancing of PGs. See also the blog post describing this kind of situation.
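
A minimal sketch of that workflow (the 120 argument means "OSDs above 120% of average utilization", which is also the default threshold):

 ceph osd test-reweight-by-utilization 120   # dry run: report planned weight changes only
 ceph osd reweight-by-utilization 120        # apply: lower reweight of over-utilized OSDs
 ceph -w                                     # watch the rebalance progress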
