
Cephadm Pacific v16.2.7: our Ceph cluster is stuck with PGs degraded and OSDs down. Cause: the OSDs are full.

Things we have tried:

Changed the values to the maximum possible combination (not sure whether this is correct?): backfillfull < nearfull, nearfull < full, and full < failsafe_full.
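For reference, a minimal sketch of how these thresholds are normally changed; the ratio values below are only examples (the stock defaults are nearfull 0.85, backfillfull 0.90, full 0.95, failsafe 0.97):

    # illustrative values only: raise them cautiously and revert once space is freed
    ceph osd set-nearfull-ratio 0.90
    ceph osd set-backfillfull-ratio 0.92
    ceph osd set-full-ratio 0.95
    # the failsafe threshold is a config option rather than a set-* command
    ceph config set osd osd_failsafe_full_ratio 0.97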

ceph-objectstore-tool - tried removing some PGs to reclaim space.

Tried mounting the OSD and deleting a PG to reclaim some space, but not sure how to do this with BlueStore.
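A minimal sketch of how this is typically done on BlueStore: there is nothing to mount, because ceph-objectstore-tool works directly against the stopped OSD's data path. The OSD id (osd.2), PG id (1.2f), and file paths below are placeholders, and the cephadm unit name depends on your cluster fsid:

    # stop the OSD, then open a shell with its data path mapped in (cephadm)
    systemctl stop ceph-<fsid>@osd.2.service
    cephadm shell --name osd.2

    # inside the shell: list PGs, export one as a backup, then remove it
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 --op list-pgs
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 --pgid 1.2f --op export --file /tmp/1.2f.export
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 --pgid 1.2f --op remove --force

Exporting before removing leaves a way to bring the PG back later (ceph-objectstore-tool --op import) if the data turns out to be needed.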

Global Recovery Event - stuck forever.


ceph -s 


cluster:
    id:     a089a4b8-2691-11ec-849f-07cde9cd0b53
    health: HEALTH_WARN
            6 failed cephadm daemon(s)
            1 hosts fail cephadm check
            Reduced data availability: 362 pgs inactive, 6 pgs down, 287 pgs peering, 48 pgs stale
            Degraded data redundancy: 5756984/22174447 objects degraded (25.962%), 91 pgs degraded, 84 pgs undersized
            13 daemons have recently crashed
            3 slow ops, oldest one blocked for 31 sec, daemons [mon.raspi4-8g-18,mon.raspi4-8g-20] have slow ops.

  services:
    mon: 5 daemons, quorum raspi4-8g-20,raspi4-8g-25,raspi4-8g-18,raspi4-8g-10,raspi4-4g-23 (age 2s)
    mgr: raspi4-8g-18.slyftn(active, since 3h), standbys: raspi4-8g-12.xuuxmp, raspi4-8g-10.udbcyy
    osd: 19 osds: 15 up (since 2h), 15 in (since 2h); 6 remapped pgs

  data:
    pools:   40 pools, 636 pgs
    objects: 4.28M objects, 4.9 TiB
    usage:   6.1 TiB used, 45 TiB / 51 TiB avail
    pgs:     56.918% pgs not active
             5756984/22174447 objects degraded (25.962%)
             2914/22174447 objects misplaced (0.013%)
             253 peering
             218 active+clean
             57  undersized+degraded+peered
             25  stale+peering
             20  stale+active+clean
             19  active+recovery_wait+undersized+degraded+remapped
             10  active+recovery_wait+degraded
             7   remapped+peering
             7   activating
             6   down
             2   active+undersized+remapped
             2   stale+remapped+peering
             2   undersized+remapped+peered
             2   activating+degraded
             1   active+remapped+backfill_wait
             1   active+recovering+undersized+degraded+remapped
             1   undersized+peered
             1   active+clean+scrubbing+deep
             1   active+undersized+degraded+remapped+backfill_wait
             1   stale+active+recovery_wait+undersized+degraded+remapped

  progress:
    Global Recovery Event (2h)
      [==========..................] (remaining: 4h)



1 Answer


Some versions of BlueStore are susceptible to the BlueFS log growing extremely large, beyond the point where the OSD can still boot. This state is indicated by boot-up taking a very long time and failing in the _replay function.

This can be fixed with: ceph-bluestore-tool fsck --path osd path --bluefs_replay_recovery=true

It is advised to first check whether the rescue process will succeed: ceph-bluestore-tool fsck --path osd path --bluefs_replay_recovery=true --bluefs_replay_recovery_disable_compact=true

If the above fsck is successful, the fix can be applied.
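For clarity, here are the same two commands written out against a concrete (illustrative) OSD data path, run while the OSD is stopped:

    # 1. check first: rescue with compaction disabled
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-2 \
        --bluefs_replay_recovery=true --bluefs_replay_recovery_disable_compact=true

    # 2. if the fsck above succeeds, apply the actual fix
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-2 \
        --bluefs_replay_recovery=true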

Special thanks; this has already been resolved with the help of dewDrive Cloud Backup faculty members.

Answered on 2022-02-13T12:44:06.197