Every now and then I get a pg inconsistent error on my cluster. As the docs suggest, I run ceph pg repair pg.id, and the command reports "instructing pg x on osd y to repair" and seems to work as expected. But the repair does not start immediately; what could be the reason? I have scrubbing running 24 hours a day, so at any given time there are at least 8-10 pgs being scrubbed or deep-scrubbed. Do pg operations like scrub and repair form a queue, so that my repair command is simply waiting its turn? Or is there some other issue behind this?
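One way to see whether a repair is just queued behind ongoing scrubs is to count how many pgs are currently in a scrubbing state. Below is a minimal Python sketch that tallies pg states from JSON in the shape that ceph pg dump pgs --format=json produces; the sample data here is hypothetical, and the real output has many more fields and can vary between Ceph releases.

```python
import json

# Hypothetical, trimmed-down sample of "ceph pg dump pgs --format=json" output;
# only pgid and state are kept for illustration.
pg_dump = json.dumps({
    "pg_stats": [
        {"pgid": "57.ee", "state": "active+clean+inconsistent"},
        {"pgid": "57.a1", "state": "active+clean+scrubbing+deep"},
        {"pgid": "57.b2", "state": "active+clean+scrubbing"},
        {"pgid": "57.c3", "state": "active+clean"},
    ]
})

stats = json.loads(pg_dump)["pg_stats"]
# A pg being scrubbed or deep-scrubbed has "scrubbing" in its state string.
scrubbing = [pg["pgid"] for pg in stats if "scrubbing" in pg["state"]]
inconsistent = [pg["pgid"] for pg in stats if "inconsistent" in pg["state"]]

print(f"{len(scrubbing)} pgs scrubbing: {scrubbing}")
print(f"{len(inconsistent)} pgs inconsistent: {inconsistent}")
```

If the scrubbing count stays pegged at the osd_max_scrubs limit, a requested repair (which runs as a special kind of scrub) would plausibly have to wait for a slot.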
Edit:
ceph health detail output:
pg 57.ee is active+clean+inconsistent, acting [16,46,74,59,5]
Output of rados list-inconsistent-obj 57.ee --format=json-pretty:
{
    "epoch": 55281,
    "inconsistents": [
        {
            "object": {
                "name": "10001a447c7.00005b03",
                "nspace": "",
                "locator": "",
                "snap": "head",
                "version": 150876
            },
            "errors": [],
            "union_shard_errors": [
                "read_error"
            ],
            "selected_object_info": {
                "oid": {
                    "oid": "10001a447c7.00005b03",
                    "key": "",
                    "snapid": -2,
                    "hash": 3954101486,
                    "max": 0,
                    "pool": 57,
                    "namespace": ""
                },
                "version": "55268'150876",
                "prior_version": "0'0",
                "last_reqid": "client.42086585.0:355736",
                "user_version": 150876,
                "size": 4194304,
                "mtime": "2021-03-15 21:52:43.651368",
                "local_mtime": "2021-03-15 21:52:45.399035",
                "lost": 0,
                "flags": [
                    "dirty",
                    "data_digest"
                ],
                "truncate_seq": 0,
                "truncate_size": 0,
                "data_digest": "0xf88f1537",
                "omap_digest": "0xffffffff",
                "expected_object_size": 0,
                "expected_write_size": 0,
                "alloc_hint_flags": 0,
                "manifest": {
                    "type": 0
                },
                "watchers": {}
            },
            "shards": [
                {
                    "osd": 5,
                    "primary": false,
                    "shard": 4,
                    "errors": [],
                    "size": 1400832,
                    "omap_digest": "0xffffffff",
                    "data_digest": "0x00000000"
                },
                {
                    "osd": 16,
                    "primary": true,
                    "shard": 0,
                    "errors": [],
                    "size": 1400832,
                    "omap_digest": "0xffffffff",
                    "data_digest": "0x00000000"
                },
                {
                    "osd": 46,
                    "primary": false,
                    "shard": 1,
                    "errors": [],
                    "size": 1400832,
                    "omap_digest": "0xffffffff",
                    "data_digest": "0x00000000"
                },
                {
                    "osd": 59,
                    "primary": false,
                    "shard": 3,
                    "errors": [
                        "read_error"
                    ],
                    "size": 1400832
                },
                {
                    "osd": 74,
                    "primary": false,
                    "shard": 2,
                    "errors": [],
                    "size": 1400832,
                    "omap_digest": "0xffffffff",
                    "data_digest": "0x00000000"
                }
            ]
        }
    ]
}
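For what it's worth, the failing shard can be picked out of this report programmatically rather than by eye. A small Python sketch, run against a trimmed copy of the JSON above (only the fields needed to locate the bad shard are kept):

```python
import json

# Trimmed copy of the rados list-inconsistent-obj report above.
report = json.loads("""
{
    "epoch": 55281,
    "inconsistents": [
        {
            "object": {"name": "10001a447c7.00005b03"},
            "union_shard_errors": ["read_error"],
            "shards": [
                {"osd": 5,  "shard": 4, "errors": []},
                {"osd": 16, "shard": 0, "errors": []},
                {"osd": 46, "shard": 1, "errors": []},
                {"osd": 59, "shard": 3, "errors": ["read_error"]},
                {"osd": 74, "shard": 2, "errors": []}
            ]
        }
    ]
}
""")

# Collect every shard that reports a per-shard error.
bad = [
    (obj["object"]["name"], s["shard"], s["osd"], s["errors"])
    for obj in report["inconsistents"]
    for s in obj["shards"]
    if s["errors"]
]
for name, shard, osd, errors in bad:
    print(f"object {name}: shard {shard} on osd.{osd} has {errors}")
```

This confirms that for this object only shard 3 (on osd.59) carries the read_error; the other four shards report clean.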
This pg is in an EC pool. When I run ceph pg repair 57.ee I get this output:
instructing pg 57.ees0 on osd.16 to repair
However, as you can see from the pg report, the inconsistent shard is on osd.59. I assume the "s0" at the end of the output refers to the first shard, so I also tried the repair command as:
ceph pg repair 57.ees3, but I got an error telling me it is an invalid command.
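Incidentally, the shard numbering in the report lines up with the acting set shown in ceph health detail: for an EC pg, position i in the acting list holds shard i. A quick sanity check against the numbers above:

```python
# Acting set of pg 57.ee, from "ceph health detail" above.
acting = [16, 46, 74, 59, 5]
# shard -> osd mapping, as reported by rados list-inconsistent-obj.
shards = {0: 16, 1: 46, 2: 74, 3: 59, 4: 5}

# For an EC pool, acting[i] should be the OSD holding shard i.
assert all(acting[i] == osd for i, osd in shards.items())
print(f"shard 3 (the one with read_error) lives on osd.{acting[3]}")
```

So the "57.ees0" in the repair output names the primary's shard (shard 0 on osd.16), while the actually damaged copy is shard 3 on osd.59.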