
I configured Ceph with the recommended values (using the formula from the documentation). I have 3 OSDs, and my configuration (which I have put on the monitor node and on all 3 OSDs) includes:

osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 150
osd pool default pgp num = 150

When I run ceph status I get:

 health HEALTH_WARN
        too many PGs per OSD (1042 > max 300)

This is confusing for two reasons. First, because the recommended formula did not satisfy Ceph. Second, and most puzzling, because it says I have 1042 PGs per OSD when my configuration says 150.

What am I doing wrong?


2 Answers


You need to know 3 things before setting the PG count.

1. Number of OSDs

ceph osd ls

Sample Output:
 0
 1
 2
 
 Here the total number of OSDs is three.
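
If you just want the number itself, a quick convenience (not part of the original output above) is to count the lines:

ceph osd ls | wc -l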

2. Number of pools

ceph osd pool ls or rados lspools

Sample Output:
  rbd
  images
  vms
  volumes
  backups
     
Here the total number of pools is five.
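
The same trick works for pools; either listing command above can be piped through wc to get the count:

ceph osd pool ls | wc -l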

3. Replication count

ceph osd dump | grep repli

Sample Output:
 pool 0 'rbd' replicated size 2 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 38 flags hashpspool stripe_width 0
 pool 1 'images' replicated size 2 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 30 pgp_num 30 last_change 40 flags hashpspool stripe_width 0
 pool 2 'vms' replicated size 2 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 30 pgp_num 30 last_change 42 flags hashpspool stripe_width 0
 pool 3 'volumes' replicated size 2 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 30 pgp_num 30 last_change 36 flags hashpspool stripe_width 0
 pool 4 'backups' replicated size 2 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 30 pgp_num 30 last_change 44 flags hashpspool stripe_width 0

You can see that each pool has a replication count of two.
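
If you would rather query a single pool than grep the full dump, the replica count can also be read per pool; rbd here stands in for any of the pools listed above:

ceph osd pool get rbd size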

Now let's get to the calculations.

Calculations:

Total PG calculation:

Total PGs = (Total_number_of_OSD * 100) / max_replication_count

This result must be rounded up to the nearest power of 2.

Example:

Number of OSDs: 3
Replication count: 2

Total PGs = (3 * 100) / 2 = 150. Rounded up to the nearest power of 2, that is 256.

So the maximum recommended PG count is 256.
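
For reference, here is the same arithmetic as a small shell sketch (the OSD and replica counts are hard-coded to match this example; adjust them for your cluster):

osds=3; replicas=2
total=$(( osds * 100 / replicas ))                          # 150
pg=1; while [ $pg -lt $total ]; do pg=$(( pg * 2 )); done   # round up to a power of 2
echo $pg                                                    # prints 256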

You can also set the PG count per pool.

Total PG calculation per pool:

Total PGs = ((Total_number_of_OSD * 100) / max_replication_count) / pool count

This result must be rounded up to the nearest power of 2.

Example:

Number of OSDs: 3
Replication count: 2
Number of pools: 5

Total PGs = ((3 * 100) / 2) / 5 = 150 / 5 = 30. Rounded up to the nearest power of 2, that is 32.

So the total PG count per pool is 32.
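
The same sketch extended with the pool count (again hard-coded to match the example):

osds=3; replicas=2; pools=5
per_pool=$(( osds * 100 / replicas / pools ))                   # 30
pg=1; while [ $pg -lt $per_pool ]; do pg=$(( pg * 2 )); done    # round up to a power of 2
echo $pg                                                        # prints 32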

Powers of 2 table:

2^0     1
2^1     2
2^2     4
2^3     8
2^4     16
2^5     32
2^6     64
2^7     128
2^8     256
2^9     512
2^10    1024

Useful commands

ceph osd pool create <pool-name> <pg-number> <pgp-number> - To create a new pool

ceph osd pool get <pool-name> pg_num - To get number of PG in a pool

ceph osd pool get <pool-name> pgp_num - To get number of PGP in a pool

ceph osd pool set <pool-name> pg_num <number> - To increase number of PG in a pool

ceph osd pool set <pool-name> pgp_num <number> - To increase number of PGP in a pool

* Usually the pg_num and pgp_num values are the same; see the combined example below.
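
Putting those commands together for this example (the pool name testpool is purely illustrative, and 32 is the per-pool value calculated above):

ceph osd pool create testpool 32 32
ceph osd pool get testpool pg_num
ceph osd pool set testpool pg_num 64
ceph osd pool set testpool pgp_num 64
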
Answered 2017-01-01T19:42:25.853

How I fixed it in 12.2.4 Luminous:

Too many PGs per OSD (380 > max 200) may cause a lot of blocked requests.

First you need to set:

[global]

mon_max_pg_per_osd = 800  # < depends on your amount of PGs
osd max pg per osd hard ratio = 10 # < default is 2, try to set at least 5
mon allow pool delete = true # without it you can't remove a pool 

Then restart all MONs and OSDs, one by one.
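
On a systemd-based install the restarts typically look like this (the hostname ceph2 and OSD id 3 come from the admin socket paths shown below; substitute your own):

systemctl restart ceph-mon@ceph2
systemctl restart ceph-osd@3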

Check the values:

ceph --admin-daemon /var/run/ceph/ceph-mon.ceph2.asok config get  mon_max_pg_per_osd
ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok config get osd_max_pg_per_osd_hard_ratio

Now look at this:

rados lspools
ceph osd pool get .users.email pg_num

In my case the default pg_num was 128 or something like that (my cluster is 4 years old and has gone through a lot of upgrades and a lot of changes). You can reduce it like this.

Be careful:

ceph osd pool create .users.email.new 8
rados cppool .users.email .users.email.new
ceph osd pool delete .users.email .users.email --yes-i-really-really-mean-it
ceph osd pool rename .users.email.new .users.email
ceph osd pool application enable .users.email rgw
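
Afterwards it is worth confirming that the new pg_num took effect and that the health warning clears:

ceph osd pool get .users.email pg_num
ceph -s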

If that is not enough, try to find another pool you can cut down.

Answered 2018-04-03T21:36:01.103