
The query:

SELECT "replays_game".*
FROM "replays_game"
INNER JOIN
 "replays_playeringame" ON "replays_game"."id" = "replays_playeringame"."game_id"
WHERE "replays_playeringame"."player_id" = 50027

If I SET enable_seqscan = off, then it does the fast thing, namely:

QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..27349.80 rows=3395 width=72) (actual time=28.726..65.056 rows=3398 loops=1)
   ->  Index Scan using replays_playeringame_player_id on replays_playeringame  (cost=0.00..8934.43 rows=3395 width=4) (actual time=0.019..2.412 rows=3398 loops=1)
         Index Cond: (player_id = 50027)
   ->  Index Scan using replays_game_pkey on replays_game  (cost=0.00..5.41 rows=1 width=72) (actual time=0.017..0.017 rows=1 loops=3398)
         Index Cond: (id = replays_playeringame.game_id)
 Total runtime: 65.437 ms

But without the dreaded enable_seqscan, it chooses to do something slower:

QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=7330.18..18145.24 rows=3395 width=72) (actual time=92.380..535.422 rows=3398 loops=1)
   Hash Cond: (replays_playeringame.game_id = replays_game.id)
   ->  Index Scan using replays_playeringame_player_id on replays_playeringame  (cost=0.00..8934.43 rows=3395 width=4) (actual time=0.020..2.899 rows=3398 loops=1)
         Index Cond: (player_id = 50027)
   ->  Hash  (cost=3668.08..3668.08 rows=151208 width=72) (actual time=90.842..90.842 rows=151208 loops=1)
         Buckets: 1024  Batches: 32 (originally 16)  Memory Usage: 1025kB
         ->  Seq Scan on replays_game  (cost=0.00..3668.08 rows=151208 width=72) (actual time=0.020..29.061 rows=151208 loops=1)
 Total runtime: 535.821 ms

Here are the relevant indexes:

Index "public.replays_game_pkey"
 Column |  Type   | Definition
--------+---------+------------
 id     | integer | id
primary key, btree, for table "public.replays_game"

Index "public.replays_playeringame_player_id"
  Column   |  Type   | Definition
-----------+---------+------------
 player_id | integer | player_id
btree, for table "public.replays_playeringame"

So my question is: what am I doing wrong, that Postgres mis-estimates the relative cost of the two ways of joining? I see in the cost estimates that it thinks the hash join will be faster, and its estimate of the index-join's cost is off by a factor of 500.

How can I give Postgres more of a clue? I did run VACUUM ANALYZE immediately before running all of the above.
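
If coarse statistics were the cause, one standard lever is the per-column statistics target; a minimal sketch (the value 1000 is just an illustrative choice, not something from the original post):

-- collect a finer-grained histogram for the skewed column (the default target is 100)
ALTER TABLE replays_playeringame ALTER COLUMN player_id SET STATISTICS 1000;
ANALYZE replays_playeringame;  -- rebuild the statistics with the new target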

Interestingly, if I run this query for a player with a smaller number of games, Postgres chooses to do the index scan + nested loop. So something about the large number of games tickles this undesired behavior where the relative estimated cost is out of line with the actual cost.

Finally, should I be using Postgres at all? I don't want to have to become an expert in database tuning, so I'm looking for a database that will perform reasonably well with a conscientious developer's level of attention, as opposed to a dedicated DBA's. I'm afraid that if I stick with Postgres I will face a steady stream of issues like this that will force me to become a Postgres expert, and perhaps another DB would be more forgiving of a more casual approach.


A Postgres expert (RhodiumToad) reviewed my full database settings (http://pastebin.com/77QuiQSp) and recommended set cpu_tuple_cost = 0.1. This gave a dramatic speedup: http://pastebin.com/nTHvSHVd
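
For reference, a parameter like this can be tried out per session and then made persistent; a minimal sketch (the database name is made up):

SET cpu_tuple_cost = 0.1;                        -- experiment in the current session only
ALTER DATABASE replays SET cpu_tuple_cost = 0.1; -- persist it for one database ('replays' is hypothetical)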

Alternatively, switching to MySQL also solved the problem pretty nicely. I have default installations of both MySQL and Postgres on my OS X box, and MySQL is 2x faster, comparing queries that have been "warmed up" by executing the query repeatedly. On "cold" queries, i.e. the first time a given query is executed, MySQL is 5 to 150 times faster. Cold-query performance is quite important for my particular application.

As far as I'm concerned, the big question is still open: will Postgres require more fiddling and configuration than MySQL to run well? Consider, for example, that none of the suggestions offered by the commenters here worked.


4 Answers


My guess is that you are using the default random_page_cost = 4, which is too high and makes index scans look too costly.

I tried to reconstruct the two tables with this script:

CREATE TABLE replays_game (
    id integer NOT NULL,
    PRIMARY KEY (id)
);

CREATE TABLE replays_playeringame (
    player_id integer NOT NULL,
    game_id integer NOT NULL,
    PRIMARY KEY (player_id, game_id),
    CONSTRAINT replays_playeringame_game_fkey
        FOREIGN KEY (game_id) REFERENCES replays_game (id)
);

CREATE INDEX ix_replays_playeringame_game_id
    ON replays_playeringame (game_id);

-- 150k games
INSERT INTO replays_game
SELECT generate_series(1, 150000);

-- ~150k players, ~2 games each
INSERT INTO replays_playeringame
select trunc(random() * 149999 + 1), generate_series(1, 150000);

INSERT INTO replays_playeringame
SELECT *
FROM
    (
        SELECT
            trunc(random() * 149999 + 1) as player_id,
            generate_series(1, 150000) as game_id
    ) AS t
WHERE
    NOT EXISTS (
        SELECT 1
        FROM replays_playeringame
        WHERE
            t.player_id = replays_playeringame.player_id
            AND t.game_id = replays_playeringame.game_id
    )
;

-- the heavy player with 3000 games
INSERT INTO replays_playeringame
select 999999, generate_series(1, 3000);

With the default value of 4:

game=# set random_page_cost = 4;
SET
game=# explain analyse SELECT "replays_game".*
FROM "replays_game"
INNER JOIN "replays_playeringame" ON "replays_game"."id" = "replays_playeringame"."game_id"
WHERE "replays_playeringame"."player_id" = 999999;
                                                                     QUERY PLAN                                                                      
-----------------------------------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=1483.54..4802.54 rows=3000 width=4) (actual time=3.640..110.212 rows=3000 loops=1)
   Hash Cond: (replays_game.id = replays_playeringame.game_id)
   ->  Seq Scan on replays_game  (cost=0.00..2164.00 rows=150000 width=4) (actual time=0.012..34.261 rows=150000 loops=1)
   ->  Hash  (cost=1446.04..1446.04 rows=3000 width=4) (actual time=3.598..3.598 rows=3000 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 106kB
         ->  Bitmap Heap Scan on replays_playeringame  (cost=67.54..1446.04 rows=3000 width=4) (actual time=0.586..2.041 rows=3000 loops=1)
               Recheck Cond: (player_id = 999999)
               ->  Bitmap Index Scan on replays_playeringame_pkey  (cost=0.00..66.79 rows=3000 width=0) (actual time=0.560..0.560 rows=3000 loops=1)
                     Index Cond: (player_id = 999999)
 Total runtime: 110.621 ms

After lowering it to 2:

game=# set random_page_cost = 2;
SET
game=# explain analyse SELECT "replays_game".*
FROM "replays_game"
INNER JOIN "replays_playeringame" ON "replays_game"."id" = "replays_playeringame"."game_id"
WHERE "replays_playeringame"."player_id" = 999999;
                                                                  QUERY PLAN                                                                   
-----------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=45.52..4444.86 rows=3000 width=4) (actual time=0.418..27.741 rows=3000 loops=1)
   ->  Bitmap Heap Scan on replays_playeringame  (cost=45.52..1424.02 rows=3000 width=4) (actual time=0.406..1.502 rows=3000 loops=1)
         Recheck Cond: (player_id = 999999)
         ->  Bitmap Index Scan on replays_playeringame_pkey  (cost=0.00..44.77 rows=3000 width=0) (actual time=0.388..0.388 rows=3000 loops=1)
               Index Cond: (player_id = 999999)
   ->  Index Scan using replays_game_pkey on replays_game  (cost=0.00..0.99 rows=1 width=4) (actual time=0.006..0.006 rows=1 loops=3000)
         Index Cond: (id = replays_playeringame.game_id)
 Total runtime: 28.542 ms
(8 rows)

If using SSD, I would lower it further to 1.1.
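
A sketch of the scopes such a setting can be applied at, from most to least temporary (the exact value should be validated against your own hardware; the database name is made up):

SET random_page_cost = 1.1;                        -- current session only
ALTER DATABASE replays SET random_page_cost = 1.1; -- one database ('replays' is hypothetical)
ALTER SYSTEM SET random_page_cost = 1.1;           -- server-wide (PostgreSQL 9.4+); then run SELECT pg_reload_conf();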

As for your last question, I really think you should stick with postgresql. I have experience with postgresql and mssql, and I needed to put in three times the effort into the latter for it to perform half as well as the former.

Answered 2012-05-17T22:29:39.137
10

I ran sayap's testbed code (thanks!), with the following modifications:

  • The code is run four times, with random_page_cost set to 8, 4, 2, 1, in that order. (The rpc=8 run is only intended to prime the disk buffer cache; one iteration is sketched after this list.)
  • The test is repeated with reduced (1/2, 1/4, 1/8) fractions of the hard hitters (respectively: 3K, 1K5, 750 and 375 hard hitters); the rest of the records is kept unchanged.
  • These 4*4 tests are repeated with a lower setting for work_mem (64K minimum).
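
A minimal sketch of one such test iteration, assuming the tables from sayap's script above (the work_mem values mirror the two settings in the result grids):

SET random_page_cost = 4;  -- also run with 8, 2 and 1
SET work_mem = '16MB';     -- or '64kB' for the low-memory runs
EXPLAIN ANALYZE
SELECT replays_game.*
FROM replays_game
INNER JOIN replays_playeringame
    ON replays_game.id = replays_playeringame.game_id
WHERE replays_playeringame.player_id = 999999;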

After this run, I did the same run, but scaled up tenfold: with 1M5 records (30K hard hitters).

At the moment, I am running the same test scaled up a hundredfold, but the initialization is rather slow...

Results: the entries in the cells are the total time in milliseconds, plus a string that denotes the chosen query plan. (Only a handful of plans occur.)

Original 3K / 150K  work_mem=16M

rpc     |       3K      |       1K5     |       750     |       375
--------+---------------+---------------+---------------+------------
8*      | 50.8  H.BBi.HS| 44.3  H.BBi.HS| 38.5  H.BBi.HS| 41.0  H.BBi.HS
4       | 43.6  H.BBi.HS| 48.6  H.BBi.HS| 4.34  NBBi    | 1.33  NBBi
2       | 6.92  NBBi    | 3.51  NBBi    | 4.61  NBBi    | 1.24  NBBi
1       | 6.43  NII     | 3.49  NII     | 4.19  NII     | 1.18  NII


Original 3K / 150K work_mem=64K

rpc     |       3K      |       1K5     |       750     |       375
--------+---------------+---------------+---------------+------------
8*      | 74.2  H.BBi.HS| 69.6  NBBi    | 62.4  H.BBi.HS| 66.9  H.BBi.HS
4       | 6.67  NBBi    | 8.53  NBBi    | 1.91  NBBi    | 2.32  NBBi
2       | 6.66  NBBi    | 3.6   NBBi    | 1.77  NBBi    | 0.93  NBBi
1       | 7.81  NII     | 3.26  NII     | 1.67  NII     | 0.86  NII


Scaled 10*: 30K / 1M5  work_mem=16M

rpc     |       30K     |       15K     |       7k5     |       3k75
--------+---------------+---------------+---------------+------------
8*      | 623   H.BBi.HS| 556   H.BBi.HS| 531   H.BBi.HS| 14.9  NBBi
4       | 56.4  M.I.sBBi| 54.3  NBBi    | 27.1  NBBi    | 19.1  NBBi
2       | 71.0  NBBi    | 18.9  NBBi    | 9.7   NBBi    | 9.7   NBBi
1       | 79.0  NII     | 35.7  NII     | 17.7  NII     | 9.3   NII


Scaled 10*: 30K / 1M5  work_mem=64K

rpc     |       30K     |       15K     |       7k5     |       3k75
--------+---------------+---------------+---------------+------------
8*      | 729   H.BBi.HS| 722   H.BBi.HS| 723   H.BBi.HS| 19.6  NBBi
4       | 55.5  M.I.sBBi| 41.5  NBBi    | 19.3  NBBi    | 13.3  NBBi
2       | 70.5  NBBi    | 41.0  NBBi    | 26.3  NBBi    | 10.7  NBBi
1       | 69.7  NII     | 38.5  NII     | 20.0  NII     | 9.0   NII

Scaled 100*: 300K / 15M  work_mem=16M

rpc     |       300k    |       150K    |       75k     |       37k5
--------+---------------+---------------+---------------+---------------
8*      |7314   H.BBi.HS|9422   H.BBi.HS|6175   H.BBi.HS| 122   N.BBi.I
4       | 569   M.I.sBBi| 199   M.I.sBBi| 142   M.I.sBBi| 105   N.BBi.I
2       | 527   M.I.sBBi| 372   N.BBi.I | 198   N.BBi.I | 110   N.BBi.I
1       | 694   NII     | 362   NII     | 190   NII     | 107   NII

Scaled 100*: 300K / 15M  work_mem=64K

rpc     |       300k    |       150k    |       75k     |       37k5
--------+---------------+---------------+---------------+------------
8*      |22800 H.BBi.HS |21920 H.BBi.HS | 20630 N.BBi.I |19669  H.BBi.HS
4       |22095 H.BBi.HS |  284 M.I.msBBi| 205   B.BBi.I |  116  N.BBi.I
2       |  528 M.I.msBBi|  399  N.BBi.I | 211   N.BBi.I |  110  N.BBi.I
1       |  718 NII      |  364  NII     | 200   NII     |  105  NII

[8*] Note: the RandomPageCost=8 runs were only intended as a prerun to prime the disk buffer cache; the results should be ignored.

Legend for node types:
N := Nested loop
M := Merge join
H := Hash (or Hash join)
B := Bitmap heap scan
Bi := Bitmap index scan
S := Seq scan
s := sort
m := materialise

Preliminary conclusions:

  • The "working set" for the original query is too small: all of it fits in core, causing the cost of page fetches to be grossly overestimated. Setting RPC to 2 (or 1) "solves" this problem, but once the query is scaled up, the page costs become dominant, and RPC=4 becomes comparable or even better.

  • Setting work_mem to a lower value is another way to make the optimizer shift to index scans (instead of hash + bitmap scans). The differences I found are smaller than what Sayap reported. Maybe I have a more effective cache size, or did he forget to prime the cache?

  • The optimizer is known to have problems with "skewed" distributions (and "skewed" or "spiked" multidimensional distributions). The test runs with 1/4 and 1/8 of the initial 3K/150K hard hitters show that this effect disappears once the "spike" is flattened out.
  • Something happens around the 2% boundary: 3000/150000 hard hitters yield different (worse) plans than those with <2% hard hitters. Could this be the granularity of the histograms? (A way to check this is sketched below.)
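
One way to probe the histogram hypothesis is to look at what ANALYZE actually recorded for the column; a minimal sketch using the standard pg_stats view:

SELECT n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'replays_playeringame'
  AND attname = 'player_id';

If the heavy player appears in most_common_vals, the planner knows its frequency exactly; otherwise it falls back to a flat per-value estimate, which could produce exactly this kind of threshold effect.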
Answered 2012-05-18T18:28:33.273

This is an old post, but quite helpful since I just ran into a similar problem.

Here is my finding so far. Given that there are 151208 rows in replays_game, the average cost of hitting one item is about log(151208) ≈ 12. Since there are 3395 records in replays_playeringame after filtering, the average cost is 12 * 3395, which is quite high. Also, the planner overestimates the page cost: it assumes all rows are randomly distributed, which is not the case. If that were true, a seq scan would indeed be much better. So basically, the query plan is trying to avoid the worst-case scenario.
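
The arithmetic can be sanity-checked directly in SQL (natural log, which is what gives the ~12 above):

SELECT ln(151208)        AS cost_per_lookup,  -- about 11.9, i.e. roughly 12
       3395 * ln(151208) AS all_lookups;      -- roughly 40500 "cost units"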

@dsjoerg's problem was that there was no index on replays_playeringame(game_id). An index scan would always be used if there were an index on replays_playeringame(game_id): the cost of scanning the index would become 3395 + 12 (or something close to that).

@Neil suggested an index on (player_id, game_id), which is close but not exact. The right index is either (game_id) or (game_id, player_id).
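
The corresponding DDL; the index name is arbitrary, and CONCURRENTLY is optional but avoids blocking writes while building on a live table:

CREATE INDEX CONCURRENTLY ix_replays_playeringame_game_id
    ON replays_playeringame (game_id);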

Answered 2016-03-10T02:34:22.470

You might get a better execution plan using a multicolumn index on replays_playeringame(player_id, game_id). This avoids having to use random page seeks to find the game IDs for a player ID.
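
A sketch of the suggested index (the name is made up):

CREATE INDEX ix_replays_playeringame_player_game
    ON replays_playeringame (player_id, game_id);

With game_id stored alongside player_id in the index entries, the matching game IDs sit on consecutive index pages; on versions with index-only scans (PostgreSQL 9.2+), they can often be read straight from the index without touching the heap at all.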

Answered 2012-05-17T23:02:51.123