
I'm experiencing a strange problem with PostgreSQL performance for a query, using PostgreSQL 8.4.9. This query is selecting a set of points within a 3D volume, using a LEFT OUTER JOIN to add a related ID column where that related ID exists. Small changes in the x range can cause PostgreSQL to choose a different query plan, which takes the execution time from 0.01 seconds to 50 seconds. This is the query in question:

SELECT treenode.id AS id,
       treenode.parent_id AS parentid,
       (treenode.location).x AS x,
       (treenode.location).y AS y,
       (treenode.location).z AS z,
       treenode.confidence AS confidence,
       treenode.user_id AS user_id,
       treenode.radius AS radius,
       ((treenode.location).z - 50) AS z_diff,
       treenode_class_instance.class_instance_id AS skeleton_id
  FROM treenode LEFT OUTER JOIN
         (treenode_class_instance INNER JOIN
          class_instance ON treenode_class_instance.class_instance_id
                                                  = class_instance.id
                            AND class_instance.class_id = 7828307)
       ON (treenode_class_instance.treenode_id = treenode.id
           AND treenode_class_instance.relation_id = 7828321)
  WHERE treenode.project_id = 4
    AND (treenode.location).x >= 8000
    AND (treenode.location).x <= (8000 + 4736)
    AND (treenode.location).y >= 22244
    AND (treenode.location).y <= (22244 + 3248)
    AND (treenode.location).z >= 0
    AND (treenode.location).z <= 100
  ORDER BY parentid DESC, id, z_diff
  LIMIT 400;

That query takes nearly a minute and, if I add EXPLAIN to the front of that query, it seems to be using the following query plan:

 Limit  (cost=56185.16..56185.17 rows=1 width=89)
   ->  Sort  (cost=56185.16..56185.17 rows=1 width=89)
         Sort Key: treenode.parent_id, treenode.id, (((treenode.location).z - 50::double precision))
         ->  Nested Loop Left Join  (cost=6715.16..56185.15 rows=1 width=89)
               Join Filter: (treenode_class_instance.treenode_id = treenode.id)
               ->  Bitmap Heap Scan on treenode  (cost=148.55..184.16 rows=1 width=81)
                     Recheck Cond: (((location).x >= 8000::double precision) AND ((location).x <= 12736::double precision) AND ((location).z >= 0::double precision) AND ((location).z <= 100::double precision))
                     Filter: (((location).y >= 22244::double precision) AND ((location).y <= 25492::double precision) AND (project_id = 4))
                     ->  BitmapAnd  (cost=148.55..148.55 rows=9 width=0)
                           ->  Bitmap Index Scan on location_x_index  (cost=0.00..67.38 rows=2700 width=0)
                                 Index Cond: (((location).x >= 8000::double precision) AND ((location).x <= 12736::double precision))
                           ->  Bitmap Index Scan on location_z_index  (cost=0.00..80.91 rows=3253 width=0)
                                 Index Cond: (((location).z >= 0::double precision) AND ((location).z <= 100::double precision))
               ->  Hash Join  (cost=6566.61..53361.69 rows=211144 width=16)
                     Hash Cond: (treenode_class_instance.class_instance_id = class_instance.id)
                     ->  Seq Scan on treenode_class_instance  (cost=0.00..25323.79 rows=969285 width=16)
                           Filter: (relation_id = 7828321)
                     ->  Hash  (cost=5723.54..5723.54 rows=51366 width=8)
                           ->  Seq Scan on class_instance  (cost=0.00..5723.54 rows=51366 width=8)
                                 Filter: (class_id = 7828307)
(20 rows)

However, if I replace the 8000 in the x range condition with 10644, the query is performed in a fraction of a second and uses this query plan:

 Limit  (cost=58378.94..58378.95 rows=2 width=89)
   ->  Sort  (cost=58378.94..58378.95 rows=2 width=89)
         Sort Key: treenode.parent_id, treenode.id, (((treenode.location).z - 50::double precision))
         ->  Hash Left Join  (cost=57263.11..58378.93 rows=2 width=89)
               Hash Cond: (treenode.id = treenode_class_instance.treenode_id)
               ->  Bitmap Heap Scan on treenode  (cost=231.12..313.44 rows=2 width=81)
                     Recheck Cond: (((location).z >= 0::double precision) AND ((location).z <= 100::double precision) AND ((location).x >= 10644::double precision) AND ((location).x <= 15380::double precision))
                     Filter: (((location).y >= 22244::double precision) AND ((location).y <= 25492::double precision) AND (project_id = 4))
                     ->  BitmapAnd  (cost=231.12..231.12 rows=21 width=0)
                           ->  Bitmap Index Scan on location_z_index  (cost=0.00..80.91 rows=3253 width=0)
                                 Index Cond: (((location).z >= 0::double precision) AND ((location).z <= 100::double precision))
                           ->  Bitmap Index Scan on location_x_index  (cost=0.00..149.95 rows=6157 width=0)
                                 Index Cond: (((location).x >= 10644::double precision) AND ((location).x <= 15380::double precision))
               ->  Hash  (cost=53361.69..53361.69 rows=211144 width=16)
                     ->  Hash Join  (cost=6566.61..53361.69 rows=211144 width=16)
                           Hash Cond: (treenode_class_instance.class_instance_id = class_instance.id)
                           ->  Seq Scan on treenode_class_instance  (cost=0.00..25323.79 rows=969285 width=16)
                                 Filter: (relation_id = 7828321)
                           ->  Hash  (cost=5723.54..5723.54 rows=51366 width=8)
                                 ->  Seq Scan on class_instance  (cost=0.00..5723.54 rows=51366 width=8)
                                       Filter: (class_id = 7828307)
(21 rows)

I'm far from being an expert in parsing these query plans, but the clear difference seems to be that with one x range it uses a Hash Left Join for the LEFT OUTER JOIN (which is very fast) while with the other range it uses a Nested Loop Left Join (which seems to be very slow). In both cases the queries return about 90 rows. If I do SET ENABLE_NESTLOOP TO FALSE before the slow version of the query, it goes very fast, but I understand that using that setting in general is a bad idea.
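(For what it's worth, the setting can at least be scoped so that it only affects this one query, for example by wrapping it in a transaction with SET LOCAL instead of a session-wide SET; this is just a sketch of the idea:)

    BEGIN;
    SET LOCAL enable_nestloop = off;  -- reverts automatically at COMMIT/ROLLBACK
    -- ... the slow query from above ...
    COMMIT;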

Can I, for example, create a particular index in order to make it more likely that the query planner will choose the clearly more efficient strategy? Could anyone suggest why PostgreSQL's query planner should be choosing such a poor strategy for one of these queries? Below I have included details of the schema that may be helpful.


The treenode table has about 900,000 rows, and is defined as follows:

                                     Table "public.treenode"
    Column     |           Type           |                      Modifiers                       
---------------+--------------------------+------------------------------------------------------
 id            | bigint                   | not null default nextval('concept_id_seq'::regclass)
 user_id       | bigint                   | not null
 creation_time | timestamp with time zone | not null default now()
 edition_time  | timestamp with time zone | not null default now()
 project_id    | bigint                   | not null
 location      | double3d                 | not null
 parent_id     | bigint                   | 
 radius        | double precision         | not null default 0
 confidence    | integer                  | not null default 5
Indexes:
    "treenode_pkey" PRIMARY KEY, btree (id)
    "treenode_id_key" UNIQUE, btree (id)
    "location_x_index" btree (((location).x))
    "location_y_index" btree (((location).y))
    "location_z_index" btree (((location).z))
Foreign-key constraints:
    "treenode_parent_id_fkey" FOREIGN KEY (parent_id) REFERENCES treenode(id)
Referenced by:
    TABLE "treenode_class_instance" CONSTRAINT "treenode_class_instance_treenode_id_fkey" FOREIGN KEY (treenode_id) REFERENCES treenode(id) ON DELETE CASCADE
    TABLE "treenode" CONSTRAINT "treenode_parent_id_fkey" FOREIGN KEY (parent_id) REFERENCES treenode(id)
Triggers:
    on_edit_treenode BEFORE UPDATE ON treenode FOR EACH ROW EXECUTE PROCEDURE on_edit()
Inherits: location

The double3d composite type is defined as follows:

Composite type "public.double3d"
 Column |       Type       
--------+------------------
 x      | double precision
 y      | double precision
 z      | double precision

The other two tables involved in the join are treenode_class_instance:

                               Table "public.treenode_class_instance"
      Column       |           Type           |                      Modifiers                       
-------------------+--------------------------+------------------------------------------------------
 id                | bigint                   | not null default nextval('concept_id_seq'::regclass)
 user_id           | bigint                   | not null
 creation_time     | timestamp with time zone | not null default now()
 edition_time      | timestamp with time zone | not null default now()
 project_id        | bigint                   | not null
 relation_id       | bigint                   | not null
 treenode_id       | bigint                   | not null
 class_instance_id | bigint                   | not null
Indexes:
    "treenode_class_instance_pkey" PRIMARY KEY, btree (id)
    "treenode_class_instance_id_key" UNIQUE, btree (id)
    "idx_class_instance_id" btree (class_instance_id)
Foreign-key constraints:
    "treenode_class_instance_class_instance_id_fkey" FOREIGN KEY (class_instance_id) REFERENCES class_instance(id) ON DELETE CASCADE
    "treenode_class_instance_relation_id_fkey" FOREIGN KEY (relation_id) REFERENCES relation(id)
    "treenode_class_instance_treenode_id_fkey" FOREIGN KEY (treenode_id) REFERENCES treenode(id) ON DELETE CASCADE
    "treenode_class_instance_user_id_fkey" FOREIGN KEY (user_id) REFERENCES "user"(id)
Triggers:
    on_edit_treenode_class_instance BEFORE UPDATE ON treenode_class_instance FOR EACH ROW EXECUTE PROCEDURE on_edit()
Inherits: relation_instance

... and class_instance:

                                  Table "public.class_instance"
    Column     |           Type           |                      Modifiers                       
---------------+--------------------------+------------------------------------------------------
 id            | bigint                   | not null default nextval('concept_id_seq'::regclass)
 user_id       | bigint                   | not null
 creation_time | timestamp with time zone | not null default now()
 edition_time  | timestamp with time zone | not null default now()
 project_id    | bigint                   | not null
 class_id      | bigint                   | not null
 name          | character varying(255)   | not null
Indexes:
    "class_instance_pkey" PRIMARY KEY, btree (id)
    "class_instance_id_key" UNIQUE, btree (id)
Foreign-key constraints:
    "class_instance_class_id_fkey" FOREIGN KEY (class_id) REFERENCES class(id)
    "class_instance_user_id_fkey" FOREIGN KEY (user_id) REFERENCES "user"(id)
Referenced by:
    TABLE "class_instance_class_instance" CONSTRAINT "class_instance_class_instance_class_instance_a_fkey" FOREIGN KEY (class_instance_a) REFERENCES class_instance(id) ON DELETE CASCADE
    TABLE "class_instance_class_instance" CONSTRAINT "class_instance_class_instance_class_instance_b_fkey" FOREIGN KEY (class_instance_b) REFERENCES class_instance(id) ON DELETE CASCADE
    TABLE "connector_class_instance" CONSTRAINT "connector_class_instance_class_instance_id_fkey" FOREIGN KEY (class_instance_id) REFERENCES class_instance(id)
    TABLE "treenode_class_instance" CONSTRAINT "treenode_class_instance_class_instance_id_fkey" FOREIGN KEY (class_instance_id) REFERENCES class_instance(id) ON DELETE CASCADE
Triggers:
    on_edit_class_instance BEFORE UPDATE ON class_instance FOR EACH ROW EXECUTE PROCEDURE on_edit()
Inherits: concept

6 Answers


If the query planner makes bad decisions, it's mostly one of two things:

1. Statistics are inaccurate.

Do you run ANALYZE enough? Also popular in its combined form VACUUM ANALYZE. If autovacuum is on (which is the default in modern Postgres), ANALYZE is run automatically. But consider:

(The first two answers still apply for Postgres 12.)

If your table is big and the data distribution is irregular, raising the default_statistics_target may help. Or rather, just set the statistics target for relevant columns (basically those in the WHERE or JOIN clauses of your queries):

ALTER TABLE ... ALTER COLUMN ... SET STATISTICS 400;  -- calibrate number

The target can be set in the range 0 to 10000.

Run ANALYZE again on the relevant tables after that.
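Applied to the schema from the question, that might look like the following sketch (the target of 400 is only a starting value to calibrate, and the choice of columns here is mine):

    ALTER TABLE treenode_class_instance ALTER COLUMN relation_id SET STATISTICS 400;
    ALTER TABLE class_instance ALTER COLUMN class_id SET STATISTICS 400;
    ANALYZE treenode_class_instance;
    ANALYZE class_instance;
    ANALYZE treenode;  -- also refreshes the statistics kept for the expression indexes on (location).x/y/z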

2. The cost settings for planner estimates are off.

Read the chapter Planner Cost Constants in the manual.

Check the chapters on default_statistics_target and random_page_cost on this generally helpful PostgreSQL Wiki page.
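As a sketch of what experimenting with those settings can look like (the values are illustrative, not recommendations; calibrate against your own hardware):

    SHOW random_page_cost;       -- 4.0 is the default
    SET random_page_cost = 2;    -- try per-session first
    EXPLAIN ANALYZE ...;         -- re-run the slow query and compare the plan
    -- only persist a winning value in postgresql.conf afterwards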

There are many other possible reasons, but these are the most common ones by far.

answered 2011-11-22T15:07:25.953

I'm skeptical that this has anything to do with bad statistics, unless you consider the combination of database statistics and your custom data type.

My guess is that PostgreSQL is picking a nested loop join because it looks at the predicates (treenode.location).x >= 8000 AND (treenode.location).x <= (8000 + 4736) and does something funky in the arithmetic of your comparison. A nested loop is usually going to be used when you have a small amount of data in the inner side of the join.

But once you switch the constant to 10736 you get a different plan. It's always possible that the plan is of sufficient complexity that Genetic Query Optimization (GEQO) is kicking in, and you're seeing the side effects of non-deterministic plan building. There are enough discrepancies in the order of evaluation in the query to make me think that's what's going on.

One option to check this out would be to use parameterized/prepared statements instead of ad hoc code. Since you're working in a 3-dimensional space, you might also want to consider using PostGIS. While it might be overkill, it may also be able to provide you with the performance that you need to get these queries running properly.
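For example, a prepared-statement version of the bounding-box part of the query might be sketched like this (the statement name and parameter order are my own; on 8.4 the statement is planned once at PREPARE time, without knowledge of the actual parameter values, which is exactly what makes it a useful test):

    PREPARE treenodes_in_box(double precision, double precision,
                             double precision, double precision,
                             double precision, double precision) AS
      SELECT t.id, t.parent_id, (t.location).x, (t.location).y, (t.location).z
        FROM treenode t
       WHERE t.project_id = 4
         AND (t.location).x BETWEEN $1 AND $2
         AND (t.location).y BETWEEN $3 AND $4
         AND (t.location).z BETWEEN $5 AND $6;

    EXECUTE treenodes_in_box(8000, 12736, 22244, 25492, 0, 100);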

While forcing planner behavior isn't the best choice, sometimes we do end up making better decisions than the software.

answered 2011-11-23T02:42:13.783

What Erwin said about the statistics. Also:

ORDER BY parentid DESC, id, z_diff

Sorting on

parentid DESC, id, z

may give the optimizer a little more room to shuffle. (I don't think it will matter much, since it is the last term and the sort isn't that expensive anyway, but you could give it a try.)
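In the posted query that variant would read (an untested suggestion; since z_diff is just z minus a constant, the sort order of the last key is unchanged):

    ORDER BY parentid DESC, id, (treenode.location).z
    LIMIT 400;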

answered 2011-11-22T15:29:36.457

I'm not positive that this is the source of your problem, but it looks like some changes were made in the postgres query planner between versions 8.4.8 and 8.4.9. You could try using an older version and see if it makes a difference.

http://postgresql.1045698.n5.nabble.com/BUG-6275-Horrible-performance-regression-td4944891.html

Don't forget to re-analyze your tables if you change the version.

answered 2011-11-28T19:01:43.933

+1 for tuning the statistics target and running ANALYZE. And for PostGIS (for the OP).

Not quite relevant to the original question, but in case anyone arrives here looking for how to deal, in general, with inaccurate planner row-count estimates in complex queries that lead to undesired plans: an option may be to wrap part of the initial query into a function and set its ROWS option to a more or less expected value. I have never done that, but apparently it should work.
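A sketch of what that could look like (the function name and the ROWS value of 100 are invented for illustration):

    CREATE OR REPLACE FUNCTION treenodes_in_z_slice(z_min double precision,
                                                    z_max double precision)
      RETURNS SETOF treenode
      LANGUAGE sql STABLE
      ROWS 100  -- tells the planner roughly how many rows to expect
    AS $$
      SELECT * FROM treenode
       WHERE (location).z >= z_min AND (location).z <= z_max;
    $$;

The outer query then selects from treenodes_in_z_slice(0, 100), and the planner uses the declared estimate instead of its own guess.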

There are also row-estimate directives in pg_hint_plan. I wouldn't advise planner hinting in general, but adjusting row estimates is a softer option.

Finally, to enforce a nested-loop scan, one can sometimes do a LATERAL JOIN with LIMIT N, or just put OFFSET 0 inside a subquery. That will give you what you want. But note that it is a very rough trick. At some point it will lead to bad performance if conditions change, because the table grows or the data distribution simply shifts. But it may still be a fine option just to get some urgent relief for a legacy system.
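For the record, here is what the LATERAL variant looks like, on a hypothetical pair of tables rather than the OP's schema:

    SELECT *
      FROM big_table b
      JOIN LATERAL (
             SELECT s.id, s.payload
               FROM small_table s
              WHERE s.big_id = b.id
              LIMIT 10     -- per-row limit; nudges the planner toward a nested loop
           ) sub ON true;

The OFFSET 0 form instead wraps a plain subquery with OFFSET 0, which prevents the planner from flattening it into the outer query.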

answered 2020-05-05T02:44:52.243

If the plan is bad, you can always fall back on the pg_hint_plan extension. It provides Oracle-style hints for PostgreSQL.

answered 2021-06-28T00:41:57.363