EXPLAIN ANALYZE SELECT     "alerts"."id", 
            "alerts"."created_at", 
            't1'::text AS src_table 
 FROM       "alerts" 
 INNER JOIN "devices" 
 ON         "devices"."id" = "alerts"."device_id" 
 INNER JOIN "sites" 
 ON         "sites"."id" = "devices"."site_id" 
 WHERE      "sites"."cloud_id" = 111
 AND        "alerts"."created_at" >= '2019-08-30'
 ORDER BY   "created_at" DESC LIMIT 9;

 Limit  (cost=1.15..36021.60 rows=9 width=16) (actual time=30.505..29495.765 rows=9 loops=1)
  ->  Nested Loop  (cost=1.15..232132.92 rows=58 width=16) (actual time=30.504..29495.755 rows=9 loops=1)
        ->  Nested Loop  (cost=0.86..213766.42 rows=57231 width=24) (actual time=0.029..29086.323 rows=88858 loops=1)
              ->  Index Scan Backward using alerts_created_at_index on alerts  (cost=0.43..85542.16 rows=57231 width=24) (actual time=0.014..88.137 rows=88858 loops=1)
                    Index Cond: (created_at >= '2019-08-30 00:00:00'::timestamp without time zone)
              ->  Index Scan using devices_pkey on devices  (cost=0.43..2.23 rows=1 width=16) (actual time=0.016..0.325 rows=1 loops=88858)
                    Index Cond: (id = alerts.device_id)
        ->  Index Scan using sites_pkey on sites  (cost=0.29..0.31 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=88858)
              Index Cond: (id = devices.site_id)
              Filter: (cloud_id = 7231)
              Rows Removed by Filter: 1
Total runtime: 29495.816 ms

Now we change it to LIMIT 10:

 EXPLAIN ANALYZE SELECT     "alerts"."id", 
            "alerts"."created_at", 
            't1'::text AS src_table 
 FROM       "alerts" 
 INNER JOIN "devices" 
 ON         "devices"."id" = "alerts"."device_id" 
 INNER JOIN "sites" 
 ON         "sites"."id" = "devices"."site_id" 
 WHERE      "sites"."cloud_id" = 111
 AND        "alerts"."created_at" >= '2019-08-30'
 ORDER BY   "created_at" DESC LIMIT 10;

Limit  (cost=39521.79..39521.81 rows=10 width=16) (actual time=1.557..1.559 rows=10 loops=1)
  ->  Sort  (cost=39521.79..39521.93 rows=58 width=16) (actual time=1.555..1.555 rows=10 loops=1)
        Sort Key: alerts.created_at
        Sort Method: quicksort  Memory: 25kB
        ->  Nested Loop  (cost=5.24..39520.53 rows=58 width=16) (actual time=0.150..1.543 rows=11 loops=1)
              ->  Nested Loop  (cost=4.81..16030.12 rows=2212 width=8) (actual time=0.137..0.643 rows=31 loops=1)
                    ->  Index Scan using sites_cloud_id_index on sites  (cost=0.29..64.53 rows=31 width=8) (actual time=0.014..0.057 rows=23 loops=1)
                          Index Cond: (cloud_id = 7231)
                    ->  Bitmap Heap Scan on devices  (cost=4.52..512.32 rows=270 width=16) (actual time=0.020..0.025 rows=1 loops=23)
                          Recheck Cond: (site_id = sites.id)
                          ->  Bitmap Index Scan on devices_site_id_index  (cost=0.00..4.46 rows=270 width=0) (actual time=0.006..0.006 rows=9 loops=23)
                                Index Cond: (site_id = sites.id)
              ->  Index Scan using alerts_device_id_index on alerts  (cost=0.43..10.59 rows=3 width=24) (actual time=0.024..0.028 rows=0 loops=31)
                    Index Cond: (device_id = devices.id)
                    Filter: (created_at >= '2019-08-30 00:00:00'::timestamp without time zone)
                    Rows Removed by Filter: 12
Total runtime: 1.603 ms

The alerts table has millions of records; the other tables have thousands.

I can already work around this simply by never using a LIMIT below 10. What I don't understand is why the LIMIT affects performance this way, and perhaps there is a better approach than hard-coding the magic number 10.


1 Answer


The number of result rows affects the PostgreSQL optimizer, because the plan that returns the first few rows quickly is not necessarily the plan that returns the whole result as fast as possible.

In your case, PostgreSQL thinks that for small values of LIMIT, it will be faster to scan the alerts table in the order of the ORDER BY clause using an index and to join the other tables in nested loops until it has found 9 rows.

The advantage of this strategy is that it doesn't have to compute the complete result of the join, then sort it and throw away all but the first rows. The danger is that it takes longer than expected until the 9 matching rows are found, and that is what hits you here:

Index Scan Backward using alerts_created_at_index on alerts (cost=0.43..85542.16 rows=57231 width=24) (actual time=0.014..88.137 rows=88858 loops=1)

So PostgreSQL has to process 88858 rows and use a nested loop join (which is inefficient if it has to loop that often) before it finds 9 result rows. This could be because it underestimates the selectivity of the conditions, or because many of the matching rows happen to have a low created_at.
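
If the root cause is the misestimate (57231 estimated rows versus 88858 actual in the backward index scan above), refreshing or sharpening the statistics for created_at is worth a try. A generic sketch, not guaranteed to change the plan:

-- Recollect statistics; the 57231-row estimate may simply be stale.
ANALYZE alerts;

-- Optionally raise the per-column sample size (the default statistics
-- target is 100) before re-analyzing, to sharpen range estimates.
ALTER TABLE alerts ALTER COLUMN created_at SET STATISTICS 1000;
ANALYZE alerts;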

The number 10 just happens to be the cut-off point at which PostgreSQL no longer considers that strategy more efficient; it is a value that will change as the data in the database change.
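
If you want to see where that cut-off currently lies without paying the runtime cost, plain EXPLAIN (without ANALYZE) only plans the query instead of executing it, so you can probe several LIMIT values cheaply. A sketch using the query from the question:

EXPLAIN SELECT "alerts"."id", "alerts"."created_at"
FROM       "alerts"
INNER JOIN "devices" ON "devices"."id" = "alerts"."device_id"
INNER JOIN "sites"   ON "sites"."id" = "devices"."site_id"
WHERE      "sites"."cloud_id" = 111
AND        "alerts"."created_at" >= '2019-08-30'
ORDER BY   "created_at" DESC
LIMIT      9;  -- repeat with LIMIT 10 and watch the top plan node change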

You can avoid that plan entirely by using an ORDER BY clause that doesn't match the index:

ORDER BY (created_at + INTERVAL '0 days') DESC
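
Applied to the query from the question, that should look like the following sketch; the added interval is arithmetically a no-op, it only keeps the planner from matching the ORDER BY to alerts_created_at_index:

SELECT     "alerts"."id",
           "alerts"."created_at",
           't1'::text AS src_table
FROM       "alerts"
INNER JOIN "devices" ON "devices"."id" = "alerts"."device_id"
INNER JOIN "sites"   ON "sites"."id" = "devices"."site_id"
WHERE      "sites"."cloud_id" = 111
AND        "alerts"."created_at" >= '2019-08-30'
ORDER BY   ("alerts"."created_at" + INTERVAL '0 days') DESC
LIMIT      9;
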
Answered on 2019-09-02T08:20:53.557