
I have been very much looking forward to the new features in PostgreSQL 9.5 and will be upgrading our database soon. But I was surprised to find that

SELECT col1, col2, count(*), grouping(col1,col2) 
FROM table1 
GROUP BY CUBE(col1, col2)

run against our dataset is actually much slower (about 3 seconds) than the sum of the durations of the equivalent individual queries (about 1 second total for all 4 queries, 100-300 ms each). Both col1 and col2 are indexed.

Is this expected (meaning the feature is currently more about compatibility than about performance)? Or can it be tuned somehow?
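For reference, CUBE(col1, col2) is shorthand for four grouping sets, so the query above is equivalent to spelling them out explicitly (table and column names as in the question):

```sql
-- CUBE(col1, col2) expands to the four grouping sets below;
-- grouping(col1, col2) reports which columns are aggregated away in each row.
SELECT col1, col2, count(*), grouping(col1, col2)
FROM table1
GROUP BY GROUPING SETS ((col1, col2), (col1), (col2), ());
```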

Here is an example on a vacuumed production table:

> explain analyze select service_name, state, res_id, count(*) from bookings group by rollup(service_name, state, res_id);
                                                          QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=43069.12..45216.05 rows=4161 width=24) (actual time=1027.341..1120.675 rows=428 loops=1)
   Group Key: service_name, state, res_id
   Group Key: service_name, state
   Group Key: service_name
   Group Key: ()
   ->  Sort  (cost=43069.12..43490.18 rows=168426 width=24) (actual time=1027.301..1070.321 rows=168426 loops=1)
         Sort Key: service_name, state, res_id
         Sort Method: external merge  Disk: 5728kB
         ->  Seq Scan on bookings  (cost=0.00..28448.26 rows=168426 width=24) (actual time=0.079..147.619 rows=168426 loops=1)
 Planning time: 0.118 ms
 Execution time: 1122.557 ms
(11 rows)

> explain analyze select service_name, state, res_id, count(*) from bookings group by service_name, state, res_id
UNION ALL select service_name, state, NULL, count(*) from bookings group by service_name, state
UNION ALL select service_name, NULL, NULL, count(*) from bookings group by service_name
UNION ALL select NULL, NULL, NULL, count(*) from bookings;
                                                               QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------
 Append  (cost=30132.52..118086.91 rows=4161 width=32) (actual time=208.986..706.347 rows=428 loops=1)
   ->  HashAggregate  (cost=30132.52..30172.12 rows=3960 width=24) (actual time=208.986..209.078 rows=305 loops=1)
         Group Key: bookings.service_name, bookings.state, bookings.res_id
         ->  Seq Scan on bookings  (cost=0.00..28448.26 rows=168426 width=24) (actual time=0.022..97.637 rows=168426 loops=1)
   ->  HashAggregate  (cost=29711.45..29713.25 rows=180 width=20) (actual time=195.851..195.879 rows=96 loops=1)
         Group Key: bookings_1.service_name, bookings_1.state
         ->  Seq Scan on bookings bookings_1  (cost=0.00..28448.26 rows=168426 width=20) (actual time=0.029..95.588 rows=168426 loops=1)
   ->  HashAggregate  (cost=29290.39..29290.59 rows=20 width=11) (actual time=181.955..181.960 rows=26 loops=1)
         Group Key: bookings_2.service_name
         ->  Seq Scan on bookings bookings_2  (cost=0.00..28448.26 rows=168426 width=11) (actual time=0.030..97.047 rows=168426 loops=1)
   ->  Aggregate  (cost=28869.32..28869.33 rows=1 width=0) (actual time=119.332..119.332 rows=1 loops=1)
         ->  Seq Scan on bookings bookings_3  (cost=0.00..28448.26 rows=168426 width=0) (actual time=0.039..93.508 rows=168426 loops=1)
 Planning time: 0.373 ms
 Execution time: 706.558 ms
(14 rows)

The total times are comparable, but the latter does four scans, so shouldn't it be the slower one? The "external merge Disk" when using rollup() is also strange: I have work_mem set to 16MB.


2 Answers


Interesting, but in that particular example SET work_mem='32mb' got rid of the disk merge, and now the ROLLUP is about 2x faster than the corresponding union.

EXPLAIN ANALYZE now shows: "Sort Method: quicksort Memory: 19301kB"
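For anyone reproducing this, the session-level change looks like the following (the 32MB value and the bookings table are from this thread; adjust for your own data):

```sql
-- Raise work_mem for this session only, so the rollup sort fits in memory
SET work_mem = '32MB';

EXPLAIN ANALYZE
SELECT service_name, state, res_id, count(*)
FROM bookings
GROUP BY ROLLUP (service_name, state, res_id);

-- Restore the server default afterwards
RESET work_mem;
```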

I still wonder why a mere ~400 rows of output needs that much memory, and why the disk merge took 7MB while the quicksort took 19MB of memory (quicksort overhead?), but my problem is solved.

Answered 2016-02-22T02:38:17.147

It seems that grouping sets always get GroupAggregate and Sort in the query plan, whereas a plain GROUP BY uses HashAggregate.
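One way to see that the sort is what costs the extra time is to force the plain GROUP BY down the same path (a diagnostic sketch; enable_hashagg is a planner toggle for experimentation, not a production setting):

```sql
-- With hash aggregation disabled, a plain GROUP BY should show the same
-- GroupAggregate + Sort plan shape that the grouping-sets query produces:
SET enable_hashagg = off;

EXPLAIN
SELECT service_name, state, res_id, count(*)
FROM bookings
GROUP BY service_name, state, res_id;

RESET enable_hashagg;
```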

Answered 2016-05-25T13:03:18.143