
I'm using AWS RDS PostgreSQL 12.5 (db.t3.xlarge / 4 vCPUs / 16 GB RAM / SSD storage).

I'm trying to optimize a query by tuning the work_mem parameter so that the sort doesn't spill data to disk.

As expected, after increasing work_mem from 4MB to 100MB, a quicksort is used instead of an external merge on disk.

However, the total execution time got longer (2293ms vs 2541ms).

  • Why is there no significant gain from the quicksort? I thought sorting in RAM beat sorting on disk. (540ms for the external merge on disk vs 527ms for the quicksort)
  • Why are the seq scan, hash, and merge operations slower? (Why would work_mem affect these operations at all?)

I found a similar SO post, but their problem was that the sort was only a small fraction of the total execution time.

Any insight would be welcome.

The query:

select
    dd.date
    , jf.job_id
    , od.id
    , od.company_size_category
    , od.invoicing_entity
    , os.sector_category_id
    , os.sector_id
    , jf.pageviews
    , jf.apply_clicks
    , concat(sector_category_id, '_', company_size_category, '_', invoicing_entity) as bench_id
from organization_dimensions od

left join job_facts jf
    on od.id = jf.organization_id
    
left join date_dimensions dd
    on jf.date = dd.date
    
left join organizations_sectors os
    on od.id = os.organization_id

where dd.date >= '2021-01-01' and dd.date < '2021-02-01'

order by 1, 2, 3
;
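For reference, a session-level sketch of how the two runs can be produced (assuming you have a session on the instance; no restart is needed, since work_mem can be set per session):

```sql
-- Set work_mem for this session only; repeat with '100MB' for the second run.
SET work_mem = '4MB';

-- ANALYZE executes the query and reports actual times;
-- BUFFERS adds the shared/temp buffer counters shown in the plans below.
EXPLAIN (ANALYZE, BUFFERS)
select dd.date, jf.job_id, od.id  -- ...same query as above
from organization_dimensions od
left join job_facts jf on od.id = jf.organization_id
left join date_dimensions dd on jf.date = dd.date
left join organizations_sectors os on od.id = os.organization_id
where dd.date >= '2021-01-01' and dd.date < '2021-02-01'
order by 1, 2, 3;
```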

Query plan with work_mem=4MB (link to depesz):


Gather Merge  (cost=182185.20..197262.15 rows=129222 width=76) (actual time=1988.652..2293.219 rows=981409 loops=1)
  Workers Planned: 2
  Workers Launched: 2
  Buffers: shared hit=105939, temp read=10557 written=10595
  ->  Sort  (cost=181185.18..181346.71 rows=64611 width=76) (actual time=1975.907..2076.591 rows=327136 loops=3)
        Sort Key: dd.date, jf.job_id, od.id
        Sort Method: external merge  Disk: 32088kB
        Worker 0:  Sort Method: external merge  Disk: 22672kB
        Worker 1:  Sort Method: external merge  Disk: 22048kB
        Buffers: shared hit=105939, temp read=10557 written=10595
        ->  Hash Join  (cost=1001.68..173149.42 rows=64611 width=76) (actual time=14.719..1536.513 rows=327136 loops=3)
              Hash Cond: (jf.organization_id = od.id)
              Buffers: shared hit=105821
              ->  Hash Join  (cost=177.27..171332.76 rows=36922 width=21) (actual time=0.797..1269.917 rows=148781 loops=3)
                    Hash Cond: (jf.date = dd.date)
                    Buffers: shared hit=104722
                    ->  Parallel Seq Scan on job_facts jf  (cost=0.00..152657.47 rows=4834347 width=21) (actual time=0.004..432.145 rows=3867527 loops=3)
                          Buffers: shared hit=104314
                    ->  Hash  (cost=176.88..176.88 rows=31 width=4) (actual time=0.554..0.555 rows=31 loops=3)
                          Buckets: 1024  Batches: 1  Memory Usage: 10kB
                          Buffers: shared hit=348
                          ->  Seq Scan on date_dimensions dd  (cost=0.00..176.88 rows=31 width=4) (actual time=0.011..0.543 rows=31 loops=3)
                                Filter: ((date >= '2021-01-01'::date) AND (date < '2021-02-01'::date))
                                Rows Removed by Filter: 4028
                                Buffers: shared hit=348
              ->  Hash  (cost=705.43..705.43 rows=9518 width=27) (actual time=13.813..13.815 rows=9828 loops=3)
                    Buckets: 16384  Batches: 1  Memory Usage: 709kB
                    Buffers: shared hit=1071
                    ->  Hash Right Join  (cost=367.38..705.43 rows=9518 width=27) (actual time=5.035..10.702 rows=9828 loops=3)
                          Hash Cond: (os.organization_id = od.id)
                          Buffers: shared hit=1071
                          ->  Seq Scan on organizations_sectors os  (cost=0.00..207.18 rows=9518 width=12) (actual time=0.015..0.995 rows=9518 loops=3)
                                Buffers: shared hit=336
                          ->  Hash  (cost=299.39..299.39 rows=5439 width=19) (actual time=4.961..4.962 rows=5439 loops=3)
                                Buckets: 8192  Batches: 1  Memory Usage: 339kB
                                Buffers: shared hit=735
                                ->  Seq Scan on organization_dimensions od  (cost=0.00..299.39 rows=5439 width=19) (actual time=0.011..3.536 rows=5439 loops=3)
                                      Buffers: shared hit=735
Planning Time: 0.220 ms
Execution Time: 2343.474 ms

Query plan with work_mem=100MB (link to depesz):

Gather Merge  (cost=179311.70..194388.65 rows=129222 width=76) (actual time=2205.016..2541.827 rows=981409 loops=1)
  Workers Planned: 2
  Workers Launched: 2
  Buffers: shared hit=105939
  ->  Sort  (cost=178311.68..178473.21 rows=64611 width=76) (actual time=2173.869..2241.519 rows=327136 loops=3)
        Sort Key: dd.date, jf.job_id, od.id
        Sort Method: quicksort  Memory: 66835kB
        Worker 0:  Sort Method: quicksort  Memory: 56623kB
        Worker 1:  Sort Method: quicksort  Memory: 51417kB
        Buffers: shared hit=105939
        ->  Hash Join  (cost=1001.68..173149.42 rows=64611 width=76) (actual time=36.991..1714.073 rows=327136 loops=3)
              Hash Cond: (jf.organization_id = od.id)
              Buffers: shared hit=105821
              ->  Hash Join  (cost=177.27..171332.76 rows=36922 width=21) (actual time=2.232..1412.442 rows=148781 loops=3)
                    Hash Cond: (jf.date = dd.date)
                    Buffers: shared hit=104722
                    ->  Parallel Seq Scan on job_facts jf  (cost=0.00..152657.47 rows=4834347 width=21) (actual time=0.005..486.592 rows=3867527 loops=3)
                          Buffers: shared hit=104314
                    ->  Hash  (cost=176.88..176.88 rows=31 width=4) (actual time=1.904..1.906 rows=31 loops=3)
                          Buckets: 1024  Batches: 1  Memory Usage: 10kB
                          Buffers: shared hit=348
                          ->  Seq Scan on date_dimensions dd  (cost=0.00..176.88 rows=31 width=4) (actual time=0.013..1.892 rows=31 loops=3)
                                Filter: ((date >= '2021-01-01'::date) AND (date < '2021-02-01'::date))
                                Rows Removed by Filter: 4028
                                Buffers: shared hit=348
              ->  Hash  (cost=705.43..705.43 rows=9518 width=27) (actual time=34.586..34.589 rows=9828 loops=3)
                    Buckets: 16384  Batches: 1  Memory Usage: 709kB
                    Buffers: shared hit=1071
                    ->  Hash Right Join  (cost=367.38..705.43 rows=9518 width=27) (actual time=13.367..27.326 rows=9828 loops=3)
                          Hash Cond: (os.organization_id = od.id)
                          Buffers: shared hit=1071
                          ->  Seq Scan on organizations_sectors os  (cost=0.00..207.18 rows=9518 width=12) (actual time=0.019..1.443 rows=9518 loops=3)
                                Buffers: shared hit=336
                          ->  Hash  (cost=299.39..299.39 rows=5439 width=19) (actual time=13.314..13.315 rows=5439 loops=3)
                                Buckets: 8192  Batches: 1  Memory Usage: 339kB
                                Buffers: shared hit=735
                                ->  Seq Scan on organization_dimensions od  (cost=0.00..299.39 rows=5439 width=19) (actual time=0.016..6.407 rows=5439 loops=3)
                                      Buffers: shared hit=735
Planning Time: 0.221 ms
Execution Time: 2601.698 ms

1 Answer


I'd say there are two factors at play:

  1. Your writes never really hit the disk, only the kernel cache. PostgreSQL uses buffered I/O!

    To see more detail, set track_io_timing = on.

  2. Random noise. There is no real reason, for example, why a sequential scan should be 50ms slower with a higher work_mem; the parameter has no influence on that operation.

    Repeat the experiment a few times, and you will see that the timings vary. I doubt that the query with the higher work_mem will consistently run slower.
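A minimal sketch for point 1: with track_io_timing enabled, EXPLAIN (ANALYZE, BUFFERS) adds "I/O Timings" to the plan, showing how much time was actually spent in read/write calls. (track_io_timing is a superuser-level setting; on RDS you may need to change it via the parameter group instead of SET. The generate_series query below is just a stand-in workload whose sort spills at 4MB.)

```sql
SET track_io_timing = on;  -- may require elevated privileges / parameter group on RDS
SET work_mem = '4MB';

-- A sort big enough to spill to temp files at 4MB; check the plan
-- for "I/O Timings" — near-zero values suggest the temp writes landed
-- in the kernel page cache rather than on physical disk.
EXPLAIN (ANALYZE, BUFFERS)
SELECT g FROM generate_series(1, 1000000) AS g ORDER BY g;
```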

Answered 2021-02-11T17:26:16.953