
Context

I have a table storing netflow data (all packets intercepted by the router). The table currently has about 5.9 million rows.

Problem

I am trying a simple query to count the number of packets received per day, which shouldn't take very long.

The first time I ran it, the query took 88 seconds; the second run took 33 seconds, and then 5 seconds for all subsequent runs.

The main issue is not the speed of the query, but that after executing the same query 3 times, it is almost 20 times faster.
I understand the concept of query caching, but the performance of the original query run makes no sense to me.

Tests

The column I use to join (datetime) is of type timestamptz, and it is indexed:

CREATE INDEX date ON netflows USING btree (datetime);

Looking at the EXPLAIN statements, the difference in execution is in the Nested Loop.

I have already run VACUUM ANALYZE on the table, with exactly the same results.
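For what it's worth, PostgreSQL can report cache behavior directly. A minimal sketch, assuming PostgreSQL 9.0 or later (which added the BUFFERS option) and an arbitrary date range chosen for illustration:

```sql
-- EXPLAIN (ANALYZE, BUFFERS) reports, per plan node, how many pages were
-- found in PostgreSQL's shared buffer cache ("shared hit") versus fetched
-- from the OS / disk ("read"). Comparing a cold first run against a warm
-- third run this way shows whether the speedup is purely caching.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM   netflows
WHERE  datetime >= '2012-12-05' AND datetime < '2012-12-06';
```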

Current environment

  • Linux Ubuntu 12.04 VM running on VMware ESX 4.1
  • PostgreSQL 9.1
  • The VM has 2 GB RAM, 2 cores
  • The database server is fully dedicated to this and isn't doing anything else
  • Inserts into the table every minute (100 rows per minute)
  • Very low disk, memory, or CPU activity

Query

with date_list as (
    select
        series as start_date,
        series + '23:59:59' as end_date
    from
        generate_series(
            (select min(datetime) from netflows)::date, 
            (select max(datetime) from netflows)::date, 
            '1 day') as series
)
select
    start_date,
    end_date,
    count(*)
from
    netflows
    inner join date_list on (datetime between start_date and end_date)
group by
    start_date,
    end_date;

EXPLAIN of the first run (88 seconds)

Sort  (cost=27007355.59..27007356.09 rows=200 width=8) (actual time=89647.054..89647.055 rows=18 loops=1) 
  Sort Key: date_list.start_date 
  Sort Method: quicksort  Memory: 25kB 
  CTE date_list 
    ->  Function Scan on generate_series series  (cost=0.13..12.63 rows=1000 width=8) (actual time=92.567..92.667 rows=19 loops=1) 
          InitPlan 2 (returns $1) 
            ->  Result  (cost=0.05..0.06 rows=1 width=0) (actual time=71.270..71.270 rows=1 loops=1) 
                  InitPlan 1 (returns $0) 
                    ->  Limit  (cost=0.00..0.05 rows=1 width=8) (actual time=71.259..71.261 rows=1 loops=1) 
                          ->  Index Scan using date on netflows  (cost=0.00..303662.15 rows=5945591 width=8) (actual time=71.252..71.252 rows=1 loops=1) 
                                Index Cond: (datetime IS NOT NULL) 
          InitPlan 4 (returns $3) 
            ->  Result  (cost=0.05..0.06 rows=1 width=0) (actual time=11.786..11.787 rows=1 loops=1) 
                  InitPlan 3 (returns $2) 
                    ->  Limit  (cost=0.00..0.05 rows=1 width=8) (actual time=11.778..11.779 rows=1 loops=1) 
                          ->  Index Scan Backward using date on netflows  (cost=0.00..303662.15 rows=5945591 width=8) (actual time=11.776..11.776 rows=1 loops=1) 
                                Index Cond: (datetime IS NOT NULL) 
  ->  HashAggregate  (cost=27007333.31..27007335.31 rows=200 width=8) (actual time=89639.167..89639.179 rows=18 loops=1) 
        ->  Nested Loop  (cost=0.00..23704227.20 rows=660621222 width=8) (actual time=92.667..88059.576 rows=5945457 loops=1) 
              ->  CTE Scan on date_list  (cost=0.00..20.00 rows=1000 width=16) (actual time=92.578..92.785 rows=19 loops=1) 
              ->  Index Scan using date on netflows  (cost=0.00..13794.89 rows=660621 width=8) (actual time=2.438..4571.884 rows=312919 loops=19) 
                    Index Cond: ((datetime >= date_list.start_date) AND (datetime <= date_list.end_date)) 
Total runtime: 89668.047 ms 

EXPLAIN of the third run (5 seconds)

Sort  (cost=27011357.45..27011357.95 rows=200 width=8) (actual time=5645.031..5645.032 rows=18 loops=1) 
  Sort Key: date_list.start_date 
  Sort Method: quicksort  Memory: 25kB 
  CTE date_list 
    ->  Function Scan on generate_series series  (cost=0.13..12.63 rows=1000 width=8) (actual time=0.108..0.204 rows=19 loops=1) 
          InitPlan 2 (returns $1) 
            ->  Result  (cost=0.05..0.06 rows=1 width=0) (actual time=0.050..0.050 rows=1 loops=1) 
                  InitPlan 1 (returns $0) 
                    ->  Limit  (cost=0.00..0.05 rows=1 width=8) (actual time=0.046..0.046 rows=1 loops=1) 
                          ->  Index Scan using date on netflows  (cost=0.00..303705.14 rows=5946469 width=8) (actual time=0.046..0.046 rows=1 loops=1) 
                                Index Cond: (datetime IS NOT NULL) 
          InitPlan 4 (returns $3) 
            ->  Result  (cost=0.05..0.06 rows=1 width=0) (actual time=0.026..0.026 rows=1 loops=1) 
                  InitPlan 3 (returns $2) 
                    ->  Limit  (cost=0.00..0.05 rows=1 width=8) (actual time=0.026..0.026 rows=1 loops=1) 
                          ->  Index Scan Backward using date on netflows  (cost=0.00..303705.14 rows=5946469 width=8) (actual time=0.026..0.026 rows=1 loops=1) 
                                Index Cond: (datetime IS NOT NULL) 
  ->  HashAggregate  (cost=27011335.17..27011337.17 rows=200 width=8) (actual time=5645.005..5645.009 rows=18 loops=1) 
        ->  Nested Loop  (cost=0.00..23707741.28 rows=660718778 width=8) (actual time=0.134..4176.406 rows=5946329 loops=1) 
              ->  CTE Scan on date_list  (cost=0.00..20.00 rows=1000 width=16) (actual time=0.110..0.343 rows=19 loops=1) 
              ->  Index Scan using date on netflows  (cost=0.00..13796.94 rows=660719 width=8) (actual time=0.026..164.117 rows=312965 loops=19) 
                    Index Cond: ((datetime >= date_list.start_date) AND (datetime <= date_list.end_date)) 
Total runtime: 5645.189 ms

2 Answers


If you're doing an INNER JOIN, I don't think you need the CTE at all. You can just write:

select
    datetime::date,
    count(*)
from netflows
group by datetime::date /* or GROUP BY 1 as Postgres extension */

I don't see why you need the date table, unless you want a LEFT JOIN to get zeros where appropriate. This would mean one pass through the data.
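If zero-count days do matter, a sketch of that LEFT JOIN variant (reusing the table and column names from the question) could look like:

```sql
-- Hypothetical variant: only needed if days with no packets should still
-- appear in the output, with a count of 0.
WITH date_list AS (
    SELECT generate_series(
               (SELECT min(datetime) FROM netflows)::date,
               (SELECT max(datetime) FROM netflows)::date,
               '1 day')::date AS day
)
SELECT d.day,
       count(n.datetime) AS packets   -- count(col) ignores NULLs, so empty days yield 0
FROM   date_list d
LEFT   JOIN netflows n ON n.datetime >= d.day
                      AND n.datetime <  d.day + 1   -- date + integer = next day
GROUP  BY d.day
ORDER  BY d.day;
```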

By the way, I discourage using keywords like date and datetime as names for entities and columns; even when it's legal, it's not worth it.

answered 2012-12-06T00:57:12.237
WITH date_list as (
    SELECT t                  AS start_date
         ,(t + interval '1d') AS end_date
    FROM  (
      SELECT generate_series((min(datetime))::date
                            ,(max(datetime))::date
                            ,'1d') AS t
      FROM   netflows
      ) x
   )
SELECT d.start_date
      ,count(*) AS ct
FROM   date_list     d
LEFT   JOIN netflows n ON n.datetime >= d.start_date
                      AND n.datetime <  d.end_date
GROUP  BY d.start_date;

And use a proper name for your index (as @Andrew already hinted):

CREATE INDEX netflows_date_idx ON netflows (datetime);

Major points

  • Assuming you want a row for every day of the calendar, as @Andrew already mentioned in his answer, I substituted a LEFT JOIN for the JOIN.

  • It is much more efficient to grab min() and max() from netflows in a single query.

  • Simplified type casting.

  • Fixed the date ranges. Your code would fail for timestamps like '2012-12-06 23:59:59.123'.
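A small self-contained illustration of that failure mode (the timestamp value is made up for the example): a packet logged in the last second of the day falls outside the BETWEEN range but inside the half-open range used above.

```sql
-- A row at 23:59:59.123 is *after* 23:59:59, so BETWEEN misses it,
-- while the half-open range [day, day + 1) catches it.
SELECT ts,
       ts BETWEEN timestamp '2012-12-06 00:00:00'
              AND timestamp '2012-12-06 23:59:59'                    AS between_check,
       ts >= timestamp '2012-12-06'
         AND ts <  timestamp '2012-12-07'                            AS half_open_check
FROM  (VALUES (timestamp '2012-12-06 23:59:59.123')) v(ts);
-- between_check: false, half_open_check: true
```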

Tested on a big table; performance is good.
As for your original question: no doubt about it, the caching effect is to be expected, especially with limited RAM.

answered 2012-12-06T11:53:24.727