
I'm experiencing strange PostgreSQL behavior. I have partitioned a history table into smaller pieces based on time: History -> History_part_YYYY-MM.

Check constraints:
"History_part_2013-11_sentdate_check" CHECK (sentdate >= '2013-11-01 00:00:00-04'::timestamp with time zone AND sentdate < '2013-12-01 00:00:00-05'::timestamp with time zone)

Inherits: "History"

Each partition has its own index on transaction_id column.

"History_part_2013-11_transaction_id_idx" btree (transaction_id)

As far as I know this is a 'nothing special' way of partitioning, taken from the Postgres documentation.
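
For reference, a minimal sketch of the setup described above (the column list is an assumption for illustration; only sentdate and transaction_id appear in the question, and the pattern follows the inheritance-based partitioning example in the PostgreSQL docs):

-- Parent table ("payload" is a placeholder for the real columns)
CREATE TABLE "History" (
    transaction_id  varchar NOT NULL,
    sentdate        timestamptz NOT NULL,
    payload         text
);

-- One monthly child with a range CHECK constraint, as shown above
CREATE TABLE "History_part_2013-11" (
    CHECK (sentdate >= '2013-11-01 00:00:00-04'::timestamptz
       AND sentdate <  '2013-12-01 00:00:00-05'::timestamptz)
) INHERITS ("History");

-- Per-partition index on transaction_id
CREATE INDEX "History_part_2013-11_transaction_id_idx"
    ON "History_part_2013-11" (transaction_id);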

The problem is that executing this query is slow:

SELECT * FROM "History" WHERE transaction_id = 'MMS-dev-23599-2013-12-11-13:03:53.349735' LIMIT 1;

I was able to narrow the problem down: this query is slow only the FIRST TIME per script; if it is run a second time it is fast. If it is run again in a separate script it is slow again, and the second run (in that script) is fast again... I really have no explanation for this. It is not inside any transaction.

Here are sample execution times of two queries run one after the other in the same script:

1.33s   SELECT * FROM "History" WHERE transaction_id = 'MMS-dev-14970-2013-12-11-13:18:29.889376' LIMIT 1;...
0.019s  SELECT * FROM "History" WHERE transaction_id = 'MMS-dev-14970-2013-12-11-13:18:29.889376' LIMIT 1;

The first query is so slow that it triggers an 'explain analyze' call, which looks like this (and is really, really fast too):

Limit  (cost=0.00..8.07 rows=1 width=2589) (actual time=0.972..0.973 rows=1 loops=1)
  ->  Result  (cost=0.00..581.07 rows=72 width=2589) (actual time=0.964..0.964 rows=1 loops=1)
        ->  Append  (cost=0.00..581.07 rows=72 width=2589) (actual time=0.958..0.958 rows=1 loops=1)
              ->  Seq Scan on "History"  (cost=0.00..1.00 rows=1 width=3760) (actual time=0.015..0.015 rows=0 loops=1)
                    Filter: ((transaction_id)::text = 'MMS-dev-23595-2013-12-11-13:20:10.422306'::text)
              ->  Index Scan using "History_part_2013-10_transaction_id_idx" on "History_part_2013-10" "History"  (cost=0.00..8.28 rows=1 width=1829) (actual time=0.040..0.040 rows=0 loops=1)
                    Index Cond: ((transaction_id)::text = 'MMS-dev-23595-2013-12-11-13:20:10.422306'::text)
              ->  Index Scan using "History_part_2013-02_transaction_id_idx" on "History_part_2013-02" "History"  (cost=0.00..8.32 rows=1 width=1707) (actual time=0.021..0.021 rows=0 loops=1)
                    Index Cond: ((transaction_id)::text = 'MMS-dev-23595-2013-12-11-13:20:10.422306'::text)

.... and it checks all the tables (around 54 now; a few are empty, created for the future) and at the end:

->  Index Scan using "History_part_2014-10_transaction_id_idx" on "History_part_2014-10" "History"  (cost=0.00..8.27 rows=1 width=3760) (never executed)
                    Index Cond: ((transaction_id)::text = 'MMS-dev-23595-2013-12-11-13:20:10.422306'::text)

Total runtime: 6.390 ms

The Total runtime is 0.006s, yet the first query is always above 1s. If there are more concurrent scripts running (each with a UNIQUE transaction_id), the first execution can go up to 20s while the second execution takes a few milliseconds.

Has anyone experienced this? I wonder if there is something wrong I am doing, or maybe this is a Postgres issue?

I upgraded Postgres from 9.2.4 -> 9.2.5; it seems slightly better, but the issue definitely remains.
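
For what it's worth, a plain EXPLAIN (without ANALYZE) only plans the query and does not execute it, so timing it in psql (with \timing) gives a rough idea of how much of that first second is planner overhead across the ~54 partitions; a minimal sketch:

-- In psql, enable client-side timing first with: \timing
-- EXPLAIN without ANALYZE only plans, so its wall-clock time approximates
-- the planning cost over all child tables.
EXPLAIN SELECT * FROM "History"
WHERE transaction_id = 'MMS-dev-14970-2013-12-11-13:18:29.889376'
LIMIT 1;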

UPDATE: I use this query now:

SELECT * FROM "History" WHERE transaction_id = 'MMS-live-15425-2013-18-11-17:32:20.917198' AND sentdate>='2013-10-18' AND sentdate<'2013-11-19' LIMIT 1

The first time it is run in the script it takes 3 to 8 SECONDS when many queries run at once against this table (if there is only one script running at a time it is much faster).

When I change the first query in the script to call the partition table directly:

SELECT * FROM "History_part_2013-11" WHERE transaction_id = 'MMS-live-15425-2013-18-11-17:32:20.917198' AND sentdate>='2013-10-18' AND sentdate<'2013-11-19' LIMIT 1

It takes about 0.03s, which is much faster, BUT the next query in the script that goes against the "History" table is still around 3-8 SECONDS.

Here is the explain analyze of the first query against "History":

Limit  (cost=0.00..25.41 rows=1 width=2540) (actual time=0.129..0.130 rows=1 loops=1)
  ->  Result  (cost=0.00..76.23 rows=3 width=2540) (actual time=0.121..0.121 rows=1 loops=1)
        ->  Append  (cost=0.00..76.23 rows=3 width=2540) (actual time=0.117..0.117 rows=1 loops=1)
              ->  Seq Scan on "History"  (cost=0.00..58.00 rows=1 width=3750) (actual time=0.060..0.060 rows=0 loops=1)
                    Filter: ((sentdate >= '2013-10-18 00:00:00-04'::timestamp with time zone) AND (sentdate < '2013-11-19 00:00:00-05'::timestamp with time zone) AND ((transaction_id)::text = 'MMS-live-15425-2013-18-11-17:32:20.917198'::text))
              ->  Index Scan using "History_part_2013-11_transaction_id_idx" on "History_part_2013-11" "History"  (cost=0.00..8.36 rows=1 width=1985) (actual time=0.051..0.051 rows=1 loops=1)
                    Index Cond: ((transaction_id)::text = 'MMS-live-15425-2013-18-11-17:32:20.917198'::text)
                    Filter: ((sentdate >= '2013-10-18 00:00:00-04'::timestamp with time zone) AND (sentdate < '2013-11-19 00:00:00-05'::timestamp with time zone))
              ->  Index Scan using "History_part_2013-10_transaction_id_idx" on "History_part_2013-10" "History"  (cost=0.00..9.87 rows=1 width=1884) (never executed)
                    Index Cond: ((transaction_id)::text = 'MMS-live-15425-2013-18-11-17:32:20.917198'::text)
                    Filter: ((sentdate >= '2013-10-18 00:00:00-04'::timestamp with time zone) AND (sentdate < '2013-11-19 00:00:00-05'::timestamp with time zone))
Total runtime: 0.572 ms

It seems like it is ALWAYS slow when running against the 'main' History table (but not when calling the partition directly), and only the first time. Is that some caching thing? But then why is calling the partition directly so much faster, when calling the main History table does not check all the tables anymore?


1 Answer


See the comments above: the partitioning criterion (sentdate) must be included in the query, and it must be a constant expression for partition exclusion to work.
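
A minimal sketch of what that means for the query from the update (the date literals here are narrowed to match a single partition's CHECK constraint, unlike the wider range used above; constraint_exclusion = partition is already the default):

-- With constraint_exclusion enabled (the default 'partition' covers
-- inheritance children), constant bounds on sentdate let the planner
-- exclude every partition whose CHECK constraint cannot match.
SET constraint_exclusion = partition;

SELECT *
FROM "History"
WHERE transaction_id = 'MMS-live-15425-2013-18-11-17:32:20.917198'
  AND sentdate >= '2013-11-01'
  AND sentdate <  '2013-12-01'
LIMIT 1;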

answered 2013-11-13T00:17:41.160