I am seeing real-time aggregation but not real-time updates. Am I missing something?
TimescaleDB version: 2.4.2

Reproducible example using the current version of the docker image timescale/timescaledb:latest-pg12:
CREATE TABLE data
(
    time  TIMESTAMPTZ      NOT NULL,
    value DOUBLE PRECISION NOT NULL
);

SELECT create_hypertable('data', 'time', chunk_time_interval => interval '1d');

INSERT INTO data (time, value)
VALUES ('2020-01-01', 100);

CREATE MATERIALIZED VIEW data_daily WITH (timescaledb.continuous)
AS
SELECT time_bucket('1 day', time) AS time,
       avg(value)                 AS avg,
       count(*)                   AS count
FROM data
GROUP BY 1;
ALTER MATERIALIZED VIEW data_daily SET (timescaledb.materialized_only = false);
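To rule out a misconfiguration, the effective setting can be double-checked through the information view (a minimal check, assuming the standard timescaledb_information schema of TimescaleDB 2.x):

-- Verify that real-time aggregation is enabled for the view
SELECT view_name, materialized_only
FROM timescaledb_information.continuous_aggregates
WHERE view_name = 'data_daily';

If this reports materialized_only = true, the ALTER above did not take effect.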
Now, when I run SELECT * FROM data_daily, I get the expected result:
time, avg, count
2020-01-01 00:00:00.000000, 100, 1
But after inserting another value and running the query again, the result does not update; it is the same as above.
INSERT INTO data (time, value) VALUES ('2020-01-01', 150);
SELECT * FROM data_daily;
Output:
time, avg, count
2020-01-01 00:00:00.000000, 100, 1
A manual refresh followed by the same query shows the expected result:
CALL refresh_continuous_aggregate('data_daily', '1900-01-01', '2100-01-01');
SELECT * FROM data_daily;
Output:
time, avg, count
2020-01-01 00:00:00.000000, 125, 2
Is there anything else that needs to be configured for real-time aggregation to work? From the documentation, my understanding is that setting materialized_only = false should be sufficient (and should not even be necessary, since it is the default).
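A scheduled refresh policy would mask the problem by re-materializing periodically, but that is not the real-time behavior I am after. For completeness, a sketch with arbitrarily chosen offsets:

-- Periodically materialize new data (workaround, not a fix)
SELECT add_continuous_aggregate_policy('data_daily',
       start_offset      => INTERVAL '3 days',
       end_offset        => INTERVAL '1 hour',
       schedule_interval => INTERVAL '1 hour');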
For reference, this is the query plan after the second insert, before the manual refresh:
Append  (cost=0.15..59.98 rows=400 width=24) (actual time=0.138..0.200 rows=1 loops=1)
  ->  GroupAggregate  (cost=0.15..21.76 rows=200 width=24) (actual time=0.130..0.151 rows=1 loops=1)
        Group Key: _materialized_hypertable_48."time"
        ->  Custom Scan (ChunkAppend) on _materialized_hypertable_48  (cost=0.15..16.81 rows=260 width=72) (actual time=0.021..0.046 rows=1 loops=1)
              Order: _materialized_hypertable_48."time"
              Chunks excluded during startup: 0
              ->  Index Scan Backward using _hyper_48_315_chunk__materialized_hypertable_48_time_idx on _hyper_48_315_chunk  (cost=0.15..16.81 rows=260 width=72) (actual time=0.014..0.023 rows=1 loops=1)
                    Index Cond: ("time" < COALESCE(_timescaledb_internal.to_timestamp(_timescaledb_internal.cagg_watermark(48)), '-infinity'::timestamp with time zone))
  ->  GroupAggregate  (cost=0.16..32.23 rows=200 width=24) (actual time=0.010..0.021 rows=0 loops=1)
        Group Key: (time_bucket('1 day'::interval, data."time"))
        ->  Custom Scan (ChunkAppend) on data  (cost=0.16..24.60 rows=617 width=16) (actual time=0.003..0.007 rows=0 loops=1)
              Order: time_bucket('1 day'::interval, data."time")
              Chunks excluded during startup: 1
Planning Time: 4.978 ms
Execution Time: 0.384 ms
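Note that the raw-data branch of the plan excludes its only chunk at startup, which seems to suggest the newly inserted row falls below the continuous aggregate's watermark and is therefore served from the stale materialization instead of being aggregated on the fly. The watermark itself can be inspected with the internal functions that appear in the plan (these are undocumented internals; the hypertable id 48 is taken from the plan above):

-- Inspect the continuous aggregate's watermark (internal API, id from the plan)
SELECT _timescaledb_internal.to_timestamp(
           _timescaledb_internal.cagg_watermark(48));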