I have a simple database (PostgreSQL 11) with several million rows. I want to average value per day, and for that I am using the time_bucket() function.
Database schema
-- create database tables + indexes
CREATE TABLE IF NOT EXISTS machine (
id SMALLSERIAL PRIMARY KEY,
name TEXT UNIQUE
);
CREATE TABLE IF NOT EXISTS reject_rate (
time TIMESTAMPTZ UNIQUE NOT NULL,
machine_id SMALLINT REFERENCES machine(id) ON DELETE CASCADE,
value FLOAT NOT NULL,
PRIMARY KEY(time, machine_id)
);
CREATE INDEX ON reject_rate (machine_id, value, time DESC);
-- hypertable
SELECT create_hypertable('reject_rate', 'time');
-- generate data with 54M rows
-- value column is generated randomly
-- this takes minutes to finish, but that's OK
INSERT INTO machine (name) VALUES ('machine1'), ('machine2');
INSERT INTO reject_rate (time, machine_id, value)
SELECT to_timestamp(generate_series(1, 54e6)), 1, random();
The query I want to run is:
Query
SELECT
time_bucket('1 day', reject_rate.time) AS day,
AVG(value)
FROM reject_rate
GROUP BY day
Result + EXPLAIN
The query is slow even though it uses an index. It returns 626 rows and takes 26.5 seconds to complete. 90 TimescaleDB chunks were created. This is the EXPLAIN output for the query:
"GroupAggregate (cost=41.17..5095005.10 rows=54000000 width=16)"
" Group Key: (time_bucket('1 day'::interval, _hyper_120_45_chunk."time"))"
" -> Result (cost=41.17..4015005.10 rows=54000000 width=16)"
" -> Merge Append (cost=41.17..3340005.10 rows=54000000 width=16)"
" Sort Key: (time_bucket('1 day'::interval, _hyper_120_45_chunk."time"))"
" -> Index Scan using "45_86_reject_rate_time_key" on _hyper_120_45_chunk (cost=0.42..14752.62 rows=604800 width=16)"
" -> Index Scan using "50_96_reject_rate_time_key" on _hyper_120_50_chunk (cost=0.42..14752.62 rows=604800 width=16)"
" -> Index Scan using "55_106_reject_rate_time_key" on _hyper_120_55_chunk (cost=0.42..14752.62 rows=604800 width=16)"
" -> Index Scan using "60_116_reject_rate_time_key" on _hyper_120_60_chunk (cost=0.42..14752.62 rows=604800 width=16)"
" -> Index Scan using "65_126_reject_rate_time_key" on _hyper_120_65_chunk (cost=0.42..14752.62 rows=604800 width=16)"
" -> Index Scan using "70_136_reject_rate_time_key" on _hyper_120_70_chunk (cost=0.42..14752.62 rows=604800 width=16)"
" -> Index Scan using "75_146_reject_rate_time_key" on _hyper_120_75_chunk (cost=0.42..14752.62 rows=604800 width=16)"
+ ~80 more Index Scan rows
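For completeness: the plan above is from plain EXPLAIN, which only shows planner estimates. If actual per-node timings and buffer usage help with diagnosing this, the same query can be rerun with EXPLAIN (ANALYZE, BUFFERS), which executes it and reports real numbers:
-- executes the query and reports actual timings and I/O per plan node
EXPLAIN (ANALYZE, BUFFERS)
SELECT
    time_bucket('1 day', reject_rate.time) AS day,
    AVG(value)
FROM reject_rate
GROUP BY day;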
Question
Did I create the indexes correctly? Did I set up the database correctly? Or is TimescaleDB simply this slow with this many rows?
This might be why time_bucket() is slow: https://github.com/timescale/timescaledb/issues/1229 . The solution suggested there is to use continuous aggregates. Is that the recommended way to work with time-series data in PostgreSQL?
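If I understand the documentation correctly, a continuous aggregate for this query would look roughly like the sketch below. This is untested; the view name reject_rate_daily is just a placeholder, and the syntax is the TimescaleDB 1.x form that matches PostgreSQL 11 (TimescaleDB 2.x uses CREATE MATERIALIZED VIEW instead). Queries against the view would then read pre-aggregated daily rows instead of scanning all 54M raw rows:
-- continuous aggregate that pre-computes the daily average
-- (TimescaleDB 1.x syntax; the extension refreshes it in the background)
CREATE VIEW reject_rate_daily
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 day', time) AS day,
    AVG(value) AS avg_value
FROM reject_rate
GROUP BY day;

-- query the aggregate instead of the raw hypertable
SELECT day, avg_value FROM reject_rate_daily;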