This post has been completely rewritten to make the problem easier to understand.
Setup: PostgreSQL 9.5 running on Ubuntu Server 14.04 LTS.
Data model
I have dataset tables in which I store data (time series) separately. All these tables must share the same structure:
CREATE TABLE IF NOT EXISTS %s(
Id SERIAL NOT NULL,
ChannelId INTEGER NOT NULL,
GranulityIdIn INTEGER,
GranulityId INTEGER NOT NULL,
TimeValue TIMESTAMP NOT NULL,
FloatValue FLOAT DEFAULT(NULL),
Status BIGINT DEFAULT(NULL),
QualityCodeId INTEGER NOT NULL,
DataArray FLOAT[] DEFAULT(NULL),
DataCount BIGINT DEFAULT(NULL),
Performance FLOAT DEFAULT(NULL),
StepCount INTEGER NOT NULL DEFAULT(0),
TableRegClass regclass NOT NULL,
Updated TIMESTAMP NOT NULL,
Tags TEXT[] DEFAULT(NULL),
--
CONSTRAINT PK_%s PRIMARY KEY(Id),
CONSTRAINT FK_%s_Channel FOREIGN KEY(ChannelId) REFERENCES scientific.Channel(Id),
CONSTRAINT FK_%s_GranulityIn FOREIGN KEY(GranulityIdIn) REFERENCES quality.Granulity(Id),
CONSTRAINT FK_%s_Granulity FOREIGN KEY(GranulityId) REFERENCES quality.Granulity(Id),
CONSTRAINT FK_%s_QualityCode FOREIGN KEY(QualityCodeId) REFERENCES quality.QualityCode(Id),
CONSTRAINT UQ_%s UNIQUE(QualityCodeId, ChannelId, GranulityId, TimeValue)
);
CREATE INDEX IDX_%s_Channel ON %s USING btree(ChannelId);
CREATE INDEX IDX_%s_Quality ON %s USING btree(QualityCodeId);
CREATE INDEX IDX_%s_Granulity ON %s USING btree(GranulityId) WHERE GranulityId > 2;
CREATE INDEX IDX_%s_TimeValue ON %s USING btree(TimeValue);
This definition comes from a FUNCTION, hence the %s placeholders, which stand for the dataset name.
The UNIQUE constraint ensures that a given dataset cannot contain duplicate records. A record in such a dataset is a value (floatvalue) of a given channel (channelid), sampled at a given time (timevalue) over a given interval (granulityid), with a given quality (qualitycodeid). Whatever the value is, there cannot be a duplicate of (channelid, timevalue, granulityid, qualitycodeid).
Records in a dataset look like this:
1;25;;1;"2015-01-01 00:00:00";0.54;160;6;"";;;0;"datastore.rtu";"2016-05-07 16:38:29.28106";""
2;25;;1;"2015-01-01 00:30:00";0.49;160;6;"";;;0;"datastore.rtu";"2016-05-07 16:38:29.28106";""
3;25;;1;"2015-01-01 01:00:00";0.47;160;6;"";;;0;"datastore.rtu";"2016-05-07 16:38:29.28106";""
I also have a satellite table in which I store the significant digits of channels; this parameter can change over time. I store it as follows:
CREATE TABLE SVPOLFactor (
Id SERIAL NOT NULL,
ChannelId INTEGER NOT NULL,
StartTimestamp TIMESTAMP NOT NULL,
Factor FLOAT NOT NULL,
UnitsId VARCHAR(8) NOT NULL,
--
CONSTRAINT PK_SVPOLFactor PRIMARY KEY(Id),
CONSTRAINT FK_SVPOLFactor_Units FOREIGN KEY(UnitsId) REFERENCES Units(Id),
CONSTRAINT UQ_SVPOLFactor UNIQUE(ChannelId, StartTimestamp)
);
When significant digits are defined for a channel, a row is added to this table, and the factor applies from that date on. The first record always has the sentinel value '-infinity'::TIMESTAMP, which means: the factor applies from the very beginning. Subsequent rows must have actually defined values. If there are no rows for a given channel, the significant-digit factor is unity.
Records in this table look like this:
123;277;"-infinity";0.1;"_C"
124;1001;"-infinity";0.01;"-"
125;1001;"2014-03-01 00:00:00";0.1;"-"
126;1001;"2014-06-01 00:00:00";1;"-"
127;1001;"2014-09-01 00:00:00";10;"-"
5001;5181;"-infinity";0.1;"ug/m3"
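The factor-lookup rule described above (take the row with the latest StartTimestamp that is not after the sample's timestamp, and fall back to unity when the channel has no rows) can be sketched in Python. This is an illustrative sketch, not part of the actual system: the `FACTORS` rows and the `factor_for` helper are hypothetical, and `float('-inf')` stands in for '-infinity'::TIMESTAMP.

```python
from datetime import datetime

# Hypothetical in-memory rows mirroring SVPOLFactor: (ChannelId, StartTimestamp, Factor).
# float('-inf') plays the role of '-infinity'::TIMESTAMP: it sorts before any real date.
FACTORS = [
    (1001, float('-inf'), 0.01),
    (1001, datetime(2014, 3, 1).timestamp(), 0.1),
    (1001, datetime(2014, 6, 1).timestamp(), 1.0),
    (1001, datetime(2014, 9, 1).timestamp(), 10.0),
]

def factor_for(channel_id, time_value):
    """Return the factor whose StartTimestamp is the latest one <= time_value;
    default to 1.0 (unity) when the channel has no rows at all."""
    candidates = [(start, f) for (ch, start, f) in FACTORS
                  if ch == channel_id and start <= time_value]
    if not candidates:
        return 1.0
    return max(candidates)[1]  # the latest StartTimestamp wins
```

For example, a sample of channel 1001 taken in April 2014 falls under the factor that started on 2014-03-01, while a channel with no rows (such as 123) gets the unity factor.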
Goal
My goal is to perform a comparison audit of two datasets populated by different processes. To achieve it, I must:
- compare records between the two datasets and assess their differences;
- check whether the difference between similar records stays within the significant digits.
To this end, I wrote the following query, whose behaviour I do not understand:
WITH
-- Join records before records (regard to uniqueness constraint) from datastore templated tables in order to make audit comparison:
S0 AS (
SELECT
A.ChannelId
,A.GranulityIdIn AS gidInRef
,B.GranulityIdIn AS gidInAudit
,A.GranulityId AS GranulityId
,A.QualityCodeId
,A.TimeValue
,A.FloatValue AS xRef
,B.FloatValue AS xAudit
,A.StepCount AS scRef
,B.StepCount AS scAudit
,A.DataCount AS dcRef
,B.DataCount AS dcAudit
,round(A.Performance::NUMERIC, 4) AS pRef
,round(B.Performance::NUMERIC, 4) AS pAudit
FROM
datastore.rtu AS A JOIN datastore.audit0 AS B USING(ChannelId, GranulityId, QualityCodeId, TimeValue)
),
-- Join before SVPOL factors in order to determine decimal factor applied to records:
S1 AS (
SELECT
DISTINCT ON(ChannelId, TimeValue)
S0.*
,SF.Factor::NUMERIC AS svpolfactor
,COALESCE(-log(SF.Factor), 0)::INTEGER AS k
FROM
S0 LEFT JOIN settings.SVPOLFactor AS SF ON ((S0.ChannelId = SF.ChannelId) AND (SF.StartTimestamp <= S0.TimeValue))
ORDER BY
ChannelId, TimeValue, StartTimestamp DESC
),
-- Audit computation:
S2 AS (
SELECT
S1.*
,xaudit - xref AS dx
,(xaudit - xref)/NULLIF(xref, 0) AS rdx
,round(xaudit*pow(10, k))*pow(10, -k) AS xroundfloat
,round(xaudit::NUMERIC, k) AS xroundnum
,0.5*pow(10, -k) AS epsilon
FROM S1
)
SELECT
*
,ABS(dx) AS absdx
,ABS(rdx) AS absrdx
,(xroundfloat - xref) AS dxroundfloat
,(xroundnum - xref) AS dxroundnum
,(ABS(dx) - epsilon) AS dxeps
,(ABS(dx) - epsilon)/epsilon AS rdxeps
,(xroundfloat - xroundnum) AS dfround
FROM
S2
ORDER BY
k DESC
,ABS(rdx) DESC
,ChannelId;
The query may be a bit unreadable; roughly, I expect it to:

- join the data from the two datasets using the uniqueness constraint, in order to compare similar records and compute their differences (S0);
- for each difference, find the significant-digit factor (via the LEFT JOIN) that applies at the current timestamp (S1);
- compute some other useful statistics (S2 and the final SELECT).
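The per-record arithmetic of S1/S2 can be sketched in Python. This is a hedged sketch of the expressions in the query, not the real implementation: the `audit_stats` helper is hypothetical, `math.log10` stands in for PostgreSQL's log, and only the float-style rounding variant is shown.

```python
import math

def audit_stats(x_ref, x_audit, factor):
    """Mirror the arithmetic of S1/S2: digit count k derived from the factor,
    absolute and relative differences, float-style rounding, and the
    half-unit-in-the-last-place epsilon."""
    # COALESCE(-log(SF.Factor), 0)::INTEGER — factor 0.1 -> k = 1, factor 10 -> k = -1.
    k = int(round(-math.log10(factor))) if factor is not None else 0
    dx = x_audit - x_ref
    rdx = dx / x_ref if x_ref != 0 else None          # NULLIF(xref, 0) analogue
    x_round_float = round(x_audit * 10**k) * 10**-k   # round(xaudit*pow(10,k))*pow(10,-k)
    epsilon = 0.5 * 10**-k                            # 0.5*pow(10,-k)
    return {'k': k, 'dx': dx, 'rdx': rdx,
            'xroundfloat': x_round_float, 'epsilon': epsilon}
```

Note that this arithmetic, like the query's, is purely row-local: it can change values but never the number of rows.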
Problem
When I run the query above, I am missing rows. For example: channelid=123 with granulityid=4 has 12 records in common in the two tables (datastore.rtu and datastore.audit0). When I execute the whole query and store it in a MATERIALIZED VIEW, there are fewer than 12 rows. I then started investigating why I was losing records, and I ran into a strange WHERE clause behaviour. If I EXPLAIN ANALYZE this query, I get:
"Sort (cost=332212.76..332212.77 rows=1 width=232) (actual time=6042.736..6157.235 rows=61692 loops=1)"
" Sort Key: s2.k DESC, (abs(s2.rdx)) DESC, s2.channelid"
" Sort Method: external merge Disk: 10688kB"
" CTE s0"
" -> Merge Join (cost=0.85..332208.25 rows=1 width=84) (actual time=20.408..3894.071 rows=63635 loops=1)"
" Merge Cond: ((a.qualitycodeid = b.qualitycodeid) AND (a.channelid = b.channelid) AND (a.granulityid = b.granulityid) AND (a.timevalue = b.timevalue))"
" -> Index Scan using uq_rtu on rtu a (cost=0.43..289906.29 rows=3101628 width=52) (actual time=0.059..2467.145 rows=3102319 loops=1)"
" -> Index Scan using uq_audit0 on audit0 b (cost=0.42..10305.46 rows=98020 width=52) (actual time=0.049..108.138 rows=98020 loops=1)"
" CTE s1"
" -> Unique (cost=4.37..4.38 rows=1 width=148) (actual time=4445.865..4509.839 rows=61692 loops=1)"
" -> Sort (cost=4.37..4.38 rows=1 width=148) (actual time=4445.863..4471.002 rows=63635 loops=1)"
" Sort Key: s0.channelid, s0.timevalue, sf.starttimestamp DESC"
" Sort Method: external merge Disk: 5624kB"
" -> Hash Right Join (cost=0.03..4.36 rows=1 width=148) (actual time=4102.842..4277.641 rows=63635 loops=1)"
" Hash Cond: (sf.channelid = s0.channelid)"
" Join Filter: (sf.starttimestamp <= s0.timevalue)"
" -> Seq Scan on svpolfactor sf (cost=0.00..3.68 rows=168 width=20) (actual time=0.013..0.083 rows=168 loops=1)"
" -> Hash (cost=0.02..0.02 rows=1 width=132) (actual time=4102.002..4102.002 rows=63635 loops=1)"
" Buckets: 65536 (originally 1024) Batches: 2 (originally 1) Memory Usage: 3841kB"
" -> CTE Scan on s0 (cost=0.00..0.02 rows=1 width=132) (actual time=20.413..4038.078 rows=63635 loops=1)"
" CTE s2"
" -> CTE Scan on s1 (cost=0.00..0.07 rows=1 width=168) (actual time=4445.910..4972.832 rows=61692 loops=1)"
" -> CTE Scan on s2 (cost=0.00..0.05 rows=1 width=232) (actual time=4445.934..5312.884 rows=61692 loops=1)"
"Planning time: 1.782 ms"
"Execution time: 6201.148 ms"
And I know that I must get 67106 rows.
At the time of writing, I know that S0 returns the correct number of rows, so the problem must lie in a later CTE.
Here is what I find really strange. This query:
EXPLAIN ANALYZE
WITH
S0 AS (
SELECT * FROM datastore.audit0
),
S1 AS (
SELECT
DISTINCT ON(ChannelId, TimeValue)
S0.*
,SF.Factor::NUMERIC AS svpolfactor
,COALESCE(-log(SF.Factor), 0)::INTEGER AS k
FROM
S0 LEFT JOIN settings.SVPOLFactor AS SF ON ((S0.ChannelId = SF.ChannelId) AND (SF.StartTimestamp <= S0.TimeValue))
ORDER BY
ChannelId, TimeValue, StartTimestamp DESC
)
SELECT * FROM S1 WHERE Channelid=123 AND GranulityId=4 -- POST-FILTERING
returns 10 rows:
"CTE Scan on s1 (cost=24554.34..24799.39 rows=1 width=196) (actual time=686.211..822.803 rows=10 loops=1)"
" Filter: ((channelid = 123) AND (granulityid = 4))"
" Rows Removed by Filter: 94890"
" CTE s0"
" -> Seq Scan on audit0 (cost=0.00..2603.20 rows=98020 width=160) (actual time=0.009..26.092 rows=98020 loops=1)"
" CTE s1"
" -> Unique (cost=21215.99..21951.14 rows=9802 width=176) (actual time=590.337..705.070 rows=94900 loops=1)"
" -> Sort (cost=21215.99..21461.04 rows=98020 width=176) (actual time=590.335..665.152 rows=99151 loops=1)"
" Sort Key: s0.channelid, s0.timevalue, sf.starttimestamp DESC"
" Sort Method: external merge Disk: 12376kB"
" -> Hash Left Join (cost=5.78..4710.74 rows=98020 width=176) (actual time=0.143..346.949 rows=99151 loops=1)"
" Hash Cond: (s0.channelid = sf.channelid)"
" Join Filter: (sf.starttimestamp <= s0.timevalue)"
" -> CTE Scan on s0 (cost=0.00..1960.40 rows=98020 width=160) (actual time=0.012..116.543 rows=98020 loops=1)"
" -> Hash (cost=3.68..3.68 rows=168 width=20) (actual time=0.096..0.096 rows=168 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 12kB"
" -> Seq Scan on svpolfactor sf (cost=0.00..3.68 rows=168 width=20) (actual time=0.006..0.045 rows=168 loops=1)"
"Planning time: 0.385 ms"
"Execution time: 846.179 ms"
whereas the next one returns the correct number of rows:
EXPLAIN ANALYZE
WITH
S0 AS (
SELECT * FROM datastore.audit0
WHERE Channelid=123 AND GranulityId=4 -- PRE FILTERING
),
S1 AS (
SELECT
DISTINCT ON(ChannelId, TimeValue)
S0.*
,SF.Factor::NUMERIC AS svpolfactor
,COALESCE(-log(SF.Factor), 0)::INTEGER AS k
FROM
S0 LEFT JOIN settings.SVPOLFactor AS SF ON ((S0.ChannelId = SF.ChannelId) AND (SF.StartTimestamp <= S0.TimeValue))
ORDER BY
ChannelId, TimeValue, StartTimestamp DESC
)
SELECT * FROM S1
with the following plan:
"CTE Scan on s1 (cost=133.62..133.86 rows=12 width=196) (actual time=0.580..0.598 rows=12 loops=1)"
" CTE s0"
" -> Bitmap Heap Scan on audit0 (cost=83.26..128.35 rows=12 width=160) (actual time=0.401..0.423 rows=12 loops=1)"
" Recheck Cond: ((channelid = 123) AND (granulityid = 4))"
" Heap Blocks: exact=12"
" -> BitmapAnd (cost=83.26..83.26 rows=12 width=0) (actual time=0.394..0.394 rows=0 loops=1)"
" -> Bitmap Index Scan on idx_audit0_channel (cost=0.00..11.12 rows=377 width=0) (actual time=0.055..0.055 rows=377 loops=1)"
" Index Cond: (channelid = 123)"
" -> Bitmap Index Scan on idx_audit0_granulity (cost=0.00..71.89 rows=3146 width=0) (actual time=0.331..0.331 rows=3120 loops=1)"
" Index Cond: (granulityid = 4)"
" CTE s1"
" -> Unique (cost=5.19..5.28 rows=12 width=176) (actual time=0.576..0.581 rows=12 loops=1)"
" -> Sort (cost=5.19..5.22 rows=12 width=176) (actual time=0.576..0.576 rows=12 loops=1)"
" Sort Key: s0.channelid, s0.timevalue, sf.starttimestamp DESC"
" Sort Method: quicksort Memory: 20kB"
" -> Hash Right Join (cost=0.39..4.97 rows=12 width=176) (actual time=0.522..0.552 rows=12 loops=1)"
" Hash Cond: (sf.channelid = s0.channelid)"
" Join Filter: (sf.starttimestamp <= s0.timevalue)"
" -> Seq Scan on svpolfactor sf (cost=0.00..3.68 rows=168 width=20) (actual time=0.006..0.022 rows=168 loops=1)"
" -> Hash (cost=0.24..0.24 rows=12 width=160) (actual time=0.446..0.446 rows=12 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 6kB"
" -> CTE Scan on s0 (cost=0.00..0.24 rows=12 width=160) (actual time=0.403..0.432 rows=12 loops=1)"
"Planning time: 0.448 ms"
"Execution time: 4.510 ms"
So the problem seems to lie in S1. No significant digits are defined for channelid = 123; therefore, without the LEFT JOIN, those records would not be generated at all. But that does not explain why some of them go missing.
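My mental model of DISTINCT ON, sketched in Python below, follows PostgreSQL's documented behaviour: after sorting, only the first row of each distinct-key group survives. The `distinct_on` helper and the sample rows are hypothetical, meant only to make that model concrete.

```python
from itertools import groupby

def distinct_on(rows, key, order):
    """Emulate SELECT DISTINCT ON (key...) ... ORDER BY order...:
    sort by the full ordering, then keep the first row of each key group."""
    rows = sorted(rows, key=order)
    return [next(grp) for _, grp in groupby(rows, key=key)]

# Hypothetical joined rows: (ChannelId, TimeValue, GranulityId)
rows = [
    (123, '2015-01-01 00:00:00', 1),
    (123, '2015-01-01 00:00:00', 4),  # same (ChannelId, TimeValue), different GranulityId
]
kept = distinct_on(rows,
                   key=lambda r: (r[0], r[1]),
                   order=lambda r: (r[0], r[1]))
# Only one of the two rows survives, even though GranulityId differs.
```

Note that with a key of (ChannelId, TimeValue) alone, rows that agree on that pair but differ in other columns collapse to a single row.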
Questions
- What am I doing wrong in this query?

I use a LEFT JOIN when fetching the significant digits precisely to preserve the cardinality, so it cannot remove records; after that, it is just arithmetic.
- How can pre-filtering return more rows than post-filtering?

This sounds troublesome to me. If I use no WHERE clause, all records (or combinations, since I know a JOIN is a WHERE clause) are generated, and the computations happen afterwards. When I use no additional WHERE (the original query), I miss rows (as in the example above). When I add a WHERE clause for filtering, the result differs (which might be acceptable if post-filtering returned more records than pre-filtering, not fewer).
Any constructive answer pointing out my mistakes and my misunderstandings of this query is welcome. Thank you.