I have a Rails 3 application with 5 tables nested two levels deep (table1 has many > table2 has many > table3) that hold a lot of information. Think of it as a visitor-tracking system for websites: a lot of data gets saved, and it needs to be saved fast, and on the display side we run a lot of queries because the data is pulled out via counts.
When I first built the app I didn't think much about the SQL; I just got it running, figuring I'd start optimizing the DB side once there was real data to work with.
I now have about 1 million records across all the tables, and I think it's time to start optimizing, because response time is up to 1 second per request.
My Rails app runs a separate query for each count, without any joins, just the default user.websites.hits behavior (select the user, then another select to get the websites, and one select per website to get its visitor count). All in all I think it runs about 80 queries to build my page (I know...) with everything I need, so I wrote a single query that fetches all the results in one request.
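Roughly, that 80-query pattern looks like this (an illustrative sketch using the user.websites.hits names above, not my actual table names):

SELECT * FROM users    WHERE id = 2 LIMIT 1;
SELECT * FROM websites WHERE user_id = 2;
-- ...then one count per website row:
SELECT COUNT(*) FROM hits WHERE website_id = 1;
SELECT COUNT(*) FROM hits WHERE website_id = 2;
-- ...and so on, adding up to ~80 small queries per page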
The problem is that when I run that query in my database admin tool it takes about 2 seconds to fetch, while the page manages to run its 80 queries, load the templates and assets, and render in 1.1 seconds.
I'm not a database expert, so either my query is bad, or sometimes it's simply better not to join across as many tables as I do. If my data keeps growing like this, will my join query become the faster option, or will both approaches just get slower?
I have indexes on all the join columns and WHERE fields of that query, so I don't think that's the problem.
I've thought about caching, but it feels premature to start that at 1 million small records.
Any suggestions?
domain -> has_many :channels (we use them for split testing)
channel -> has_many :visits, :visitors (unique visits by IP), :sales
product -> has_many :visits, :visitors (unique visits by IP), :sales
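For context, the tables boil down to roughly this (a sketch only; column names inferred from the query and plan below, types are my guesses):

CREATE TABLE products (id serial PRIMARY KEY);
CREATE TABLE domains  (id serial PRIMARY KEY, user_id integer, domain text, deleted_at timestamp);
CREATE TABLE channels (id serial PRIMARY KEY, domain_id integer REFERENCES domains, deleted_at timestamp);
CREATE TABLE visits   (id serial PRIMARY KEY, channel_id integer REFERENCES channels,
                       product_id integer REFERENCES products, ip_address text);
CREATE TABLE sales    (id serial PRIMARY KEY, channel_id integer REFERENCES channels,
                       product_id integer REFERENCES products);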
The query tries to fetch the domains together with:
domain_name,
channels_count,
visits_count,
visitors_count,
sales_count and
products_count via the visits table
ACTUAL QUERY:
SELECT
domains.id,
domains.domain,
COUNT(distinct kc.id) AS channels_count,
COUNT(distinct kv.id) AS visits_count,
COUNT(distinct kv.ip_address) AS visitors_count,
COUNT(distinct kp.id) AS products_count,
COUNT(distinct ks.id) AS sales_count
FROM
domains
LEFT JOIN
channels AS kc ON domains.id=kc.domain_id
LEFT JOIN
visits AS kv ON kc.id=kv.channel_id
LEFT JOIN
products AS kp ON kv.product_id=kp.id
LEFT JOIN
sales AS ks ON kc.id=ks.channel_id
WHERE
(domains.user_id=2)
GROUP BY
domains.id
LIMIT 20
OFFSET 0
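One rework that might help, since the plan below shows the joins fanning out (the Merge Left Join feeds ~100k rows into GroupAggregate, which the COUNT(distinct ...) calls then have to de-duplicate): aggregate each child table down to one row per domain first, and only then join, so the row sets can't multiply against each other. A sketch of that idea, using the same table/column names as above (untested; the deleted_at IS NULL filters Rails adds would also need repeating inside each subquery):

SELECT
    d.id,
    d.domain,
    COALESCE(c.channels_count, 0) AS channels_count,
    COALESCE(v.visits_count,   0) AS visits_count,
    COALESCE(v.visitors_count, 0) AS visitors_count,
    COALESCE(v.products_count, 0) AS products_count,
    COALESCE(s.sales_count,    0) AS sales_count
FROM domains d
-- each subquery yields at most one row per domain, so these three
-- LEFT JOINs cannot multiply rows against each other
LEFT JOIN (
    SELECT domain_id, COUNT(id) AS channels_count
    FROM channels
    GROUP BY domain_id
) c ON c.domain_id = d.id
LEFT JOIN (
    SELECT kc.domain_id,
           COUNT(kv.id)                  AS visits_count,
           COUNT(DISTINCT kv.ip_address) AS visitors_count,
           -- stands in for the join against products; gives the same
           -- number as long as every visits.product_id points at a
           -- real product row
           COUNT(DISTINCT kv.product_id) AS products_count
    FROM channels kc
    JOIN visits kv ON kv.channel_id = kc.id
    GROUP BY kc.domain_id
) v ON v.domain_id = d.id
LEFT JOIN (
    SELECT kc.domain_id, COUNT(ks.id) AS sales_count
    FROM channels kc
    JOIN sales ks ON ks.channel_id = kc.id
    GROUP BY kc.domain_id
) s ON s.domain_id = d.id
WHERE d.user_id = 2
LIMIT 20 OFFSET 0;

The distinct counts here stay de-duplicated per domain, matching the original query's semantics, rather than summing per-channel counts (which could double-count an IP that hits two channels of the same domain).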
"QUERY PLAN"
"Limit (cost=7449.20..18656.41 rows=20 width=50) (actual time=947.837..5093.929 rows=20 loops=1)"
" -> GroupAggregate (cost=7449.20..20897.86 rows=24 width=50) (actual time=947.832..5093.845 rows=20 loops=1)"
" -> Merge Left Join (cost=7449.20..17367.45 rows=282413 width=50) (actual time=947.463..4661.418 rows=99940 loops=1)"
" Merge Cond: (domains.id = kc.domain_id)"
" Filter: (kc.deleted_at IS NULL)"
" -> Index Scan using domains_pkey on domains (cost=0.00..12.67 rows=24 width=30) (actual time=0.022..0.146 rows=21 loops=1)"
" Filter: ((deleted_at IS NULL) AND (user_id = 2))"
" -> Materialize (cost=7449.20..16619.27 rows=58836 width=32) (actual time=947.430..4277.029 rows=99923 loops=1)"
" -> Nested Loop Left Join (cost=7449.20..16472.18 rows=58836 width=32) (actual time=947.424..3872.057 rows=99923 loops=1)"
" Join Filter: (kc.id = kv.channel_id)"
" -> Index Scan using index_channels_on_domain_id on channels kc (cost=0.00..12.33 rows=5 width=16) (actual time=0.008..0.090 rows=5 loops=1)"
" -> Materialize (cost=7449.20..10814.25 rows=58836 width=24) (actual time=189.470..536.745 rows=99923 loops=5)"
" -> Hash Right Join (cost=7449.20..10175.07 rows=58836 width=24) (actual time=947.296..1446.256 rows=99923 loops=1)"
" Hash Cond: (ks.product_id = kp.id)"
" -> Seq Scan on sales ks (cost=0.00..1082.22 rows=59022 width=8) (actual time=0.027..119.767 rows=59022 loops=1)"
" -> Hash (cost=6368.75..6368.75 rows=58836 width=20) (actual time=947.213..947.213 rows=58836 loops=1)"
" Buckets: 2048 Batches: 4 Memory Usage: 808kB"
" -> Hash Left Join (cost=3151.22..6368.75 rows=58836 width=20) (actual time=376.685..817.777 rows=58836 loops=1)"
" Hash Cond: (kv.product_id = kp.id)"
" -> Seq Scan on visits kv (cost=0.00..1079.36 rows=58836 width=20) (actual time=0.011..135.584 rows=58836 loops=1)"
" -> Hash (cost=1704.43..1704.43 rows=88143 width=4) (actual time=376.537..376.537 rows=88143 loops=1)"
" Buckets: 4096 Batches: 4 Memory Usage: 785kB"
" -> Seq Scan on products kp (cost=0.00..1704.43 rows=88143 width=4) (actual time=0.006..187.174 rows=88143 loops=1)"
"Total runtime: 5096.723 ms"