The database has now grown to several million records and some of my queries are taking a long time (300 ms). Fortunately the queries don't need to look at most of that data; the most recent 100,000 records are sufficient, so my plan is to maintain a separate table containing the most recent 100,000 records and run the queries against that (a rough sketch of this is below). If anyone has suggestions for a better way of doing this, that would be great. My real question is: if the queries ever do need to run against the historical data, what is the next step? Things I have thought of:
- Upgrade the hardware
- Use an in-memory database
- Cache objects manually in my own data structures
Are these the right things to be considering, and are there other options? Do some database vendors have more features than others for dealing with this, for example the ability to specify that a particular table/index should be held entirely in memory?
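To make the "recent records" plan concrete, this is roughly what I had in mind; BerthVisit_recent is a made-up name and the refresh approach is arbitrary, so treat it as a sketch rather than my real schema:

-- BerthVisit_recent is a hypothetical working table with the same structure
-- and indexes as the real BerthVisit table
CREATE TABLE BerthVisit_recent LIKE BerthVisit;

-- Refreshed periodically (e.g. from a cron job): throw the old copy away and
-- reload the newest 100,000 rows, assuming berthVisitID increases over time
TRUNCATE TABLE BerthVisit_recent;
INSERT INTO BerthVisit_recent
SELECT * FROM BerthVisit
ORDER BY berthVisitID DESC
LIMIT 100000;

On the in-memory question, the only built-in thing I'm aware of in MySQL is the MEMORY storage engine, but as far as I know it has table-level locking and no transactions, so I'm not sure it would suit this.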
Sorry, I should have mentioned this: I'm using MySQL.
I forgot to mention indexes above. Honestly, indexes have been my only source of improvement so far. To identify bottlenecks I've been running the queries through maatkit to see whether the indexes are being used.
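For reference, besides maatkit the quickest check I know of is simply listing the indexes that exist on a table and comparing that against what the EXPLAIN output further down actually chooses, e.g.:

-- List the indexes on one of the tables from the query below,
-- to compare against the key column in the EXPLAIN output
SHOW INDEX FROM BerthVisit;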
I realise I'm now drifting away from the purpose of this question, so perhaps I should ask a separate one. My problem is that EXPLAIN says the query takes 10 ms, rather than the 300 ms that jprofiler reports. If anyone has any suggestions I'd really appreciate it. The query is:
select bv.*
from BerthVisit bv
inner join BerthVisitChainLinks on bv.berthVisitID = BerthVisitChainLinks.berthVisitID
inner join BerthVisitChain on BerthVisitChainLinks.berthVisitChainID = BerthVisitChain.berthVisitChainID
inner join BerthJourneyChains on BerthVisitChain.berthVisitChainID = BerthJourneyChains.berthVisitChainID
inner join BerthJourney on BerthJourneyChains.berthJourneyID = BerthJourney.berthJourneyID
inner join TDObjectBerthJourneyMap on BerthJourney.berthJourneyID = TDObjectBerthJourneyMap.berthJourneyID
inner join TDObject on TDObjectBerthJourneyMap.tdObjectID = TDObject.tdObjectID
where
BerthJourney.journeyType='A' and
bv.berthID=251860 and
TDObject.headcode='2L32' and
bv.depTime is null and
bv.arrTime > '2011-07-28 16:00:00'
The output of EXPLAIN is:
+----+-------------+-------------------------+-------------+---------------------------------------------+-------------------------+---------+------------------------------------------------+------+-------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------------------------+-------------+---------------------------------------------+-------------------------+---------+------------------------------------------------+------+-------------------------------------------------------+
| 1 | SIMPLE | bv | index_merge | PRIMARY,idx_berthID,idx_arrTime,idx_depTime | idx_berthID,idx_depTime | 9,9 | NULL | 117 | Using intersect(idx_berthID,idx_depTime); Using where |
| 1 | SIMPLE | BerthVisitChainLinks | ref | idx_berthVisitChainID,idx_berthVisitID | idx_berthVisitID | 8 | Network.bv.berthVisitID | 1 | Using where |
| 1 | SIMPLE | BerthVisitChain | eq_ref | PRIMARY | PRIMARY | 8 | Network.BerthVisitChainLinks.berthVisitChainID | 1 | Using where; Using index |
| 1 | SIMPLE | BerthJourneyChains | ref | idx_berthJourneyID,idx_berthVisitChainID | idx_berthVisitChainID | 8 | Network.BerthVisitChain.berthVisitChainID | 1 | Using where |
| 1 | SIMPLE | BerthJourney | eq_ref | PRIMARY,idx_journeyType | PRIMARY | 8 | Network.BerthJourneyChains.berthJourneyID | 1 | Using where |
| 1 | SIMPLE | TDObjectBerthJourneyMap | ref | idx_tdObjectID,idx_berthJourneyID | idx_berthJourneyID | 8 | Network.BerthJourney.berthJourneyID | 1 | Using where |
| 1 | SIMPLE | TDObject | eq_ref | PRIMARY,idx_headcode | PRIMARY | 8 | Network.TDObjectBerthJourneyMap.tdObjectID | 1 | Using where |
+----+-------------+-------------------------+-------------+---------------------------------------------+-------------------------+---------+------------------------------------------------+------+-------------------------------------------------------+
7 rows in set (0.01 sec)
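To follow up on the 10 ms vs 300 ms discrepancy: one way I can think of to separate time spent inside MySQL from time spent in the JDBC driver and application code is the server-side profiler, together with SQL_NO_CACHE so repeated runs aren't served from the query cache. This is just how I'd measure it, not necessarily the right way:

SET profiling = 1;

select SQL_NO_CACHE bv.*
from BerthVisit bv
inner join BerthVisitChainLinks on bv.berthVisitID = BerthVisitChainLinks.berthVisitID
inner join BerthVisitChain on BerthVisitChainLinks.berthVisitChainID = BerthVisitChain.berthVisitChainID
inner join BerthJourneyChains on BerthVisitChain.berthVisitChainID = BerthJourneyChains.berthVisitChainID
inner join BerthJourney on BerthJourneyChains.berthJourneyID = BerthJourney.berthJourneyID
inner join TDObjectBerthJourneyMap on BerthJourney.berthJourneyID = TDObjectBerthJourneyMap.berthJourneyID
inner join TDObject on TDObjectBerthJourneyMap.tdObjectID = TDObject.tdObjectID
where
BerthJourney.journeyType='A' and
bv.berthID=251860 and
TDObject.headcode='2L32' and
bv.depTime is null and
bv.arrTime > '2011-07-28 16:00:00';

-- Wall-clock duration of each statement run since profiling was enabled
SHOW PROFILES;

-- Per-stage breakdown for the first profiled statement
SHOW PROFILE FOR QUERY 1;

If the numbers from SHOW PROFILES stay around 10 ms, then presumably the remaining time is going into the driver, the network round trip, or mapping the result set into objects on the Java side rather than into MySQL itself.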