
I'm trying to query a table (partitioned by month) that is approaching 20M rows. I need to group by DATE(transaction_utc) as well as by country_id. If I drop the GROUP BY and the aggregation, the query returns just over 40k rows, which isn't much, but adding the GROUP BY makes the query significantly slower — unless said GROUP BY is on the transaction_utc column, in which case it becomes fast.

I've been trying to optimize the first query below by tweaking the query and/or the indexes, and got to the point below (about 2x faster than where I started), but I'm still stuck with a 5s query for summarizing 45k rows, which seems like way too much.

For reference, this box is a brand-new 24-logical-core, 64GB RAM, MariaDB 5.5.x server with far more available InnoDB buffer pool than index space on the server, so there shouldn't be any RAM or CPU pressure.

So, I'm looking for ideas on what's causing the slowdown, and suggestions for speeding it up. Any feedback would be greatly appreciated! :)

Okay, on to the details...

The following query (the one I actually need) takes approximately 5 seconds (+/-), and returns fewer than 100 rows:

SELECT lss.`country_id` AS CountryId
, Date(lss.`transaction_utc`) AS TransactionDate
, c.`name` AS CountryName,  lss.`country_id` AS CountryId
, COALESCE(SUM(lss.`sale_usd`),0) AS SaleUSD
, COALESCE(SUM(lss.`commission_usd`),0) AS CommissionUSD  
FROM `sales` lss  
JOIN `countries` c ON lss.`country_id` = c.`country_id`  
WHERE ( lss.`transaction_utc` BETWEEN '2012-09-26' AND '2012-10-26' AND lss.`username` = 'someuser' )  GROUP BY lss.`country_id`, DATE(lss.`transaction_utc`)

The EXPLAIN SELECT for the same query is below. Note that it's not using the transaction_utc key. Shouldn't it be using my covering index?

id  select_type table   type    possible_keys   key key_len ref rows    Extra
1   SIMPLE  lss ref idx_unique,transaction_utc,country_id   idx_unique  50  const   1208802 Using where; Using temporary; Using filesort
1   SIMPLE  c   eq_ref  PRIMARY PRIMARY 4   georiot.lss.country_id  1   

Now on to a few other variants I've tried in order to determine what's going on...

The following query (changed GROUP BY) takes approximately 5 seconds (+/-), and returns only 3 rows:

SELECT lss.`country_id` AS CountryId
, DATE(lss.`transaction_utc`) AS TransactionDate
, c.`name` AS CountryName,  lss.`country_id` AS CountryId
, COALESCE(SUM(lss.`sale_usd`),0) AS SaleUSD
, COALESCE(SUM(lss.`commission_usd`),0) AS CommissionUSD  
FROM `sales` lss  
JOIN `countries` c ON lss.`country_id` = c.`country_id`  
WHERE ( lss.`transaction_utc` BETWEEN '2012-09-26' AND '2012-10-26' AND lss.`username` = 'someuser' )  GROUP BY lss.`country_id`

The following query (removed GROUP BY) takes 4-5 seconds (+/-) and returns 1 row:

SELECT lss.`country_id` AS CountryId
    , DATE(lss.`transaction_utc`) AS TransactionDate
    , c.`name` AS CountryName,  lss.`country_id` AS CountryId
    , COALESCE(SUM(lss.`sale_usd`),0) AS SaleUSD
    , COALESCE(SUM(lss.`commission_usd`),0) AS CommissionUSD  
    FROM `sales` lss  
    JOIN `countries` c ON lss.`country_id` = c.`country_id`  
    WHERE ( lss.`transaction_utc` BETWEEN '2012-09-26' AND '2012-10-26' AND lss.`username` = 'someuser' )

The following query takes .00X seconds (+/-) and returns ~45k rows. This suggests to me that, at most, we're only trying to group 45K rows into fewer than 100 groups (as in my initial query):

SELECT lss.`country_id` AS CountryId
    , DATE(lss.`transaction_utc`) AS TransactionDate
    , c.`name` AS CountryName,  lss.`country_id` AS CountryId
    , COALESCE(SUM(lss.`sale_usd`),0) AS SaleUSD
    , COALESCE(SUM(lss.`commission_usd`),0) AS CommissionUSD  
    FROM `sales` lss  
    JOIN `countries` c ON lss.`country_id` = c.`country_id`  
    WHERE ( lss.`transaction_utc` BETWEEN '2012-09-26' AND '2012-10-26' AND lss.`username` = 'someuser' )
GROUP BY lss.`transaction_utc`

Table schema:

CREATE TABLE IF NOT EXISTS `sales` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `user_linkshare_account_id` int(11) unsigned NOT NULL,
  `username` varchar(16) NOT NULL,
  `country_id` int(4) unsigned NOT NULL,
  `order` varchar(16) NOT NULL,
  `raw_tracking_code` varchar(255) DEFAULT NULL,
  `transaction_utc` datetime NOT NULL,
  `processed_utc` datetime NOT NULL ,
  `sku` varchar(16) NOT NULL,
  `sale_original` decimal(10,4) NOT NULL,
  `sale_usd` decimal(10,4) NOT NULL,
  `quantity` int(11) NOT NULL,
  `commission_original` decimal(10,4) NOT NULL,
  `commission_usd` decimal(10,4) NOT NULL,
  `original_currency` char(3) NOT NULL,
  PRIMARY KEY (`id`,`transaction_utc`),
  UNIQUE KEY `idx_unique` (`username`,`order`,`processed_utc`,`sku`,`transaction_utc`),
  KEY `raw_tracking_code` (`raw_tracking_code`),
  KEY `idx_usd_amounts` (`sale_usd`,`commission_usd`),
  KEY `idx_countries` (`country_id`),
  KEY `transaction_utc` (`transaction_utc`,`username`,`country_id`,`sale_usd`,`commission_usd`)
) ENGINE=InnoDB  DEFAULT CHARSET=utf8
/*!50100 PARTITION BY RANGE ( TO_DAYS(`transaction_utc`))
(PARTITION pOLD VALUES LESS THAN (735112) ENGINE = InnoDB,
 PARTITION p201209 VALUES LESS THAN (735142) ENGINE = InnoDB,
 PARTITION p201210 VALUES LESS THAN (735173) ENGINE = InnoDB,
 PARTITION p201211 VALUES LESS THAN (735203) ENGINE = InnoDB,
 PARTITION p201212 VALUES LESS THAN (735234) ENGINE = InnoDB,
 PARTITION pMAX VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */ AUTO_INCREMENT=19696320 ;

1 Answer


The problematic part is probably the GROUP BY DATE(transaction_utc). You also claim to have a covering index for this query, but I see none. Your 5-column index has all the columns used in the query, but not in the best order (which is: WHERE - GROUP BY - SELECT).

So, the engine, finding no useful index, would have to evaluate this function for all 20M rows. Actually, it finds an index that starts with username (the idx_unique) and uses that, so it only has to evaluate the function for 1.2M rows. If you had a (transaction_utc) or a (username, transaction_utc) index, it would choose the most useful of the three.
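As an optional experiment (not part of the fix), you can force the optimizer onto the existing 5-column index and compare timings and EXPLAIN output; FORCE INDEX is standard MySQL/MariaDB syntax, and `transaction_utc` is the index name from the schema above (the join to `countries` is dropped here to keep the test minimal):

```sql
-- Experiment: force the 5-column index, then compare timing / EXPLAIN
SELECT lss.`country_id` AS CountryId
     , DATE(lss.`transaction_utc`) AS TransactionDate
     , COALESCE(SUM(lss.`sale_usd`),0) AS SaleUSD
     , COALESCE(SUM(lss.`commission_usd`),0) AS CommissionUSD
FROM `sales` lss FORCE INDEX (`transaction_utc`)
WHERE lss.`transaction_utc` BETWEEN '2012-09-26' AND '2012-10-26'
  AND lss.`username` = 'someuser'
GROUP BY lss.`country_id`, DATE(lss.`transaction_utc`);
```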

Could you change the table structure by splitting the column into date and time parts? If you can, then an index on (username, country_id, transaction_date) — or, changing the order of the two grouping columns, (username, transaction_date, country_id) — would be quite efficient.

A covering index on (username, country_id, transaction_date, sale_usd, commission_usd) would be even better.
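Expressed as DDL, that covering index would look like this (a sketch assuming the table has been restructured with a real `transaction_date` DATE column; the index name is made up):

```sql
ALTER TABLE `sales`
    ADD INDEX `idx_user_country_date`   -- hypothetical name
        (`username`, `country_id`, `transaction_date`, `sale_usd`, `commission_usd`);
```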


If you want to keep the current structure, try changing the column order in your 5-column index to:

(username, country_id, transaction_utc, sale_usd, commission_usd)

Or:

(username, transaction_utc, country_id, sale_usd, commission_usd)
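Concretely, rebuilding the existing 5-column index in the first of those orders could be done like this (a sketch; dropping and re-adding in a single ALTER rewrites the table only once):

```sql
ALTER TABLE `sales`
    DROP INDEX `transaction_utc`,
    ADD INDEX `transaction_utc`
        (`username`, `country_id`, `transaction_utc`, `sale_usd`, `commission_usd`);
```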

Since you are using MariaDB, you can use the VIRTUAL columns feature without changing the existing columns:

Add a virtual (persistent) column and the appropriate index:

ALTER TABLE sales 
    ADD COLUMN transaction_date DATE
               AS (DATE(transaction_utc)) 
               PERSISTENT,
    ADD INDEX special_IDX 
        (username, country_id, transaction_date, sale_usd, commission_usd) ;
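With the persistent column and index in place, the query can filter and group on `transaction_date` directly (a sketch; note that `transaction_date BETWEEN ...` includes the whole last day, whereas the original DATETIME range cuts off at midnight on 2012-10-26):

```sql
SELECT lss.`country_id` AS CountryId
     , lss.`transaction_date` AS TransactionDate
     , c.`name` AS CountryName
     , COALESCE(SUM(lss.`sale_usd`),0) AS SaleUSD
     , COALESCE(SUM(lss.`commission_usd`),0) AS CommissionUSD
FROM `sales` lss
JOIN `countries` c ON lss.`country_id` = c.`country_id`
WHERE lss.`transaction_date` BETWEEN '2012-09-26' AND '2012-10-26'
  AND lss.`username` = 'someuser'
GROUP BY lss.`country_id`, lss.`transaction_date`;
```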
Answered 2012-10-27T20:35:16.340