
I ran into an issue in our Teradata QA environment where a simple query that used to run in under 1 minute now takes 12 minutes to complete. This select pulls 5 fields based on a simple inner join:

select a.material
    , b.season
    , b.theme
    , b.collection
from SalesOrders_view.Allocation_Deliveries_cur a
inner join SalesOrders_view.Material_Attributes_cur b
    on a.material = b.material;

I can run this same query in our Prod environment and it returns in under a minute, while running against roughly 200K more records than in QA.

The total volume in SalesOrders.Allocation_Deliveries is under 1.1 million records, and SalesOrders.Material_Attributes holds under 129K records. These are small data sets.

I compared the Explain plans in both environments and there is a glaring difference in the estimated spool volume at the first join step. The estimate in Production is on the money, while the one in QA is off by an order of magnitude. Yet the data and the tables/views are identical in both systems, we have collected statistics in every conceivable way, and we can see that the table demographics in both systems match.

Finally, this query has always returned in under a minute in every environment, including QA, as it still does in Production; this behavior only surfaced within the last week or so. I discussed it with our DBA and we have made no changes to software or configuration. He is new, but he seems to know what he is doing while still getting caught up in a new environment.

I am looking for some pointers on what to check next. I have compared the relevant table/view definitions between QA and Prod and they are identical. The table demographics in each system are also identical (I went through them together with our DBA to make sure).
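For reference, we pulled the demographics with the dictionary HELP syntax; a sketch against the tables named above, assuming statistics have been collected on them:

HELP STATISTICS SalesOrders.Allocation_Deliveries;
HELP STATISTICS SalesOrders.Material_Attributes;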

Any help is appreciated. Thanks in advance. Pat

Here is the Explain from QA. Notice the very low estimate at step 5 (144 rows). In Prod, the same Explain shows > 1M rows, which comes close to what I know to be true:

Explain select a.material
    , b.season
    , b.theme
    , b.collection
from SalesOrders_view.Allocation_Deliveries a
inner join SalesOrders_view.Material_Attributes_cur b
    on a.material = b.material;

  1) First, we lock SalesOrders.Allocation_Deliveries in view
     SalesOrders_view.Allocation_Deliveries for access, and we lock
     SalesOrders.Material_Attributes in view SalesOrders_view.Material_Attributes_cur for
     access. 
  2) Next, we do an all-AMPs SUM step to aggregate from
     SalesOrders.Material_Attributes in view SalesOrders_view.Material_Attributes_cur by way
     of an all-rows scan with no residual conditions
     , grouping by field1 ( SalesOrders.Material_Attributes.material
     ,SalesOrders.Material_Attributes.season ,SalesOrders.Material_Attributes.theme
     ,SalesOrders.Material_Attributes.theme ,SalesOrders.Material_Attributes.af_grdval
     ,SalesOrders.Material_Attributes.af_stcat
     ,SalesOrders.Material_Attributes.Material_Attributes_SRC_SYS_NM).  Aggregate
     Intermediate Results are computed locally, then placed in Spool 4. 
     The size of Spool 4 is estimated with high confidence to be
     129,144 rows (41,713,512 bytes).  The estimated time for this step
     is 0.06 seconds. 
  3) We execute the following steps in parallel. 
       1) We do an all-AMPs RETRIEVE step from Spool 4 (Last Use) by
          way of an all-rows scan into Spool 2 (all_amps), which is
          redistributed by the hash code of (
          SalesOrders.Material_Attributes.Field_9,
          SalesOrders.Material_Attributes.Material_Attributes_SRC_SYS_NM,
          SalesOrders.Material_Attributes.Field_7, SalesOrders.Material_Attributes.Field_6,
          SalesOrders.Material_Attributes.theme, SalesOrders.Material_Attributes.theme,
          SalesOrders.Material_Attributes.season, SalesOrders.Material_Attributes.material)
          to all AMPs.  Then we do a SORT to order Spool 2 by row hash
          and the sort key in spool field1 eliminating duplicate rows. 
          The size of Spool 2 is estimated with low confidence to be
          129,144 rows (23,504,208 bytes).  The estimated time for this
          step is 0.11 seconds. 
       2) We do an all-AMPs RETRIEVE step from SalesOrders.Material_Attributes in
          view SalesOrders_view.Material_Attributes_cur by way of an all-rows scan
          with no residual conditions locking for access into Spool 6
          (all_amps), which is redistributed by the hash code of (
          SalesOrders.Material_Attributes.material, SalesOrders.Material_Attributes.season,
          SalesOrders.Material_Attributes.theme, SalesOrders.Material_Attributes.theme,
          SalesOrders.Material_Attributes.Material_Attributes_SRC_SYS_NM,
          SalesOrders.Material_Attributes.Material_Attributes_UPD_TS, (CASE WHEN (NOT
          (SalesOrders.Material_Attributes.af_stcat IS NULL )) THEN
          (SalesOrders.Material_Attributes.af_stcat) ELSE ('') END )(VARCHAR(16),
          CHARACTER SET UNICODE, NOT CASESPECIFIC), (CASE WHEN (NOT
          (SalesOrders.Material_Attributes.af_grdval IS NULL )) THEN
          (SalesOrders.Material_Attributes.af_grdval) ELSE ('') END )(VARCHAR(8),
          CHARACTER SET UNICODE, NOT CASESPECIFIC)) to all AMPs.  Then
          we do a SORT to order Spool 6 by row hash.  The size of Spool
          6 is estimated with high confidence to be 129,144 rows (
          13,430,976 bytes).  The estimated time for this step is 0.08
          seconds. 
  4) We do an all-AMPs RETRIEVE step from Spool 2 (Last Use) by way of
     an all-rows scan into Spool 7 (all_amps), which is built locally
     on the AMPs.  Then we do a SORT to order Spool 7 by the hash code
     of (SalesOrders.Material_Attributes.material, SalesOrders.Material_Attributes.season,
     SalesOrders.Material_Attributes.theme, SalesOrders.Material_Attributes.theme,
     SalesOrders.Material_Attributes.Field_6, SalesOrders.Material_Attributes.Field_7,
     SalesOrders.Material_Attributes.Material_Attributes_SRC_SYS_NM,
     SalesOrders.Material_Attributes.Field_9).  The size of Spool 7 is estimated
     with low confidence to be 129,144 rows (13,301,832 bytes).  The
     estimated time for this step is 0.05 seconds. 
  5) We do an all-AMPs JOIN step from Spool 6 (Last Use) by way of an
     all-rows scan, which is joined to Spool 7 (Last Use) by way of an
     all-rows scan.  Spool 6 and Spool 7 are joined using an inclusion
     merge join, with a join condition of ("(material = material) AND
     ((season = season) AND ((theme = theme) AND ((theme =
     theme) AND (((( CASE WHEN (NOT (af_grdval IS NULL )) THEN
     (af_grdval) ELSE ('') END ))= Field_6) AND (((( CASE WHEN (NOT
     (AF_STCAT IS NULL )) THEN (AF_STCAT) ELSE ('') END ))= Field_7)
     AND ((Material_Attributes_SRC_SYS_NM = Material_Attributes_SRC_SYS_NM) AND
     (Material_Attributes_UPD_TS = Field_9 )))))))").  The result goes into Spool
     8 (all_amps), which is duplicated on all AMPs.  The size of Spool
     8 is estimated with low confidence to be 144 rows (5,616 bytes). 
     The estimated time for this step is 0.04 seconds. 
  6) We do an all-AMPs JOIN step from Spool 8 (Last Use) by way of an
     all-rows scan, which is joined to SalesOrders.Allocation_Deliveries in view
     SalesOrders_view.Allocation_Deliveries by way of an all-rows scan with no
     residual conditions.  Spool 8 and SalesOrders.Allocation_Deliveries are
     joined using a single partition hash join, with a join condition
     of ("SalesOrders.Allocation_Deliveries.material = material").  The result goes
     into Spool 1 (group_amps), which is built locally on the AMPs. 
     The size of Spool 1 is estimated with low confidence to be 3,858
     rows (146,604 bytes).  The estimated time for this step is 0.44
     seconds. 
  7) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> The contents of Spool 1 are sent back to the user as the result of
     statement 1.  The total estimated time is 0.70 seconds.

Here is what the record distribution looks like, along with the SQL I used to generate the result set:

SELECT HASHAMP(HASHBUCKET(HASHROW( MATERIAL ))) AS
"AMP#",COUNT(*)
FROM EDW_LND_SAP_VIEW.EMDMMU01_CUR
GROUP BY 1
ORDER BY 2 DESC;

Output: highest was AMP 137 with 1,093 rows; lowest was AMP 72 with 768 rows; 144 AMPs in total.


1 Answer


Statistics Recommendations

Run the following in both PROD and QA and post the differences (obfuscate column names if necessary):

DIAGNOSTIC HELPSTATS ON FOR SESSION;

EXPLAIN
select a.material
    , b.season
    , b.theme
    , b.collection
from SalesOrders_view.Allocation_Deliveries_cur a
inner join SalesOrders_view.Material_Attributes_cur b
    on a.material = b.material;

When run in conjunction with the EXPLAIN command, this diagnostic produces a list of recommended statistics that could be beneficial to the optimizer in generating the lowest-cost query plan. It may yield nothing different, or it may point to something that differs between the environments (in the data or otherwise).
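If the diagnostic does flag missing statistics, collecting them is one statement per recommendation. A minimal sketch using the join column from your query (the actual recommendations may name different columns):

COLLECT STATISTICS COLUMN (material) ON SalesOrders.Allocation_Deliveries;
COLLECT STATISTICS COLUMN (material) ON SalesOrders.Material_Attributes;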

Views and JOIN Conditions

Based on your EXPLAIN plan, it would appear that one or both of the views in the SalesOrders_View database are using an EXISTS clause. That EXISTS clause relies on a COALESCE condition (or explicit CASE logic) to accommodate a comparison between a column defined as NOT NULL in one table and a column defined to allow NULLs in the other. This can affect the performance of that join.
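Purely as an illustration of the pattern being described (the actual view DDL is not shown; this shape is guessed from the CASE expressions and the inclusion merge join in steps 3 and 5 of the plan):

REPLACE VIEW SalesOrders_view.Material_Attributes_cur AS
SELECT a.material, a.season, a.theme, a.collection
FROM SalesOrders.Material_Attributes a
WHERE EXISTS (
    SELECT 1
    FROM SalesOrders.Material_Attributes b
    WHERE a.material = b.material
      AND a.season = b.season
      -- NULLable columns compared to NOT NULL columns via COALESCE:
      AND COALESCE(a.af_grdval, '') = COALESCE(b.af_grdval, '')
      AND COALESCE(a.af_stcat, '') = COALESCE(b.af_stcat, '')
);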

Data Distribution

Your distribution results appear to be from the PRODUCTION environment. (Based on the number of AMPs and the row counts shown for the highest and lowest AMPs.) What does this look like in QA?
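The same hash-distribution check you posted can be pointed at the QA copy of the table; a sketch using the table from your query (swap in whichever underlying table you checked above):

SELECT HASHAMP(HASHBUCKET(HASHROW(material))) AS "AMP#", COUNT(*)
FROM SalesOrders.Material_Attributes
GROUP BY 1
ORDER BY 2 DESC;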

Edit - 2013-01-09 09:21

This may seem like a silly question if the data was copied over from Prod two months ago, but have statistics been re-collected since? Stale statistics sitting on top of replaced data could account for differences in the query plans between the environments.
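Re-collecting refreshes every statistic already defined on a table, so a blanket refresh is one statement per table (using the table names from the question):

COLLECT STATISTICS ON SalesOrders.Allocation_Deliveries;
COLLECT STATISTICS ON SalesOrders.Material_Attributes;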

Are you collecting PARTITION statistics on your tables, even though they are not PPI tables? These help the optimizer with cardinality estimates.
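For reference, PARTITION statistics are collected like any other column statistic:

COLLECT STATISTICS COLUMN (PARTITION) ON SalesOrders.Allocation_Deliveries;
COLLECT STATISTICS COLUMN (PARTITION) ON SalesOrders.Material_Attributes;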

Are you the only workload running on the QA system?

Have you looked at the DBQL metrics to compare the query's CPU and IO consumption in each environment? Look at the IO skew, CPU skew, and unnecessary IO metrics as well.
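A starting-point sketch against the standard DBQL view (this assumes query logging is enabled; the skew expressions are the usual busiest-AMP-versus-average idiom, not official metric definitions; DelayTime also speaks to the throttle question below):

SELECT QueryID,
       StartTime,
       AMPCPUTime,
       TotalIOCount,
       DelayTime,
       -- skew factor: how far the busiest AMP sits above the per-AMP average
       MaxAMPCPUTime * (HASHAMP() + 1) / NULLIFZERO(AMPCPUTime) AS CPUSkew,
       MaxAMPIO * (HASHAMP() + 1) / NULLIFZERO(TotalIOCount) AS IOSkew
FROM DBC.QryLogV
WHERE QueryText LIKE '%Allocation_Deliveries%'
ORDER BY StartTime DESC;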

Do you have delay throttles in place in the QA environment that could be holding your workload back? That would give the perception that things take longer to run in QA, when in fact the actual CPU and IO consumption between QA and PROD is identical.

Do you have access to Viewpoint?

If so, have you looked at your query with the My Queries and/or Query Spotlight portlets to observe its behavior?

Do you know which step in the query plan is the most expensive or time-consuming? Viewpoint Rewind with the portlets I mentioned, or step logging in DBQL, can show you this.
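Step logging is enabled per user with standard DBQL syntax, after which the per-step rows land in DBC.QryLogStepsV (the user name below is a placeholder):

BEGIN QUERY LOGGING WITH STEPINFO ON your_user;  -- placeholder user name

-- after re-running the query:
SELECT QueryID, StepLev1Num, StepName, EstRowCount, RowCount, CPUTime
FROM DBC.QryLogStepsV
ORDER BY QueryID, StepLev1Num;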

Are the DBS Control settings identical between the environments? Have your DBA take a look at this. There are settings in there that can influence the join plans used by the optimizer.

In the end, if the data, table structures, indexes, and statistics are the same on two systems with identical hardware and TDBMS patch levels, you should not get two different EXPLAIN plans. If that is where you end up, I would suggest contacting the GSC and getting them involved.

answered 2013-01-08T23:50:25.820