Suppose I want to inner-join three tables A, B, and C, where C is very small.
// DUMMY EXAMPLE with in-memory tables, but the same issue occurs if the tables are loaded via spark.read.parquet("")
// (in spark-shell; outside the shell, import spark.implicits._ first)
val A = (1 to 1000000).toSeq.toDF("A")
val B = (1 to 1000000).toSeq.toDF("B")
val C = (1 to 10).toSeq.toDF("C")
Moreover, I have no control over the order of the joins:
// needs import org.apache.spark.sql.functions.expr
val CASE1 = A.join(B, expr("A=B"), "inner").join(C, expr("A=C"), "inner")
val CASE2 = A.join(C, expr("A=C"), "inner").join(B, expr("A=B"), "inner")
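As an aside: one way to side-step join ordering for a tiny table (though it conflicts with the autoBroadcastJoinThreshold=-1 setting further down) is an explicit broadcast hint on C. A minimal self-contained sketch; the names (hinted, the small row counts) are just for illustration:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{broadcast, expr}

val spark = SparkSession.builder.master("local[*]").appName("hint-sketch").getOrCreate()
import spark.implicits._

val A = (1 to 100000).toSeq.toDF("A")
val B = (1 to 100000).toSeq.toDF("B")
val C = (1 to 10).toSeq.toDF("C")

// Explicitly mark C for broadcast: the planner then uses a BroadcastHashJoin
// for the A-C side regardless of where it appears in the join chain.
val hinted = A.join(B, expr("A=B"), "inner").join(broadcast(C), expr("A=C"), "inner")
println(hinted.queryExecution.executedPlan)
```

This avoids one shuffle-and-sort for the small table, but it is a manual hint, not the automatic CBO reordering asked about below.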
Running both shows that CASE1 is 30-40% slower than CASE2.
So the question is: how do I get Spark's CBO to automatically rewrite CASE1 into CASE2, for in-memory tables or tables loaded with Spark's parquet reader?
I have tried:
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", -1)
spark.conf.set("spark.sql.cbo.enabled", "true")
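Note that spark.sql.cbo.enabled alone does not reorder joins; the reordering rule is gated behind a separate flag (default false), and it only acts when table/column statistics exist. A sketch of the config fragment I believe is involved:

```scala
// Enable the cost-based optimizer itself:
spark.conf.set("spark.sql.cbo.enabled", "true")
// Join reordering is a separate rule with its own flag (default false):
spark.conf.set("spark.sql.cbo.joinReorder.enabled", "true")
```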
A.createOrReplaceTempView("A")
spark.sql("ANALYZE TABLE A COMPUTE STATISTICS")
but this throws:
org.apache.spark.sql.catalyst.analysis.NoSuchTableException: Table or view 'a' not found in database 'default'
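For context: ANALYZE TABLE only works on tables registered in the session catalog, and createOrReplaceTempView creates a temp view with no catalog entry to attach statistics to, hence the NoSuchTableException. The catalog-table route does write data out (under spark.sql.warehouse.dir), but as far as I can tell it works with the default in-memory catalog, with no Hive installation required. A sketch; the table name A_tbl is just for illustration:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.master("local[*]").appName("stats-sketch").getOrCreate()
import spark.implicits._
spark.conf.set("spark.sql.cbo.enabled", "true")

val A = (1 to 1000).toSeq.toDF("A")

// saveAsTable registers A in the session catalog, giving ANALYZE
// somewhere to store the statistics:
A.write.mode("overwrite").saveAsTable("A_tbl")
spark.sql("ANALYZE TABLE A_tbl COMPUTE STATISTICS")
// Column-level stats, which CBO join reordering uses for its estimates:
spark.sql("ANALYZE TABLE A_tbl COMPUTE STATISTICS FOR COLUMNS A")

val A2 = spark.table("A_tbl") // stats-bearing replacement for A in the joins
```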
Is there any other way to activate the CBO without having to save the tables to Hive?
Addendum:
- Even with spark.conf.set("spark.sql.cbo.enabled", "true"), no cost estimates are shown in the Spark Web UI
- CASE1.explain and CASE2.explain produce different plans, shown below
CASE1.explain
== Physical Plan ==
*(5) SortMergeJoin [A#3], [C#13], Inner
:- *(3) SortMergeJoin [A#3], [B#8], Inner
: :- *(1) Sort [A#3 ASC NULLS FIRST], false, 0
: : +- Exchange hashpartitioning(A#3, 200)
: : +- LocalTableScan [A#3]
: +- *(2) Sort [B#8 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(B#8, 200)
: +- LocalTableScan [B#8]
+- *(4) Sort [C#13 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(C#13, 200)
+- LocalTableScan [C#13]
CASE2.explain
== Physical Plan ==
*(5) SortMergeJoin [A#3], [B#8], Inner
:- *(3) SortMergeJoin [A#3], [C#13], Inner
: :- *(1) Sort [A#3 ASC NULLS FIRST], false, 0
: : +- Exchange hashpartitioning(A#3, 200)
: : +- LocalTableScan [A#3]
: +- *(2) Sort [C#13 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(C#13, 200)
: +- LocalTableScan [C#13]
+- *(4) Sort [B#8 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(B#8, 200)
+- LocalTableScan [B#8]