
How can I convert the following query to be compatible with Spark 1.6, which doesn't support subqueries:

SELECT ne.device_id, sp.device_hostname
FROM `table1` ne INNER JOIN `table2` sp 
ON sp.device_hostname = 
                      (SELECT device_hostname FROM `table2` 
                      WHERE device_hostname LIKE 
                      CONCAT(ne.device_id,'%') ORDER BY device_hostname DESC LIMIT 1)

I've read that it supports subqueries specified in FROM, but not in WHERE; however, the following doesn't work either:

SELECT * FROM (SELECT ne.device_id, sp.device_hostname
FROM `table1` ne INNER JOIN `table2` sp 
ON sp.device_hostname = 
                      (SELECT device_hostname FROM `table2` 
                      WHERE device_hostname LIKE 
                      CONCAT(ne.device_id,'%') ORDER BY device_hostname DESC LIMIT 1)) AS TA

My overall goal is to join the two tables while taking only the last record from table2. The SQL statements are valid, but when I run them through Spark's HiveContext.sql I get an AnalysisException.


1 Answer


You can use HiveContext with window functions (see How to select the first row of each group?):

scala> Seq((1L, "foo")).toDF("id", "device_id").registerTempTable("table1")

scala> Seq((1L, "foobar"), (2L, "foobaz")).toDF("id", "device_hostname").registerTempTable("table2")

scala> sqlContext.sql("""
     |   WITH tmp AS (
     |     SELECT ne.device_id, sp.device_hostname, row_number() OVER (PARTITION BY device_id ORDER BY device_hostname) AS rn
     |     FROM table1 ne INNER JOIN table2 sp 
     |     ON sp.device_hostname LIKE CONCAT(ne.device_id, '%'))
     |   SELECT device_id, device_hostname FROM tmp WHERE rn = 1
     | """).show
+---------+---------------+                                                     
|device_id|device_hostname|
+---------+---------------+
|      foo|         foobar|
+---------+---------------+
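The window-function query above joins on the LIKE condition and then keeps, per device_id, the first hostname in ascending order. The same logic can be sketched in plain Scala collections (a toy illustration using the sample data from the answer, no Spark required):

```scala
// Sample data mirroring table1 / table2 from the answer.
val table1 = Seq((1L, "foo"))                     // (id, device_id)
val table2 = Seq((1L, "foobar"), (2L, "foobaz"))  // (id, device_hostname)

// For each device_id: collect hostnames that start with it (the LIKE
// CONCAT(device_id, '%') condition), then keep the first one in
// ascending order (what row_number() ... ORDER BY ... / rn = 1 selects).
val result = table1.flatMap { case (_, deviceId) =>
  val matches = table2.collect { case (_, h) if h.startsWith(deviceId) => h }
  matches.sorted.headOption.map(h => (deviceId, h))
}

println(result)  // List((foo,foobar))
```

Note that the answer's SQL orders ascending and keeps rn = 1 (the minimum), whereas the original question's subquery used ORDER BY ... DESC LIMIT 1 (the maximum); flip the sort direction if you need the last record rather than the first.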

But with only two columns it is simpler to aggregate:

scala> sqlContext.sql("""
     |  WITH tmp AS (
     |    SELECT ne.device_id, sp.device_hostname
     |    FROM table1 ne INNER JOIN table2 sp 
     |    ON sp.device_hostname LIKE CONCAT(ne.device_id, '%'))
     |  SELECT device_id, min(device_hostname) AS device_hostname
     |  FROM tmp GROUP BY device_id 
     |""").show
+---------+---------------+                                                     
|device_id|device_hostname|
+---------+---------------+
|      foo|         foobar|
+---------+---------------+

To improve performance, you should try to replace LIKE with an equality condition (see How can we JOIN two Spark SQL dataframes using a SQL-esque "LIKE" criterion?).
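One way that rewrite can work, assuming device_id values all have a known fixed length, is to precompute a prefix key on table2 and join on equality instead of LIKE. A toy sketch in plain Scala (the fixed length of 3 is an assumption for this sample data, not part of the answer):

```scala
// Sample data mirroring table1 / table2 from the answer.
val table1 = Seq((1L, "foo"))                     // (id, device_id)
val table2 = Seq((1L, "foobar"), (2L, "foobaz"))  // (id, device_hostname)

// Assumption: every device_id is exactly this long, so the hostname's
// prefix of that length is a proper equi-join key.
val prefixLen = 3
val keyed = table2.map { case (_, h) => (h.take(prefixLen), h) }

// Equality join on the derived key, then take the minimum hostname per
// device_id (same result as the GROUP BY query above).
val result = for {
  (_, deviceId) <- table1
  hs = keyed.collect { case (k, h) if k == deviceId => h }
  if hs.nonEmpty
} yield (deviceId, hs.min)

println(result)  // List((foo,foobar))
```

In Spark SQL the analogous change would be joining on `ne.device_id = substr(sp.device_hostname, 1, <len>)`, which lets the optimizer use a shuffle or broadcast hash join instead of a Cartesian product with a LIKE filter.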

Answered 2018-01-22T12:46:44.287