
The file stored in Hive:

[
  {
    "occupation": "guitarist",
    "fav_game": "football",
    "name": "d1"
  },
  {
    "occupation": "dancer",
    "fav_game": "chess",
    "name": "k1"
  },
  {
    "occupation": "traveller",
    "fav_game": "cricket",
    "name": "p1"
  },
  {
    "occupation": "drummer",
    "fav_game": "archery",
    "name": "d2"
  },
  {
    "occupation": "farmer",
    "fav_game": "cricket",
    "name": "k2"
  },
  {
    "occupation": "singer",
    "fav_game": "football",
    "name": "s1"
  }
]

The CSV file in Hadoop:

name,age,city
d1,23,delhi
k1,23,indore
p1,23,blore
d2,25,delhi
k2,30,delhi
s1,25,delhi

I queried them individually and it works fine. Then I tried the join query:

select * from hdfs.`/demo/distribution.csv` d join hive.demo.`user_details` u on d.name = u.name

I got the following issue:

org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: DrillRuntimeException: Join only supports implicit casts between 1. Numeric data 2. Varchar, Varbinary data 3. Date, Timestamp data Left type: INT, Right type: VARCHAR. Add explicit cast to avoid the error Fragment 0:0 [Error Id: b01db9c8-fb35-4ef8-a1c0-31b68ff7ae8d on IMPETUS-DSRV03.IMPETUS.CO.IN:31010]


2 Answers


Please refer to https://drill.apache.org/docs/data-type-conversion/ . We need to do explicit type casting to handle such scenarios.

Suppose we have a JSON file employee.json and a CSV file sample.csv. To query both of them together, we need to use type conversion in a single query.

0: jdbc:drill:zk=local> select emp.employee_id, dept.department_description, phy.columns[2], phy.columns[3] FROM cp.`employee.json` emp , cp.`department.json` dept, dfs.`/tmp/sample.csv` phy where CAST(emp.employee_id AS INT) =  CAST(phy.columns[0] AS INT) and emp.department_id = dept.department_id;

Here we do the type conversion CAST(emp.employee_id AS INT) = CAST(phy.columns[0] AS INT) so that the equality check does not fail.

For more details, refer to this: http://www.devinline.com/2015/11/apache-drill-setup-and-SQL-query-execution.html#multiple_src

Answered 2015-11-21T08:34:06.000

You need to cast even though by default it has taken varchar. Try this:

select * from hdfs.`/demo/distribution.csv` d join hive.demo.`user_details` u on cast(d.name as VARCHAR) = cast(u.name as VARCHAR)

But you cannot refer to a column name directly from a CSV file. You need to use columns[0] for name.
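Putting both points together, a sketch of the rewritten join might look like this (untested against your cluster; the positions columns[0..2] assume the header order name,age,city shown in the question):

```sql
-- Sketch: reference CSV fields positionally via columns[n]
-- and cast both join keys to VARCHAR.
SELECT u.name, u.occupation, u.fav_game,
       d.columns[1] AS age, d.columns[2] AS city
FROM hdfs.`/demo/distribution.csv` d
JOIN hive.demo.`user_details` u
  ON CAST(d.columns[0] AS VARCHAR) = CAST(u.name AS VARCHAR)
```

Note that unless the text-format storage plugin is configured to treat the first line as a header (e.g. with `"extractHeader": true`), the header row itself shows up as data, so you may also need to filter it out.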

Answered 2016-10-15T16:54:54.903