
I have an external table with a single column, data, where each value in data is a JSON object.
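
For context, the table definition looks roughly like this (a sketch only; the location below is an assumption for illustration, not my exact setup):

hive> CREATE EXTERNAL TABLE data_table (
    ->   data STRING  -- each row holds one raw JSON object as text
    -> )
    -> LOCATION '/path/to/json/files';  -- illustrative path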

When I run the following Hive query:

hive> select get_json_object(data, "$.ev") from data_table limit 3;     

Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201212171824_0218, Tracking URL = http://master:50030/jobdetails.jsp?jobid=job_201212171824_0218
Kill Command = /usr/lib/hadoop/bin/hadoop job  -Dmapred.job.tracker=master:8021 -kill job_201212171824_0218
2013-01-24 10:41:37,271 Stage-1 map = 0%,  reduce = 0%
....
2013-01-24 10:41:55,549 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201212171824_0218
OK
2
2
2
Time taken: 21.449 seconds

But when I run a sum aggregation, the result looks strange:

hive> select sum(get_json_object(data, "$.ev")) from data_table limit 3;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
 set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
 set mapred.reduce.tasks=<number>
Starting Job = job_201212171824_0217, Tracking URL = http://master:50030/jobdetails.jsp?jobid=job_201212171824_0217
Kill Command = /usr/lib/hadoop/bin/hadoop job  -Dmapred.job.tracker=master:8021 -kill job_201212171824_0217
2013-01-24 10:39:24,485 Stage-1 map = 0%,  reduce = 0%
.....
2013-01-24 10:41:00,760 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201212171824_0217
OK
9.4031522E7
Time taken: 100.416 seconds

Can anyone explain why this happens, and what I should do to make it work properly?


1 Answer


It looks like Hive is treating the values from the JSON as floats rather than ints (get_json_object returns a string, and sum over a string operand is computed as a double). Your table also appears to be quite large, so the total is a big floating-point number, and Hive prints big floats in "exponential" (scientific) notation: 9.4031522E7 simply means 94031522.
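You can sanity-check the notation with a cast; casting that double back to an integral type should print the plain number (on older Hive versions that don't allow a FROM-less select, run it against any table with limit 1):

hive> select cast(9.4031522E7 as bigint);
94031522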

If you want to make sure you are summing ints, cast the JSON field to an int, and sum should then return an integral result:

$ hive -e "select sum(get_json_object(dt, '$.ev')) from json_table"
8.806305E7
$ hive -e "select sum(cast(get_json_object(dt, '$.ev') as int)) from json_table"
88063050
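
Applied to your own table and column names, the fix would look something like the query below. If the individual values might not fit in an int, cast to bigint instead; either way, as far as I know Hive's sum over an integral type already returns a bigint, so the total itself won't overflow:

hive> select sum(cast(get_json_object(data, '$.ev') as bigint)) from data_table;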
answered 2013-01-24T17:26:09.897