
I created the following table:

create table emp (
    eid int,
    fname string,
    lname string,
    salary double,
    city string,
    dept string )
row format delimited fields terminated by ',';

Then, to enable dynamic partitioning, I set the following properties:

set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;

I created the partitioned table as follows:

create table part_emp (
    eid int,
    fname string,
    lname string,
    salary double,
    dept string )
partitioned by ( city string )
row format delimited fields terminated by ',';

After creating the table, I issued the insert query:

insert into table part_emp partition(city)
select eid,fname,lname,salary,dept,city from emp; 
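Two things worth double-checking here. First, in a dynamic-partition insert Hive takes the partition column from the *last* column of the SELECT list, so `city` must come last, as it does above. Second, the failed job below reports `HDFS Read: 0` and zero mappers, which suggests the source table may be empty. A quick sanity check before the partitioned insert (the path `/tmp/emp.csv` is a hypothetical comma-delimited file matching the `emp` schema):

```sql
-- Load sample data into the source table first (hypothetical local file).
load data local inpath '/tmp/emp.csv' into table emp;

-- Confirm the source table is non-empty before the dynamic-partition insert.
select count(*) from emp;
```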

But it does not work:

WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = max_20180311015337_5a67813d-dcc5-46c0-ac4b-a54c11ffb912
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1520757649534_0004, Tracking URL = http://ubuntu:8088/proxy/application_1520757649534_0004/
Kill Command = /home/max/bigdata/hadoop-3.0.0/bin/hadoop job  -kill job_1520757649534_0004
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2018-03-11 01:53:44,996 Stage-1 map = 0%,  reduce = 0%
Ended Job = job_1520757649534_0004 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched: 
Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

The same query runs successfully on Hive 1.x.


1 Answer


I had the same problem, and `set hive.exec.max.dynamic.partitions.pernode=1000;` (default 100) solved it for me. You can try it.

P.S. This setting means: the maximum number of dynamic partitions allowed to be created per mapper/reducer node.
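Putting this together with the settings from the question, the full set of session options before the insert would look like the sketch below (the value 1000 is just the suggested limit; tune it to the number of distinct `city` values you expect):

```sql
-- Enable dynamic partitioning and relax strict mode,
-- which otherwise requires at least one static partition column.
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;

-- Raise the per-node dynamic partition limit (default 100).
set hive.exec.max.dynamic.partitions.pernode=1000;

insert into table part_emp partition(city)
select eid, fname, lname, salary, dept, city from emp;
```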

Answered 2018-09-21T07:04:19.513