
I am trying to do the following in Hive:

set hive.exec.reducers.max = 1;
set mapred.reduce.tasks = 1;

from flat_json
insert overwrite table aggr_pgm_measure PARTITION(dt='${START_TIME}')
reduce  log_time,
 req_id, ac_id, client_key, rulename, categoryname, bsid, visitorid, visitorgroupid, visitortargetid, targetpopulationid, windowsessionid, eventseq, event_code, eventstarttime
 using '${SCRIPT_LOC}/aggregator.pl' as 
 metric_id, metric_value, aggr_type, rule_name, category_name; 

Despite setting the maximum number of reducers and the number of reduce tasks to 1, I see two MapReduce jobs being generated. Please see below:

hive> set hive.exec.reducers.max = 1;
hive>  set mapred.reduce.tasks = 1;
hive>
    > from flat_json
    > insert overwrite table aggr_pgm_measure PARTITION(dt='${START_TIME}')
    > reduce  log_time,
    >  req_id, ac_id, client_key, rulename, categoryname, bsid, visitorid, visitorgroupid, visitortargetid, targetpopulationid, windowsessionid, eventseq, event_code, eventstarttime
    >  using '${SCRIPT_LOC}/aggregator.pl' as
    >  metric_id, metric_value, aggr_type, rule_name, category_name;
converting to local s3://dsp-emr-test/anurag/dsp-test/60mins/script/aggregator.pl
Added resource: /mnt/var/lib/hive_07_1/downloaded_resources/aggregator.pl
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201112270825_0009, Tracking URL = http://ip-10-85-66-9.ec2.internal:9100/jobdetails.jsp?jobid=job_201112270825_0009
Kill Command = /home/hadoop/.versions/0.20.205/libexec/../bin/hadoop job  -Dmapred.job.tracker=10.85.66.9:9001 -kill job_201112270825_0009
2011-12-27 10:30:03,542 Stage-1 map = 0%,  reduce = 0%

1 Answer


The two things you think are related are not actually related. You are setting the number of reduce tasks, not the number of MapReduce jobs. Hive translates your query into however many MapReduce jobs the work requires, and each MapReduce job is made up of some number of map tasks and reduce tasks.
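
One way to confirm this is to look at the plan Hive builds for the statement; each MapReduce stage in the EXPLAIN output corresponds to one job. A minimal sketch, assuming the same flat_json table and aggregator.pl script are available:

-- Inspect the stage plan; the STAGE DEPENDENCIES / STAGE PLANS sections
-- enumerate every stage (Stage-1, Stage-2, ...) Hive will execute.
EXPLAIN
from flat_json
insert overwrite table aggr_pgm_measure PARTITION(dt='${START_TIME}')
reduce log_time,
 req_id, ac_id, client_key, rulename, categoryname, bsid, visitorid, visitorgroupid, visitortargetid, targetpopulationid, windowsessionid, eventseq, event_code, eventstarttime
 using '${SCRIPT_LOC}/aggregator.pl' as
 metric_id, metric_value, aggr_type, rule_name, category_name;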

What you are setting is the maximum number of reduce tasks. That limits how many reduce tasks each MapReduce job may launch, but you still end up with two jobs. There is nothing you can do about the number of MapReduce jobs Hive generates; it has to run every stage in order to execute your query.
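
To spell out what the two settings do govern, a short sketch (my annotations, same session as in your transcript):

-- Upper bound on the reducers any single MapReduce job may use.
set hive.exec.reducers.max = 1;
-- Requested reducer count per job; it only applies to jobs that have a
-- reduce phase (your Job 1 reports "no reduce operator", so it uses 0).
set mapred.reduce.tasks = 1;
-- Neither setting removes or merges stages from the query plan.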

answered 2011-12-27T14:19:31.903