
I have a YARN MR job (with two EC2 instances for MapReduce) running over a dataset of roughly a thousand Avro records, and the map phase behaves erratically; see the progress output below. Naturally, I checked the logs on the ResourceManager and the NodeManagers and found nothing suspicious, but those logs are extremely verbose.

What is going on there?

        hive> select * from nikon where qs_cs_s_aid='VIEW' limit 10;

        Total MapReduce jobs = 1
        Launching Job 1 out of 1
        Number of reduce tasks is set to 0 since there's no reduce operator
        Starting Job = job_1352281315350_0020, Tracking URL = http://blabla.ec2.internal:8088/proxy/application_1352281315350_0020/
        Kill Command = /usr/lib/hadoop/bin/hadoop job  -Dmapred.job.tracker=blabla.com:8032 -kill job_1352281315350_0020
        Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 0

        2012-11-07 11:14:40,976 Stage-1 map = 0%,  reduce = 0%
        2012-11-07 11:15:06,136 Stage-1 map = 1%,  reduce = 0%, Cumulative CPU 10.38 sec
        2012-11-07 11:15:07,253 Stage-1 map = 1%,  reduce = 0%, Cumulative CPU 12.18 sec
        2012-11-07 11:15:08,371 Stage-1 map = 1%,  reduce = 0%, Cumulative CPU 12.18 sec
        2012-11-07 11:15:09,491 Stage-1 map = 1%,  reduce = 0%, Cumulative CPU 12.18 sec
        2012-11-07 11:15:10,643 Stage-1 map = 2%,  reduce = 0%, Cumulative CPU 15.42 sec
        (...)
        2012-11-07 11:15:35,441 Stage-1 map = 28%,  reduce = 0%, Cumulative CPU 37.77 sec
        2012-11-07 11:15:36,486 Stage-1 map = 28%,  reduce = 0%, Cumulative CPU 37.77 sec

Here it restarts at 16%?

        2012-11-07 11:15:37,692 Stage-1 map = 16%,  reduce = 0%, Cumulative CPU 21.15 sec
        2012-11-07 11:15:38,815 Stage-1 map = 16%,  reduce = 0%, Cumulative CPU 21.15 sec
        2012-11-07 11:15:39,865 Stage-1 map = 16%,  reduce = 0%, Cumulative CPU 21.15 sec
        2012-11-07 11:15:41,064 Stage-1 map = 18%,  reduce = 0%, Cumulative CPU 22.4 sec
        2012-11-07 11:15:42,181 Stage-1 map = 18%,  reduce = 0%, Cumulative CPU 22.4 sec
        2012-11-07 11:15:43,299 Stage-1 map = 18%,  reduce = 0%, Cumulative CPU 22.4 sec

Here it restarts at 0%?

        2012-11-07 11:15:44,418 Stage-1 map = 0%,  reduce = 0%
        2012-11-07 11:16:02,076 Stage-1 map = 1%,  reduce = 0%, Cumulative CPU 6.86 sec
        2012-11-07 11:16:03,193 Stage-1 map = 1%,  reduce = 0%, Cumulative CPU 6.86 sec
        2012-11-07 11:16:04,259 Stage-1 map = 2%,  reduce = 0%, Cumulative CPU 8.45 sec
        (...)
        2012-11-07 11:16:31,291 Stage-1 map = 22%,  reduce = 0%, Cumulative CPU 35.34 sec
        2012-11-07 11:16:32,414 Stage-1 map = 26%,  reduce = 0%, Cumulative CPU 37.93 sec

Here it restarts at 11%?

        2012-11-07 11:16:33,459 Stage-1 map = 11%,  reduce = 0%, Cumulative CPU 19.53 sec
        2012-11-07 11:16:34,507 Stage-1 map = 11%,  reduce = 0%, Cumulative CPU 19.53 sec
        2012-11-07 11:16:35,731 Stage-1 map = 13%,  reduce = 0%, Cumulative CPU 21.47 sec
        (...)
        2012-11-07 11:16:46,839 Stage-1 map = 17%,  reduce = 0%, Cumulative CPU 24.14 sec

Here it restarts at 0%?

        2012-11-07 11:16:47,939 Stage-1 map = 0%,  reduce = 0%
        2012-11-07 11:16:56,653 Stage-1 map = 1%,  reduce = 0%, Cumulative CPU 7.54 sec
        2012-11-07 11:16:57,814 Stage-1 map = 1%,  reduce = 0%, Cumulative CPU 7.54 sec
        (...)

Needless to say, after a while the job crashes with an error: java.io.IOException: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: -56
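
Given the Avro input and the negative array index, one thing worth ruling out is a corrupt data file. A quick sanity check with avro-tools (the warehouse path and the jar version below are placeholders for my setup):

        $ hadoop fs -copyToLocal /user/hive/warehouse/nikon/000000_0.avro .
        $ java -jar avro-tools-1.7.1.jar tojson 000000_0.avro > /dev/null && echo "file decodes cleanly"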


1 Answer


This looks like Hadoop retrying the map tasks when they fail (by default it retries 3 times, each time on a different host); that is how it makes your job more fault-tolerant.
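
For what it's worth, the retry limit is governed by mapreduce.map.maxattempts (mapred.map.max.attempts under the old property names; the default allows 4 attempts per task). A minimal sketch of disabling retries from the Hive session, so the first real failure surfaces immediately instead of hiding behind the progress resets:

        hive> -- fail fast: one attempt per map task, no retries on other hosts
        hive> set mapreduce.map.maxattempts=1;
        hive> select * from nikon where qs_cs_s_aid='VIEW' limit 10;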

That is useful when the failure is caused by a transient problem on a particular host (which happens more often than you might think). In your case, though, you have a genuine array-index-out-of-bounds exception caused by something in your Hive query. I would check the logs of the failed task attempts to try to debug the cause.
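
If YARN log aggregation is enabled, one way to pull every task attempt's log for the failed application in a single step (the application ID is the one from the tracking URL above) and search it for the exception:

        $ yarn logs -applicationId application_1352281315350_0020 > app.log
        $ grep -B 5 'ArrayIndexOutOfBoundsException' app.log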

answered 2012-11-07T16:58:24.357