I've run into a strange problem, and I promise you I have already googled it to death.
I'm running a set of AWS Elastic MapReduce clusters, and I have a Hive table with roughly 16 partitions. They were created with emr-s3distcp (because the original S3 bucket held around 216K files), using --groupBy with the size limit set to 64 MiB (the DFS block size in this case). The files are simply one JSON object per line, read through a JSON SerDe.
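Roughly, the setup looks like the following. This is a sketch rather than my exact commands: the bucket, paths, columns, and --groupBy regex are placeholders, I'm assuming the stock emr-s3distcp jar location on the 2.x AMIs, and the openx SerDe just stands in for whichever JSON SerDe is actually on the classpath.

# copy + coalesce ~216K small S3 files into ~64 MiB groups on HDFS
hadoop jar /home/hadoop/lib/emr-s3distcp-1.0.jar \
    --src s3://my-bucket/raw-events/ \
    --dest hdfs:///data/events/ \
    --groupBy '.*/dt=([0-9-]+)/.*' \
    --targetSize 64

# external table over the copied files, one JSON object per line
hive -e "
CREATE EXTERNAL TABLE events (id STRING, ts BIGINT)
PARTITIONED BY (dt STRING)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 'hdfs:///data/events/';"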
When I run my script against this table, it grinds away for a very long time and then gives up because of failing IPC connections.
Originally, the s3distcp copy into HDFS put so much strain on the cluster that I took countermeasures (read: moved to higher-capacity instances, set DFS to 3x replication since it is a small cluster, and set the block size to 64 MiB). That did the trick, and the under-replicated block count dropped to zero (the EMR default replication for clusters of fewer than 3 nodes is 2, but I had changed it to 3).
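For reference, these are the two values as I set them, plus how I verified that the under-replicated count had reached zero (the fsck path is a placeholder):

# hdfs-site.xml overrides (67108864 bytes = 64 MiB)
#   dfs.replication = 3
#   dfs.block.size  = 67108864
hadoop fsck /data/events/ -blocks | grep -i 'Under-replicated'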
Looking at /mnt/var/log/apps/hive_081.log yields lines like these:
2013-05-12 09:56:12,120 DEBUG org.apache.hadoop.ipc.Client (Client.java:<init>(222)) - The ping interval is60000ms.
2013-05-12 09:56:12,120 DEBUG org.apache.hadoop.ipc.Client (Client.java:<init>(265)) - Use SIMPLE authentication for protocol ClientProtocol
2013-05-12 09:56:12,120 DEBUG org.apache.hadoop.ipc.Client (Client.java:setupIOstreams(551)) - Connecting to /10.17.17.243:9000
2013-05-12 09:56:12,121 DEBUG org.apache.hadoop.ipc.Client (Client.java:sendParam(769)) - IPC Client (47) connection to /10.17.17.243:9000 from hadoop sending #14
2013-05-12 09:56:12,121 DEBUG org.apache.hadoop.ipc.Client (Client.java:run(742)) - IPC Client (47) connection to /10.17.17.243:9000 from hadoop: starting, having connections 2
2013-05-12 09:56:12,125 DEBUG org.apache.hadoop.ipc.Client (Client.java:receiveResponse(804)) - IPC Client (47) connection to /10.17.17.243:9000 from hadoop got value #14
2013-05-12 09:56:12,126 DEBUG org.apache.hadoop.ipc.RPC (RPC.java:invoke(228)) - Call: getFileInfo 6
2013-05-12 09:56:21,523 INFO org.apache.hadoop.ipc.Client (Client.java:handleConnectionFailure(663)) - Retrying connect to server: domU-12-31-39-10-81-2A.compute-1.internal/10.198.130.216:9000. Already tried 6 time(s).
2013-05-12 09:56:22,122 DEBUG org.apache.hadoop.ipc.Client (Client.java:close(876)) - IPC Client (47) connection to /10.17.17.243:9000 from hadoop: closed
2013-05-12 09:56:22,122 DEBUG org.apache.hadoop.ipc.Client (Client.java:run(752)) - IPC Client (47) connection to /10.17.17.243:9000 from hadoop: stopped, remaining connections 1
2013-05-12 09:56:42,544 INFO org.apache.hadoop.ipc.Client (Client.java:handleConnectionFailure(663)) - Retrying connect to server: domU-12-31-39-10-81-2A.compute-1.internal/10.198.130.216:9000. Already tried 7 time(s).
...and so on, until one of the clients hits the limit. (Note that the calls that succeed go to /10.17.17.243:9000, while the retries are against a different address entirely, domU-12-31-39-10-81-2A.compute-1.internal/10.198.130.216:9000.)
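I assume "the limit" is the IPC connect retry cap, ipc.client.connect.max.retries (stock Hadoop defaults this to 10). Both that property name and the conf path below are my guess at what is biting here, not something the logs state outright:

# check whether the AMI overrides the IPC retry cap
grep -A2 'ipc.client.connect.max.retries' /home/hadoop/conf/core-site.xml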
How do I go about fixing this in Hive under Elastic MapReduce?
Thanks