
I'm a novice trying to take a large (1.25 TB uncompressed) HDFS file and put it into a Hive managed table. It is already on HDFS in csv format (from sqoop) with arbitrary partitioning, and I'm putting it into a more organized format for querying and joining. I'm on HDP 3.0 using Tez. Here is my hql:

USE MYDB;

DROP TABLE IF EXISTS new_table;

CREATE TABLE IF NOT EXISTS new_table (
 svcpt_id VARCHAR(20),
 usage_value FLOAT,
 read_time SMALLINT)
PARTITIONED BY (read_date INT)
CLUSTERED BY (svcpt_id) INTO 9600 BUCKETS
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS ORC
TBLPROPERTIES("orc.compress"="snappy");

SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.exec.max.dynamic.partitions.pernode=2000;
SET hive.exec.max.dynamic.partitions=10000;
SET hive.vectorized.execution.enabled = true;
SET hive.vectorized.execution.reduce.enabled = true;
SET hive.enforce.bucketing = true;
SET mapred.reduce.tasks = 10000;

INSERT OVERWRITE TABLE new_table
PARTITION (read_date)
SELECT svcpt_id, usage, read_time, read_date
FROM raw_table;
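One thing worth checking about the DDL above: with dynamic partitioning, each `read_date` partition gets its own set of 9600 bucket files, so the total file count grows multiplicatively. A back-of-envelope sketch (the ~2000 partition count is only an assumption, borrowed from the `hive.exec.max.dynamic.partitions.pernode` setting, not a measured value):

```python
# Rough output-file estimate for a bucketed, dynamically partitioned insert.
buckets = 9600          # CLUSTERED BY ... INTO 9600 BUCKETS
partitions = 2000       # assumed number of distinct read_date values
files = buckets * partitions
print(files)            # 19200000 potential output files
```

Every one of those files costs namenode heap for its block and inode metadata, which is one plausible contributor to namenode heap pressure at this scale.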

The way Tez has set it up (from my most recent failure):

--------------------------------------------------------------------------------
VERTICES      STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
--------------------------------------------------------------------------------
Map 1      SUCCEEDED   1043       1043        0        0       0       0
Reducer 2    RUNNING   9600        735       19     8846       0       0
Reducer 3     INITED  10000          0        0    10000       0       0
--------------------------------------------------------------------------------
VERTICES: 01/03  [==>>------------------------] 8%    ELAPSED TIME: 45152.59 s
--------------------------------------------------------------------------------

I have been working on this for a while. At first I could not get the first Map 1 vertex to run, so I added buckets. 96 buckets got the first mapper running, but Reducer 2 failed citing disk space issues that did not make sense. I then upped the bucket count to 9600 and the reduce tasks to 10000, and the Reducer 2 vertex began running, albeit slowly. This morning I found it had errored out because my namenode went down due to a java heap space error from the garbage collector.

Does anyone have any guidance for me? I feel like I'm shooting in the dark on the number of reduce tasks, the number of buckets, and all of the configs shown below.

hive.tez.container.size = 5120MB
hive.exec.reducers.bytes.per.reducer = 1GB
hive.exec.max.dynamic.partitions = 5000
hive.optimize.sort.dynamic.partition = FALSE
hive.vectorized.execution.enabled = TRUE
hive.vectorized.execution.reduce.enabled = TRUE
yarn.scheduler.minimum-allocation-mb = 2G
yarn.scheduler.maximum-allocation-mb = 8G
mapred.min.split.size=?
mapred.max.split.size=?
hive.input.format=?

LLAP is not set up.

My cluster has 4 nodes, 32 cores, and 120 GB of memory. I'm not using more than 1/3 of the cluster's storage.
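Rather than guessing, the reducer count can be sized from the data. A rough sketch, assuming the 1.25 TB uncompressed size and the `hive.exec.reducers.bytes.per.reducer = 1GB` setting listed above (the actual bytes reaching the reducers may be smaller after compression):

```python
# Back-of-envelope reducer sizing from the figures in the question.
data_bytes = 1.25 * 1024**4        # 1.25 TB uncompressed input
bytes_per_reducer = 1 * 1024**3    # hive.exec.reducers.bytes.per.reducer = 1 GB
reducers = int(data_bytes / bytes_per_reducer)
print(reducers)                    # 1280 -- far fewer than the 10000 requested
```

If this estimate is in the right ballpark, forcing `mapred.reduce.tasks = 10000` mostly creates many tiny tasks and tiny output files; leaving the reducer count to Tez (which derives it from input size and `bytes.per.reducer`) may behave better.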


1 Answer

SET hive.execution.engine = tez;
SET hive.vectorized.execution.enabled = false;
SET hive.vectorized.execution.reduce.enabled = false;
SET hive.enforce.bucketing = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
SET hive.stats.autogather = true;
SET hive.exec.parallel = true;
SET hive.exec.parallel.thread.number = 60;
SET mapreduce.job.skiprecords = true;
SET mapreduce.map.maxattempts =10;
SET mapreduce.reduce.maxattempts =10;
SET mapreduce.map.skip.maxrecords = 300;
SET mapreduce.task.skip.start.attempts = 1;
SET mapreduce.output.fileoutputformat.compress = false;
SET mapreduce.job.reduces = 1000;

You could try some of the settings above!

Answered 2018-09-10T17:45:27.413