I have a partitioned table "t1" in Hive that contains many data files of varying sizes (900 MB in total). I want to reduce the number of files, so that fewer, larger files end up in another table "t2". Both "t1" and "t2" were created like this (with tX standing for the table name):
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=snappy;
SET mapred.output.compression.type=BLOCK;
use xxx;
CREATE EXTERNAL TABLE tX PARTITIONED BY (a string, b string, c string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
WITH SERDEPROPERTIES (
'avro.schema.literal'='
{
"type": "record",
"name": "Event",
"fields":[
{
"name": "headers",
"type": {
"type": "map",
"values": ["null","string"]
}
},
{
"name": "body",
"type": "bytes"
}
]
}')
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/hive/xxx.db/tX';
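(With the Avro SerDe, Hive derives the columns from avro.schema.literal: headers as a map of nullable strings and body as Avro bytes, plus the three partition columns a, b and c.)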
I wrote the following script:
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=snappy;
SET mapred.output.compression.type=BLOCK;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.merge.mapfiles=true;
SET hive.merge.mapredfiles=true;
SET hive.merge.size.per.task=268435456;
SET hive.merge.smallfiles.avgsize=134217728;
INSERT OVERWRITE TABLE xxx.t2 PARTITION (a, b, c) SELECT * FROM xxx.t1 WHERE a=1 and b=2 and c=3;
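For reference, my understanding of the four merge settings, restated with comments (descriptions paraphrased from the Hive configuration documentation):

-- merge small files at the end of a map-only job
SET hive.merge.mapfiles=true;
-- merge small files at the end of a map-reduce job
SET hive.merge.mapredfiles=true;
-- target size of the merged files (256 MB)
SET hive.merge.size.per.task=268435456;
-- launch the extra merge job only when the average output file size
-- is below this threshold (128 MB)
SET hive.merge.smallfiles.avgsize=134217728;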
On CDH4 with Hive 0.10, this script gives me:
242106023 /hive/xxx.db/t2/a=1/b=2/c=3/000000_0
232866517 /hive/xxx.db/t2/a=1/b=2/c=3/000001_0
217161082 /hive/xxx.db/t2/a=1/b=2/c=3/000002_0
37516541 /hive/xxx.db/t2/a=1/b=2/c=3/000003_0
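(That is four files of roughly 242, 233, 217 and 38 MB, about 730 MB in total, each below the 268435456-byte hive.merge.size.per.task target.)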
Now I want to migrate to CDH5, which ships Hive 0.13.1. When I run the same script on CDH5, I get:
530348055 /hive/xxx.db/t2/a=1/b=2/c=3/000000_0
Execution plan on CDH4:
ABSTRACT SYNTAX TREE:
(TOK_QUERY (TOK_FROM (TOK_TABREF (TOK_TABNAME xxx t1))) (TOK_INSERT (TOK_DESTINATION (TOK_TAB (TOK_TABNAME xxx t2) (TOK_PARTSPEC (TOK_PARTVAL a) (TOK_PARTVAL b) (TOK_PARTVAL c)))) (TOK_SELECT (TOK_SELEXPR TOK_ALLCOLREF)) (TOK_WHERE (and (and (= (TOK_TABLE_OR_COL a) 1) (= (TOK_TABLE_OR_COL b) 2)) (= (TOK_TABLE_OR_COL c) 3)))))
STAGE DEPENDENCIES:
Stage-1 is a root stage
Stage-7 depends on stages: Stage-1 , consists of Stage-4, Stage-3, Stage-5
Stage-4
Stage-0 depends on stages: Stage-4, Stage-3, Stage-6
Stage-2 depends on stages: Stage-0
Stage-3
Stage-5
Stage-6 depends on stages: Stage-5
STAGE PLANS:
Stage: Stage-1
Map Reduce
Alias -> Map Operator Tree:
t1
TableScan
alias: t1
Select Operator
expressions:
expr: headers
type: map<string,string>
expr: body
type: array<tinyint>
expr: a
type: string
expr: b
type: string
expr: c
type: string
outputColumnNames: _col0, _col1, _col2, _col3, _col4
File Output Operator
compressed: false
GlobalTableId: 1
table:
input format: org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat
output format: org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat
serde: org.apache.hadoop.hive.serde2.avro.AvroSerDe
name: xxx.t2
Stage: Stage-7
Conditional Operator
Stage: Stage-4
Move Operator
files:
hdfs directory: true
destination: hdfs://node/tmp/hive-user/hive_2015-06-10_17-46-17_570_5009234087568150280-1/-ext-10000
Stage: Stage-0
Move Operator
tables:
partition:
a
b
c
replace: true
table:
input format: org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat
output format: org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat
serde: org.apache.hadoop.hive.serde2.avro.AvroSerDe
name: xxx.t2
Stage: Stage-2
Stats-Aggr Operator
Stage: Stage-3
Map Reduce
Alias -> Map Operator Tree:
hdfs://node/tmp/hive-user/hive_2015-06-10_17-46-17_570_5009234087568150280-1/-ext-10002
File Output Operator
compressed: false
GlobalTableId: 0
table:
input format: org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat
output format: org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat
serde: org.apache.hadoop.hive.serde2.avro.AvroSerDe
name: xxx.t2
Stage: Stage-5
Map Reduce
Alias -> Map Operator Tree:
hdfs://node/tmp/hive-user/hive_2015-06-10_17-46-17_570_5009234087568150280-1/-ext-10002
File Output Operator
compressed: false
GlobalTableId: 0
table:
input format: org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat
output format: org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat
serde: org.apache.hadoop.hive.serde2.avro.AvroSerDe
name: xxx.t2
Stage: Stage-6
Move Operator
files:
hdfs directory: true
destination: hdfs://node/tmp/hive-user/hive_2015-06-10_17-46-17_570_5009234087568150280-1/-ext-10000
Execution plan on CDH5:
STAGE DEPENDENCIES:
Stage-1 is a root stage
Stage-0 depends on stages: Stage-1
Stage-2 depends on stages: Stage-0
STAGE PLANS:
Stage: Stage-1
Map Reduce
Map Operator Tree:
TableScan
alias: t1
Statistics: Num rows: 882980 Data size: 900640395 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: headers (type: map<string,string>), body (type: binary), a (type: string), b (type: string), c (type: string)
outputColumnNames: _col0, _col1, _col2, _col3, _col4
Statistics: Num rows: 882980 Data size: 900640395 Basic stats: COMPLETE Column stats: NONE
Reduce Output Operator
key expressions: _col2 (type: string), _col3 (type: string), _col4 (type: string)
sort order: +++
Map-reduce partition columns: _col2 (type: string), _col3 (type: string), _col4 (type: string)
Statistics: Num rows: 882980 Data size: 900640395 Basic stats: COMPLETE Column stats: NONE
value expressions: _col0 (type: map<string,string>), _col1 (type: binary), _col2 (type: string), _col3 (type: string), _col4 (type: string)
Reduce Operator Tree:
Extract
Statistics: Num rows: 882980 Data size: 900640395 Basic stats: COMPLETE Column stats: NONE
File Output Operator
compressed: false
Statistics: Num rows: 882980 Data size: 900640395 Basic stats: COMPLETE Column stats: NONE
table:
input format: org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat
output format: org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat
serde: org.apache.hadoop.hive.serde2.avro.AvroSerDe
name: xxx.t2
Stage: Stage-0
Move Operator
tables:
partition:
a
b
c
replace: true
table:
input format: org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat
output format: org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat
serde: org.apache.hadoop.hive.serde2.avro.AvroSerDe
name: xxx.t2
Stage: Stage-2
Stats-Aggr Operator
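The difference I can see between the two plans: on CDH4, Stage-1 is map-only and a Conditional Operator (Stage-7) decides whether to run an extra merge job (Stage-3 or Stage-5), while on CDH5 every row is shuffled through a Reduce Output Operator whose sort keys and partition columns are exactly the dynamic-partition columns (_col2, _col3, _col4). My guess, unverified, is that this comes from hive.optimize.sort.dynamic.partition, which Hive 0.13 enables by default: it routes all rows of a given target partition to a single reducer, which would also explain why only one file appears no matter how many reducers are launched. If that guess is right, the following should restore the old behavior:

-- Assumption: the forced reduce phase on CDH5 is caused by the
-- sort-dynamic-partition optimization that Hive 0.13 turns on by default;
-- disabling it should bring back the map-only plan with merge stages.
SET hive.optimize.sort.dynamic.partition=false;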
I also tried modifying the script:
Script 1:
SET mapreduce.job.reduces=2;
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=snappy;
SET mapred.output.compression.type=BLOCK;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE xxx.t2 PARTITION (a, b, c) SELECT * FROM xxx.t1 WHERE a=1 and b=2 and c=3;
Output 1:
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
Script 2:
SET mapreduce.job.reduces=0;
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=snappy;
SET mapred.output.compression.type=BLOCK;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE xxx.t2 PARTITION (a, b, c) SELECT * FROM xxx.t1 WHERE a=1 and b=2 and c=3;
Output 2 (in this case SET mapreduce.job.reduces=0; has no effect):
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
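(My interpretation: since the CDH5 plan contains a Reduce Output Operator, the job cannot run with zero reducers, so Hive apparently ignores the non-positive value and falls back to its own estimate of 1.)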
Script 3:
SET hive.exec.reducers.bytes.per.reducer=268435456;
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=snappy;
SET mapred.output.compression.type=BLOCK;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE t2 PARTITION (a, b, c) SELECT * FROM t1 WHERE a=1 and b=2 and c=3;
Output 3:
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 4
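(The four reducers match ceil(900640395 / 268435456) = 4, using the input size reported in the CDH5 plan.)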
In every case, despite the multiple reducers, CDH5 writes only one file (~500 MB).
Is there something wrong with my script? Is it even possible to set reducers=0? How can I control the number or the size of the output files in an INSERT script?
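One workaround I am considering, though I have not verified it on CDH5, is to spread the rows of the selected partition explicitly across the reducers, on the assumption that each reducer that receives rows writes its own output file (the reducer count of 4 below is just an example value):

SET mapreduce.job.reduces=4;
-- Hypothetical sketch: DISTRIBUTE BY rand() scatters the rows of the single
-- selected partition across all 4 reducers, so each should write one file.
INSERT OVERWRITE TABLE xxx.t2 PARTITION (a, b, c)
SELECT * FROM xxx.t1
WHERE a=1 and b=2 and c=3
DISTRIBUTE BY rand();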
Thanks in advance.