I have an ACID Hive table containing files in ORC format. When attempting a compaction, I get the following error: Task: ... exited : java.io.IOException: Two readers for ...
The full error is:
2019-06-03 07:01:05,357 ERROR [IPC Server handler 2 on 41085] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1558939181485_29861_m_000001_0 - exited : java.io.IOException: Two readers for {originalWriteId: 143, bucket: 536870912(1.0.0), row: 3386, currentWriteId 210}: new [key={originalWriteId: 143, bucket: 536870912(1.0.0), row: 3386, currentWriteId 210}, nextRecord={2, 143, 536870912, 3386, 210, null}, reader=Hive ORC Reader(hdfs://HdfsNameService/tbl/delete_delta_0000209_0000214/bucket_00001, 9223372036854775807)], old [key={originalWriteId: 143, bucket: 536870912(1.0.0), row: 3386, currentWriteId 210}, nextRecord={2, 143, 536870912, 3386, 210, null}, reader=Hive ORC Reader(hdfs://HdfsNameService/tbl/delete_delta_0000209_0000214/bucket_00000, 9223372036854775807)]
at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.ensurePutReader(OrcRawRecordMerger.java:1171)
at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.<init>(OrcRawRecordMerger.java:1126)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRawReader(OrcInputFormat.java:2402)
at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:964)
at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:941)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
The table is created and updated by merging avro files into the orc table with a MERGE statement, hence the bunch of deltas, both delete_delta and delta directories.
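For context, the update flow looks roughly like the following sketch (the staging table name, match key, and clause conditions are illustrative assumptions, not the actual job). Each such run adds a new delta_* directory for inserts/updates and a delete_delta_* directory for rows removed by the UPDATE/DELETE branches:

```sql
-- Hypothetical sketch of the merge job; staging table and conditions are assumed.
MERGE INTO contact_group t
USING contact_group_staging s           -- staging table fed from the avro files
ON t.id = s.id
WHEN MATCHED AND s.deleted_on_utc IS NOT NULL THEN DELETE
WHEN MATCHED THEN UPDATE SET
  name = s.name, remarks = s.remarks, load_ts = s.load_ts
WHEN NOT MATCHED THEN INSERT VALUES
  (s.id, s.license_name, s.campaign_id, s.name, s.is_system, s.is_test,
   s.is_active, s.remarks, s.updated_on_utc, s.created_on_utc,
   s.deleted_on_utc, s.sys_schema_version, s.sys_server_ipv4,
   s.sys_server_name, s.load_ts);
```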
I have many other tables like this one which do not have this problem. There is nothing special about this table; it is actually quite small (<100k rows, 2.5M on disk) and was updated about 100 times in the last month (20k rows updated, 5M of update data). The DDL is:
CREATE TABLE `contact_group`(
`id` bigint,
`license_name` string,
`campaign_id` bigint,
`name` string,
`is_system` boolean,
`is_test` boolean,
`is_active` boolean,
`remarks` string,
`updated_on_utc` timestamp,
`created_on_utc` timestamp,
`deleted_on_utc` timestamp,
`sys_schema_version` int,
`sys_server_ipv4` bigint,
`sys_server_name` string,
`load_ts` timestamp)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
'hdfs://HdfsNameService/dwh/vault/contact_group'
TBLPROPERTIES (
'bucketing_version'='2',
'last_modified_by'='hive',
'last_modified_time'='1553512639',
'transactional'='true',
'transactional_properties'='default',
'transient_lastDdlTime'='1559522011')
This happens every few months. Since everything else (select, merge) keeps working, the fix is usually to create a second table (create table t as select * from contact_group) and switch the tables over, but I would like to find the real root cause.
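Concretely, the rebuild-and-swap workaround is a sketch like this (on HDP 3, a managed CTAS copy should itself come out transactional, and it rewrites all the deltas into plain base files, which is presumably why it clears the error):

```sql
-- Rebuild the table contents without any delta/delete_delta history.
CREATE TABLE contact_group_fixed AS SELECT * FROM contact_group;
-- Swap the tables; suffix names are illustrative.
ALTER TABLE contact_group RENAME TO contact_group_broken;
ALTER TABLE contact_group_fixed RENAME TO contact_group;
-- DROP TABLE contact_group_broken;  -- once the new copy is verified
```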
The only reference to my error that I could find is in the Hive source code itself, which does not help me much.
This is on HDP 3.1, with Hive 3.
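In case it helps to reproduce or diagnose the state, the failed compaction attempts and a re-trigger can be done with standard Hive statements:

```sql
-- Failed/succeeded compactions for all tables show up here.
SHOW COMPACTIONS;
-- Manually queue the compaction that keeps failing for this table.
ALTER TABLE contact_group COMPACT 'major';
```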