
I am facing a very strange problem. I use Pig for multi-column data processing. The Pig script loads the data using HCatalogLoader. The columns include several integer columns, string columns, and double columns. One of the integer columns (say C1) cannot be stored using ParquetStorer. The other integer columns store without any problem; only the C1 column fails.

Here is the error:

Backend error message
---------------------
AttemptID:attempt_1413268228935_0073_m_000002_0 Info:Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

Backend error message
---------------------
AttemptID:attempt_1413268228935_0073_m_000001_0 Info:Error: parquet.io.ParquetEncodingException: can not write value at 2 in tuple (,2003-11-22,840,00007,ABC,DEF,FFGG,10,0.0,0,0.0,11.11,0,7.122112,0.0,0,0.0) from type 'C1: int' to type 'optional int32 C1'
        at parquet.pig.TupleWriteSupport.writeValue(TupleWriteSupport.java:199)
        at parquet.pig.TupleWriteSupport.writeTuple(TupleWriteSupport.java:151)
        at parquet.pig.TupleWriteSupport.write(TupleWriteSupport.java:90)
        at parquet.pig.TupleWriteSupport.write(TupleWriteSupport.java:46)
        at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:111)
        at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:78)
        at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:35)
        at parquet.pig.ParquetStorer.putNext(ParquetStorer.java:121)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
        at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:635)
        at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
        at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:284)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:277)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Number
        at parquet.pig.TupleWriteSupport.writeValue(TupleWriteSupport.java:178)
        ... 24 more

I have run DESCRIBE on the alias that I store with ParquetStorer, and column C1 is of type int. ParquetStorer still complains that the data is a String and cannot be cast to a Number.
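A stripped-down sketch of what the script does (the table name, output path, and column names other than C1 are placeholders, not the real ones from my job, and I am assuming the org.apache.hive.hcatalog.pig.HCatLoader class here):

-- Hypothetical sketch of the failing flow; 'mydb.mytable', '/output/path'
-- and the extra column names are placeholders.
raw = LOAD 'mydb.mytable' USING org.apache.hive.hcatalog.pig.HCatLoader();

-- DESCRIBE shows C1 as int, alongside the other int/chararray/double columns.
DESCRIBE raw;

out = FOREACH raw GENERATE trans_date, C1, C2, amount;

-- This is the statement that fails, and only for C1.
STORE out INTO '/output/path' USING parquet.pig.ParquetStorer();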

Any help is appreciated.


2 Answers


I had a similar problem; my workaround was to cast the field to chararray, after which I was able to save the output in Parquet format.
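Something along these lines, as shown in the rough sketch below (the alias, table, path, and column names are made up, not from your job):

-- Rough sketch with placeholder names: cast the offending column to chararray
-- before storing, so ParquetStorer no longer tries to write it as int32.
A = LOAD 'mydb.mytable' USING org.apache.hive.hcatalog.pig.HCatLoader();

B = FOREACH A GENERATE
      (chararray) C1 AS C1,   -- explicit cast works around the write failure
      C2,
      amount;

STORE B INTO '/output/path' USING parquet.pig.ParquetStorer();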

By the way, here is the source code of the function: http://grepcode.com/file/repo1.maven.org/maven2/com.twitter/parquet-pig/1.2.0/parquet/pig/TupleWriteSupport.java

It looks fine to me, but in this case it sounds like it may be a bug:

case INT32:
      recordConsumer.addInteger(((Number)t.get(i)).intValue());
      break;

t.get(i) returns a String, hence:

Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Number
    at parquet.pig.TupleWriteSupport.writeValue(TupleWriteSupport.java:178)
    ... 24 more
answered 2014-10-15T17:10:58.737

I know this is an old question, but I couldn't find an answer anywhere, so I'll post what worked for me: you have to specify the schema in the LOAD statement. I had previously only set the schema in the FOREACH/GENERATE block, and I kept getting the "cannot be cast to java.lang.Number" exception. Once I specified the schema in the LOAD statement, the Parquet file was created without errors.

SET parquet.compression snappy;

REGISTER file://.../parquet-pig-bundle-1.5.0-cdh5.16.2.jar;

A = LOAD '$input' USING PigStorage('|') AS (aaa:long, bbb:chararray,
  ccc:chararray, ddd:chararray, eee:chararray, fff:chararray,
  ggg:long, hhh:chararray, iii:chararray);

B = FOREACH A GENERATE
  ...;

STORE B INTO '$output_dir' USING parquet.pig.ParquetStorer;

Of course, that still leaves some questions unanswered.

answered 2022-01-18T17:52:05.250