
I have two files. In data1:

1 3
1 2
5 1

and in data2:

2 3
2 4

Then I try to read them into Pig:

d1 = LOAD 'data1';
d2 = foreach d1 generate flatten(STRSPLIT($0, ' +')) as (f1:int,f2:int);
d3 = LOAD 'data2' ;
d4 = foreach d3 generate flatten(STRSPLIT($0, ' +')) as (f1:int,f2:int);
data = join d2 by f1, d4 by f2;

Then I get:

2013-08-04 00:48:26,032 [Thread-21] WARN  org.apache.hadoop.mapred.LocalJobRunner - job_local_0005
java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Integer
    at org.apache.pig.backend.hadoop.HDataType.getWritableComparableTypes(HDataType.java:85)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Map.collect(PigGenericMapReduce.java:112)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:285)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:278)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)

Can anyone help me? Thank you.


2 Answers


First, I'd define a simple schema for the inputs. From your example I assume the inputs are text files.
You get the ClassCastException because, unfortunately, just applying the schema (f1:int, f2:int) doesn't do any casting. You need to explicitly cast the output of STRSPLIT to (tuple(int,int)) so that flatten can generate f1:int and f2:int from it. I.e.:

d1 = LOAD 'data1' as (line:chararray);
d2 = foreach d1 generate flatten((tuple(int,int))(STRSPLIT($0, ' +'))) 
       as (f1:int,f2:int);

d3 = LOAD 'data2' as (line:chararray);
d4 = foreach d3 generate flatten((tuple(int,int))(STRSPLIT($0, ' +')))
       as (f1:int,f2:int);

data = join d2 by f1, d4 by f2;
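
To sanity-check the casts before the join, DESCRIBE should now report int fields on both sides. A minimal check, assuming the script above (the output I'd expect is shown as comments):

describe d2;
-- d2: {f1: int,f2: int}
describe d4;
-- d4: {f1: int,f2: int}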
answered 2013-08-04T20:43:28.573

If you are using a UDF in Pig and get this cast exception, then in addition to checking your Pig script, also check the UDF itself and make sure the value types it actually returns match the types declared in its @outputSchema.
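
For example, here is a minimal sketch of a Jython UDF (the file and function names are hypothetical; Pig's Jython engine supplies the @outputSchema decorator) whose return values must match the declared schema, otherwise you hit the same ClassCastException:

# myudf.py -- hypothetical Jython UDF
@outputSchema('t:tuple(f1:int,f2:int)')
def split_pair(line):
    parts = line.split()
    # The schema declares ints, so cast explicitly; returning the raw
    # string parts here would not match @outputSchema and would fail
    # with the same ClassCastException at runtime.
    return int(parts[0]), int(parts[1])

# Usage in the Pig script (also hypothetical):
# register 'myudf.py' using jython as myudf;
# d2 = foreach d1 generate flatten(myudf.split_pair($0)) as (f1:int, f2:int);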

answered 2019-11-22T23:56:22.807