
I am trying to create a dynamic map-reduce application that takes its dimensions from an external properties file. The main problem is that the key is variable: it will be composite and can consist of any number of parts, e.g. 3 key pairs, 4 key pairs, etc.

My mapper:

public void map(AvroKey<flumeLogs> key, NullWritable value, Context context) throws IOException, InterruptedException{
    Configuration conf = context.getConfiguration();
    int dimensionCount = Integer.parseInt(conf.get("dimensionCount"));
    String[] dimensions = conf.get("dimensions").split(","); //this gets the dimensions from the run method in main

    Text[] values = new Text[dimensionCount]; //This is supposed to be my composite key

    for (int i=0; i<dimensionCount; i++){
        switch(dimensions[i]){

        case "region":  values[i] = new Text("-");
            break;

        case "event":  values[i] = new Text("-");
            break;

        case "eventCode":  values[i] = new Text("-");
            break;

        case "mobile":  values[i] = new Text("-");
        }
    }
    context.write(new StringArrayWritable(values), new IntWritable(1));

}

The values will have proper logic later on.
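For context, here is a minimal sketch of how the driver's run method might push the dimensions from the properties file into the job Configuration; the class and method names are hypothetical, only the property names match what the mapper reads back:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class DriverSketch {
    // dims would come from the external properties file (hypothetical helper omitted)
    public static Job configureJob(Configuration conf, String[] dims) throws Exception {
        // The mapper reads these back via context.getConfiguration()
        conf.set("dimensions", String.join(",", dims));           // e.g. "region,event,eventCode"
        conf.set("dimensionCount", String.valueOf(dims.length));
        return Job.getInstance(conf, "dynamic-dimension-job");    // hypothetical job name
    }
}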

My StringArrayWritable:

public class StringArrayWritable extends ArrayWritable {
    public StringArrayWritable() {
        super(Text.class);
    }

    public StringArrayWritable(Text[] values) {
        super(Text.class, values);
        Text[] texts = new Text[values.length];
        for (int i = 0; i < values.length; i++) {
            texts[i] = new Text(values[i]);
        }
        set(texts);
    }

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder();

        for (String s : super.toStrings()) {
            sb.append(s).append("\t");
        }

        return sb.toString();
    }
}

The error I am getting:

    Error: java.io.IOException: Initialization of all the collectors failed. Error in last collector was :class StringArrayWritable
    at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:414)
    at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:81)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:698)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:770)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.ClassCastException: class StringArrayWritable
    at java.lang.Class.asSubclass(Class.java:3165)
    at org.apache.hadoop.mapred.JobConf.getOutputKeyComparator(JobConf.java:892)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(MapTask.java:1005)
    at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:402)
    ... 9 more

Any help would be greatly appreciated.

Thanks a lot.


1 Answer


You are trying to use a Writable object as a key. In MapReduce, keys must implement the WritableComparable interface. ArrayWritable only implements the Writable interface.

The difference between the two is that the comparable interface requires you to implement a compareTo method, so that MapReduce can correctly sort and group the keys.
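For illustration, here is a minimal, untested sketch of how StringArrayWritable could implement WritableComparable, comparing the key field by field (the ordering choice is an assumption). A hashCode consistent with equals/compareTo keeps the default HashPartitioner sending equal keys to the same reducer:

import org.apache.hadoop.io.ArrayWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;

public class StringArrayWritable extends ArrayWritable
        implements WritableComparable<StringArrayWritable> {

    public StringArrayWritable() {
        super(Text.class);
    }

    public StringArrayWritable(Text[] values) {
        super(Text.class, values);
    }

    @Override
    public int compareTo(StringArrayWritable other) {
        // Compare element by element; if one key is a prefix of the other,
        // the shorter one sorts first.
        String[] a = this.toStrings();
        String[] b = other.toStrings();
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            int cmp = a[i].compareTo(b[i]);
            if (cmp != 0) {
                return cmp;
            }
        }
        return Integer.compare(a.length, b.length);
    }

    @Override
    public int hashCode() {
        // Consistent with equals/compareTo so equal keys land on the same partition.
        return toString().hashCode();
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof StringArrayWritable
                && this.compareTo((StringArrayWritable) o) == 0;
    }

    @Override
    public String toString() {
        return String.join("\t", toStrings());
    }
}

This is what the ClassCastException in the stack trace points at: JobConf.getOutputKeyComparator tries to treat the map output key class as a WritableComparable, which plain ArrayWritable subclasses are not.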

answered 2016-08-31T12:21:38.453