
My mapper has a single output:

Mapper: KEY, VALUE(Timestamp, someOtherAttributes)

My reducer then receives:

Reducer: KEY, Iterable<VALUE(Timestamp, someOtherAttributes)>

I would like the Iterable<VALUE(Timestamp, someOtherAttributes)> to be sorted by the Timestamp attribute. Is it possible to achieve this?

I would like to avoid sorting manually in the reducer code, because of the object-reuse pitfall described here: http://cornercases.wordpress.com/2011/08/18/hadoop-object-reuse-pitfall-all-my-reducer-values-are-the-same/

I would have to "deep copy" every object out of the Iterable, which causes a huge memory overhead. :(((
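For illustration, this is the kind of buffering I would be forced into (a sketch only, with Text standing in for my actual VALUE type):

// Sketch: inside reduce(); Text is assumed here, the real VALUE type may differ.
@Override
protected void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
    List<Text> buffered = new ArrayList<Text>();
    for (Text val : values) {
        // buffered.add(val);         // WRONG: Hadoop recycles this single Text instance,
        //                            // so every entry would end up equal to the last value.
        buffered.add(new Text(val));  // deep copy of every value - the memory overhead in question
    }
    // ... sort and emit the buffered copies ...
}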


2 Answers


This is relatively easy: you need to write a comparator class for your VALUE class.

Take a close look here: http://vangjee.wordpress.com/2012/03/20/secondary-sorting-aka-sorting-values-in-hadoops-mapreduce-programming-paradigm/, especially at the "A solution for secondary sort" part.
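Roughly, the approach in that article is to move the timestamp into a composite map output key and let the shuffle do the sorting, so the reducer never has to buffer the values. Below is a minimal sketch of that idea; CompositeKey, NaturalKeyPartitioner and NaturalKeyGroupingComparator are illustrative names (not from the question), and the natural-key/timestamp types (Text, LongWritable) are assumptions.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;
import org.apache.hadoop.mapreduce.Partitioner;

// Composite key: the natural key decides the reducer group, the timestamp decides the order.
public class CompositeKey implements WritableComparable<CompositeKey> {
    private final Text naturalKey = new Text();
    private final LongWritable timestamp = new LongWritable();

    public void set(String key, long ts) {
        naturalKey.set(key);
        timestamp.set(ts);
    }

    public Text getNaturalKey() { return naturalKey; }

    @Override
    public void write(DataOutput out) throws IOException {
        naturalKey.write(out);
        timestamp.write(out);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        naturalKey.readFields(in);
        timestamp.readFields(in);
    }

    // Sort by natural key first, then by timestamp: the shuffle hands the reducer
    // the values already ordered by timestamp.
    @Override
    public int compareTo(CompositeKey o) {
        int cmp = naturalKey.compareTo(o.naturalKey);
        return cmp != 0 ? cmp : timestamp.compareTo(o.timestamp);
    }

    // Partition on the natural key only, so all timestamps of one key reach the same reducer.
    public static class NaturalKeyPartitioner extends Partitioner<CompositeKey, Text> {
        @Override
        public int getPartition(CompositeKey key, Text value, int numPartitions) {
            return (key.getNaturalKey().hashCode() & Integer.MAX_VALUE) % numPartitions;
        }
    }

    // Group on the natural key only, so one reduce() call sees all values of that key,
    // in the order imposed by compareTo() above.
    public static class NaturalKeyGroupingComparator extends WritableComparator {
        protected NaturalKeyGroupingComparator() {
            super(CompositeKey.class, true);
        }

        @Override
        public int compare(WritableComparable a, WritableComparable b) {
            return ((CompositeKey) a).getNaturalKey().compareTo(((CompositeKey) b).getNaturalKey());
        }
    }
}

The driver would then register these with job.setMapOutputKeyClass(CompositeKey.class), job.setPartitionerClass(CompositeKey.NaturalKeyPartitioner.class) and job.setGroupingComparatorClass(CompositeKey.NaturalKeyGroupingComparator.class), and the mapper emits the timestamp inside the key rather than only inside the value.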

Answered 2013-01-14T14:31:04.713

You need to write a comparator for your VALUE class, for example by buffering the values and sorting them inside the reducer:

// Imports needed by this reducer (the method belongs inside a Reducer<Text, Text, Text, Text> subclass):
import java.io.IOException;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.TimeZone;
import org.apache.hadoop.io.Text;

@Override
protected void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
    final SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    sdf.setTimeZone(TimeZone.getTimeZone("UTC"));

    // Copy each value out of the Iterable (Hadoop reuses the same Text instance).
    List<String> list = new ArrayList<String>();
    for (Text val : values) {
        list.add(val.toString());
    }

    // Sort by the timestamp in the first comma-separated field.
    Collections.sort(list, new Comparator<String>() {
        @Override
        public int compare(String s1, String s2) {
            try {
                long time1 = sdf.parse(s1.split(",")[0]).getTime();
                long time2 = sdf.parse(s2.split(",")[0]).getTime();
                return Long.compare(time1, time2);
            } catch (ParseException e) {
                // Treat unparseable timestamps as equal rather than failing the sort.
                return 0;
            }
        }
    });

    for (String value : list) {
        context.write(key, new Text(value));
    }
}
Answered 2016-03-09T12:07:40.137