
This may be a very silly question. My data looks like this:

id1, value
1, 20.2
1,20.4
....

I want to find the mean and median of `value` for each `id1`. (Note: the per-id mean and median, not the global mean and median.) I'm using Python with Hadoop streaming.

mapper.py

import os
import sys

for line in sys.stdin:
    try:
        # strip the trailing newline only (not leading whitespace)
        line = line.rstrip(os.linesep)
        tokens = line.split(",")
        print '%s,%s' % (tokens[0], tokens[1])
    except Exception:
        continue


reducer.py

import os
import sys
from collections import defaultdict

data_dict = defaultdict(list)

def mean(data_list):
    return sum(data_list) / float(len(data_list)) if data_list else 0

def median(mylist):
    sorts = sorted(mylist)
    length = len(sorts)
    if not length % 2:
        # even count: average the two middle values
        return (sorts[length / 2] + sorts[length / 2 - 1]) / 2.0
    return sorts[length / 2]


for line in sys.stdin:
    try:
        line = line.rstrip(os.linesep)
        serial_id, duration = line.split(",")
        data_dict[serial_id].append(float(duration))
    except Exception:
        pass
for k, v in data_dict.items():
    print "%s,%s,%s" % (k, mean(v), median(v))

I expected one mean and median per key, but I see the same id repeated with different means and medians. For example, grepping the output for one id:

mean_median/part-00003:SH002616940000,5.0,5.0   
mean_median/part-00008:SH002616940000,901.0,901.0   
mean_median/part-00018:SH002616940000,11.0,11.0 
mean_median/part-00000:SH002616940000,2.0,2.0   
mean_median/part-00025:SH002616940000,1800.0,1800.0 
mean_median/part-00002:SH002616940000,4.0,4.0   
mean_median/part-00006:SH002616940000,8.0,8.0   
mean_median/part-00021:SH002616940000,14.0,14.0 
mean_median/part-00001:SH002616940000,3.0,3.0   
mean_median/part-00022:SH002616940000,524.666666667,26.0    
mean_median/part-00017:SH002616940000,65.0,65.0 
mean_median/part-00016:SH002616940000,1384.0,1384.0 
mean_median/part-00020:SH002616940000,596.0,68.0    
mean_median/part-00014:SH002616940000,51.0,51.0 
mean_median/part-00004:SH002616940000,6.0,6.0   
mean_median/part-00005:SH002616940000,7.0,7.0   

Any suggestions?


1 Answer


I answered the same question on the hadoop-user mailing list, as follows:

How many reducers did you start for this job? If you launch more than one reducer, the job produces multiple output files named part-*, and each one contains only the local mean and median for that reducer's partition.

Two workarounds:

1. Call setNumReduceTasks(1) to set the number of reducers to 1. Only one output file will be produced, and each distinct key will appear with a single mean and median.
2. Refer to org.apache.hadoop.examples.WordMedian in the Hadoop source code. It processes all the output files produced by multiple reducers with a local function and produces the final result.
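Since you are using Hadoop streaming rather than the Java API, the equivalent of setNumReduceTasks(1) is to pass the reducer count as a job property on the command line. A minimal sketch, where the streaming jar path and the input/output directories are placeholders you'd replace with your own (on old Hadoop 1.x installations the property is `mapred.reduce.tasks` instead of `mapreduce.job.reduces`):

```shell
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
    -D mapreduce.job.reduces=1 \
    -input /user/me/input \
    -output /user/me/mean_median \
    -mapper mapper.py \
    -reducer reducer.py \
    -file mapper.py \
    -file reducer.py
```

With a single reducer, every record for a given key reaches the same reducer process, so the dict-based reducer above emits exactly one mean and one median per key, in a single part-00000 file.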

Answered 2013-04-02T09:16:49.343