I am new to the Hadoop framework and the MapReduce abstraction.
Basically, I want to find the smallest number in a huge text file (values separated by ",").
Here is my code, mapper.py:
    #!/usr/bin/env python
    import sys

    # input comes from STDIN (standard input)
    for line in sys.stdin:
        # remove leading and trailing whitespace
        line = line.strip()
        # split the line into comma-separated numbers
        numbers = line.split(",")
        for number in numbers:
            # write the results to STDOUT (standard output);
            # what we output here will be the input for the
            # reduce step, i.e. the input for reducer.py
            #
            # tab-delimited; the trivial count is 1
            print '%s\t%s' % (number, 1)
And the reducer, reducer.py:
    #!/usr/bin/env python
    import sys

    smallest_number = sys.float_info.max

    for line in sys.stdin:
        # remove leading and trailing whitespace
        line = line.strip()
        # parse the input we got from mapper.py
        number, count = line.split('\t', 1)
        try:
            number = float(number)
        except ValueError:
            # skip anything that is not a number
            continue
        if number < smallest_number:
            smallest_number = number

    print smallest_number

I think the error is in that last print: there is no key/value pair in the reducer output.
The error I get:
12/10/04 12:07:22 ERROR streaming.StreamJob: Job not successful. Error: NA
12/10/04 12:07:22 INFO streaming.StreamJob: killJob...
Streaming Command Failed!
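For anyone wanting to check the logic outside Hadoop: a common pattern for a global minimum is to have the mapper emit every number under one constant key, so a single reducer sees the whole stream. This is just a sketch of that idea; the helper names `map_line` and `reduce_lines` are my own, invented to make the mapper/reducer logic testable locally, and this is not Hadoop-specific code:

```python
import sys

def map_line(line):
    # Emit each number under one constant key ("min") so that,
    # in a real streaming job, every value would be routed to
    # the same reducer.
    return ["min\t%s" % n for n in line.strip().split(",") if n]

def reduce_lines(lines):
    # Track the running minimum over tab-delimited mapper output.
    smallest = float("inf")
    for line in lines:
        _key, value = line.strip().split("\t", 1)
        try:
            number = float(value)
        except ValueError:
            # skip anything that is not a number
            continue
        if number < smallest:
            smallest = number
    return smallest

if __name__ == "__main__":
    # Chain mapper and reducer on a tiny in-memory sample;
    # in the real job each half would read sys.stdin instead.
    sample = ["10,4,7\n", "2,9\n"]
    mapped = []
    for line in sample:
        mapped.extend(map_line(line))
    print(reduce_lines(mapped))  # prints 2.0
```

With a single key the mapper side loses parallel reduction, but for a plain global minimum that is usually acceptable; a combiner can pre-reduce per mapper if the input is very large.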