On an Ubuntu virtual machine, I set up a single-node cluster following Michael Noll's tutorial, and that is my starting point for writing Hadoop programs. Also, for reference, this.
My program is written in Python and uses Hadoop Streaming.
I've written a simple vector multiplication program in which mapper.py takes input files v1 and v2, each containing a vector of the form 12,33,10, and returns the pairwise products. reducer.py then returns the sum of those products, i.e.:
Mapper: map(mult, v1, v2)
Reducer: sum(p1, p2, p3, ..., pn)
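Just to make the intended computation concrete, here is what the two steps amount to in a plain Python 2 session (using the sample vectors from the in directory described below):

>>> def mult(x, y):
...     return int(x) * int(y)
...
>>> map(mult, ('5', '12', '20'), ('14', '11', '3'))   # pairwise products
[70, 132, 60]
>>> sum([70, 132, 60])
262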
mapper.py:
import sys

def mult(x, y):
    return int(x) * int(y)

# Input comes from STDIN (standard input).
inputvec = tuple()
for i in sys.stdin:
    i = i.strip()
    inputvec += (tuple(i.split(",")),)

# Assumes both vector lines arrive on this script's stdin.
v1 = inputvec[0]
v2 = inputvec[1]

results = map(mult, v1, v2)

# Simply printing the results variable would print the whole list. This
# would be fine except that the STDIN of reducer.py takes all the
# output as input, including brackets, which can be problematic.
# Cleaning the output ready to be input for the Reduce step:
for o in results:
    print ' %s' % o,
reducer.py:
import sys

result = int()
for a in sys.stdin:
    a = a.strip()
    a = a.split()
    for r in range(len(a)):
        result += int(a[r])

print result
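Incidentally, the reducer's nested loop boils down to summing every whitespace-separated integer on stdin, so an equivalent and slightly shorter version would be:

import sys

# Sum every whitespace-separated integer arriving on stdin (Python 2).
print sum(int(tok) for line in sys.stdin for tok in line.split())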
In the in subdirectory, I have v1 containing 5,12,20 and v2 containing 14,11,3.
Testing locally, everything works as expected (sort stands in for Hadoop's shuffle between the map and reduce steps):
hduser@ubuntu:~/VectMult$ cat in/* | python ./mapper.py
70 132 60
hduser@ubuntu:~/VectMult$ cat in/* | python ./mapper.py | sort
70 132 60
hduser@ubuntu:~/VectMult$ cat in/* | python ./mapper.py | sort | python ./reducer.py
262
When I run it in Hadoop, it appears to succeed and no exceptions are thrown:
hduser@ubuntu:/usr/local/hadoop$ bin/hadoop jar contrib/streaming/hadoop-*streaming*.jar -mapper python /home/hduser/VectMult3/mapper.py -reducer python /home/hduser/VectMult3/reducer.py -input /home/hduser/VectMult3/in -output /home/hduser/VectMult3/out4
Warning: $HADOOP_HOME is deprecated.
packageJobJar: [/app/hadoop/tmp/hadoop-unjar2168776605822419867/] [] /tmp/streamjob6920304075078514767.jar tmpDir=null
12/11/18 21:20:09 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/11/18 21:20:09 WARN snappy.LoadSnappy: Snappy native library not loaded
12/11/18 21:20:09 INFO mapred.FileInputFormat: Total input paths to process : 2
12/11/18 21:20:09 INFO streaming.StreamJob: getLocalDirs(): [/app/hadoop/tmp/mapred/local]
12/11/18 21:20:09 INFO streaming.StreamJob: Running job: job_201211181903_0009
12/11/18 21:20:09 INFO streaming.StreamJob: To kill this job, run:
12/11/18 21:20:09 INFO streaming.StreamJob: /usr/local/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=localhost:54311 -kill job_201211181903_0009
12/11/18 21:20:09 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201211181903_0009
12/11/18 21:20:10 INFO streaming.StreamJob: map 0% reduce 0%
12/11/18 21:20:24 INFO streaming.StreamJob: map 67% reduce 0%
12/11/18 21:20:33 INFO streaming.StreamJob: map 100% reduce 0%
12/11/18 21:20:36 INFO streaming.StreamJob: map 100% reduce 22%
12/11/18 21:20:45 INFO streaming.StreamJob: map 100% reduce 100%
12/11/18 21:20:51 INFO streaming.StreamJob: Job complete: job_201211181903_0009
12/11/18 21:20:51 INFO streaming.StreamJob: Output: /home/hduser/VectMult3/out4
hduser@ubuntu:/usr/local/hadoop$ bin/hadoop dfs -cat /home/hduser/VectMult3/out4/part-00000
Warning: $HADOOP_HOME is deprecated.
hduser@ubuntu:/usr/local/hadoop$ bin/hadoop dfs -ls /home/hduser/VectMult3/out4/
Warning: $HADOOP_HOME is deprecated.
Found 3 items
-rw-r--r-- 1 hduser supergroup 0 2012-11-18 22:05 /home/hduser/VectMult3/out4/_SUCCESS
drwxr-xr-x - hduser supergroup 0 2012-11-18 22:05 /home/hduser/VectMult3/out4/_logs
-rw-r--r-- 1 hduser supergroup 0 2012-11-18 22:05 /home/hduser/VectMult3/out4/part-00000
But when I check the output, all I find is an empty 0-byte file.
I can't work out what has gone wrong. Can anyone help?
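One more thing I'm unsure about: -mapper seems to take a single argument, and the streaming examples I've seen quote multi-word commands and ship the scripts to the task nodes with -file, along these lines (out5 is just an illustrative output directory); I don't know whether that's related:

bin/hadoop jar contrib/streaming/hadoop-*streaming*.jar -mapper "python mapper.py" -reducer "python reducer.py" -file /home/hduser/VectMult3/mapper.py -file /home/hduser/VectMult3/reducer.py -input /home/hduser/VectMult3/in -output /home/hduser/VectMult3/out5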
EDIT: In reply to @DiJuMx:

"One way to fix this would be to output from map to a temporary file, then use the temporary file in reduce."

Not sure whether Hadoop allows this? Hoping someone who knows better can correct me.
"Before attempting this, try writing a simpler version which just passes the data straight through with no processing."

That sounded like a good idea, just to check that the data flows through properly. I used the following for this:
Both mapper.py and reducer.py:

import sys

for i in sys.stdin:
    print i,
What comes out should be exactly what went in. It still outputs an empty file.
"Alternatively, edit your existing code in reduce to output an (error) message to the output file if the input was blank."
mapper.py

import sys

for i in sys.stdin:
    print "mapped",
print "mapper",
reducer.py

import sys

for i in sys.stdin:
    print "reduced",
print "reducer",
If the reducer receives any input, it should end up outputting reduced. Either way, it should at least output reducer. The actual output is still an empty file.
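If it helps with diagnosis: the next thing I can think of trying is logging from inside the scripts to stderr, which (as far as I understand) streaming collects into the task logs rather than the job output. A minimal sketch of an instrumented pass-through mapper:

import sys

# Echo each incoming line to stderr so it shows up in the task logs,
# then pass it through unchanged on stdout.
for i in sys.stdin:
    sys.stderr.write("got line: %s" % i)
    print i,

If nothing shows up in the task logs either, the scripts presumably never receive any input.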