The command I ran and its output:
[hduser@Janardhan hadoop]$ bin/hadoop jar contrib/streaming/hadoop-streaming-1.0.0.jar -file /home/hduser/mapper.py -mapper mapper.py -file /home/hduser/reducer.py -reducer reducer.py -input /user/hduser/input.txt -output /home/hduser/outpututttt
Warning: $HADOOP_HOME is deprecated.
packageJobJar: [/home/hduser/mapper.py, /home/hduser/reducer.py, /app/hadoop/tmp/hadoop-unjar2185859252991058106/] [] /tmp/streamjob2973484922110272968.jar tmpDir=null
12/05/03 20:36:02 INFO mapred.FileInputFormat: Total input paths to process : 1
12/05/03 20:36:03 INFO streaming.StreamJob: getLocalDirs(): [/app/hadoop/tmp/mapred/local]
12/05/03 20:36:03 INFO streaming.StreamJob: Running job: job_201205032014_0003
12/05/03 20:36:03 INFO streaming.StreamJob: To kill this job, run:
12/05/03 20:36:03 INFO streaming.StreamJob: /usr/local/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=localhost:54311 -kill job_201205032014_0003
12/05/03 20:36:03 INFO streaming.StreamJob: Tracking URL: http://localhost.localdomain:50030/jobdetails.jsp?jobid=job_201205032014_0003
12/05/03 20:36:04 INFO streaming.StreamJob: map 0% reduce 0%
12/05/03 20:36:21 INFO streaming.StreamJob: map 100% reduce 0%
12/05/03 20:36:24 INFO streaming.StreamJob: map 0% reduce 0%
12/05/03 20:37:00 INFO streaming.StreamJob: map 100% reduce 100%
12/05/03 20:37:00 INFO streaming.StreamJob: To kill this job, run:
12/05/03 20:37:00 INFO streaming.StreamJob: /usr/local/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=localhost:54311 -kill job_201205032014_0003
12/05/03 20:37:00 INFO streaming.StreamJob: Tracking URL: http://localhost.localdomain:50030/jobdetails.jsp?jobid=job_201205032014_0003
12/05/03 20:37:00 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201205032014_0003_m_000000
12/05/03 20:37:00 INFO streaming.StreamJob: killJob...
Streaming Job Failed!
This is the error I get from the JobTracker:
java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:311)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:545)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:132)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
It works locally with the following command:
[hduser@Janardhan ~]$ cat input.txt | ./mapper.py | sort | ./reducer.py
('be', 'VB') 1
('ceremony', 'NN') 1
('first', 'JJ') 2
('for', 'IN') 2
('hi', 'NN') 1
('place', 'NN') 1
('the', 'DT') 2
('welcome', 'VBD') 1
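
For context, both scripts follow the usual Hadoop Streaming pattern: read lines from stdin and write tab-separated key/value lines to stdout. The following is only a minimal sketch of that pattern, not my exact code; in particular the NLTK calls (nltk.word_tokenize, nltk.pos_tag) are just a stand-in guess, based on the tag set visible in the local output, for whatever actually produces the (word, tag) pairs.

#!/usr/bin/env python
# mapper.py -- minimal sketch only, not the actual script.
# Reads raw text from stdin and emits one "key<TAB>1" line per (word, POS-tag) pair.
import sys
import nltk  # assumption: the real mapper may build the (word, tag) pairs differently

for line in sys.stdin:
    for pair in nltk.pos_tag(nltk.word_tokenize(line)):
        # Streaming treats everything before the first tab as the key
        print("%s\t%d" % (pair, 1))

#!/usr/bin/env python
# reducer.py -- minimal sketch only, not the actual script.
# Streaming sorts mapper output by key, so identical keys arrive adjacent;
# sum their counts and emit one "key<TAB>total" line per key.
import sys

current_key = None
current_count = 0

for line in sys.stdin:
    key, value = line.rstrip("\n").rsplit("\t", 1)
    if key == current_key:
        current_count += int(value)
    else:
        if current_key is not None:
            print("%s\t%d" % (current_key, current_count))
        current_key = key
        current_count = int(value)

if current_key is not None:
    print("%s\t%d" % (current_key, current_count))

Since Streaming executes the mapper and reducer files directly, the sketch keeps the #!/usr/bin/env python shebang on the first line, just as the real scripts need for the local ./mapper.py | sort | ./reducer.py test to run.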