
Hadoop Streaming makes the input file name available to each map task through an environment variable.

Python:

os.environ["map.input.file"]

Java:

System.getenv("map.input.file")

And in Ruby?

mapper.rb
#!/usr/bin/env ruby

STDIN.each_line do |line|
  line.split.each do |word|
    # keep only the first alphanumeric run; String#[] returns nil when
    # there is no match, so skip those tokens instead of crashing
    word = word[/[a-zA-Z0-9]+/]
    next if word.nil?
    puts [word, 1].join("\t")
  end
end

puts ENV['map.input.file']
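(For context, a minimal word-count reducer that pairs with this mapper might look like the following. It is a sketch, not from the original post; the method name `reduce_counts` is mine.)

```ruby
#!/usr/bin/env ruby
# Sums the "word\t1" pairs emitted by mapper.rb. Hadoop Streaming sorts
# mapper output by key, so identical words arrive on consecutive lines
# and a single running total per word suffices.
def reduce_counts(lines)
  totals = []
  current_word, total = nil, 0
  lines.each do |line|
    word, count = line.chomp.split("\t")
    if word == current_word
      total += count.to_i
    else
      totals << [current_word, total] if current_word
      current_word, total = word, count.to_i
    end
  end
  totals << [current_word, total] if current_word
  totals
end

if __FILE__ == $0
  reduce_counts(STDIN).each { |word, n| puts [word, n].join("\t") }
end
```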

3 Answers


How about:

ENV['map.input.file']

Ruby also makes it easy to assign to the ENV hash:

ENV['map.input.file'] = '/path/to/file'
Answered 2013-09-13T17:07:32.803

All JobConf variables are put into environment variables by hadoop-streaming, with every character outside 0-9, A-Z, a-z converted to an underscore.

所以 map.input.file => map_input_file

Try: puts ENV['map_input_file']
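That conversion can be sketched in a few lines of Ruby (the helper name `streaming_env_name` is mine, not part of any Hadoop API):

```ruby
# Sketch of the renaming hadoop-streaming applies to JobConf keys before
# exporting them: any character outside 0-9, A-Z, a-z becomes "_".
def streaming_env_name(key)
  key.gsub(/[^0-9A-Za-z]/, '_')
end

# Inside a streaming map task this is equivalent to ENV['map_input_file']
puts ENV[streaming_env_name('map.input.file')]
```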

Answered 2013-09-18T20:55:50.310

Using the input from the OP, I tried this mapper:

#!/usr/bin/python
import os

# the dots in "map.input.file" become underscores in the environment
file_name = os.getenv('map_input_file')
print(file_name)

and a standard word-count reducer, launched with the command:

hadoop fs -rmr /user/itsjeevs/wc && 
hadoop jar $STRMJAR  -files /home/jejoseph/wc_mapper.py,/home/jejoseph/wc_reducer.py \
    -mapper wc_mapper.py  \
    -reducer wc_reducer.py \
    -numReduceTasks 10  \
    -input "/data/*"  \
    -output wc

which failed with the errors:

 16/03/10 15:21:32 INFO mapreduce.Job: Task Id : attempt_1455931799889_822384_m_000043_0, Status : FAILED
Error: java.io.IOException: Stream closed
    at java.lang.ProcessBuilder$NullOutputStream.write(ProcessBuilder.java:434)
    at java.io.OutputStream.write(OutputStream.java:116)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at org.apache.hadoop.streaming.io.TextInputWriter.writeUTF8(TextInputWriter.java:72)
    at org.apache.hadoop.streaming.io.TextInputWriter.writeValue(TextInputWriter.java:51)
    at org.apache.hadoop.streaming.PipeMapper.map(PipeMapper.java:106)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

16/03/10 15:21:32 INFO mapreduce.Job: Task Id : attempt_1455931799889_822384_m_000077_0, Status : FAILED
Error: java.io.IOException: Broken pipe
    at java.io.FileOutputStream.writeBytes(Native Method)
    at java.io.FileOutputStream.write(FileOutputStream.java:345)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at org.apache.hadoop.streaming.io.TextInputWriter.writeUTF8(TextInputWriter.java:72)
    at org.apache.hadoop.streaming.io.TextInputWriter.writeValue(TextInputWriter.java:51)
    at org.apache.hadoop.streaming.PipeMapper.map(PipeMapper.java:106)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

I am not sure what is going on.

Answered 2016-03-10T23:06:10.160