I'm trying to run a MapReduce streaming job that takes its input files from directories in an S3 bucket matching a given pattern. The pattern is something like bucket-name/[date]/product/logs/[hour]/[logfilename]. An example log would live at bucket-name/2013-05-02/product/logs/05/log123456789.
I can make this work by passing only the hour portion of the path as a wildcard, for example: bucket-name/2013-05-02/product/logs/*/. This successfully picks up every log file from each hour and passes each one to the mappers.
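To illustrate which keys the two glob shapes are meant to cover, here is a small sketch using Python's fnmatch against a few hypothetical object keys following the layout above (note this is only an approximation: fnmatch's * can cross / separators, whereas Hadoop's path glob does not):

```python
from fnmatch import fnmatch

# Hypothetical object keys following the bucket layout described above.
keys = [
    "bucket-name/2013-05-02/product/logs/05/log123456789",
    "bucket-name/2013-05-02/product/logs/06/log987654321",
    "bucket-name/2013-05-03/product/logs/05/log555555555",
]

# Hour-only wildcard: the date is fixed, every hour of that day matches.
hour_glob = "bucket-name/2013-05-02/product/logs/*/*"
print([k for k in keys if fnmatch(k, hour_glob)])   # the two 2013-05-02 keys

# Date and hour wildcards: every log across all days matches.
date_glob = "bucket-name/*/product/logs/*/*"
print([k for k in keys if fnmatch(k, date_glob)])   # all three keys
```

The second pattern is the one that triggers the failure described below.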
The problem comes when I try to make the date a wildcard as well, for example: bucket-name/*/product/logs/*/. When I do this, the job is created but no tasks are ever spawned, and the job eventually fails. The following error is printed in the syslog.
2013-05-02 08:03:41,549 ERROR org.apache.hadoop.streaming.StreamJob (main): Job not successful. Error: Job initialization failed:
java.lang.OutOfMemoryError: Java heap space
at java.util.regex.Matcher.<init>(Matcher.java:207)
at java.util.regex.Pattern.matcher(Pattern.java:888)
at org.apache.hadoop.conf.Configuration.substituteVars(Configuration.java:378)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:418)
at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:523)
at org.apache.hadoop.mapred.SkipBadRecords.getMapperMaxSkipRecords(SkipBadRecords.java:247)
at org.apache.hadoop.mapred.TaskInProgress.<init>(TaskInProgress.java:146)
at org.apache.hadoop.mapred.JobInProgress.initTasks(JobInProgress.java:722)
at org.apache.hadoop.mapred.JobTracker.initJob(JobTracker.java:4238)
at org.apache.hadoop.mapred.EagerTaskInitializationListener$InitJob.run(EagerTaskInitializationListener.java:79)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
2013-05-02 08:03:41,549 INFO org.apache.hadoop.streaming.StreamJob (main): killJob...