
My job hits some exceptions in its map-reduce step, but the job is not killed. How do I configure Hadoop so that the job gets killed after an exception?

Invoking Main class now

Heart beat Heart beat

Invocation of Main class completed

Oozie Launcher ends

stderr logs

org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Io exception: Unknown host specified )
    at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:82)
    at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:577)
    at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:792)
    at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:815)
    at com.seven.crcs.export.dao.ReportDAOImpl.recreateReportEntity(ReportDAOImpl.java:151)
    at com.seven.crcs.export.dao.ReportDAOImpl.saveActiveUserCount(ReportDAOImpl.java:93)
    at com.seven.crcs.export.ReportJdbcExporter.saveActiveUserCount(ReportJdbcExporter.java:55)
    at com.seven.dataprocessor.oc.jobs.reports.export.day.ExportDailyUserReducer.exportUserCounts(ExportDailyUserReducer.java:32)
    at com.seven.dataprocessor.oc.jobs.reports.export.ExportActiveUser
org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Io exception: Unknown host specified )

2013-02-28 06:06:46,487 INFO org.apache.hadoop.mapred.JobClient: Task Id : attempt_201302270945_0181_r_000000_0, Status : FAILED
2013-02-28 06:07:00,600 INFO org.apache.hadoop.mapred.JobClient: Task Id : attempt_201302270945_0181_r_000000_1, Status : FAILED
2013-02-28 06:07:16,650 INFO org.apache.hadoop.mapred.JobClient: Task Id : attempt_201302270945_0181_r_000000_2, Status : FAILED
2013-02-28 06:07:31,731 INFO org.apache.hadoop.mapred.JobClient: Job complete: job_201302270945_0181

But the job still completes successfully.


1 Answer


Your job actually is killed, but only after the failing task attempt has failed 3 times, as the task IDs show:

  • attempt_201302270945_0181_r_000000_0
  • attempt_201302270945_0181_r_000000_1
  • attempt_201302270945_0181_r_000000_2

You can limit the maximum number of attempts per task by setting the parameter mapred.map.max.attempts to 1 (or mapred.reduce.max.attempts, since the failing attempts here are reducer attempts), or programmatically via JobConf#setMaxMapAttempts(int) / JobConf#setMaxReduceAttempts(int).

This will make the task fail on its first exception, so your job gets killed much sooner.
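As a rough illustration, here is a minimal driver sketch using the old mapred API; the class name ExportJobDriver and the surrounding job setup are assumptions, not something from the original post:

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class ExportJobDriver {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(ExportJobDriver.class);

            // Fail fast: allow only one attempt per map/reduce task instead of
            // the framework's default retries. Equivalent to setting
            // mapred.map.max.attempts and mapred.reduce.max.attempts to 1.
            conf.setMaxMapAttempts(1);
            conf.setMaxReduceAttempts(1);

            // ... set mapper, reducer, input and output paths here ...

            JobClient.runJob(conf);
        }
    }

If the job is launched through an Oozie action, the same two properties can also be passed in the action's <configuration> block of the workflow definition.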

Answered 2013-02-28T18:45:04.073