
I want to use HPROF to profile my Hadoop job. The problem is that I get TRACES, but no CPU SAMPLES, in the profile.out file. The code I use in my run method is:

    /** Get configuration */
    Configuration conf = getConf();
    conf.set("textinputformat.record.delimiter","\n\n");
    conf.setStrings("args", args);

    /** JVM PROFILING */
    conf.setBoolean("mapreduce.task.profile", true);
    conf.set("mapreduce.task.profile.params", "-agentlib:hprof=cpu=samples," +
       "heap=sites,depth=6,force=n,thread=y,verbose=n,file=%s");
    conf.set("mapreduce.task.profile.maps", "0-2");
    conf.set("mapreduce.task.profile.reduces", "");

    /** Job configuration */
    Job job = new Job(conf, "HadoopSearch");
    job.setJarByClass(Search.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(NullWritable.class);

    /** Set Mapper and Reducer, use identity reducer*/
    job.setMapperClass(Map.class);
    job.setReducerClass(Reducer.class);

    /** Set input and output formats */
    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);

    /** Set input and output path */
    FileInputFormat.addInputPath(job, new Path("/user/niko/16M"));  
    FileOutputFormat.setOutputPath(job, new Path(cmd.getOptionValue("output")));

    job.waitForCompletion(true);

    return 0;

How can I get the CPU SAMPLES written to the output?

I also get a strange error message in stderr, but I think it is unrelated, because it is also present when profiling is set to false or the profiling code is commented out. The error is:

 log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.impl.MetricsSystemImpl).
 log4j:WARN Please initialize the log4j system properly.
 log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
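That warning just means the child JVM found no log4j configuration on its classpath. As a sketch (file name and layout pattern are the standard log4j 1.x conventions, not anything from this job), a minimal log4j.properties placed on the task classpath would silence it:

```properties
# Minimal log4j 1.x configuration: route all loggers to the console.
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n
```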

2 Answers


YARN (or MRv1) kills the container as soon as your job finishes, so the CPU samples cannot be written to your profiling file. In fact, your traces should be truncated as well.

You have to add the following options (or their equivalents for your Hadoop version):

yarn.nodemanager.sleep-delay-before-sigkill.ms = 30000
# No. of ms to wait between sending a SIGTERM and SIGKILL to a container

yarn.nodemanager.process-kill-wait.ms = 30000
# Max time to wait for a process to come up when trying to cleanup a container

mapreduce.tasktracker.tasks.sleeptimebeforesigkill = 30000
# The equivalent setting for MRv1?

(30 seconds seems to be enough)
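Note that the yarn.nodemanager.* keys are NodeManager settings, so they belong in yarn-site.xml on each node rather than in the job's Configuration object. A sketch of the fragment, using the 30-second value suggested above:

```xml
<!-- yarn-site.xml on each NodeManager: wait 30 s between SIGTERM and SIGKILL,
     giving HPROF time to flush CPU SAMPLES before the container dies. -->
<property>
  <name>yarn.nodemanager.sleep-delay-before-sigkill.ms</name>
  <value>30000</value>
</property>
<property>
  <name>yarn.nodemanager.process-kill-wait.ms</name>
  <value>30000</value>
</property>
```

The NodeManagers must be restarted for the change to take effect.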

Answered on 2014-12-16T17:35:22.367

This may be caused by https://issues.apache.org/jira/browse/MAPREDUCE-5465, which has been fixed in newer Hadoop versions.

So the solution seems to be either:

  • use the settings mentioned in ALSimon's answer, or
  • upgrade to Hadoop >= 2.8.0
Answered on 2017-09-29T12:47:14.820