I am running Spark jobs through livy-server-0.2, but I cannot change the default value of spark.executor.cores: it never takes effect, while my other settings do.
Each executor is always launched with 1 core (note --cores 1 in the process listing below):
yarn 11893 11889 6 21:08 ? 00:00:01
/opt/jdk1.7.0_80/bin/java -server -XX:OnOutOfMemoryError=kill
%p -Xms1024m -Xmx1024m -Djava.io.tmpdir=/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1487813931557_0603/container_1487813931557_0603_01_000026/tmp
-Dspark.driver.port=51553
-Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1487813931557_0603/container_1487813931557_0603_01_000026
-XX:MaxPermSize=256m org.apache.spark.executor.CoarseGrainedExecutorBackend
--driver-url spark://CoarseGrainedScheduler@10.1.1.81:51553 --executor-id 19
--hostname master01.yscredit.com --cores 1 --app-id application_1487813931557_0603
--user-class-path file:/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1487813931557_0603/container_1487813931557_0603_01_000026/__app__.jar
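Besides ps, the allocation should also be visible through Spark's monitoring REST API on the driver UI (port 4040 by default; in yarn-cluster mode it is usually reached through the YARN proxy). A quick sketch; the host here is a placeholder, and the totalCores field may not be exposed on older Spark versions:

import requests

# Query the driver's monitoring REST API for the per-executor core count.
# Host/port are placeholders -- in yarn-cluster mode, go through the YARN proxy URL.
driver = "http://master01.yscredit.com:4040"
app_id = "application_1487813931557_0603"

executors = requests.get(
    "{}/api/v1/applications/{}/executors".format(driver, app_id)).json()
for e in executors:
    # "totalCores" is in the executor summary on Spark 2.x;
    # older versions may not expose it, hence .get().
    print(e["id"], e.get("totalCores"))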
Here is my spark-defaults.conf in $SPARK_HOME/conf:
spark.master=yarn
spark.submit.deployMode=cluster
spark.executor.instances=7
spark.executor.cores=6
spark.executor.memoryOverhead=1024
spark.yarn.executor.memoryOverhead=1400
spark.executor.memory=11264
spark.driver.memory=5g
spark.yarn.driver.memoryOverhead=600
spark.speculation=true
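In case it is relevant: as I read the Livy REST API docs, the cores can also be requested per session instead of relying on spark-defaults.conf. A minimal sketch of such a POST /sessions call (host, port, and session kind are illustrative assumptions):

import json
import requests

# Sketch: ask for executor cores explicitly in the Livy session request.
# Host/port are placeholders for the livy-server endpoint.
livy = "http://localhost:8998"
payload = {
    "kind": "spark",          # or pyspark/sparkr, depending on the job
    "executorCores": 6,       # what spark.executor.cores should become
    "numExecutors": 7,
    "executorMemory": "11g",
}
resp = requests.post(livy + "/sessions",
                     data=json.dumps(payload),
                     headers={"Content-Type": "application/json"})
print(resp.status_code, resp.json())

If a session created this way still comes up with --cores 1, that would point at Livy overriding the value rather than at spark-defaults.conf.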
Can anyone help me? Thanks!