
I am trying to launch a cluster using the AWS CLI. I use the following command:

aws emr create-cluster --name "Config1" --release-label emr-5.0.0 --applications Name=Spark --use-default-role --log-uri 's3://aws-logs-813591802533-us-west-2/elasticmapreduce/' --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m1.medium InstanceGroupType=CORE,InstanceCount=2,InstanceType=m1.medium

The cluster is created successfully. Then I add this command:

aws emr add-steps --cluster-id ID_CLUSTER --region us-west-2 --steps Name=SparkSubmit,Jar="command-runner.jar",Args=[spark-submit,--deploy-mode,cluster,--master,yarn,--executor-memory,1G,--class,Traccia2014,s3://tracceale/params/scalaProgram.jar,s3://tracceale/params/configS3.txt,30,300,2,"s3a://tracceale/Tempi1"],ActionOnFailure=CONTINUE

After a while, the step fails. This is the log file:

 17/02/22 11:00:07 INFO RMProxy: Connecting to ResourceManager at ip-172-31-31-190.us-west-2.compute.internal/172.31.31.190:8032
 17/02/22 11:00:08 INFO Client: Requesting a new application from cluster with 2 NodeManagers
 17/02/22 11:00:08 INFO Client: Verifying our application has not requested  
 Exception in thread "main" org.apache.spark.SparkException: Application application_1487760984275_0001 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1132)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1175)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:729)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
 17/02/22 11:01:02 INFO ShutdownHookManager: Shutdown hook called
 17/02/22 11:01:02 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-27baeaa9-8b3a-4ae6-97d0-abc1d3762c86
 Command exiting with ret '1'

Locally (on a Hortonworks HDP 2.5 Sandbox) I run:

./spark-submit --class Traccia2014 --master local[*] --executor-memory 2G /usr/hdp/current/spark2-client/ScalaProjects/ScripRapportoBatch2.1/target/scala-2.11/traccia-22-ottobre_2.11-1.0.jar "/home/tracce/configHDFS.txt" 30 300 3

Everything works fine. I have already read posts related to my problem, but I cannot figure it out.

UPDATE

Checking the Application Master logs, I get this error:

17/02/22 15:29:54 ERROR ApplicationMaster: User class threw exception: java.io.FileNotFoundException: s3:/tracceale/params/configS3.txt (No such file or directory)

at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at scala.io.Source$.fromFile(Source.scala:91)
at scala.io.Source$.fromFile(Source.scala:76)
at scala.io.Source$.fromFile(Source.scala:54)
at Traccia2014$.main(Rapporto.scala:40)
at Traccia2014.main(Rapporto.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:627)
 17/02/22 15:29:55 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.io.FileNotFoundException: s3:/tracceale/params/configS3.txt (No such file or directory))

I pass the mentioned path "s3://tracceale/params/configS3.txt" from S3 to the function 'fromFile' like this:

for(line <- scala.io.Source.fromFile(logFile).getLines())

How can I solve it? Thanks in advance.


3 Answers


Since you are using cluster deploy mode, the logs you have included are not useful at all. They only say that the application failed, but not why it failed. To figure out why it failed, you at least need to look at the Application Master logs, since that is where the Spark driver runs in cluster deploy mode, and it will probably give a better hint as to why the application failed.

Since you have configured your cluster with --log-uri, you will find the Application Master's logs under s3://aws-logs-813591802533-us-west-2/elasticmapreduce/&lt;CLUSTER ID&gt;/containers/&lt;YARN APPLICATION ID&gt;/, where the YARN application ID is (based on the logs you included above) application_1487760984275_0001, and the container ID should be something like container_1487760984275_0001_01_000001. (The first container of an application is the Application Master.)
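Putting the pieces together, the log prefix can be assembled from the values above. A minimal sketch (the cluster ID below is a placeholder, since the question only shows ID_CLUSTER; the application and container IDs are the ones from the logs quoted above):

```scala
// Builds the S3 prefix under which the Application Master logs land,
// given the --log-uri configured when the cluster was created.
object AmLogPath {
  def prefix(logUri: String, clusterId: String, appId: String, containerId: String): String =
    s"$logUri/$clusterId/containers/$appId/$containerId/"

  def main(args: Array[String]): Unit = {
    val p = prefix(
      "s3://aws-logs-813591802533-us-west-2/elasticmapreduce",
      "j-XXXXXXXXXXXXX", // placeholder: your actual cluster ID
      "application_1487760984275_0001",
      "container_1487760984275_0001_01_000001"
    )
    println(p)
    // Then inspect it with, e.g.: aws s3 ls <prefix> --recursive
  }
}
```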

Answered 2017-02-22T19:00:49.170

The file may be missing at that location; you might be able to see it after you ssh into the EMR cluster, but the step command still cannot find it on its own and starts throwing that file-not-found exception.

In that case, what I did was:

Step 1: Checked that the file exists in the project directory which we copied to EMR.

For example, mine was in `//usr/local/project_folder/`

Step 2: Copied the script which you're expecting to run on the EMR.

For example, I copied from `//usr/local/project_folder/script_name.sh` to `/home/hadoop/`

Step 3: Then executed the script from /home/hadoop/ by passing its absolute path to command-runner.jar:

command-runner.jar bash /home/hadoop/script_name.sh

With that, I found my script was running. Hope this may help someone.

Answered 2019-03-13T15:24:48.490

You have a URL that points to an object store, which is accessible through the Hadoop filesystem APIs, and a stack trace from java.io.File, which cannot read it because it does not refer to anything on the local disk.
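That distinction can be shown in plain Scala, without a cluster: scala.io.Source.fromFile is backed by java.io.FileInputStream and only understands local paths, so the very call from Rapporto.scala:40 above fails on an s3:// URL (the bucket and object names below are the ones from the question):

```scala
import java.nio.file.Files
import scala.io.Source
import scala.util.Try

object FromFileDemo {
  def main(args: Array[String]): Unit = {
    // A real local file: fromFile works, because java.io can open it.
    val local = Files.createTempFile("configS3", ".txt")
    Files.write(local, "key=value\n".getBytes("UTF-8"))
    assert(Try(Source.fromFile(local.toFile).getLines().toList).isSuccess)

    // An s3:// URL is not a local path, so the same call throws
    // java.io.FileNotFoundException, as in the stack trace above.
    val s3 = Try(Source.fromFile("s3://tracceale/params/configS3.txt").getLines().toList)
    assert(s3.isFailure)
    println("fromFile only resolves local paths")
  }
}
```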

Use an operation such as SparkContext.hadoopRDD() to convert the path into an RDD.
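A minimal sketch of that suggestion, not runnable outside the cluster: it assumes the code executes inside a Spark application on EMR (where the s3:// scheme is handled by the cluster's Hadoop filesystem configuration) and that `spark` is the active SparkSession; the bucket and object names are the ones from the question:

```scala
// Sketch: read the config object through Spark's Hadoop-aware API
// instead of java.io. textFile() resolves the s3:// scheme through
// the cluster's Hadoop filesystem configuration.
val lines: Array[String] = spark.sparkContext
  .textFile("s3://tracceale/params/configS3.txt")
  .collect()

for (line <- lines) {
  // process each config line, as the original fromFile loop did
}
```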

Answered 2017-02-23T10:39:37.240