
Launching a cluster on EMR

Setup:

user: AdministratorPolicy (access all)
keypairs: yes
sandbox: Zeppelin
Applications: Spark 1.5.0, Hadoop 2.6.0
IAM: defaultEMRRole
Bootstrap Action: no
IAM users: all
steps: no
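
For context, a comparable cluster can also be launched programmatically. Below is a minimal sketch using the AWS SDK for Java from Scala; the release label, instance types, instance count, and key-pair name are illustrative assumptions, not values from the question:

import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClientBuilder
import com.amazonaws.services.elasticmapreduce.model.{Application, JobFlowInstancesConfig, RunJobFlowRequest}

object LaunchEmr {
  def main(args: Array[String]): Unit = {
    // Credentials come from the default provider chain (env vars, profile, or instance role).
    val emr = AmazonElasticMapReduceClientBuilder.defaultClient()

    val request = new RunJobFlowRequest()
      .withName("zeppelin-spark-cluster")
      .withReleaseLabel("emr-4.1.0")             // assumed: an EMR 4.x release with Spark 1.5.0 / Hadoop 2.6.0
      .withApplications(
        new Application().withName("Spark"),
        new Application().withName("Zeppelin-Sandbox"))
      .withServiceRole("EMR_DefaultRole")        // the role referred to above as "IAM: defaultEMRRole"
      .withJobFlowRole("EMR_EC2_DefaultRole")    // instance profile the cluster nodes assume
      .withInstances(new JobFlowInstancesConfig()
        .withEc2KeyName("my-keypair")            // assumed key-pair name
        .withInstanceCount(3)                    // assumed size
        .withMasterInstanceType("m3.xlarge")
        .withSlaveInstanceType("m3.xlarge")
        .withKeepJobFlowAliveWhenNoSteps(true))  // no steps, so keep the cluster alive

    println(s"Started cluster: ${emr.runJobFlow(request).getJobFlowId}")
  }
}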

Then, from my local machine, I open the Zeppelin UI at:

instance-public-dns:8890

Success.

I create a new notebook and run:

sc

which returns

res42: org.apache.spark.SparkContext = org.apache.spark.SparkContext@523b1d4c
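
Before touching S3 it is worth confirming the context can actually schedule work; here is a quick sanity check (my own snippet, not from the question) that can be run in the same notebook:

// Distribute a small local collection and reduce it on the executors.
val sanity = sc.parallelize(1 to 100)
println(sanity.sum())   // expect 5050.0 if the executors are healthy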

Then I try to load data from S3 into Spark:

sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "++")       // credentials redacted
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "++")
var textFile = sc.textFile("s3n://<instance>/<bucket-name>/pagecounts-20081001-070000")
textFile.first()

and I get this error:

com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: FD784A9D96A0D54A), S3 Extended Request ID: oOgHwbN8tW2TIxpgagPIZ+NpsTmymzh6wiJ2a6zYhD8XeiH3pHVKpTOeYXOS0dzgBGqKsjr+ls8=
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)

1 Answer


You don't need to set "fs.s3n.awsAccessKeyId" or "fs.s3n.awsSecretAccessKey". Try leaving those unset and simply using "s3" instead of "s3n":

var textFile = sc.textFile("s3://<bucket-name>/pagecounts-20081001-070000")
textFile.first()
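
This works because on EMR the s3:// scheme is served by EMRFS, which picks up credentials from the cluster's EC2 instance profile (the IAM role configured at launch), so nothing needs to be set in the Hadoop configuration. If the read succeeds, a short hypothetical follow-up, assuming the standard Wikipedia pagecounts layout of whitespace-separated "project page views bytes" records, confirms the data parses:

// Hypothetical: top pages by view count in this hour's dump.
val topPages = textFile
  .map(_.split(" "))
  .filter(_.length >= 3)                 // drop malformed lines
  .map(f => (f(1), f(2).toLong))         // (page title, view count)
  .sortBy(-_._2)
topPages.take(5).foreach(println)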

Answered 2015-10-17T16:56:57.997