I have just started using EMR, Hadoop/Spark, etc., and I am trying to run Scala code in spark-shell to upload a file to an EMRFS S3 location, but I get the errors below.

Without any imports, if I run:

val bucketName = "bucket"
val outputPath = "test.txt"

scala> val putRequest = PutObjectRequest.builder.bucket(bucketName).key(outputPath).build()
<console>:27: error: not found: value PutObjectRequest
   val putRequest = PutObjectRequest.builder.bucket(bucketName).key(outputPath).build()
                    ^

Once I add the import for PutObjectRequest, I still get a different error:

scala> import com.amazonaws.services.s3.model.PutObjectRequest

import com.amazonaws.services.s3.model.PutObjectRequest

scala> val putRequest = PutObjectRequest.builder.bucket(bucketName).key(outputPath).build()
<console>:28: error: value builder is not a member of object com.amazonaws.services.s3.model.PutObjectRequest
   val putRequest = PutObjectRequest.builder.bucket(bucketName).key(outputPath).build()
                                     ^

I am not sure what I am missing. Any help would be appreciated!

Note: the Spark version is 2.4.5.


1 Answer


Create the PutObjectRequest object through its constructor instead of a builder, and use AmazonS3ClientBuilder to create the connection to S3. The builder syntax you are calling belongs to the AWS SDK for Java v2 (the software.amazon.awssdk packages), while the class you imported, com.amazonaws.services.s3.model.PutObjectRequest, is from SDK v1, which uses constructors; that mismatch is why you get "value builder is not a member".

import com.amazonaws.regions.Regions
import com.amazonaws.services.s3.AmazonS3ClientBuilder
import com.amazonaws.services.s3.model.ObjectMetadata
import com.amazonaws.services.s3.model.PutObjectRequest

import java.io.File

val clientRegion = Regions.DEFAULT_REGION
val bucketName = "*** Bucket name ***"
val fileObjKeyName = "*** File object key name ***"
val fileName = "*** Path to file to upload ***"

val s3Client = AmazonS3ClientBuilder.standard.withRegion(clientRegion).build

// Upload a file as a new object, with the ContentType and a title set in the metadata.
val request = new PutObjectRequest(bucketName, fileObjKeyName, new File(fileName))
val metadata = new ObjectMetadata()
metadata.setContentType("text/plain")
metadata.addUserMetadata("title", "someTitle")
request.setMetadata(metadata)
s3Client.putObject(request)
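
For completeness: the builder syntax from the question (PutObjectRequest.builder...) belongs to the AWS SDK for Java v2, which is not on the EMR classpath by default. A minimal sketch of the same upload with the v2 API, assuming you add the v2 SDK jar to spark-shell yourself (for example via the --jars option); the bucket name, key, and region here are just placeholders:

import software.amazon.awssdk.core.sync.RequestBody
import software.amazon.awssdk.regions.Region
import software.amazon.awssdk.services.s3.S3Client
import software.amazon.awssdk.services.s3.model.PutObjectRequest

import java.nio.file.Paths

// Placeholder bucket, key, and region for illustration.
val bucketName = "bucket"
val outputPath = "test.txt"

// In SDK v2, both the client and the request are created via builders.
val s3 = S3Client.builder.region(Region.US_EAST_1).build

val putRequest = PutObjectRequest.builder.bucket(bucketName).key(outputPath).build

// Upload the local file test.txt to s3://bucket/test.txt.
s3.putObject(putRequest, RequestBody.fromFile(Paths.get("test.txt")))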
answered 2020-08-04T04:22:41.390