I'm trying to read an object from GCS with a Spark job running locally. The object was previously written, also locally, by another Spark job. Looking at the logs I don't see anything strange, and in the Spark UI the read job just hangs.
Before I start the read job, I update the Spark configuration as follows:
val hc = spark.sparkContext.hadoopConfiguration
hc.set("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
hc.set("fs.AbstractFileSystem.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS")
hc.set("fs.gs.project.id", credential.projectId)
hc.set("fs.gs.auth.service.account.enable", "true")
hc.set("fs.gs.auth.service.account.email", credential.email)
hc.set("fs.gs.auth.service.account.private.key.id", credential.keyId)
hc.set("fs.gs.auth.service.account.private.key", credential.key)
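One way I could try to isolate the hang (a minimal sketch, not something I've run yet): list the bucket directly through the Hadoop FileSystem API using the same configuration `hc`, bypassing Spark's DataFrame reader entirely. If this call also hangs, the problem would be in the GCS connector / auth setup rather than in `spark.read` itself. `gs://mybucket/` is the bucket from below.

```scala
import java.net.URI
import org.apache.hadoop.fs.{FileSystem, Path}

// Resolve the GCS filesystem from the same Hadoop configuration used above.
val fs = FileSystem.get(new URI("gs://mybucket/"), hc)

// Listing the bucket root exercises the connector and auth without Spark.
fs.listStatus(new Path("gs://mybucket/")).foreach(status => println(status.getPath))
```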
Then I read it like this:
val path = "gs://mybucket/data.csv"
val options = Map("credentials" -> credential.base64ServiceAccount, "parentProject" -> credential.projectId)
spark.read.format("csv")
.options(options)
.load(path)
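One thing I'm unsure about: as far as I can tell, `credentials` and `parentProject` are options of the spark-bigquery connector, and the built-in csv source silently ignores options it doesn't recognize, so the read presumably authenticates purely through the Hadoop configuration set above. An equivalent read that makes that explicit (and mirrors the `header` option I use on the write below) would look like:

```scala
// Sketch: rely only on the Hadoop configuration for GCS auth; pass just
// options the csv source actually understands.
val df = spark.read
  .format("csv")
  .option("header", "true")
  .load("gs://mybucket/data.csv")
```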
My service account has these roles; I added every object-storage permission I could find:
Storage Admin
Storage Object Admin
Storage Object Creator
Storage Object Viewer
This is how I wrote the object earlier:
val path = "gs://mybucket/data.csv"
val options = Map("credentials" -> credential.base64ServiceAccount, "parentProject" -> credential.projectId, "header" -> "true")
val writer = df.write.format("csv").options(options)
writer.save(path)
These are my dependencies:
Seq(
  "org.apache.spark" %% "spark-core" % "3.1.1",
  "org.apache.hadoop" % "hadoop-client" % "3.3.1",
  "com.google.cloud.spark" %% "spark-bigquery-with-dependencies" % "0.23.0",
  "com.google.cloud.bigdataoss" % "gcs-connector" % "hadoop3-2.2.4",
  "com.google.cloud" % "google-cloud-storage" % "2.2.1"
)
Any idea why the write succeeds but the read hangs like this?