I'm trying to exclude Glacier data from the input of my Databricks notebook job (Spark). It basically reads parquet data on S3 through the AWS Glue Catalog. I've added `excludeStorageClasses` to the Glue table properties:
|Table Properties | [excludeStorageClasses=[GLACIER], transient_lastDdlTime=1637069663]|
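For context, the property can be set with Hive-style DDL roughly like this (a sketch of the kind of statement used, not a verbatim copy; the exact value format for the storage-class list may differ):

```sql
-- Sketch: adding the excludeStorageClasses property to the Glue table
ALTER TABLE test_db.users SET TBLPROPERTIES ('excludeStorageClasses' = '[GLACIER]');
```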
But when I read the table, it still tries to read the data in Glacier.
spark.sql("SELECT * FROM test_db.users").count()
Error:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent failure: Lost task 0.3 in stage 2.0 (TID 163, 172.19.249.237, executor 0): java.io.IOException: Failed to read job commit marker: S3AFileStatus{path=s3:...
Any idea how to make this work, or how to exclude Glacier data from a Spark job's input sources?