I'm new to PySpark and AWS Glue, and I'm running into a problem when trying to write out a file with Glue. When I try to write some output to S3 using Glue's write_dynamic_frame_from_options, it throws an exception saying
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 199.0 failed 4 times, most recent failure:
Lost task 0.3 in stage 199.0 (TID 7991, 10.135.30.121, executor 9): java.lang.IllegalArgumentException: Number of column in CSV header is not equal to number of fields in the schema:
Header length: 7, schema size: 6
CSV file: s3://************************************cache.csv
at org.apache.spark.sql.execution.datasources.csv.CSVDataSource$$anonfun$checkHeaderColumnNames$1.apply(CSVDataSource.scala:180)
at org.apache.spark.sql.execution.datasources.csv.CSVDataSource$$anonfun$checkHeaderColumnNames$1.apply(CSVDataSource.scala:176)
at scala.Option.foreach(Option.scala:257)
at .....
It seems to be saying that my DataFrame's schema has 6 fields but the CSV has 7. I don't understand which CSV it's referring to, since I'm actually trying to create a new CSV from the DataFrame... Any insight into this specific problem, or into how write_dynamic_frame_from_options works in general, would be very helpful!
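In case it helps narrow things down: the checkHeaderColumnNames frames in the trace come from Spark's CSV read path, and Spark evaluates lazily, so the read of the source cache.csv presumably only happens once the write action runs. Below is a minimal sketch of a check I could run (the S3 path is a placeholder for the redacted one in the error message, not my real path):

spark = glueContext.spark_session
# Read the suspect input file directly and compare its header width
# against the 6-field schema of originalDf.
raw = spark.read.option("header", "true").csv("s3://<bucket>/<prefix>/cache.csv")
print(len(raw.columns), raw.columns)  # does this report 7 columns?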
Here is the source code of the function in my job that causes this issue.
def update_geocache(glueContext, originalDf, newDf):
    logger.info("Got the two df's to union")
    logger.info("Schema of the original df")
    originalDf.printSchema()
    logger.info("Schema of the new df")
    newDf.printSchema()

    # add the two DataFrames together
    unioned_df = originalDf.unionByName(newDf).distinct()
    logger.info("Schema of the union")
    unioned_df.printSchema()
    ## root
    # |-- location_key: string (nullable = true)
    # |-- addr1: string (nullable = true)
    # |-- addr2: string (nullable = true)
    # |-- zip: string (nullable = true)
    # |-- lat: string (nullable = true)
    # |-- lon: string (nullable = true)

    # Create just 1 partition, because there is so little data
    unioned_df = unioned_df.repartition(1)
    logger.info("Unioned the geocache and the new addresses")

    # Convert back to dynamic frame
    dynamic_frame = DynamicFrame.fromDF(
        unioned_df, glueContext, "dynamic_frame")
    logger.info("Converted the unioned tables to a Dynamic Frame")

    # Write data back to S3
    # THIS IS THE LINE THAT THROWS THE EXCEPTION
    glueContext.write_dynamic_frame.from_options(
        frame=dynamic_frame,
        connection_type="s3",
        connection_options={
            "path": "s3://" + S3_BUCKET + "/" + TEMP_FILE_LOCATION
        },
        format="csv"
    )
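
For reference, my understanding from the Glue docs is that the write_dynamic_frame_from_options method I mentioned above is just the convenience form of the write_dynamic_frame.from_options call used here. A sketch of that form with the CSV format options spelled out (the writeHeader/separator values are illustrative, not from my actual job):

glueContext.write_dynamic_frame_from_options(
    frame=dynamic_frame,
    connection_type="s3",
    connection_options={
        "path": "s3://" + S3_BUCKET + "/" + TEMP_FILE_LOCATION
    },
    format="csv",
    format_options={"writeHeader": True, "separator": ","}
)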