I have a PySpark script with the line,
df_output = df.select("*",$checkcol)
df_output.show()
It works fine when the variable is hard-coded, but when it is parameterised it throws an error saying,
pyspark.sql.utils.AnalysisException: 'cannot resolve \'`"*", F.....
where checkcol
is a variable with a value like,
checkcol -
F.when(F.col("colA")=='null',"Yes").otherwise(date_validation_udf("colA")).alias("colA_DateCheck"),
F.when(F.col("colB")=='null',"Yes").otherwise(date_validation_udf("colB")).alias("colB_DateCheck"),
F.when(F.col("colC")=='null',"Yes").otherwise(date_validation_udf("colC")).alias("colC_DateCheck"),
F.when(F.col("colD")=='null',"Yes").otherwise(num_check_udf("colD")).alias("colD_NumCheck"),
F.when(F.col("colE")=='null',"Yes").otherwise(num_check_udf("colE")).alias("colE_NumCheck"),
F.when(F.col("colF")=='null',"Yes").otherwise(num_check_udf("colF")).alias("colF_NumCheck"),
F.when(F.col("colG")=='null',"Yes").otherwise(num_check_udf("colG")).alias("colG_NumCheck")
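For context, the likely cause: DataFrame.select(*cols) takes its columns as separate arguments, so when checkcol is a single string holding all the expressions, Spark receives it as one column name and cannot resolve it. A minimal sketch (plain Python, no Spark session; fake_select is a hypothetical stand-in for select) of the difference between passing one string and unpacking a Python list of expressions:

```python
# Sketch: PySpark's select(*cols) receives columns as separate
# arguments. A single string of comma-joined expressions arrives
# as ONE argument (one unresolvable column name), not several.

def fake_select(*cols):
    """Stand-in for DataFrame.select(*cols): returns the args it received."""
    return list(cols)

# Hard-coded call: three separate arguments -> three "columns".
hardcoded = fake_select("*", "exprA", "exprB")

# Parameterised as one big string: only two arguments reach select,
# and the second is treated as a single column name.
checkcol = "exprA, exprB"
parameterised = fake_select("*", checkcol)

# The usual fix: keep the expressions in a Python list of Column
# objects and unpack it with * at the call site.
checkcols = ["exprA", "exprB"]
unpacked = fake_select("*", *checkcols)

print(len(hardcoded), len(parameterised), len(unpacked))  # 3 2 3
```

With real Spark the same idea reads `df.select("*", *checkcols)` where checkcols is a list of F.when(...) Column expressions rather than a string.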