
I am using Azure Databricks Autoloader to process files from ADLS Gen 2 into Delta Lake. I wrote my foreachBatch function (PySpark) as follows:

# Rename incoming dataframe columns
schemadf = transformschema.renameColumns(microBatchDF, fileconfig)

# Apply simple transformation on schemadf using createOrReplaceTempView
modifieddf = applytransform(schemadf, targettable, targetdatabase, fileconfig)

# Add audit cols to modifieddf
transformdf = auditlineage.addauditcols(modifieddf, fileconfig, appid)
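For context, a minimal sketch of how such a foreachBatch function is typically attached to an Autoloader stream. The paths, file format, checkpoint location and the process_batch wrapper below are placeholders, not taken from the original post:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def process_batch(microBatchDF, batchId):
    # Same three steps as above: rename, transform, add audit columns
    schemadf = transformschema.renameColumns(microBatchDF, fileconfig)
    modifieddf = applytransform(schemadf, targettable, targetdatabase, fileconfig)
    transformdf = auditlineage.addauditcols(modifieddf, fileconfig, appid)
    # Write the enriched micro-batch to the Delta target
    transformdf.write.format("delta").mode("append").saveAsTable(f"{targetdatabase}.{targettable}")

(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "csv")                                   # assumed source format
    .option("cloudFiles.schemaLocation", "/mnt/schemas/md_customer")      # placeholder
    .load("abfss://raw@storageaccount.dfs.core.windows.net/md_customer/") # placeholder path
    .writeStream
    .foreachBatch(process_batch)
    .option("checkpointLocation", "/mnt/checkpoints/md_customer")         # placeholder
    .trigger(once=True)
    .start())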

Code for renameColumns

def renameColumns(dataframe, schema):
  # 'Schema' is a comma-separated string of target column names
  schemastr = schema['Schema']
  splitstr = list(schemastr.split(','))
  # Rename positionally: i-th existing column -> i-th configured name
  for c,n in zip(dataframe.columns,splitstr):
      dataframe=dataframe.withColumnRenamed(c,n)
  return dataframe
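As an aside, the same positional rename can be written more compactly with DataFrame.toDF, assuming the comma-separated schema string yields exactly one name per existing column:

def renameColumns(dataframe, schema):
    # toDF(*names) renames all columns positionally in a single call
    names = [n.strip() for n in schema['Schema'].split(',')]
    return dataframe.toDF(*names)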

Code for applytransform

def applytransform(inputdf,targettable,targetdatabase,fileconfig):  
  logger.info('Inside applytransform for Database/Table {}.{}',targetdatabase,targettable)
  inputdf.createOrReplaceTempView("src_to_transform")
  lspark = inputdf._jdf.sparkSession()
  if 'TransformQuery' in fileconfig and fileconfig['TransformQuery'] is not None:
    vsqlscript = fileconfig['TransformQuery']
    df = lspark.sql(vsqlscript)    
    logger.info("Applied Tranform")    
    return df
  else:
    logger.info("Passed DF")
    return inputdf

Code for addauditcols

from pyspark.sql.functions import lit, input_file_name
import datetime

def addauditcols(inputdf,fileconfig,app_id):
    now = datetime.datetime.now()
    print(type(inputdf))
    createdby = 'DatabricksJob-'+app_id
    datasource = fileconfig['Datasource']
    recordactiveind = 'Y'
    df = inputdf.withColumn('datasource',lit(datasource)).\
    withColumn('createdtimestamp',lit(now)).\
    withColumn('lastmodifiedtimestamp',lit(now)).\
    withColumn('createduserid',lit(createdby)).\
    withColumn('lastmodifieduserid',lit(createdby)).\
    withColumn('filepath',input_file_name()).\
    withColumn('recordactiveind',lit(recordactiveind))
    return df

The applytransform module returns a py4j.java_gateway.JavaObject instead of a regular pyspark.sql.dataframe.DataFrame, so I cannot apply simple withColumn()-style transformations to modifieddf inside the addauditcols module.
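For context, _jdf is the underlying Java DataFrame, so calling sparkSession() on it returns the JVM-side SparkSession via py4j, and anything that object produces stays a JavaObject on the Python side. A small illustration of the type difference (not from the original post):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.range(1)

jspark = df._jdf.sparkSession()        # py4j handle to the JVM SparkSession
print(type(jspark))                    # <class 'py4j.java_gateway.JavaObject'>
print(type(jspark.sql("select 1")))    # still a JavaObject, not a PySpark DataFrame

print(type(spark.sql("select 1")))     # <class 'pyspark.sql.dataframe.DataFrame'>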

The error I get is as follows:

2021-12-05 21:09:57.274 | INFO     | __main__:main:73 - modifieddf Type::: 
<class 'py4j.java_gateway.JavaObject'>
2021-12-05 21:09:57.421 | ERROR    | __main__:main:91 - Operating Failed for md_customer, with Exception Column is not iterable
Traceback (most recent call last):

  File "c:/Users/asdsad/integration-app\load2cleansed.py", line 99, in <module>
    main()
    └ <function main at 0x000001C570C263A0>

> File "c:/Users/asdsad/integration-app\load2cleansed.py", line 76, in main
    transformdf = auditlineage.addauditcols(modifieddf,fileconfig,appid)
                  │            │            │          │          └ 'local-1638760184357'
                  │            │            │          └ {'Schema': 'customernumber,customername,addrln1,city,statename,statecode,postalcode,countrycode,activeflag,sourcelastmodified...
                  │            │            └ JavaObject id=o48
                  │            └ <function addauditcols at 0x000001C570B55CA0>
                  └ <module 'core.wrapper.auditlineage' from 'c:\\Users\\asdsad\integration-app\\core\\wrapper\\a...

  File "c:\Users\1232\Documents\Code\ntegration-app\core\wrapper\auditlineage.py", line 30, in addauditcols
    df = inputdf.withColumn('datasource',lit(datasource)).\
         │                               │   └ 'DUMMY-CUST'
         │                               └ <function lit at 0x000001C570B79F70>
         └ JavaObject id=o48

  File "C:\Users\testapp\lib\site-packages\py4j\java_gateway.py", line 1296, in __call__
    args_command, temp_args = self._build_args(*args)
                              │    │            └ ('datasource', Column<'DUMMY-CUST'>)
                              │    └ <function JavaMember._build_args at 0x000001C5704B9280>
                              └ <py4j.java_gateway.JavaMember object at 0x000001C570C5B910>

  File "C:\Users\testapp\lib\site-packages\py4j\java_gateway.py", line 1260, in _build_args
    (new_args, temp_args) = self._get_args(args)
                            │    │         └ ('datasource', Column<'DUMMY-CUST'>)
                            │    └ <function JavaMember._get_args at 0x000001C5704B91F0>
                            └ <py4j.java_gateway.JavaMember object at 0x000001C570C5B910>

  File "C:\Users\testapp\lib\site-packages\py4j\java_gateway.py", line 1247, in _get_args
    temp_arg = converter.convert(arg, self.gateway_client)
               │         │       │    │    └ <py4j.java_gateway.GatewayClient object at 0x000001C5705C89A0>
               │         │       │    └ <py4j.java_gateway.JavaMember object at 0x000001C570C5B910>
               │         │       └ Column<'DUMMY-CUST'>
               │         └ <function ListConverter.convert at 0x000001C5704CE5E0>
               └ <py4j.java_collections.ListConverter object at 0x000001C5704C3FD0>

  File "C:\Users\testapp\lib\site-packages\py4j\java_collections.py", line 510, in convert
    for element in object:
                   └ Column<'DUMMY-CUST'>

  File "C:\Users\testapp\lib\site-packages\pyspark\sql\column.py", line 470, in __iter__
    raise TypeError("Column is not iterable")

TypeError: Column is not iterable

Any help is appreciated.


2 Answers


Please remove lspark = inputdf._jdf.sparkSession()

That handle is only useful for SQL commands that do not return a DataFrame, such as a Delta MERGE/upsert.

Please use spark.sql(vsqlscript) instead.

If that does not help, please also share your vsqlscript code.
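A minimal sketch of applytransform with that change applied, assuming spark is the ambient Python SparkSession that Databricks provides (note that inside foreachBatch the temp view is registered on the micro-batch's session, which is why the second answer below moves to a global temp view):

def applytransform(inputdf, targettable, targetdatabase, fileconfig):
    logger.info('Inside applytransform for Database/Table {}.{}', targetdatabase, targettable)
    inputdf.createOrReplaceTempView("src_to_transform")
    # No inputdf._jdf.sparkSession(): use the Python SparkSession so that
    # sql() returns a pyspark.sql.DataFrame instead of a py4j JavaObject
    if fileconfig.get('TransformQuery') is not None:
        df = spark.sql(fileconfig['TransformQuery'])
        logger.info("Applied Transform")
        return df
    logger.info("Passed DF")
    return inputdf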

Answered 2021-12-06T10:57:50.360

Thank you. Instead of createOrReplaceTempView I switched to createOrReplaceGlobalTempView, updated the vsqlscript to read from select * from global_temp.src_to_transform, and made the following change in applytransform:

def applytransform(inputdf,targettable,targetdatabase,fileconfig):
  
  logger.info('Inside applytransform for Database/Table {}.{}',targetdatabase,targettable)
  
  # Store in the global temp Databricks database
  inputdf.createOrReplaceGlobalTempView("src_to_transform")
  lspark = inputdf._jdf.sparkSession()
  if 'TransformQuery' in fileconfig and fileconfig['TransformQuery'] is not None:
    vsqlscript = fileconfig['TransformQuery']
    #df = lspark.sql(vsqlscript)
    df = spark.sql(vsqlscript)    
    logger.info("Applied Tranform")
    df.show()
    return df
  else:
    logger.info("Passed DF")
    return inputdf 
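For completeness, the TransformQuery then has to reference the global_temp database. The actual query is not shown in the thread; a hypothetical fileconfig entry might look like this:

# Hypothetical config; only the global_temp reference is the point here
fileconfig = {
    'Datasource': 'DUMMY-CUST',
    'TransformQuery': """
        SELECT customernumber,
               upper(customername) AS customername,
               city,
               statecode
        FROM global_temp.src_to_transform
    """,
}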
Answered 2021-12-06T17:21:46.680