I'm running into the following problem when using UDFs in PySpark.
My code works fine as long as I don't use any UDFs. Simple operations such as selecting columns, or SQL functions such as concat, cause no problems. But as soon as I perform an action on a DataFrame that uses a UDF, the program crashes with the following exception:
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/Users/szymonk/Desktop/Projects/SparkTest/venv/lib/python2.7/site-packages/pyspark/jars/spark-unsafe_2.11-2.4.3.jar) to method java.nio.Bits.unaligned()
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
19/06/05 09:24:37 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Traceback (most recent call last):
File "/Users/szymonk/Desktop/Projects/SparkTest/Application.py", line 59, in <module>
transformations.select(udf_example(col("gender")).alias("udf_example")).show()
File "/Users/szymonk/Desktop/Projects/SparkTest/venv/lib/python2.7/site-packages/pyspark/sql/dataframe.py", line 378, in show
print(self._jdf.showString(n, 20, vertical))
File "/Users/szymonk/Desktop/Projects/SparkTest/venv/lib/python2.7/site-packages/py4j/java_gateway.py", line 1257, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/Users/szymonk/Desktop/Projects/SparkTest/venv/lib/python2.7/site-packages/pyspark/sql/utils.py", line 79, in deco
raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: u'Unsupported class file major version 55'
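From what I can tell, the "class file major version" in that message maps directly to a Java release (major version minus 44), so 55 means Java 11 bytecode, while Spark 2.4.x is built against Java 8 (major version 52). A quick sanity check of that mapping (my own helper, not a Spark API):

```python
# Class-file major version N corresponds to Java release N - 44
# (52 -> Java 8, 55 -> Java 11, and so on).
def java_release(major_version):
    return major_version - 44

print(java_release(55))  # the version from my stack trace -> 11
print(java_release(52))  # what Spark 2.4.x is built against -> 8
```

So it looks like Spark is somehow picking up a Java 11 runtime even though I tried to point it at Java 8.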
I have already tried changing JAVA_HOME as suggested in Pyspark error - Unsupported class file major version 55, but it didn't help.
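For reference, this is roughly how I set JAVA_HOME when I tried that suggestion (the `-v 1.8` lookup assumes a JDK 8 is installed; adjust for your machine):

```shell
# Point JAVA_HOME at a JDK 8 install using the macOS java_home helper
export JAVA_HOME="$(/usr/libexec/java_home -v 1.8)"
export PATH="$JAVA_HOME/bin:$PATH"
java -version   # should report 1.8.x before PySpark is launched
```

Note that this only affects the shell it runs in; PyCharm run configurations may not pick it up.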
My code is nothing fancy. I just define a simple udf that should return the length of the values in the "gender" column:
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, udf

# Create the session first (omitted from my original paste)
spark = SparkSession.builder.appName("SparkTest").getOrCreate()

transformations = spark.read.csv("Resources/PersonalData.csv", header=True)
udf_example = udf(lambda x: len(x))  # no returnType given, so it defaults to StringType
transformations.select(udf_example(col("gender")).alias("udf_example")).show()
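To rule out the lambda itself, the same logic runs fine as plain Python outside Spark, so the crash has to come from the Spark/Java side:

```python
# The exact function wrapped by the udf, tested without Spark
get_length = lambda x: len(x)

print(get_length("female"))  # -> 6
```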
I'm not sure if it matters, but I'm using PyCharm on a Mac.