
I wrote these four simple lines of code:

import pyspark
from pyspark.sql import SparkSession
spa = SparkSession.builder.getOrCreate()

spa.createDataFrame([(1,2,3)], ["count"])

But the createDataFrame call produces this huge error:

Py4JError                                 Traceback (most recent call last)
<ipython-input> in <module>
      3 spa = SparkSession.builder.getOrCreate()
      4
----> 5 spa.createDataFrame([(1,2,3)], ["count"])

c:\users\hp\appdata\local\programs\python\python37\lib\site-packages\pyspark\sql\session.py in createDataFrame(self, data, schema, samplingRatio, verifySchema)
    690         else:
    691             rdd, schema = self._createFromLocal(map(prepare, data), schema)
--> 692         jrdd = self._jvm.SerDeUtil.toJavaArray(rdd._to_java_object_rdd())
    693         jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())
    694         df = DataFrame(jdf, self._wrapped)

c:\users\hp\appdata\local\programs\python\python37\lib\site-packages\pyspark\rdd.py in _to_java_object_rdd(self)
   2294         """
   2295         rdd = self._pickled()
-> 2296         return self.ctx._jvm.SerDeUtil.pythonToJava(rdd._jrdd, True)
   2297
   2298     def countApprox(self, timeout, confidence=0.95):

c:\users\hp\appdata\local\programs\python\python37\lib\site-packages\pyspark\rdd.py in _jrdd(self)
   2472                                              self._jrdd_deserializer, profiler)
   2473         python_rdd = self.ctx._jvm.PythonRDD(self._prev_jrdd.rdd(), wrapped_func,
-> 2474                                              self.preservesPartitioning)
   2475         self._jrdd_val = python_rdd.asJavaRDD()
   2476

c:\users\hp\appdata\local\programs\python\python37\lib\site-packages\py4j\java_gateway.py in __call__(self, *args)
   1523         answer = self._gateway_client.send_command(command)
   1524         return_value = get_return_value(
-> 1525             answer, self._gateway_client, None, self._fqn)
   1526
   1527         for temp_arg in temp_args:

c:\users\hp\appdata\local\programs\python\python37\lib\site-packages\pyspark\sql\utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

c:\users\hp\appdata\local\programs\python\python37\lib\site-packages\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
    330                 raise Py4JError(
    331                     "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n".
--> 332                     format(target_id, ".", name, value))
    333             else:
    334                 raise Py4JError(

> Py4JError: An error occurred while calling None.org.apache.spark.api.python.PythonRDD. Trace:
py4j.Py4JException: Constructor org.apache.spark.api.python.PythonRDD([class org.apache.spark.rdd.ParallelCollectionRDD, class org.apache.spark.api.python.PythonFunction, class java.lang.Boolean]) does not exist
    at py4j.reflection.ReflectionEngine.getConstructor(ReflectionEngine.java:179)
    at py4j.reflection.ReflectionEngine.getConstructor(ReflectionEngine.java:196)
    at py4j.Gateway.invoke(Gateway.java:237)
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)

Why is this happening? The code is essentially the same as in other tutorials, and it runs fine there...


1 Answer


Try this, it works. Put a comma after each value when you initialize the data, so that every row is a one-element tuple.

import pyspark
from pyspark.sql import SparkSession

spa = SparkSession.builder.getOrCreate()
sc = spa.sparkContext  # the SparkContext backing the session, needed for parallelize

# Each row is the one-element tuple (n,); the trailing comma is what
# makes it a tuple rather than a plain integer.
df = spa.createDataFrame(sc.parallelize([(1,), (2,), (3,)]), ["count"])
df.show()

Output:

+-----+
|count|
+-----+
|    1|
|    2|
|    3|
+-----+
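
For what it's worth, you don't need parallelize here at all: createDataFrame also accepts a plain Python list of row tuples and distributes it for you. A minimal sketch producing the same output:

from pyspark.sql import SparkSession

spa = SparkSession.builder.getOrCreate()

# Pass the list of one-element tuples directly; (1,) is a tuple,
# while (1) would just be the integer 1.
df = spa.createDataFrame([(1,), (2,), (3,)], ["count"])
df.show()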

Hope this helps!

answered 2020-01-09T06:32:15.343