I am using the following code to read a CSV file with PySpark:
import os
import sys
os.environ["SPARK_HOME"] = "D:\ProgramFiles\spark-2.1.0-bin-hadoop2.7"
os.environ["PYLIB"] = os.environ["SPARK_HOME"] + "/python/lib"
sys.path.insert(0, os.environ["PYLIB"] +"/py4j-0.10.4-src.zip")
sys.path.insert(0, os.environ["PYLIB"] +"/pyspark.zip")
from pyspark import SparkConf
from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.types import *
# Local Spark context plus the Spark 1.x-style SQLContext entry point
conf = SparkConf()
conf.setMaster('local')
conf.setAppName('test')
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)
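# NOTE: customSchema is used below but was never defined in the post;
# the following definition is an assumption based on the standard iris dataset columns.
customSchema = StructType([
    StructField("sepal_length", DoubleType(), True),
    StructField("sepal_width", DoubleType(), True),
    StructField("petal_length", DoubleType(), True),
    StructField("petal_width", DoubleType(), True),
    StructField("species", StringType(), True)])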
df = sqlContext.read.format("com.databricks.spark.csv") \
    .schema(customSchema) \
    .option("header", "true") \
    .option("mode", "DROPMALFORMED") \
    .load("iris.csv")
df.show()
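For comparison, on Spark 2.x the idiomatic entry point is SparkSession, which subsumes SparkConf/SparkContext/SQLContext and ships a built-in csv reader, so the external com.databricks.spark.csv package is not needed. A minimal sketch, assuming the same iris.csv and the customSchema defined above:

from pyspark.sql import SparkSession

# Single entry point in Spark 2.x; replaces the manual
# SparkConf / SparkContext / SQLContext setup above
spark = SparkSession.builder \
    .master("local") \
    .appName("test") \
    .getOrCreate()

# Built-in csv source with the same schema, header, and malformed-row handling
df = spark.read.csv("iris.csv", schema=customSchema,
                    header=True, mode="DROPMALFORMED")
df.show()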
The error thrown is as follows:
  File "", line 1, in <module>
    df = sqlContext.read.format("com.databricks.spark.csv").schema(customSchema).option("header", "true").option("mode", "DROPMALFORMED").load("iris.csv")
  File "D:\ProgramFiles\spark-2.1.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\context.py", line 464, in read
    return DataFrameReader(self)
  File "D:\ProgramFiles\spark-2.1.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\readwriter.py", line 70, in __init__
    self._jreader = spark._ssql_ctx.read()
  File "D:\ProgramFiles\spark-2.1.0-bin-hadoop2.7\python\lib\py4j-0.10.4-src.zip\py4j\java_gateway.py", line 1133, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "D:\ProgramFiles\spark-2.1.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\utils.py", line 79, in deco
    raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.internal.SessionState':"