
I am using the following code to read a CSV file with PySpark:

import os
import sys

os.environ["SPARK_HOME"] = "D:\ProgramFiles\spark-2.1.0-bin-hadoop2.7"
os.environ["PYLIB"] = os.environ["SPARK_HOME"] + "/python/lib"
sys.path.insert(0, os.environ["PYLIB"] +"/py4j-0.10.4-src.zip")
sys.path.insert(0, os.environ["PYLIB"] +"/pyspark.zip")

from pyspark import SparkConf
from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.types import *

conf = SparkConf() 
conf.setMaster('local') 
conf.setAppName('test')
sc = SparkContext(conf=conf)

sqlContext = SQLContext(sc)

df = sqlContext.read.format("com.databricks.spark.csv").schema(customSchema).option("header", "true").option("mode", "DROPMALFORMED").load("iris.csv")

df.show()

The error thrown is as follows:

File "<stdin>", line 1, in <module>
  df = sqlContext.read.format("com.databricks.spark.csv").schema(customSchema).option("header", "true").option("mode", "DROPMALFORMED").load("iris.csv")
File "D:\ProgramFiles\spark-2.1.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\context.py", line 464, in read
  return DataFrameReader(self)
File "D:\ProgramFiles\spark-2.1.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\readwriter.py", line 70, in __init__
  self._jreader = spark._ssql_ctx.read()
File "D:\ProgramFiles\spark-2.1.0-bin-hadoop2.7\python\lib\py4j-0.10.4-src.zip\py4j\java_gateway.py", line 1133, in __call__
  answer, self.gateway_client, self.target_id, self.name)
File "D:\ProgramFiles\spark-2.1.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\utils.py", line 79, in deco
  raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)

IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.internal.SessionState':"


1 Answer


The way of reading a CSV shown above applies to Spark versions < 2.0.0.

For Spark 2.0.0 and later, you need to read it through a SparkSession:

spark.read.csv("some_file.csv", header=True, mode="DROPMALFORMED", schema=schema)

or

(spark.read
 .schema(schema)
 .option("header", "true")
 .option("mode", "DROPMALFORMED")
 .csv("some_file.csv"))
answered 2018-10-12T19:31:30.923