
I am using Spark 1.6.1 and Scala 2.10.5. I am trying to read a CSV file with the com.databricks spark-csv package. When launching the spark-shell, I use the line below as well:

spark-shell --packages com.databricks:spark-csv_2.10:1.5.0 --driver-class-path path to/sqljdbc4.jar

and below is the whole code:

import java.util.Properties
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import org.apache.spark.sql.SQLContext


val conf = new SparkConf().setAppName("test").setMaster("local").set("spark.driver.allowMultipleContexts", "true");
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)

import sqlContext.implicits._

val df = SQLContext.read().format("com.databricks.spark.csv").option("inferScheme","true").option("header","true").load("path_to/data.csv");

I am getting the error below:

error: value read is not a member of object org.apache.spark.sql.SQLContext, with the "^" pointing at "SQLContext.read().format" in the error message.

I did try the suggestions available on Stack Overflow, as well as on other sites, but nothing seems to be working.


1 Answer


Writing SQLContext means accessing the companion object — i.e. the static members of the class.

You should use the sqlContext variable instead, because read is not static: it is an instance method defined on the class.

So the code should be:

val df = sqlContext.read.format("com.databricks.spark.csv").option("inferSchema","true").option("header","true").load("path_to/data.csv");

(Note also that the spark-csv option is spelled inferSchema, not inferScheme as in the original snippet.)
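The distinction here — companion-object (static-style) members vs. instance methods — can be sketched in plain Scala without Spark. The Reader class and its methods below are purely illustrative, not part of any Spark API:

```scala
// A class with an instance method, plus its companion object.
class Reader(path: String) {
  // `read` is an instance method: it requires a Reader instance.
  def read: String = s"reading $path"
}

object Reader {
  // The companion object has its own members; it does NOT
  // automatically expose the class's instance methods.
  def describe: String = "companion object for Reader"
}

object Demo extends App {
  // Reader.read  // would not compile: value read is not a member of object Reader
  val r = new Reader("data.csv") // analogous to `val sqlContext = new SQLContext(sc)`
  println(r.read)                // instance access works, like `sqlContext.read`
}
```

In the same way, read is defined on a SQLContext instance, so it must be called on the sqlContext value you constructed from the SparkContext, not on the SQLContext companion object.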
Answered 2017-03-08T11:27:52.720