I'm getting a compile-time error while trying to write a SnappySQLJob. Am I missing a dependency?
The error message is:
The type org.apache.spark.sql.catalyst.TableIdentifier cannot be resolved. It is indirectly referenced from required .class files
@Override
public Object runJob(Object sparkContext, Config jobConfig) {
    SnappyContext snappyContext = (SnappyContext) sparkContext;
    String fileResource = "data.csv";
    DataFrame dataFrame = snappyContext.read()
            .format("com.databricks.spark.csv")
            .option("header", "true")
            .option("inferSchema", "true")
            .load(fileResource);
    // Compile-time error is on this line
    dataFrame.write().insertInto("example_table_col");
    return null;
}
Here are my pom.xml dependencies:
<dependency>
    <groupId>io.snappydata</groupId>
    <artifactId>snappy-core_2.10</artifactId>
    <version>0.2.1-PREVIEW</version>
</dependency>
<dependency>
    <groupId>io.snappydata</groupId>
    <artifactId>snappy-tools_2.10</artifactId>
    <version>0.2.1-PREVIEW</version>
    <exclusions>
        <exclusion>
            <artifactId>jdk.tools</artifactId>
            <groupId>jdk.tools</groupId>
        </exclusion>
        <exclusion>
            <artifactId>logback-classic</artifactId>
            <groupId>ch.qos.logback</groupId>
        </exclusion>
    </exclusions>
</dependency>
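Since TableIdentifier lives in Spark's spark-catalyst module, my guess is that I need to put that artifact on the compile classpath explicitly, something like the sketch below. The version 1.6.1 is just my assumption about which Spark release SnappyData 0.2.1-PREVIEW builds against and may need to be different:

<!-- Guess: spark-catalyst provides org.apache.spark.sql.catalyst.TableIdentifier.
     The version is an assumption and may need to match SnappyData's Spark build. -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-catalyst_2.10</artifactId>
    <version>1.6.1</version>
</dependency>

Is that the right fix, or should one of the snappydata artifacts already be pulling Catalyst in transitively?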