I am new to Apache Iceberg and want to use it to perform read and write operations. I am using Spark 3.0.0.
Code:
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// winutils location, required when running Hadoop-based code on Windows
System.setProperty("hadoop.home.dir", "C:\\hadoop")

val conf = new SparkConf()
// Iceberg SQL extensions plus a Hive-backed session catalog
conf.set("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
conf.set("spark.sql.catalog.spark_catalog", "org.apache.iceberg.spark.SparkSessionCatalog")
conf.set("spark.sql.catalog.spark_catalog.type", "hive")
// A separate Hadoop catalog named "local" with a local warehouse directory
conf.set("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
conf.set("spark.sql.catalog.local.type", "hadoop")
conf.set("spark.sql.catalog.local.warehouse", "warehouse")

val spark = SparkSession.builder().master("local").config(conf).getOrCreate()

spark.sql("CREATE TABLE local.db.table (id bigint, data string) USING iceberg")
spark.sql("INSERT INTO local.db.table VALUES (1, 'a'), (2, 'b'), (3, 'c')")
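The INSERT above is where the exception below is thrown. For completeness, once the write works, the read I intend to run afterwards is just a plain query (a minimal sketch, using the table created above):

spark.sql("SELECT * FROM local.db.table").show()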
Dependencies:
compile group: 'org.scala-lang', name: 'scala-library', version: '2.12.0'
compile group: 'org.apache.spark', name: 'spark-core_2.12', version: '3.0.0'
compile group: 'org.apache.spark', name: 'spark-sql_2.12', version: '3.0.0'
compile group: 'org.apache.iceberg', name: 'iceberg-spark', version: '0.11.1'
compile group: 'org.apache.iceberg', name: 'iceberg-spark3-runtime', version: '0.11.1'
compile group: 'org.apache.hive', name: 'hive-metastore', version: '2.0.1'
Exception message:
java.lang.ClassCastException: org.apache.iceberg.shaded.org.apache.parquet.schema.MessageType cannot be cast to org.apache.parquet.schema.MessageType
at org.apache.iceberg.parquet.ParquetWriter.<init>(ParquetWriter.java:96)
at org.apache.iceberg.parquet.Parquet$WriteBuilder.build(Parquet.java:250)
at org.apache.iceberg.spark.source.SparkAppenderFactory.newAppender(SparkAppenderFactory.java:110)
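The org.apache.iceberg.shaded prefix in the trace makes me suspect a classpath clash: iceberg-spark3-runtime is a shaded jar that relocates Parquet, while iceberg-spark pulls in the unshaded Parquet classes. My unverified guess is that only one of the two Iceberg artifacts should be declared, e.g. just the shaded runtime:

compile group: 'org.apache.iceberg', name: 'iceberg-spark3-runtime', version: '0.11.1'

But I am not sure that this is the right fix.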
Do I need any other dependency to make the MessageType cast work?