I am trying to switch from reading flat csv files to Avro files on Spark. Following https://github.com/databricks/spark-avro I use:
import com.databricks.spark.avro._
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
val df = sqlContext.read.avro("gs://logs.xyz.com/raw/2016/04/20/div1/div2/2016-04-20-08-28-35.UTC.blah-blah.avro")
and get:
java.lang.UnsupportedOperationException: This mix of union types is not supported (see README): ArrayBuffer(STRING)
The README clearly states:
This library supports reading all Avro types, with the exception of complex union types. It uses the following mapping from Avro types to Spark SQL types:
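My guess, and it is only a guess, is that one or more fields in my file are declared as a one-branch union (for example ["string"]) instead of a plain "string", and that this is the kind of union the library rejects with ArrayBuffer(STRING). A minimal, hypothetical schema of that shape, just to show what I mean (not taken from the real file):

import org.apache.avro.Schema

// Hypothetical minimal schema: "referer" declared as a one-branch union
// ["string"] rather than a plain "string" - my assumption about what
// produces the ArrayBuffer(STRING) in the exception above.
val schemaJson = """
{
  "type": "record",
  "name": "log_record",
  "fields": [
    {"name": "referer", "type": ["string"]}
  ]
}
"""
val schema = new Schema.Parser().parse(schemaJson)
println(schema.getField("referer").schema().getType)  // prints UNION, not STRING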
When I do a plain text read of the same file I can see the schema:
val df = sc.textFile("gs://logs.xyz.com/raw/2016/04/20/div1/div2/2016-04-20-08-28-35.UTC.blah-blah.avro")
df.take(2).foreach(println)
{"name":"log_record","type":"record","fields":[{"name":"request","type":{"type":"record","name":"request_data ","fields":[{"name":"datetime","type":"string"},{"name":"ip","type":"string"},{"name":"host ","type":"string"},{"name":"uri","type":"string"},{"name":"request_uri","type":"string"},{"name ":"referer","type":"string"},{"name":"useragent","type":"string"}]}}
<------- excerpt of the full output ------->
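Since the text read above only shows the raw container header, the sketch below is how I was thinking of dumping the actual writer schema via the Avro Java API; it assumes the GCS connector is on the classpath, which should already be the case on Dataproc:

import org.apache.avro.file.DataFileReader
import org.apache.avro.generic.{GenericDatumReader, GenericRecord}
import org.apache.avro.mapred.FsInput
import org.apache.hadoop.fs.Path

// Open the Avro container through the Hadoop filesystem layer and
// pretty-print the writer schema embedded in the file.
val avroPath = new Path("gs://logs.xyz.com/raw/2016/04/20/div1/div2/2016-04-20-08-28-35.UTC.blah-blah.avro")
val input = new FsInput(avroPath, sc.hadoopConfiguration)
val reader = DataFileReader.openReader(input, new GenericDatumReader[GenericRecord]())
println(reader.getSchema.toString(true))
reader.close()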
Since I have almost no control over the format in which I receive these files, my question is - is there a workaround that someone has tested and can recommend?
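One direction I have been sketching (untested, so treat it as an assumption rather than a working solution) is to bypass spark-avro completely: read the records as GenericRecord through the Avro MapReduce input format, turn each record into its JSON text form, and let Spark infer a schema from that:

import org.apache.avro.generic.GenericRecord
import org.apache.avro.mapred.AvroKey
import org.apache.avro.mapreduce.AvroKeyInputFormat
import org.apache.hadoop.io.NullWritable

// Read (AvroKey[GenericRecord], NullWritable) pairs, convert each record to
// its JSON string form right away (the Hadoop reader reuses objects), and
// let sqlContext.read.json infer a schema from the strings.
val path = "gs://logs.xyz.com/raw/2016/04/20/div1/div2/2016-04-20-08-28-35.UTC.blah-blah.avro"
val records = sc.newAPIHadoopFile[AvroKey[GenericRecord], NullWritable, AvroKeyInputFormat[GenericRecord]](path)
val jsonLines = records.map { case (key, _) => key.datum.toString }
val df = sqlContext.read.json(jsonLines)

Whether the schema Spark infers from the JSON text is close enough to the real Avro schema is exactly what I am unsure about, hence the question.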
I am running on GC Dataproc with
MASTER=yarn-cluster spark-shell --num-executors 4 --executor-memory 4G --executor-cores 4 --packages com.databricks:spark-avro_2.10:2.0.1,com.databricks:spark-csv_2.11:1.3.0
Any help would be much appreciated.....