
When inferSchema is enabled, calling count() on a DataFrame loaded from IBM Bluemix Object Storage throws the following exception:

Name: org.apache.spark.SparkException
Message: Job aborted due to stage failure: Task 3 in stage 43.0 failed 10 times, most recent failure: Lost task 3.9 in stage 43.0 (TID 166, yp-spark-dal09-env5-0034): java.lang.NumberFormatException: null
    at java.lang.Integer.parseInt(Integer.java:554)
    at java.lang.Integer.parseInt(Integer.java:627)
    at scala.collection.immutable.StringLike$class.toInt(StringLike.scala:272)
    at scala.collection.immutable.StringOps.toInt(StringOps.scala:29)
    at org.apache.spark.sql.execution.datasources.csv.CSVTypeCast$.castTo(CSVInferSchema.scala:241)
    at org.apache.spark.sql.execution.datasources.csv.CSVRelation$$anonfun$csvParser$3.apply(CSVRelation.scala:116)
    at org.apache.spark.sql.execution.datasources.csv.CSVRelation$$anonfun$csvParser$3.apply(CSVRelation.scala:85)
    at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat$$anonfun$buildReader$1$$anonfun$apply$2.apply(CSVFileFormat.scala:128)
    at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat$$anonfun$buildReader$1$$anonfun$apply$2.apply(CSVFileFormat.scala:127)
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:91)

If I disable inferSchema, the exception above does not occur. Why am I getting this exception? How many rows does databricks read by default when inferSchema is enabled?
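The bottom of the stack trace shows what is going on: a CSV row has fewer tokens than the inferred schema expects, so the cast path hands null to Integer.parseInt, which throws NumberFormatException. A minimal sketch of that failure mode, with plain Python standing in for the Scala cast path (the function name and padding logic are illustrative, not Spark's actual API):

```python
import csv
import io

# Inferred schema expects 3 integer columns, but the second row is short one token.
rows = list(csv.reader(io.StringIO("1,2,3\n4,5\n")))

def cast_row(tokens, width):
    # Pad missing trailing tokens with None, like a reader whose schema
    # is wider than the parsed row.
    padded = tokens + [None] * (width - len(tokens))
    return [int(t) for t in padded]  # int(None) raises, like parseInt(null)

print(cast_row(rows[0], 3))  # the full row casts cleanly
try:
    cast_row(rows[1], 3)     # the short row: int(None) raises
except TypeError as e:
    print("cast failed:", e)
```

With inferSchema disabled every column stays a string, so the missing token is simply null and nothing tries to parse it, which is why the count() succeeds in that case.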


1 Answer


This is actually a bug pulled in by the spark-csv package in Spark 2.0. It has been fixed and merged into Spark 2.1.

Here is the relevant PR: [SPARK-18269][SQL] CSV datasource should read null properly when schema is larger than parsed tokens

Since you are already on Spark 2.0, you can easily upgrade to 2.1 and drop the spark-csv package; it is not needed anyway.
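If spark-csv was being pulled in via `--packages`, dropping it amounts to removing that flag and relying on the built-in CSV source; a sketch of the invocation change (the application file name and package version shown are illustrative):

```shell
# Before (external package): spark-csv supplied the CSV reader.
spark-submit --packages com.databricks:spark-csv_2.11:1.5.0 app.py

# After (Spark 2.x): the CSV source is built in, so no extra package is needed.
spark-submit app.py
# ...and inside the job: spark.read.option("inferSchema", "true").csv(path)
```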

Answered 2017-06-11T16:29:27.797