I have a hadoopFiles object, which was created from sc.newAPIHadoopFile:
scala> hadoopFiles
res1: org.apache.spark.rdd.RDD[(org.apache.hadoop.io.LongWritable, org.apache.hadoop.io.Text)] = UnionRDD[64] at union at <console>:24
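For context, hadoopFiles was built roughly like this (the paths and the number of inputs below are placeholders for my actual setup; the point is just that it is a union of several newAPIHadoopFile RDDs, which is why a UnionRDD shows up above):

import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat

// each newAPIHadoopFile call reads one path as (byte offset, line) pairs;
// the UnionRDD above comes from unioning several of them
val hadoopFiles =
  sc.newAPIHadoopFile("/path/to/input-1",
    classOf[TextInputFormat], classOf[LongWritable], classOf[Text]) union
  sc.newAPIHadoopFile("/path/to/input-2",
    classOf[TextInputFormat], classOf[LongWritable], classOf[Text])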
I intend to traverse all the lines in hadoopFiles with a map operation and filter each one; an if check is applied inside, and it raises an error:
scala> val rowRDD = hadoopFiles.map(line =>
     |   line._2.toString.split("\\^") map {
     |     field => {
     |       var pair = field.split("=", 2)
     |       if(pair.length == 2)
     |         (pair(0) -> pair(1))
     |     }
     |   } toMap
     | ).map(kvs => Row(kvs("uuid"), kvs("ip"), kvs("plt").trim))
<console>:33: error: Cannot prove that Any <:< (T, U).
} toMap
^
However, if I remove the if(pair.length == 2) part, it works fine:
scala> val rowRDD = hadoopFiles.map(line =>
     |   line._2.toString.split("\\^") map {
     |     field => {
     |       var pair = field.split("=", 2)
     |       (pair(0) -> pair(1))
     |     }
     |   } toMap
     | ).map(kvs => Row(kvs("uuid"), kvs("ip"), kvs("plt").trim))
warning: there was one feature warning; re-run with -feature for details
rowRDD: org.apache.spark.rdd.RDD[org.apache.spark.sql.catalyst.expressions.Row] = MappedRDD[66] at map at <console>:33
Could someone explain the reason for this behavior and show me the correct way to apply the if check? Thanks a lot!
P.S. We can reproduce it with this simplified example:
"1=a^2=b^3".split("\\^") map {
field => {
var pair = field.split("=", 2)
if(pair.length == 2)
pair(0) -> pair(1)
else
return
}
} toMap
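For reference, here is a minimal sketch of what I'm effectively trying to express (assuming fields without '=' can simply be dropped), using collect instead of map; I'm not sure whether this is the right approach, which is part of what I'm asking:

"1=a^2=b^3".split("\\^").collect {
  // keep only the fields that actually contain '='; drop the rest
  case field if field.contains("=") =>
    val Array(k, v) = field.split("=", 2)
    k -> v
}.toMap
// expected: Map(1 -> a, 2 -> b)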