I'm trying to solve this problem on Kaggle using Spark.
The input hierarchy looks like this:
drivers/{driver_id}/trip#.csv
e.g., drivers/1/1.csv
drivers/1/2.csv
drivers/2/1.csv
I want to read the parent directory drivers and, for each subdirectory, build a pairRDD whose key is (sub_directory, file_name) and whose value is the contents of the file.
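For clarity, since wholeTextFiles returns (full_path, content) pairs, the mapping I have in mind would be a sketch along these lines (the path parsing is my own assumption and expects forward-slash paths):

val text = sc.wholeTextFiles("drivers")
val keyed = text.map { case (path, content) =>
  val parts = path.split("/")
  val fileName = parts.last                  // e.g. "1.csv"
  val subDirectory = parts(parts.length - 2) // e.g. "1"
  ((subDirectory, fileName), content)
}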
I checked this link and tried
val text = sc.wholeTextFiles("drivers")
text.collect()
This fails with the error:
java.lang.ArrayIndexOutOfBoundsException: 0
at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat$OneFileInfo.<init>(CombineFileInputFormat.java:591)
at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getMoreSplits(CombineFileInputFormat.java:283)
at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:243)
at org.apache.spark.rdd.WholeTextFileRDD.getPartitions(NewHadoopRDD.scala:267)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1779)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:885)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:109)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
at org.apache.spark.rdd.RDD.collect(RDD.scala:884)
But when I run the following, it works:
val text = sc.wholeTextFiles("drivers/1")
text.collect()
But I don't want to do that, because it means listing the drivers directory myself, looping over its subdirectories, and calling wholeTextFiles once for each one.
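The workaround I'm trying to avoid would look roughly like this (a hypothetical sketch using java.io.File, so it assumes the data lives on the local filesystem):

import java.io.File

val driverDirs = new File("drivers").listFiles.filter(_.isDirectory)
val all = driverDirs.map { dir =>
  sc.wholeTextFiles(dir.getPath).map { case (path, content) =>
    ((dir.getName, path.split("/").last), content)
  }
}.reduce(_ union _)

Is there a way to get the same result with a single call on the parent directory?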