I am trying to write files to IBM Cloud Object Storage with Spark, but every time I call the saveAsTextFile method I get this error:
Exception in thread "main" java.io.IOException: No FileSystem for scheme: s3d
My code looks like this (it is for testing purposes only):
val sparkConf = new SparkConf().setAppName("Test").setMaster("local")
val sc = new SparkContext(sparkConf)
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
sc.hadoopConfiguration.set("fs.s3d.service.endpoint", endPoint)
sc.hadoopConfiguration.set("fs.s3d.service.access.key", accessKey)
sc.hadoopConfiguration.set("fs.s3d.service.secret.key", secretKey)
val warehouseLocation = "file:${system:user.dir}/spark-warehouse"
val spark = SparkSession
  .builder()
  .appName("Test")
  .config("spark.sql.warehouse.dir", warehouseLocation)
  .getOrCreate()
val file = sc.textFile("src/main/resources/test.csv").map(line => line.split(","))
file.saveAsTextFile("s3d://rollup.service/result")
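For context, "No FileSystem for scheme: s3d" means Hadoop has no FileSystem implementation registered for the s3d:// scheme: the connector JAR (Stocator, in the IBM Cloud Object Storage case) must be on the classpath, and an fs.s3d.impl property must point at its FileSystem class. A minimal sketch of what that registration might look like, assuming the Stocator connector is available; the class name and property keys are assumptions to verify against the Stocator version in use:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sparkConf = new SparkConf().setAppName("Test").setMaster("local")
val sc = new SparkContext(sparkConf)

// Register a FileSystem implementation for the s3d:// scheme.
// Without an fs.s3d.impl mapping, Hadoop throws
// "No FileSystem for scheme: s3d". The class name below is an
// assumption based on Stocator's documentation.
sc.hadoopConfiguration.set("fs.s3d.impl",
  "com.ibm.stocator.fs.ObjectStoreFileSystem")

// Credentials and endpoint, as in the original snippet
// (endPoint, accessKey, secretKey are placeholders).
sc.hadoopConfiguration.set("fs.s3d.service.endpoint", endPoint)
sc.hadoopConfiguration.set("fs.s3d.service.access.key", accessKey)
sc.hadoopConfiguration.set("fs.s3d.service.secret.key", secretKey)
```

The Stocator JAR would also need to be added to the job, for example via --jars or a build dependency, for the class above to resolve.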
Can anyone help me? Thanks!