I am using Spark JDBC with Scala to read data from MS SQL Server, and I want to partition that data by a specific column. I don't want to set the partition column's lower and upper bounds manually. Can I read some kind of minimum and maximum value for that column and use them as the lower/upper bounds? Also, with this query I want to read all of the data from the database. At the moment the read looks like this:
def jdbcOptions() = Map[String, String](
  "driver" -> "db.driver",
  "url" -> "db.url",
  "user" -> "db.user",
  "password" -> "db.password",
  "customSchema" -> "db.custom_schema",
  "dbtable" -> "(select * from TestAllData where dayColumn > 'dayValue') as subq",
  "partitionColumn" -> "db.partitionColumn",
  "lowerBound" -> "1",
  "upperBound" -> "30",
  "numPartitions" -> "5"
)
val dataDF = sparkSession
  .read
  .format("jdbc")
  .options(jdbcOptions())
  .load()
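
To make the question concrete, here is a minimal sketch of what I have in mind: first issue a small aggregate query to fetch min/max of the partition column, then reuse those values as lowerBound/upperBound for the full read. It assumes the partition column is numeric, and the url/user/password values and column name are placeholders standing in for my real configuration:

import org.apache.spark.sql.SparkSession

// Placeholder connection settings (hypothetical values).
val url = "jdbc:sqlserver://host:1433;databaseName=TestDb"
val user = "user"
val password = "password"

// 1) Read only min/max of the partition column with a tiny aggregate subquery.
val boundsDF = sparkSession.read
  .format("jdbc")
  .option("url", url)
  .option("user", user)
  .option("password", password)
  .option("dbtable",
    "(select min(partitionColumn) as minVal, max(partitionColumn) as maxVal " +
    "from TestAllData where dayColumn > 'dayValue') as bounds")
  .load()

val row = boundsDF.head()
val lowerBound = row.getAs[Number]("minVal").longValue()
val upperBound = row.getAs[Number]("maxVal").longValue()

// 2) Reuse those values as the partition bounds for the full read.
val dataDF = sparkSession.read
  .format("jdbc")
  .option("url", url)
  .option("user", user)
  .option("password", password)
  .option("dbtable", "(select * from TestAllData where dayColumn > 'dayValue') as subq")
  .option("partitionColumn", "partitionColumn")
  .option("lowerBound", lowerBound.toString)
  .option("upperBound", upperBound.toString)
  .option("numPartitions", "5")
  .load()

Is this extra round trip to the database the usual way to do it, or is there a built-in way to let Spark derive the bounds itself?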