I am using PySpark 2.4 and want to write data to SQL Server, but it is not working.
I have already placed the jar file downloaded from here into the Spark path:
D:\spark-2.4.3-bin-hadoop2.7\spark-2.4.3-bin-hadoop2.7\jars\
However, it does not help. Below is the PySpark code for writing the data to SQL Server:
sql_server_dtls = {'user': 'john', 'password': 'doe'}
ports_budget_joined_DF.write.jdbc(url="jdbc:sqlserver://endpoint:1433;databaseName=poc", table='dbo.test_tmp', mode='overwrite', properties=sql_server_dtls)
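For completeness, here is a minimal sketch of the same write with the driver class passed explicitly in the connection properties (the class name `com.microsoft.sqlserver.jdbc.SQLServerDriver` is my assumption based on the Microsoft JDBC driver; the actual `.jdbc()` call is commented out because it needs a live SparkSession):

```python
# Connection properties for the JDBC write. The 'driver' key is an
# assumption: it tells Spark which JDBC driver class to load instead of
# relying on DriverManager auto-discovery, which can fail with
# "No suitable driver" even when the jar is on the classpath.
sql_server_dtls = {
    'user': 'john',
    'password': 'doe',
    'driver': 'com.microsoft.sqlserver.jdbc.SQLServerDriver',  # assumed class name
}

jdbc_url = 'jdbc:sqlserver://endpoint:1433;databaseName=poc'

# The actual write (commented out -- requires a running SparkSession
# and the DataFrame from above):
# ports_budget_joined_DF.write.jdbc(url=jdbc_url, table='dbo.test_tmp',
#                                   mode='overwrite',
#                                   properties=sql_server_dtls)
```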
It throws the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\aakash.basu\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pyspark\sql\readwriter.py", line 982, in jdbc
self.mode(mode)._jwrite.jdbc(url, table, jprop)
File "C:\Users\aakash.basu\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pyspark\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1257, in __call__
File "C:\Users\aakash.basu\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pyspark\sql\utils.py", line 63, in deco
return f(*a, **kw)
File "C:\Users\aakash.basu\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pyspark\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o45.jdbc.
: java.sql.SQLException: No suitable driver
Am I missing something? Also, I want to truncate the table before writing the new data into it. Does mode='overwrite' in the DataFrame writer handle that as well for a SQL Server target system?
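To make the second question concrete, this is roughly what I am trying to achieve. The `truncate` option is my assumption from the Spark JDBC data source documentation (with mode `'overwrite'` it reportedly issues TRUNCATE TABLE instead of dropping and recreating the table); the `.save()` call is commented out since it needs a live SparkSession:

```python
# Sketch of an overwrite that truncates rather than drops the table.
# 'truncate' is an assumed Spark JDBC option name; 'dbtable' is the
# standard option for the target table in the format('jdbc') API.
writer_options = {
    'url': 'jdbc:sqlserver://endpoint:1433;databaseName=poc',
    'dbtable': 'dbo.test_tmp',
    'user': 'john',
    'password': 'doe',
    'truncate': 'true',  # assumed: keep table, only TRUNCATE before write
}

# The actual call would be (requires a running SparkSession and DataFrame):
# (ports_budget_joined_DF.write.format('jdbc')
#      .options(**writer_options)
#      .mode('overwrite')
#      .save())
```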