
I'm trying to bulk insert a .CSV file into SQL Server, but without much success.

A bit of background:

1. I need to insert 16 million records into a SQL Server (2017) database. Each record has 130 columns. The data comes in a .CSV produced by an API call to one of our vendors, which I'm not allowed to name. The data types are integer, float, and string.

2. I tried the usual route, BULK INSERT, but I couldn't get past the data type errors. I posted a question about it here but couldn't make it work.

3. I tried experimenting with Python and attempted every method I could find except pandas.to_sql, which everyone warns is very slow. I ran into data type and string truncation errors, different from the ones BULK INSERT gave me.

4. Not having many options left, I tried pd.to_sql, and while it didn't raise any data type or truncation errors, it failed because my tmp SQL database ran out of space. I couldn't get past this error either, even though I have plenty of space and all my data files (and log files) are set to autogrow without limit.

That's where I'm stuck. My code (for the pd.to_sql piece) is simple:

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mssql+pyodbc://@myDSN")

# df is the DataFrame already loaded from the vendor's .CSV
df.to_sql('myTable', engine, schema='dbo', if_exists='append', index=False, chunksize=100)

I'm not sure what else to try and would welcome any suggestions. All the code and examples I've seen deal with small datasets (not many columns). I'm open to trying any other method and would appreciate any pointers.

Thanks!


3 Answers


I just want to share this dirty code in case it helps someone else. Note that I'm very aware it's not optimal at all and that it's slow, but I was able to insert about 16 million records in roughly ten minutes without overloading my machine.

I tried doing it in small batches using the following:

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mssql+pyodbc://@myDSN")

# slice in 1,000-row batches: df[a:b] covers rows a .. b-1
a = 0
b = 1000

while a < len(df):
    try:
        df[a:b].to_sql('myTable', engine, schema='dbo', if_exists='append', index=False, chunksize=100)
    except Exception:
        print(f'Error between {a} and {b}')
    # advance either way so a failing batch cannot retry forever
    a = b
    b = b + 1000

Ugly as hell, but it works for me.

I'm open to all criticism and suggestions. As I mentioned, I'm posting this in case it helps someone else, but I'm also looking forward to some constructive feedback.

Answered 2020-10-18T16:10:52.087

Loading data from a pandas DataFrame into a SQL database is very slow, and running out of memory is common when dealing with large datasets. You want something much more efficient than that for data of this size.

d6tstack might solve your problem because it works with native DB import commands. It is a library built specifically to deal with schema as well as performance issues. It works with XLS, CSV, and TXT inputs, which can be exported to CSV, Parquet, SQL, and pandas.
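To make that concrete, here is a minimal sketch of how d6tstack is typically used for this. The connection string is a placeholder and the exact helper names (CombinerCSV, to_mssql_combine) are assumptions based on the library's README, so check them against the d6tstack documentation for your installed version.

# Hypothetical sketch -- the method names and connection string below are
# assumptions; verify them against the d6tstack documentation.
import glob
import d6tstack.combine_csv

# one or more vendor .CSV files to load
csv_files = glob.glob('data/vendor_*.csv')

c = d6tstack.combine_csv.CombinerCSV(csv_files)

# push the combined data into SQL Server using the native import path
c.to_mssql_combine('mssql+pymssql://user:password@server/database', 'myTable')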

Answered 2020-10-18T16:38:01.627

I think df.to_sql is pretty awesome! I have been using it a lot lately. It's a bit slow when data sets are really huge. If you need speed, I think BULK INSERT will be the fastest option. You can even do the job in batches, so you don't run out of memory and perhaps overwhelm your machine.

BEGIN TRANSACTION
BEGIN TRY
    BULK INSERT OurTable
    FROM 'c:\OurTable.txt'
    WITH (CODEPAGE = 'RAW', DATAFILETYPE = 'char', FIELDTERMINATOR = '\t',
          ROWS_PER_BATCH = 10000, TABLOCK)
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION
END CATCH
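If you want to drive that same statement from Python like the code in the question, a minimal sketch using pyodbc could look like the one below. The file path, table name, and CSV-specific options (header row, comma delimiter) are placeholder assumptions rather than details from the question, and the file must be readable from the SQL Server machine, not just from the client.

# Hypothetical sketch: the path, table name and CSV options below are
# assumptions, not taken from the question or from the answer above.
import pyodbc

conn = pyodbc.connect("DSN=myDSN")   # reuse the DSN from the question
cursor = conn.cursor()

try:
    cursor.execute(r"""
        BULK INSERT dbo.myTable
        FROM 'C:\data\vendor_export.csv'     -- placeholder path, as seen by the SQL Server
        WITH (FIRSTROW = 2,                  -- skip the header row, if there is one
              FIELDTERMINATOR = ',',         -- comma-separated file
              ROWTERMINATOR = '\n',
              BATCHSIZE = 10000,             -- commit in 10,000-row batches
              TABLOCK)
    """)
    conn.commit()                            # pyodbc autocommit is off by default
except pyodbc.Error as exc:
    conn.rollback()
    print(f"BULK INSERT failed: {exc}")
finally:
    conn.close()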
Answered 2021-01-24T04:30:04.967