I have a Python script that reads a large (4 GB!!!) CSV file into MySQL. It works as-is, but it is dog slow. The CSV file has over 4 million rows, and inserting all the records into the database takes a very long time.

Could I get an example of how to use executemany in this situation?

Here is my code:

import csv
import os
from datetime import datetime

# cursor and mydb are an already-open MySQL cursor and connection (setup not shown).
source = os.path.join('source_files', 'aws_bills', 'march-bill-original-2019.csv')
try:
    with open(source) as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        next(csv_reader)  # skip the header row
        insert_sql = """ INSERT INTO billing_info (InvoiceId, PayerAccountId, LinkedAccountId, RecordType, RecordId, ProductName, RateId, SubscriptionId, PricingPlanId, UsageType, Operation, AvailabilityZone, ReservedInstance, ItemDescription, UsageStartDate, UsageEndDate, UsageQuantity, BlendedRate, BlendedCost, UnBlendedRate, UnBlendedCost, ResourceId, Engagement, Name, Owner, Parent) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s) """
        for row_idx, row in enumerate(csv_reader):
            try:
                cursor.execute(insert_sql, row)
                # cursor.executemany(insert_sql, 100)  # my attempt; unclear what to pass here
                mydb.commit()  # commits once per row
                print('row', row_idx, 'inserted with LinkedAccountId', row[2], 'at', datetime.now().isoformat())
            except Exception as e:
                print("MySQL Exception:", e)
        print("Done importing data.")
except Exception as e:
    print("Error reading source file:", e)

Again, the code does insert the records into the database, but I would like to speed up the process with executemany if I can get an example of how to do that.
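For reference, here is a minimal sketch of how executemany could batch the inserts, assuming the same cursor, mydb, and insert_sql objects defined in the code above. executemany accepts the SQL statement plus a sequence of parameter rows, so the idea is to accumulate rows in a list and flush them in chunks, committing once per batch instead of once per row (the batch size of 1000 is an arbitrary choice):

import csv
import os

BATCH_SIZE = 1000  # arbitrary chunk size; tune for your workload

# cursor, mydb, and insert_sql are assumed to be the same objects as above.
source = os.path.join('source_files', 'aws_bills', 'march-bill-original-2019.csv')
with open(source) as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    next(csv_reader)  # skip the header row
    batch = []
    for row in csv_reader:
        batch.append(row)
        if len(batch) >= BATCH_SIZE:
            # executemany takes the statement and a sequence of parameter rows
            cursor.executemany(insert_sql, batch)
            mydb.commit()  # one commit per batch instead of per row
            batch = []
    if batch:  # flush any remaining rows
        cursor.executemany(insert_sql, batch)
        mydb.commit()
print("Done importing data.")

Committing per row forces a disk flush on every insert, so batching the commits alone is usually a large win; executemany additionally lets the driver send the rows to the server more efficiently.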
