
I have a CSV file with about 900,000 rows and 13 columns. Everything works fine until row 28445, after which it throws this error:

ProgrammingError

Exception Value: not enough arguments for format string

I tried to check whether anything was wrong in the columns by printing the row, but nothing there looks like a problem:

['INDIA', '5', '1ST TIME MOTHER', 'PATNA', 'A2', 'BRAND DRIVERS', '', '', 'HARD TO FIND', '', '', '1', '0 TO 12 MONTHS']
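A row that prints fine can still be the wrong one: "not enough arguments for format string" typically means some other row has fewer values than the 13 `%s` placeholders in the query. A standalone sketch (using an in-memory stand-in for the uploaded file, with made-up sample rows) to locate such rows:

```python
import csv
import io

# A tiny stand-in for the uploaded file: one good 13-field row and one
# row with only 12 fields (the kind that breaks the %s formatting).
data = io.StringIO(
    "INDIA,5,1ST TIME MOTHER,PATNA,A2,BRAND DRIVERS,,,HARD TO FIND,,,1,0 TO 12 MONTHS\n"
    "INDIA,5,1ST TIME MOTHER,PATNA,A2,BRAND DRIVERS,,,HARD TO FIND,,,1\n"
)

# Collect (line number, field count) for every row that is not 13 wide.
bad = [(lineno, len(row))
       for lineno, row in enumerate(csv.reader(data), start=1)
       if len(row) != 13]
print(bad)  # -> [(2, 12)]
```

Running this kind of check over the real file should point at the row near line 28445 that actually triggers the error.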

def upload(request):
    if request.method == 'POST':
        cursor = connection.cursor()
        query = ''' INSERT INTO johnson_jnjusage (country,no_of_people_house,nursing_cnt,city,sec,bucket,category1,category2, final_category, responders, usageFrequency, base, child_age_group) 
                    VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s) '''
        x=[]
        reader = csv.reader(request.FILES['csvData'],delimiter=',')
        i = 0
        for row in reader:
            x.append(tuple(row))
            if i>=5000:
                cursor.executemany(query,tuple(x))
                transaction.commit()
                x=[]
                i=0

            i = i+1

        cursor.close()
        return HttpResponse( docfile.name + "'s data inserted into database successfully")

Thanks in advance if you can help me out.


3 Answers


You could do something like this.

#models.py
class JNJUsage(models.Model):
    ...

# views.py (wherever def upload is)
# NOTE: the row['country'] access below assumes csv.DictReader (with the
# 13 column names as fieldnames); a plain csv.reader yields lists.
to_create = []

for i, row in enumerate(reader):
    j = JNJUsage()
    j.country = row['country']
    j.no_of_people_house = row['no_of_people_house']
    j.nursing_cnt = row['nursing_cnt']
    j.city = row['city']
    j.sec = row['sec']
    j.bucket = row['bucket']
    j.category1 = row['category1']
    j.category2 = row['category2']
    j.final_category = row['final_category']
    j.responders = row['responders']
    j.usageFrequency = row['usageFrequency']
    j.base = row['base']
    j.child_age_group = row['child_age_group']

    to_create.append(j)

    # If 900k is too much then you could consider something like this
    if i % 10000 == 0:
        JNJUsage.objects.bulk_create(to_create)
        to_create = []

# Clean up the rest
JNJUsage.objects.bulk_create(to_create)
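One nit in the loop above: `i % 10000 == 0` is also true on the very first row (i == 0), flushing a batch of one. A plain-Python chunking helper avoids both that and the trailing-batch bookkeeping (a sketch; `chunks` is not part of Django):

```python
def chunks(iterable, size):
    """Yield lists of at most `size` items from any iterable."""
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # final partial batch, like the last bulk_create above
        yield batch

# Each batch would then go to JNJUsage.objects.bulk_create(batch).
print([len(b) for b in chunks(range(25000), 10000)])  # -> [10000, 10000, 5000]
```

Django's `bulk_create` also accepts a `batch_size` argument that does this splitting internally.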
Answered 2014-08-07T12:16:14.633

I figured it out. I just checked the length of each row and skipped the malformed ones, and I was also getting characters like '\x00', so I used a regex to remove them.

def upload(request):
    start_time = time.time()
    print start_time
    if request.method == 'POST':
        cursor = connection.cursor()    
        x=[]
        docfile = request.FILES['csvData']

        reader = csv.reader(request.FILES['csvData'],delimiter=',')

        to_create = []
        for i, row in enumerate(reader):
            if len(row) != 13:
                # Skip malformed rows. (Calling reader.next() here as well
                # would silently discard the following, valid row too.)
                continue

            j = JnJUsage()
            j.country =row[0]
            j.no_of_people_house = row[1]
            j.nursing_cnt = row[2]
            j.city = row[3]
            j.sec = row[4]
            j.bucket = row[5]
            j.category1 = row[6]
            j.category2 = row[7]
            j.final_category = re.sub(r'[\x00-\x08\x0b\x0c\x0e-\x1f\x7f-\xff]', '', row[8])
            j.responders = row[9]
            j.usageFrequency = row[10]
            j.base = row[11]
            j.child_age_group = row[12]

            to_create.append(j)

            # If 900k is too much then you could consider something like this
            if i % 10000 == 0:
                JnJUsage.objects.bulk_create(to_create)
                to_create = []
        JnJUsage.objects.bulk_create(to_create)

        return HttpResponse( docfile.name + "'s data inserted into database successfully")
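The control-character pattern applied to `final_category` above can be checked in isolation (the sample string here is made up):

```python
import re

# Same pattern as in the view: strip ASCII control characters
# (including '\x00') and bytes in the 0x7f-0xff range.
CONTROL_CHARS = re.compile(r'[\x00-\x08\x0b\x0c\x0e-\x1f\x7f-\xff]')

clean = CONTROL_CHARS.sub('', 'HARD\x00 TO FIND\x1f')
print(clean)  # -> HARD TO FIND
```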

Thanks to Dmitry Mikhaylov and Nathaniel for the help.

Answered 2014-08-07T13:14:55.083

Of course you can use bulk_create. It would look something like this:

def upload(request):
    if request.method == 'POST':
        cursor = connection.cursor()
        x=[]
        reader = csv.reader(request.FILES['csvData'],delimiter=',')
        i = 0
        for row in reader:
            obj = MyObject()  # Python has no 'new' keyword
            obj.country = row[0]
            obj.city = row[3]
            ...
            x.append(obj)
            if i>=5000:
                MyObject.objects.bulk_create(x)
                x=[]
                i=0

            i = i+1

        MyObject.objects.bulk_create(x)  # insert the final partial batch
        return HttpResponse( docfile.name + "'s data inserted into database successfully")

You can find more information on bulk_create in the docs.

Answered 2014-08-07T12:08:45.457