
I have 3 large TSV files, structured as follows:

 file1 : id,f1,f2,name,f3
 file2 : id,f4,blah1,f5
 file3 : id,f5,f6,blah2

I want to create a third file extracted from the others:

 result: id,name,blah1,blah2

Currently I can't, because just trying to load one of the files in pandas|vaex crashes the process, since it tries to read the whole file..

How can I do this?

I will be using the resulting file in vaex... I think it will still be ~1G


f1 = vaex.read_csv('stuff.tsv',convert=True,sep='\t') 

then:

f1.join(f2,left_on='id',right_on='id')

2 Answers


`convert=True` does not load the file into memory; it works in chunks.

import vaex

f1 = vaex.read_csv('stuff.tsv', convert=True, sep='\t')
f2 = vaex.read_csv('stuff2.tsv', convert=True, sep='\t')

# keep only the columns needed for the merge
fx1 = f1[['id', 'blah1']]
fx2 = f2[['id', 'blah2']]

then:

ff = fx1.join(fx2, left_on='id', right_on='id')
ff.export_hdf5('file.hdf5')
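To illustrate the "works in chunks" idea without vaex itself, here is a minimal stdlib sketch that streams a TSV and keeps only selected columns, holding at most a fixed number of rows in memory at a time. The column names and sample data come from the question's `stuff.tsv` layout; `select_columns_in_chunks` is a hypothetical helper, not part of vaex:

```python
import csv
import io

def select_columns_in_chunks(lines, columns, chunk_size=2):
    """Yield rows containing only the requested columns, buffering
    at most chunk_size rows at a time."""
    reader = csv.reader(lines, delimiter='\t')
    header = next(reader)
    idx = [header.index(c) for c in columns]
    chunk = []
    for row in reader:
        chunk.append([row[i] for i in idx])
        if len(chunk) == chunk_size:
            yield from chunk
            chunk = []
    yield from chunk  # flush the last partial chunk

# tiny in-memory stand-in for stuff.tsv (columns from the question)
tsv = "id\tf1\tf2\tname\tf3\n1\ta\tb\tAlice\tc\n2\td\te\tBob\tf\n"
rows = list(select_columns_in_chunks(io.StringIO(tsv), ['id', 'name']))
# rows == [['1', 'Alice'], ['2', 'Bob']]
```

vaex's `convert=True` does something similar behind the scenes, writing each chunk out to HDF5 so the full table never has to fit in RAM.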
answered 2021-02-17T23:08:15.680

A strategy like this might make your work easier. It keeps track of items through a dictionary, `merged_items`, which is keyed by `id` and holds the values of `name`, `blah1`, and `blah2`. Then, using `csv.reader`, it iterates over each file line by line rather than all at once, to reduce the memory needed at any point. Finally, it writes the items back out line by line. You'll need to modify this to fit your exact use case, but it should be a decent start.

import csv

merged_items = {}

with open('file1.csv', 'r') as csv_file:
    reader = csv.reader(csv_file, delimiter='\t')  # the source files are tab-separated
    next(reader) # skip first row
    for row in reader:
        row_id = row[0]
        name = row[3]
        merged_items[row_id] = {'name':name}


with open('file2.csv', 'r') as csv_file:
    reader = csv.reader(csv_file, delimiter='\t')
    next(reader) # skip first row
    for row in reader:
        row_id = row[0]
        blah1 = row[2]
        merged_items[row_id]['blah1'] = blah1


with open('file3.csv', 'r') as csv_file:
    reader = csv.reader(csv_file, delimiter='\t')
    next(reader) # skip first row
    for row in reader:
        row_id = row[0]
        blah2 = row[3]
        merged_items[row_id]['blah2'] = blah2

with open('output.csv','w', newline='') as output:
    writer = csv.writer(output, delimiter='\t') # change these options as you see fit
    for id, metadata in merged_items.items():
        writer.writerow([id, metadata['name'], metadata['blah1'], metadata['blah2']])
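Run end-to-end on tiny in-memory stand-ins for the three files, the same pass-per-file logic produces the merged rows. The sample data below is made up; column positions follow the question's layout, and `dict.setdefault` is used here to avoid a `KeyError` if an id appears in file2 or file3 but not in file1:

```python
import csv
import io

# made-up stand-ins for file1/file2/file3, with the question's columns
file1 = "id\tf1\tf2\tname\tf3\n1\tx\tx\tAlice\tx\n2\tx\tx\tBob\tx\n"
file2 = "id\tf4\tblah1\tf5\n1\tx\tB1a\tx\n2\tx\tB1b\tx\n"
file3 = "id\tf5\tf6\tblah2\n1\tx\tx\tB2a\n2\tx\tx\tB2b\n"

merged_items = {}

# (text, column index of the wanted field, key to store it under)
for text, col, key in ((file1, 3, 'name'), (file2, 2, 'blah1'), (file3, 3, 'blah2')):
    reader = csv.reader(io.StringIO(text), delimiter='\t')
    next(reader)  # skip header
    for row in reader:
        # setdefault guards against ids missing from earlier files
        merged_items.setdefault(row[0], {})[key] = row[col]

rows = [[i, m['name'], m['blah1'], m['blah2']] for i, m in merged_items.items()]
# rows == [['1', 'Alice', 'B1a', 'B2a'], ['2', 'Bob', 'B1b', 'B2b']]
```

Note that this approach keeps one dict entry per id in memory, so it fits when the merged key columns fit in RAM even if the full files do not.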
answered 2021-02-17T19:13:38.223