I want to read in T1 and write it out as T2 (note that both are .csv files). T1 contains duplicate rows; I don't want those duplicates written to T2.
T1
+------+------+---------+---------+---------+
| Type | Year | Value 1 | Value 2 | Value 3 |
+------+------+---------+---------+---------+
| a | 8 | x | y | z |
| b | 10 | q | r | s |
+------+------+---------+---------+---------+
T2
+------+------+---------+-------+
| Type | Year | Value # | Value |
+------+------+---------+-------+
| a | 8 | 1 | x |
| a | 8 | 2 | y |
| a | 8 | 3 | z |
| b | 10 | 1 | q |
| ... | ... | ... | ... |
+------+------+---------+-------+
Currently, I have this extremely slow code to filter out the duplicates:
no_dupes = []                          # list of (Type, Year) pairs already written
for row in reader:                     # reader is a csv.reader over T1
    type = row[0]
    year = row[1]
    index = type, year
    values_list = row[2:]
    if index not in no_dupes:          # linear scan of the list on every row
        for i, j in enumerate(values_list):
            line = [type, year, str(i + 1), str(j)]
            writer.writerow(line)      # writer is a csv.writer for T2
        no_dupes.append(index)
I can't overstate how slow this code gets as T1 grows.
Is there a faster way to filter out the duplicates from T1 while I write T2?
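One thing I've been wondering about (not sure it's the right fix): would switching no_dupes from a list to a set help, since set membership checks are roughly constant time instead of a full scan? A minimal sketch of what I have in mind; t1.csv and t2.csv are placeholder file names, and I've left out any header handling since I'm not showing whether T1 has a header row:

import csv

# Sketch only: 't1.csv' / 't2.csv' stand in for my real files.
with open('t1.csv', newline='') as f_in, open('t2.csv', 'w', newline='') as f_out:
    reader = csv.reader(f_in)
    writer = csv.writer(f_out)

    seen = set()                          # set lookup is O(1) on average vs. the O(n) list scan
    for row in reader:
        key = (row[0], row[1])            # a (Type, Year) pair identifies a duplicate
        if key in seen:
            continue
        seen.add(key)
        for i, value in enumerate(row[2:], start=1):
            writer.writerow([row[0], row[1], i, value])   # Type, Year, Value #, Value

Is that the kind of change that would make the difference, or is there a better approach?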