
I have two CSV files. The first, input, consists of input street addresses with various errors. The second, ref, is a clean street address table. Records in input need to be matched against records in ref. Converting the files to lists of unique records is fast, but once I get to the matching step it is extremely slow: matching just two input addresses against ref takes a full 85 seconds, without any regex involved! I realize ref is the problem here; it is over 1 million records long and 30 MB on disk. I expected some performance issues at these sizes, but taking this long for only two records is unacceptable (in practice I may have to match as many as 10,000 records or more). Moreover, I eventually need to embed some regular expressions into the ref entries to allow more flexible matching. Testing the new regex module was even worse: the same two input records took as long as 185 seconds. Does anyone know the best way to speed this up dramatically? For example, could I index by zip code?

Here are sample addresses from input and ref respectively (after preprocessing):

60651 N SPRINGFIELD AVE CHICAGO
60061 BROWNING CT VERNON HILLS

Here is what I have so far. (As a newbie, I realize my code probably has all sorts of inefficiencies, but that's not the question):

import csv, re

# Read the input file and grab the first column, skipping the header.
f = csv.reader(open('/Users/benjaminbauman/Documents/inputsample.csv', 'rU'))
columns = zip(*f)
l = list(columns)
inputaddr = l[0][1:]

# Read the reference file, flatten the first seven fields of each row
# to a single string, and drop rows whose fields are all empty.
f = csv.reader(open('/Users/benjaminbauman/Documents/navstreets.csv', 'rU'))
f.next()

reffull = []
for row in f:
    row = str(row[0:7]).strip(r'['']').replace("\'", "")
    if not ", , , , ," in row:
        reffull.append(row)

input_uniq = list(set(inputaddr))   # dedupe ('input' would shadow the builtin)
ref1 = list(set(reffull))

# Normalize: replace commas and collapse runs of whitespace.
input_scrub = []
for i in input_uniq:
    t = i.replace(',', ' ')
    input_scrub.append(' '.join(t.split()))

ref_scrub = []

for i in ref1:
    t = i.replace(',',' ')
    ref_scrub.append(' '.join(t.split()))

# For each input address, collect every ref entry that matches it as a
# regex anchored at the start (this scans all of ref once per input).
output_iter1 = dict([(i, [r for r in ref_scrub if re.match(r, i)]) for i in input_scrub])

unmatched_iter1 = [i for i, j in output_iter1.items() if len(j) < 1]
matched_iter1 = {i: str(j[0][1]).strip(r'['']') for i, j in output_iter1.items() if len(j) == 1}
tied_iter1 = {k: zip(*(v))[1] for k, v in output_iter1.iteritems() if len(v) > 1}
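On the closing question about indexing by zip code: since the preprocessed addresses above start with the zip, the ref list can be bucketed into a dict so each input is compared only against its own bucket rather than all million rows. A minimal sketch (`build_zip_index` is an illustrative name, not from the post):

```python
import re
from collections import defaultdict

def build_zip_index(ref_addrs):
    """Bucket reference addresses by their leading 5-digit zip code."""
    index = defaultdict(list)
    for addr in ref_addrs:
        m = re.match(r'(\d{5})\b', addr)
        if m:
            index[m.group(1)].append(addr)
    return index

ref_sample = ['60651 N SPRINGFIELD AVE CHICAGO',
              '60061 BROWNING CT VERNON HILLS']
idx = build_zip_index(ref_sample)

# An input address is now matched only against its own zip bucket.
candidates = idx.get('60651', [])
```

Building the index is a single pass over ref; each lookup afterwards is constant-time.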

2 Answers


If the execution time is acceptable, perhaps you could use the difflib module instead of the fuzzy regexes of the new regex module:

import difflib


REF = ['455 Gateway Dr, Brooklyn, NY 11239',
       '10 Devoe St, Brooklyn, NY 11211',
       '8801 Queens Blvd, Elmhurst, NY 11373 ',
       '342 Wythe Ave, Brooklyn, NY 11249 ',
       '4488 E Live Oak Ave, Arcadia, CA 91006',
       '1134 N Vermont Ave, Los Angeles, CA 90029',
       '1101 17th St NW, Washington, DC 20036 ',
       '3001 Syringa St, Hopeful-City, AL 48798',
       '950 Laurel St, Minneapolis, KS 67467']


INPUT = ['4554 Gagate Dr, Brooklyn, NY 11239',
         '10 Devoe St, Brooklyn, NY 11211',
         '8801 Queens Blvd, Elmhurst, NY 11373 ',
         '342 Wythe Ave, Brooklyn, NY 11249 ',
         '4488 E Live Oak Ave, Arcadia, CA 91006',
         '1134 N Vermont Ave, Los Angeles, CA 90029',
         '1101 17th St NW, Washington, DC 20036 ',
         '3001 Syrinuy St, Hopeful Dam, AL 48798',
         '950 Laurel St, Minneapolis, KS 67467',
         '455 Gateway Doctor, Forgotten Place, NY 11239',
         '10 Devoe St, Brook., NY 11211',
         '82477 Queens Blvd, Elmerst, NY 11373 ',
         '342 Waithe Street, Brooklyn, MN 11249 ',
         '4488 E Live Poke Ave, Arcadia, CA 145',
         '1134 N Vermiculite Ave, Liz Angelicas, CA 90029',
         '1101 1st St NW, Washing, DC 20036 ']


def treatment(inp,reference,crit,gcm = difflib.get_close_matches):
    for input_item in inp:
        yield (input_item,gcm(input_item,reference,1000,crit))


for a,b in treatment(INPUT,REF,0.65):
    print '\n- %s\n     %s' % (a, '\n     '.join(b))

The result is:

- 4554 Gagate Dr, Brooklyn, NY 11239
     455 Gateway Dr, Brooklyn, NY 11239
     342 Wythe Ave, Brooklyn, NY 11249 

- 10 Devoe St, Brooklyn, NY 11211
     10 Devoe St, Brooklyn, NY 11211

- 8801 Queens Blvd, Elmhurst, NY 11373 
     8801 Queens Blvd, Elmhurst, NY 11373 

- 342 Wythe Ave, Brooklyn, NY 11249 
     342 Wythe Ave, Brooklyn, NY 11249 
     455 Gateway Dr, Brooklyn, NY 11239

- 4488 E Live Oak Ave, Arcadia, CA 91006
     4488 E Live Oak Ave, Arcadia, CA 91006

- 1134 N Vermont Ave, Los Angeles, CA 90029
     1134 N Vermont Ave, Los Angeles, CA 90029

- 1101 17th St NW, Washington, DC 20036 
     1101 17th St NW, Washington, DC 20036 

- 3001 Syrinuy St, Hopeful Dam, AL 48798
     3001 Syringa St, Hopeful-City, AL 48798

- 950 Laurel St, Minneapolis, KS 67467
     950 Laurel St, Minneapolis, KS 67467

- 455 Gateway Doctor, Forgotten Place, NY 11239
     455 Gateway Dr, Brooklyn, NY 11239

- 10 Devoe St, Brook., NY 11211
     10 Devoe St, Brooklyn, NY 11211

- 82477 Queens Blvd, Elmerst, NY 11373 
     8801 Queens Blvd, Elmhurst, NY 11373 

- 342 Waithe Street, Brooklyn, MN 11249 
     342 Wythe Ave, Brooklyn, NY 11249 
     455 Gateway Dr, Brooklyn, NY 11239

- 4488 E Live Poke Ave, Arcadia, CA 145
     4488 E Live Oak Ave, Arcadia, CA 91006

- 1134 N Vermiculite Ave, Liz Angelicas, CA 90029
     1134 N Vermont Ave, Los Angeles, CA 90029

- 1101 1st St NW, Washing, DC 20036 
     1101 17th St NW, Washington, DC 20036 
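As a side note to the answer above: the fourth argument to `get_close_matches` (called `crit` there) is a cutoff on the `difflib.SequenceMatcher` similarity ratio, so it directly controls how strict the matching is. A small sketch of the effect:

```python
import difflib

ref = ['455 Gateway Dr, Brooklyn, NY 11239',
       '10 Devoe St, Brooklyn, NY 11211']

# cutoff is a SequenceMatcher ratio in [0, 1]; higher means stricter.
loose = difflib.get_close_matches('10 Devoe St, Brook., NY 11211', ref, 5, 0.65)
strict = difflib.get_close_matches('10 Devoe St, Brook., NY 11211', ref, 5, 0.95)
# The abbreviated "Brook." address clears the 0.65 cutoff but not 0.95.
```

Tuning this cutoff trades recall (catching badly mangled addresses) against false matches and run time.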
Answered 2013-02-09T02:49:29.567

It suddenly dawned on me why this line

output_iter1 = dict([ (i, [ r for r in ref_scrub if re.match(r, i) ]) for i in input_scrub ])

was taking so long. The matching process searches every item of the huge ref list for a match against an item of the much smaller input list, rather than the other way around. Unfortunately, I want it structured this way so that I can embed regexes in the ref items, since those items are tagged by address attribute for easy anchoring. Given my limited understanding of SQL, I imagine there are two workarounds. The first, following eyquem's suggestion, could use the idea I proposed in my previous comment. The second could use an equality test on the city and zip code attributes to narrow the lookup (an index?) before doing the more elaborate matching with regex or difflib.

I have split the items in input and ref so that the city and zip code attributes are separate items in the list, e.g.:

ref ('COVE POINTE CT', 'BLOOMINGTON, 61704')
input ('S EBERHART', 'CHICAGO, 60628')

The following lets me narrow the search to the portion of ref that shares the same city and zip code as the input item. It cut the time for an input file of just over 1,000 records down to 56 seconds. Much better.

matchaddr = []
refaddr = []
unmatched = []
for i in ref:
    for t in input:
        # Exact match on the 'CITY, ZIP' field first, then regex on the street.
        if t[1] == i[1]:
            if re.match(i[0], t[0]):
                matchaddr.append(t[0] + ', ' + t[1])
                refaddr.append(i[0] + ', ' + i[1])

Now I can use my beloved regexes again (provided the expressions don't cause other problems, such as catastrophic backtracking). Also, this code is fast because it first requires an exact match on the city and zip code attributes. Speed would probably be sacrificed considerably if I tried to allow flexible matching on city and zip as well. Unfortunately, it may have to come to that (the input contains messy city and zip code attributes too).
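The nested loop above still walks all of ref once per input item. Grouping ref by its 'CITY, ZIP' field into a dict first makes each lookup constant-time instead. A sketch assuming the tuple layout shown above (`match_by_locality` is an illustrative name):

```python
import re
from collections import defaultdict

def match_by_locality(inputs, refs):
    """Group ref rows by their 'CITY, ZIP' field, then regex-match street
    parts only within the bucket for the input's locality."""
    buckets = defaultdict(list)
    for street, locality in refs:
        buckets[locality].append(street)

    matched, unmatched = [], []
    for street, locality in inputs:
        # Ref street entries act as regex patterns, as in the loop above.
        hits = [r for r in buckets.get(locality, []) if re.match(r, street)]
        if hits:
            matched.append((street + ', ' + locality, hits[0] + ', ' + locality))
        else:
            unmatched.append((street, locality))
    return matched, unmatched

refs = [('COVE POINTE CT', 'BLOOMINGTON, 61704'),
        ('S EBERHART', 'CHICAGO, 60628')]
inputs = [('S EBERHART', 'CHICAGO, 60628')]
m, u = match_by_locality(inputs, refs)
```

With the dict built once, total work drops from len(ref) * len(input) comparisons to roughly len(ref) + len(input) bucket operations plus the regex tests within each bucket.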

Answered 2013-02-09T22:33:58.127