
I want to "grep" for multiple regexes across multiple files. I have all those regexes in a file (one per line), which I load as follows, building one "super regex":

import re

rex = []
with open('regex.dic') as dic:
    for l in dic:
        if not l.startswith('#'):
            rex.append('^.*%s.*$' % l.strip())
rex = '|'.join(rex)
debug('rex=' + rex)
global regex
regex = re.compile(rex, re.IGNORECASE | re.MULTILINE)
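As a side note, joining raw patterns with `|` is safer if each one is wrapped in a non-capturing group; otherwise a `|` inside one pattern merges with the surrounding alternation (e.g. `^.*fail|fatal.*$` means `^.*fail` OR `fatal.*$`). A minimal sketch with a hypothetical pattern list:

```python
import re

# Hypothetical patterns; 'fail|fatal' contains an alternation of its own.
patterns = ['error', 'warn(ing)?', 'fail|fatal']

# (?:...) isolates each pattern so its metacharacters cannot bleed into
# the neighbouring alternatives of the combined regex.
rex = '|'.join('(?:%s)' % p for p in patterns)
regex = re.compile(rex, re.IGNORECASE)

print(bool(regex.search('FATAL: disk full')))  # prints True ('fatal' matches)
```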

Then I check my files like this:

with open(fn, 'r') as f:
    data = f.readlines()
for i, line in enumerate(data):
    if len(line) <= 512:  # sanity check: skip very long lines
        if regex.search(line):
            if not alreadyFound:
                log("[!] Found in %s:" % fn)
                alreadyFound = True
                found = True
                copyFile(fn)
            # max(0, ...) keeps the slice start from going negative and
            # wrapping around to the end of the list near the top of the file
            start = max(0, i - args.context)
            log("\t%s" % '\t'.join(data[start:i + args.context + 1]).strip())

This works, but it feels really inefficient and dangerous (a single bad regex in the dictionary could break the "super regex"). I thought about looping over the regex array instead, but that would mean scanning each file multiple times :/
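For what it's worth, looping over the patterns does not have to mean re-reading the files: one hedged alternative is to compile each pattern separately (dropping the malformed ones) and test them all against each line during a single pass. A sketch with hypothetical inputs:

```python
import re

# Hypothetical pattern list; '***broken(' will not compile.
raw_patterns = ['error', '***broken(', 'fail|fatal']

compiled = []
for p in raw_patterns:
    try:
        compiled.append(re.compile(p, re.IGNORECASE))
    except re.error:
        pass  # skip (or log) the malformed pattern instead of breaking everything

# Single pass over the lines; each line is checked against every surviving regex.
lines = ['all good', 'FATAL: disk full']
hits = [line for line in lines
        if any(r.search(line) for r in compiled)]
print(hits)  # prints ['FATAL: disk full']
```

The trade-off versus the combined regex is one `search` call per pattern per line, but each pattern stays isolated, so one bad entry can never poison the rest.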

Any brilliant idea on how to do this? Thanks!


1 Answer

line = l.strip()
if line and not line.startswith('#'):
    try:
        re.compile(line)
    except re.error:
        pass  # handle the bad pattern any way you want (log it, skip it, ...)
    else:
        rex.append('^.*({0}).*$'.format(line))

This will handle malformed regular expressions.
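Folded back into the question's loading code, that validation might look like the following sketch (the `regex.dic` filename and the `^.*(…).*$` wrapping come from the question; the function name is hypothetical):

```python
import re

def load_patterns(path='regex.dic'):
    """Build one combined regex, skipping comment lines and any
    pattern that fails to compile on its own."""
    rex = []
    with open(path) as dic:
        for l in dic:
            line = l.strip()
            if not line or line.startswith('#'):
                continue
            try:
                re.compile(line)  # dry run: reject malformed patterns early
            except re.error:
                continue  # skip it (or log it) instead of breaking the join
            rex.append('^.*({0}).*$'.format(line))
    return re.compile('|'.join(rex), re.IGNORECASE | re.MULTILINE)
```

One caveat: if every pattern is rejected, `'|'.join([])` is the empty string, and an empty regex matches every line, so a caller may want to check that the list is non-empty before compiling.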

Answered 2013-05-07T15:25:22.827