I have been using the code below to extract URLs from an HTML page with the improved regex from http://daringfireball.net/2010/07/improved_regex_for_matching_urls, i.e.
(?i)\b((?:[a-z][\w-]+:(?:/{1,3}|[a-z0-9%])|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}/)(?:[^\s()<>]+|\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+(?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|[^\s`!()\[\]{};:'".,<>?«»“”‘’]))
The regex works remarkably well, but re.findall() takes almost forever. Is there any way to extract all the URLs from the HTML quickly?
import re
import urllib.request  # urllib.urlopen is Python 2 only; Python 3 uses urllib.request

seed = "http://web.archive.org/web/20100412111652/http://app.singaporeedu.gov.sg/asp/index.asp"
page = urllib.request.urlopen(seed).read().decode('utf8')
#print(page)
pattern = r'''(?i)\b((?:[a-z][\w-]+:(?:/{1,3}|[a-z0-9%])|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}/)(?:[^\s()<>]+|\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+(?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|[^\s`!()\[\]{};:'".,<>?«»“”‘’]))'''
match = re.search(pattern, page)
if match:  # re.search returns None when there is no match
    print(match.group(0))

matches = re.findall(pattern, page)  # this line takes more than 3 mins on my i3 laptop
print(matches)
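One common workaround (a sketch, not from the original post): the slowdown comes from the regex's nested quantifiers backtracking over a large document, so instead of scanning the whole page, you can pull candidate URLs out of href and src attributes with the standard-library HTML parser and, if needed, validate only those short strings with the regex afterwards.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the values of href and src attributes from start tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

parser = LinkExtractor()
parser.feed('<a href="http://example.com/a">a</a> <img src="img.png">')
print(parser.links)  # ['http://example.com/a', 'img.png']
```

This only finds URLs that appear in markup attributes, not ones embedded in plain text, but it runs in a single pass over the page rather than backtracking.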