
This is my code, but it gives me some errors that I can't resolve. The same code works fine with a single url and a single proxy, but it does not work with the proxies and urls files.

import urllib2
import time 
#bangalore, boston,china

with open('urls.txt') as f:
    urls = [line.strip() for line in f]
    print "list of urls",urls
with open('proxies.txt') as proxies:
    for proxy in proxies:
        print proxy
        proxy = proxy.rstrip()
        print proxy
        proxy_handler = urllib2.ProxyHandler(proxy)
        opener = urllib2.build_opener(proxy_handler)
        urllib2.install_opener(opener)
        try:
            for url in urls:
                request=urllib2.Request(url)
                start=time.time()
                try:
                    print "from try block"
                    response=urllib2.urlopen(urls[0])
                    response.read(1)
                    ttfb = time.time() - start
                    print "Latency:", ttfb
                    print "Status Code:", response.code
                    print "Headers:", response.headers
                    print "Redirected url:", response.url  
                except urllib2.URLError as e:
                    print "From except"
                    print "Error Reason:", e.reason
                    print "Error Message:", e.message
                   # print "Redirected URL:", e.url
                except urllib2.HTTPError as e:
                    print e.reason 
        except Exception,e:
            print e

1 Answer


Replace proxy = proxy.rstrip() with:

proxy = json.loads(proxy.rstrip())

(and import json)

urls.txt lines look like this:

http://www.google.com

proxies.txt lines look like this:

{"http" : "http://ip:port"}

As per my comment on your post, this will also always request the first url:

response=urllib2.urlopen(urls[0])
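
Putting both fixes together, a minimal sketch of the corrected loop could look like the following (assuming proxies.txt holds one JSON object per line and urls.txt one URL per line, as shown above):

import json
import time
import urllib2

with open('urls.txt') as f:
    urls = [line.strip() for line in f]

with open('proxies.txt') as proxies:
    for line in proxies:
        # each proxies.txt line is a JSON object such as {"http": "http://ip:port"},
        # so ProxyHandler receives the dict it expects instead of a raw string
        proxy = json.loads(line.rstrip())
        opener = urllib2.build_opener(urllib2.ProxyHandler(proxy))
        urllib2.install_opener(opener)
        for url in urls:
            start = time.time()
            try:
                response = urllib2.urlopen(url)  # use the current url, not urls[0]
                response.read(1)
                print "Latency:", time.time() - start
                print "Status Code:", response.code
            except urllib2.URLError as e:
                print "Error Reason:", e.reason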
Answered 2013-11-15T10:06:38.910