I am using a Scrapy web crawler with Privoxy and Tor. Everything is configured correctly and I can browse the Tor network through Privoxy.
I would like the IP used to scrape each address to change with every request / every x requests. I am using controller.signal(Signal.NEWNYM) in a proxy middleware, following the answer here: Scrapy with Privoxy and Tor: how to renew IP, but I am not getting any IP change.
This is the middleware used to change the Tor circuit and IP:
from stem import Signal
from stem.control import Controller


def _set_new_ip():
    """Signal Tor (via its control port) to switch to a new circuit."""
    with Controller.from_port(port=9051) as controller:
        controller.authenticate(password='password')
        controller.signal(Signal.NEWNYM)


class ProxyMiddleware(object):
    def process_request(self, request, spider):
        _set_new_ip()                                    # request a new circuit before every request
        request.meta['proxy'] = 'http://127.0.0.1:8118'  # route the request through Privoxy
        spider.log('Proxy : %s' % request.meta['proxy'])
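For reference, the middleware is wired into Scrapy in settings.py roughly as follows (myproject.middlewares is a placeholder for wherever ProxyMiddleware actually lives):

# settings.py - register the downloader middleware so every request passes through it
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.ProxyMiddleware': 100,  # placeholder module path
}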
I know that changing the Tor circuit does not necessarily mean the IP changes, but I tested controller.signal(Signal.NEWNYM) in a separate script and found that changing the Tor circuit does lead to a periodic change of IP. This is the script I used to test:
import requests

from stem import Signal
from stem.control import Controller


def set_new_ip():
    """Change IP using TOR"""
    with Controller.from_port(port=9051) as controller:
        controller.authenticate(password='password')
        controller.signal(Signal.NEWNYM)


while True:
    set_new_ip()
    local_proxy = '127.0.0.1:8118'
    http_proxy = {
        'http': local_proxy,
        'https': local_proxy
    }
    current_ip = requests.get(
        url='http://icanhazip.com/',
        proxies=http_proxy,
        verify=False
    )
    print(current_ip.content)
From this script I get output like the following, showing a periodic change of IP:
b'109.70.100.27\n'
b'109.70.100.27\n'
b'109.70.100.27\n'
b'109.70.100.27\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'185.220.101.2\n'
b'185.220.101.2\n'
b'185.220.101.2\n'
b'185.220.101.2\n'
b'185.220.101.2\n'
b'185.220.101.2\n'
b'185.220.101.2\n'
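My guess is that the periodicity above comes from Tor rate-limiting NEWNYM: the signal is only honoured roughly every 10 seconds, so several consecutive requests reuse the same exit node. A sketch of the test function that instead waits until Tor will actually accept the signal, using stem's is_newnym_available() and get_newnym_wait():

import time

from stem import Signal
from stem.control import Controller


def set_new_ip_when_ready():
    """Request a new circuit, waiting out Tor's NEWNYM rate limit first."""
    with Controller.from_port(port=9051) as controller:
        controller.authenticate(password='password')
        if not controller.is_newnym_available():       # Tor throttles NEWNYM signals
            time.sleep(controller.get_newnym_wait())   # seconds until the next one is accepted
        controller.signal(Signal.NEWNYM)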
However, with my spider I do not get this periodic change. In ip-log.csv I just get a huge list of the same IP address repeated over and over. What am I doing wrong?
This is the spider code I am using:
import csv

import scrapy


class Spider(scrapy.Spider):
    name = 'spider'
    dir = '/a/path'
    # create/clear the .csv files used to record scraping results and the ip used for each request
    with open(dir + 'results.csv', 'w', newline=''):
        pass
    with open(dir + 'ip-log.csv', 'w', newline=''):
        pass

    def start_requests(self):
        url = 'https://url.com'
        yield scrapy.Request(url, callback=self.parse)

    # collect listed urls
    def parse(self, response):
        path = '/response/xpath/@href'
        if response.xpath(path):
            for href in response.xpath(path).extract():
                yield scrapy.Request(url=response.urljoin(href), callback=self.save_result)
            # build the next results page by bumping the index= query parameter by 24
            url = response.request.url.split('&')
            for item in url:
                if item.startswith('index='):
                    page_index = item.split('=')[-1]
                    next_page = ['index=' + str(int(page_index) + 24) if x.startswith('index=') else x for x in url]
                    next_page = '&'.join(next_page)
                    yield scrapy.Request(url=next_page, callback=self.parse)
        # use icanhazip.com to get the ip used for the request
        yield scrapy.Request('https://icanhazip.com', callback=self.check_ip, dont_filter=True)

    # record ip
    def check_ip(self, response):
        ip = response.xpath('/html/body/p').extract()
        dir = '/a/path'
        with open(dir + '/ip-log.csv', 'a+', newline='') as f:  # write the request ip to the .csv file
            writer = csv.writer(f)
            writer.writerow([ip])
        yield scrapy.Request('https://icanhazip.com', callback=self.parse, dont_filter=True)

    # visit each url and save results
    def save_result(self, response):
        dir = '/a/path'
        path = '/desired/xpath'
        result = response.xpath(path).extract()
        with open(dir + '/results.csv', 'a+', newline='') as f:
            writer = csv.writer(f)
            writer.writerow([result])  # save results to results.csv
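In case it helps diagnose the problem: one thing I have not ruled out is that Scrapy's concurrent requests all go out before the new circuit is built. A sketch of the settings I could add to force one slow request at a time (these are standard Scrapy settings; the values are guesses on my part):

# settings.py - throttle so each request can actually ride a fresh circuit
CONCURRENT_REQUESTS = 1  # only one request in flight at a time
DOWNLOAD_DELAY = 10      # stay above Tor's ~10 s NEWNYM rate limit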