
I'm trying to scrape agency phone numbers from this site:

List view: http://www.authoradvance.com/agencies/

Detail view: http://www.authoradvance.com/agencies/b-personal-management/

The phone numbers are only shown on the detail pages.

So is it possible to crawl the site through URLs like the detail-view URL above and scrape the phone numbers?

My attempt at this is the following code:

from scrapy.item import Item, Field

class AgencyItem(Item):
    Phone = Field()

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from agentquery.items import AgencyItem


class AgencySpider(CrawlSpider):
    name = "agency"
    allowed_domains = ["authoradvance.com"]
    start_urls = ["http://www.authoradvance.com/agencies/"]
    rules = (Rule(SgmlLinkExtractor(allow=[r'agencies/*$']), callback='parse_item'),)

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select("//div[@class='section-content']")
        items = []
        for site in sites:
            item = AgencyItem()
            item['Phone'] = site.select('div[@class="phone"]/text()').extract()
            items.append(item)
        return items

Then I run scrapy crawl agency -o items.csv -t csv, and the result is that 0 pages are crawled.

What's going wrong? Thanks in advance for your help!


1 Answer


Only one link on the page matches your regular expression (agencies/*$):

stav@maia:~$ scrapy shell http://www.authoradvance.com/agencies/
2013-04-24 13:14:13-0500 [scrapy] INFO: Scrapy 0.17.0 started (bot: scrapybot)

>>> SgmlLinkExtractor(allow=[r'agencies/*$']).extract_links(response)
[Link(url='http://www.authoradvance.com/agencies', text=u'Agencies', fragment='', nofollow=False)]

That is just a link back to the page itself, and that page has no div with the class section-content:

>>> fetch('http://www.authoradvance.com/agencies')
2013-04-24 13:15:22-0500 [default] DEBUG: Crawled (200) <GET http://www.authoradvance.com/agencies> (referer: None)

>>> hxs.select("//div[@class='section-content']")
[]

So your loop never iterates and nothing is ever appended to items.

So change your regular expression to /agencies/.+ :

>>> len(SgmlLinkExtractor(allow=[r'/agencies/.+']).extract_links(response))
20

>>> fetch('http://www.authoradvance.com/agencies/agency-group')
2013-04-24 13:25:02-0500 [default] DEBUG: Crawled (200) <GET http://www.authoradvance.com/agencies/agency-group> (referer: None)

>>> hxs.select("//div[@class='section-content']")
[<HtmlXPathSelector xpath="//div[@class='section-content']" data=u'<div class="section-content">\n\t      <di'>,
 <HtmlXPathSelector xpath="//div[@class='section-content']" data=u'<div class="section-content"><div class='>]
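
For reference, a minimal sketch of the spider with only the rule changed, assuming everything else (the item class, imports, and parse_item) stays exactly as in the question; SgmlLinkExtractor is the Scrapy 0.17-era extractor already used above:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class AgencySpider(CrawlSpider):
    name = "agency"
    allowed_domains = ["authoradvance.com"]
    start_urls = ["http://www.authoradvance.com/agencies/"]
    # '/agencies/.+' matches the agency detail pages (e.g. /agencies/agency-group),
    # whereas 'agencies/*$' only matched the listing page itself.
    rules = (
        Rule(SgmlLinkExtractor(allow=[r'/agencies/.+']), callback='parse_item'),
    )

With that rule the crawler should follow the 20 links shown above and call parse_item on each detail page, so scrapy crawl agency -o items.csv -t csv will have items to export.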