
I am trying to scrape the list of fixed Chrome bugs. It works for the first and second pages, but for some reason it stops at the third page. I have DEPTH_LIMIT = 1 set in settings.py. Could this be related to some Chrome policy that limits how much data can be scraped? Thanks in advance!

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector

from craigslist_sample.items import CraiglistSampleItem  # adjust to your project's items module


class MySpider(CrawlSpider):
    name = "craig"
    start_urls = ["http://code.google.com/p/chromium/issues/list?can=1&q=status%3Afixed&sort=&groupby=&colspec=ID+Pri+M+Iteration+ReleaseBlock+Cr+Status+Owner+Summary+OS+Modified+Type+Priority+Milestone+Attachments+Stars+Opened+Closed+BlockedOn+Blocking+Blocked+MergedInto+Reporter+Cc+Project+Os+Mstone+Releaseblock+Build+Size+Restrict+Security_severity+Security_impact+Area+Stability+Not+Crash+Internals+Movedfrom+Okr+Review+Taskforce+Valgrind+Channel+3rd"]

    rules = (
        # Follow the "Next" pagination link; restrict_xpaths should select the <a> element, not its @href
        Rule(SgmlLinkExtractor(restrict_xpaths=('//a[starts-with(., "Next")]',))),
        Rule(SgmlLinkExtractor(allow=("status%3Afixed",), deny=("detail?",)),
             callback="parse_items", follow=True),
    )

    def parse_items(self, response):
        hxs = HtmlXPathSelector(response)
        table = hxs.select("//table[@id='resultstable']")

        items = []
        for count in range(1, 100):
            row = table.select("tr[" + str(count) + "][@class='ifOpened cursor_off']")
            item = CraiglistSampleItem()

            item["summary"] = row.select("td[@class='vt col_8'][2]/a/text()").extract()
            item["summary"] = str(item["summary"][0].encode("ascii", "ignore")).strip()

            item["id"] = row.select("td[@class='vt id col_0']/a/text()").extract()
            item["id"] = str(item["id"][0].encode("ascii", "ignore")).strip()

            print item["summary"]
            items.append(item)

        return items

1 Answer


Well, it is exactly the DEPTH_LIMIT = 1. The third page is at depth 2, so it is not crawled. Set DEPTH_LIMIT = 0 and your spider will work.
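For reference, a minimal settings.py sketch (DEPTH_LIMIT is a standard Scrapy setting; the rest is just illustrative):

# settings.py
# DEPTH_LIMIT = 0 disables the depth limit entirely (0 is also Scrapy's default).
DEPTH_LIMIT = 0

You can also override it for a single run from the command line, e.g. scrapy crawl craig -s DEPTH_LIMIT=0.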

answered 2013-09-08T16:55:43.960