
I have an item object and I need to pass it along through many pages, storing the data in a single item.

My item looks like this:

from scrapy.item import Item, Field

class DmozItem(Item):
    title = Field()
    description1 = Field()
    description2 = Field()
    description3 = Field()

Now, those three descriptions live on three separate pages, and I want to do something along these lines.

For now, this works for parseDescription1:

def page_parser(self, response):
    sites = hxs.select('//div[@class="row"]')
    items = []
    request = Request("http://www.example.com/lin1.cpp", callback=self.parseDescription1)
    request.meta['item'] = item
    return request

def parseDescription1(self,response):
    item = response.meta['item']
    item['desc1'] = "test"
    return item

But I want something like this:

def page_parser(self, response):
    sites = hxs.select('//div[@class="row"]')
    items = []
    request = Request("http://www.example.com/lin1.cpp", callback=self.parseDescription1)
    request.meta['item'] = item

    request = Request("http://www.example.com/lin1.cpp", callback=self.parseDescription2)
    request.meta['item'] = item

    request = Request("http://www.example.com/lin1.cpp", callback=self.parseDescription3)
    request.meta['item'] = item

    return request

def parseDescription1(self,response):
    item = response.meta['item']
    item['desc1'] = "test"
    return item

def parseDescription2(self,response):
    item = response.meta['item']
    item['desc2'] = "test2"
    return item

def parseDescription3(self,response):
    item = response.meta['item']
    item['desc3'] = "test3"
    return item

4 Answers


No problem. Here is a corrected version of your code:

def page_parser(self, response):
    sites = hxs.select('//div[@class="row"]')
    items = []

    request = Request("http://www.example.com/lin1.cpp", callback=self.parseDescription1)
    request.meta['item'] = item
    yield request

    request = Request("http://www.example.com/lin1.cpp", callback=self.parseDescription2, meta={'item': item})
    yield request

    yield Request("http://www.example.com/lin1.cpp", callback=self.parseDescription3, meta={'item': item})

def parseDescription1(self, response):
    item = response.meta['item']
    item['desc1'] = "test"
    return item

def parseDescription2(self, response):
    item = response.meta['item']
    item['desc2'] = "test2"
    return item

def parseDescription3(self, response):
    item = response.meta['item']
    item['desc3'] = "test3"
    return item
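Note that with this version each callback returns its own copy of the item, so the pipeline receives three partial items rather than one complete one. A framework-free sketch of merging such partial items by a shared key (the `merge_partial_items` helper and the `"title"` key are hypothetical, not part of Scrapy) could look like:

```python
def merge_partial_items(partials, key="title"):
    """Merge partial item dicts that share the same value for `key`."""
    merged = {}
    for part in partials:
        # setdefault creates one accumulator dict per key value,
        # then each partial item's fields are folded into it
        merged.setdefault(part[key], {}).update(part)
    return list(merged.values())

partials = [
    {"title": "page", "desc1": "test"},
    {"title": "page", "desc2": "test2"},
    {"title": "page", "desc3": "test3"},
]
items = merge_partial_items(partials)
# items == [{"title": "page", "desc1": "test", "desc2": "test2", "desc3": "test3"}]
```

In Scrapy this merging would have to happen in an item pipeline, since the partial items arrive there independently and in no guaranteed order.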
Answered 2012-12-17T09:51:04.160

In order to guarantee an ordering of the requests/callbacks, and that only one item is ultimately returned, you need to chain your requests using a form like this:

def page_parser(self, response):
    sites = hxs.select('//div[@class="row"]')
    items = []

    request = Request("http://www.example.com/lin1.cpp", callback=self.parseDescription1)
    request.meta['item'] = Item()
    return [request]

def parseDescription1(self, response):
    item = response.meta['item']
    item['desc1'] = "test"
    return [Request("http://www.example.com/lin2.cpp", callback=self.parseDescription2, meta={'item': item})]

def parseDescription2(self, response):
    item = response.meta['item']
    item['desc2'] = "test2"
    return [Request("http://www.example.com/lin3.cpp", callback=self.parseDescription3, meta={'item': item})]

def parseDescription3(self, response):
    item = response.meta['item']
    item['desc3'] = "test3"
    return [item]

Each callback function returns an iterable of items or requests; requests are scheduled, and items are run through your item pipeline.

If you return an item from each of the callbacks, you'll end up with four items in various states of completeness in your pipeline; but if you return the next request instead, then you can guarantee the order of the requests, and that you will have exactly one item at the end of execution.
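The chaining logic can be exercised without Scrapy at all. The toy scheduler below (hypothetical names; plain dicts and tuples stand in for `Item` and `Request`) shows that exactly one fully populated item falls out of the chain:

```python
def parse1(item):
    item["desc1"] = "test"
    return ("request", parse2, item)   # stand-in for returning the next Request

def parse2(item):
    item["desc2"] = "test2"
    return ("request", parse3, item)

def parse3(item):
    item["desc3"] = "test3"
    return ("item", None, item)        # terminal callback returns the item

def run_chain(first_callback):
    """Toy scheduler: follow the chain until a callback returns an item."""
    kind, cb, item = "request", first_callback, {}
    while kind == "request":
        kind, cb, item = cb(item)
    return item

result = run_chain(parse1)
# result == {"desc1": "test", "desc2": "test2", "desc3": "test3"}
```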

Answered 2013-04-23T19:22:44.913

The accepted answer returns a total of three items [with desc(i) set for i=1,2,3].

If you want to return a single item, Dave McLain's approach does work; however, it requires parseDescription1, parseDescription2, and parseDescription3 to all succeed and run without errors in order for the item to be returned.

For my use case, some of the sub-requests MAY randomly return HTTP 403/404 errors, so I was losing some items, even though I could have scraped them partially.


Workaround

Accordingly, I currently use the following workaround: instead of only passing the item around in the request.meta dict, pass along a call stack that knows what request to call next. It will call the next item on the stack (as long as it isn't empty), and returns the item if the stack is empty.

The errback request parameter is used to return to the dispatcher method upon errors, and to simply continue with the next stack item.

def callnext(self, response):
    ''' Call next target for the item loader, or yields it if completed. '''

    # Get the meta object from the request, as the response
    # does not contain it.
    meta = response.request.meta

    # Items remaining in the stack? Execute them
    if len(meta['callstack']) > 0:
        target = meta['callstack'].pop(0)
        yield Request(target['url'], meta=meta, callback=target['callback'], errback=self.callnext)
    else:
        yield meta['loader'].load_item()

def parseDescription1(self, response):

    # Recover item(loader)
    l = response.meta['loader']

    # Use just as before
    l.add_css(...)

    # Build the call stack and store it in the meta dict,
    # so that callnext() can pop the next target from it
    callstack = [
        {'url': "http://www.example.com/lin2.cpp",
         'callback': self.parseDescription2},
        {'url': "http://www.example.com/lin3.cpp",
         'callback': self.parseDescription3}
    ]
    response.meta['callstack'] = callstack

    return self.callnext(response)

def parseDescription2(self, response):

    # Recover item(loader)
    l = response.meta['loader']

    # Use just as before
    l.add_css(...)

    return self.callnext(response)


def parseDescription3(self, response):

    # ...

    return self.callnext(response)
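Stripped of Scrapy, the call-stack pattern reduces to the driver loop below (all names hypothetical; a raised exception plays the role of a failed request hitting the errback). A step that fails is simply skipped, so a partially filled item still comes out at the end:

```python
def scrape_with_callstack(steps):
    """Run steps in order; on error, skip to the next one (like an errback)."""
    item = {}
    callstack = list(steps)
    while callstack:
        step = callstack.pop(0)
        try:
            step(item)
        except RuntimeError:
            pass  # simulated HTTP 403/404: keep the partial item, continue

    return item

def fill_desc1(item):
    item["desc1"] = "test"

def fill_desc2(item):
    raise RuntimeError("HTTP 403")  # this sub-request fails

def fill_desc3(item):
    item["desc3"] = "test3"

item = scrape_with_callstack([fill_desc1, fill_desc2, fill_desc3])
# item == {"desc1": "test", "desc3": "test3"}
```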

Caveats

This solution is still synchronous, and will still fail if any of the callbacks raise an exception.

For more information, check out the blog post I wrote about this solution.

Answered 2014-08-29T15:16:55.527

All of the answers provided have their pros and cons. I'm just adding one more to demonstrate how this has been simplified by changes in the codebases (both Python and Scrapy). We no longer need to use meta, and can instead use cb_kwargs (i.e. keyword arguments passed to the callback function).

So instead of doing this:

def page_parser(self, response):
    sites = hxs.select('//div[@class="row"]')
    items = []

    request = Request("http://www.example.com/lin1.cpp",
                      callback=self.parseDescription1)
    request.meta['item'] = Item()
    return [request]


def parseDescription1(self, response):
    item = response.meta['item']
    item['desc1'] = "test"
    return [Request("http://www.example.com/lin2.cpp",
                    callback=self.parseDescription2, meta={'item': item})]
...

We can do this:

def page_parser(self, response):
    sites = hxs.select('//div[@class="row"]')
    items = []

    yield response.follow("http://www.example.com/lin1.cpp",
                          callback=self.parseDescription1,
                          cb_kwargs={"item": Item()})


def parseDescription1(self, response, item):
    item['desc1'] = "More data from this new response"
    yield response.follow("http://www.example.com/lin2.cpp",
                          callback=self.parseDescription2,
                          cb_kwargs={'item': item})
...

If for some reason you have multiple links you want to process with the same function, we can swap

yield response.follow(a_single_url,
                      callback=some_function,
                      cb_kwargs={"data": to_pass_to_callback})

for

yield from response.follow_all([many, urls, to, parse],
                               callback=some_function,
                               cb_kwargs={"data": to_pass_to_callback})
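Underneath, cb_kwargs is nothing more than keyword arguments forwarded to the callback. A minimal stand-in (the `follow` helper and fake response dict here are illustrative, not Scrapy's API) makes the mechanism obvious:

```python
def follow(url, callback, cb_kwargs=None):
    """Toy stand-in for response.follow: fetches nothing, just invokes the
    callback with a fake response plus the forwarded keyword arguments."""
    fake_response = {"url": url}
    return callback(fake_response, **(cb_kwargs or {}))

def parse_description(response, item):
    # The item arrives as a plain keyword argument, no meta dict needed
    item["desc1"] = "data from " + response["url"]
    return item

result = follow("http://www.example.com/lin1.cpp", parse_description,
                cb_kwargs={"item": {}})
# result == {"desc1": "data from http://www.example.com/lin1.cpp"}
```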
Answered 2020-05-14T09:57:27.170