
I'm very new to Python and to programming in general. I'm trying to build a web crawler that scrapes data from Overwatch player pages (for example: https://playoverwatch.com/en-gb/career/pc/eu/Taimou-2526). I tried Portia and it works in the cloud, but when I export it as Scrapy code I can't get it to work. Here is a screenshot of my Portia spider.

Here is my spider's code (exported from Portia to Scrapy), owData.py:

#!/usr/bin/python
# -*- coding: utf-8 -*-
from __future__ import absolute_import

from scrapy import Request
from scrapy.linkextractors import LinkExtractor
from scrapy.loader import ItemLoader
from scrapy.loader.processors import Identity
from scrapy.spiders import Rule

from utils.spiders import BasePortiaSpider
from utils.starturls import FeedGenerator, FragmentGenerator
from utils.processors import (Item, Field, Text, Number, Price, Date, Url,
                              Image, Regex)
from items import PortiaItem

class Owdata(BasePortiaSpider):

    name = 'owData'
    allowed_domains = [u'playoverwatch.com']
    start_urls = \
        [u'https://playoverwatch.com/en-gb/career/pc/eu/Taimou-2526']
    rules = [Rule(LinkExtractor(allow=(), deny='.*'),
             callback='parse_item', follow=True)]


    items = [[]]
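
From what I can tell, in these Portia exports the items class attribute is what tells BasePortiaSpider which containers and fields to extract, so an empty [[]] would explain why nothing gets scraped. I think it is supposed to look roughly like the sketch below; the Item(...)/Field(...) call shapes are my assumption based on other Portia exports, and the CSS selectors are placeholders I made up, not taken from the real page:

# hypothetical sketch of a populated Portia "items" attribute --
# the selectors are placeholders and the Item/Field signatures are
# assumed from other Portia exports, so treat this as illustration only
items = [
    [
        Item(
            CareerOverviewOverwatch1Item,  # would also need importing from items
            None,                          # no explicit name
            '.career-stats-section',       # container selector (guess)
            [
                Field('melee_final_blows',
                      'tbody tr:nth-child(1) td:nth-child(2) *::text', []),
                Field('table', 'tbody *::text', []),
            ]
        )
    ]
]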

Here is my items.py code:

from __future__ import absolute_import

import scrapy
from collections import defaultdict
from scrapy.loader.processors import Join, MapCompose, Identity
from w3lib.html import remove_tags
from .utils.processors import Text, Number, Price, Date, Url, Image


class PortiaItem(scrapy.Item):
    fields = defaultdict(
        lambda: scrapy.Field(
            input_processor=Identity(),
            output_processor=Identity()
        )
    )

    def __setitem__(self, key, value):
        self._values[key] = value

    def __repr__(self):
        data = str(self)
        if not data:
            return '%s' % self.__class__.__name__
        return '%s(%s)' % (self.__class__.__name__, data)

    def __str__(self):
        if not self._values:
            return ''
        string = super(PortiaItem, self).__repr__()
        return string


class CareerOverviewOverwatch1Item(PortiaItem):
    field1 = scrapy.Field(
        input_processor=Text(),
        output_processor=Join(),
    )
    melee_final_blows = scrapy.Field(
        input_processor=Text(),
        output_processor=Join(),
    )
    table = scrapy.Field(
        input_processor=Text(),
        output_processor=Join(),
    )
    tr = scrapy.Field(
        input_processor=Text(),
        output_processor=Join(),
    )
When I run my spider with the following command:

scrapy crawl owData -o data.csv

I just get an empty data.csv file. I'm guessing something is wrong with my items? I think the XPath line should just be //tbody, but again, I know almost nothing about Python, XPath or Scrapy...
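
To show what I'm ultimately after, here is a plain Scrapy spider sketch (no Portia) using that //tbody idea. The XPath and the assumption that each row holds a stat name followed by its value are guesses of mine and not verified against the page:

# plain-Scrapy sketch, just to show the kind of extraction I want;
# the //tbody XPath and the cell layout are guesses, not checked against the page
import scrapy


class OwStatsSpider(scrapy.Spider):
    name = 'owStats'
    allowed_domains = ['playoverwatch.com']
    start_urls = ['https://playoverwatch.com/en-gb/career/pc/eu/Taimou-2526']

    def parse(self, response):
        # walk every table row and yield the first two cells as stat name/value
        for row in response.xpath('//tbody/tr'):
            cells = [c.strip() for c in row.xpath('./td//text()').extract() if c.strip()]
            if len(cells) >= 2:
                yield {'stat': cells[0], 'value': cells[1]}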

