
I'm using Scrapy in a project where I extract information from XML.

In the XML document, this is the structure I want to iterate over with a for loop:

<relatedPersonsList>
    <relatedPersonInfo>...</relatedPersonInfo>
    <relatedPersonInfo>
        <relatedPersonName>
            <firstName>Mark</firstName>
            <middleName>E.</middleName>
            <lastName>Lucas</lastName>
        </relatedPersonName>
        <relatedPersonAddress>
            <street1>1 IMATION WAY</street1>
            <city>OAKDALE</city>
            <stateOrCountry>MN</stateOrCountry>
            <stateOrCountryDescription>MINNESOTA</stateOrCountryDescription>
            <zipCode>55128</zipCode>
        </relatedPersonAddress>
        <relatedPersonRelationshipList>
            <relationship>Executive Officer</relationship>
            <relationship>Director</relationship>
        </relatedPersonRelationshipList>
        <relationshipClarification/>
    </relatedPersonInfo>
    <relatedPersonInfo>...</relatedPersonInfo>
    <relatedPersonInfo>...</relatedPersonInfo>
    <relatedPersonInfo>...</relatedPersonInfo>
    <relatedPersonInfo>...</relatedPersonInfo>
    <relatedPersonInfo>...</relatedPersonInfo>
</relatedPersonsList>

As you can see, inside <relatedPersonsList> there can be multiple <relatedPersonInfo> elements. But when I try to use a for loop, I still only get the information for the first person.

Here is my actual loop code:

    for person in xxs.select('./relatedPersonsList/relatedPersonInfo'):
        item = Myform()  # even if I get rid of this, I get the same result

        item["firstName"] = person.select('./relatedPersonName/firstName/text()').extract()[0]
        item["middleName"] = person.select('./relatedPersonName/middleName/text()')
        if item["middleName"]:
            item["middleName"] = item["middleName"].extract()[0]
        else:
            item["middleName"] = "NA"

And here is the code I use in my spider:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.selector import XmlXPathSelector

from scrapy.http import Request
import urlparse
from formds.items import SecformD

class SecDform(CrawlSpider):
    name = "DFORM"

    allowed_domain = ["http://www..gov"]
    start_urls = [
        ""
    ]

    rules = (

        Rule(
            SgmlLinkExtractor(restrict_xpaths=["/html/body/div/table/tr/td[3]/a[2]"]),
            callback='parse_formd',
            # follow=True is not needed here
        ),
        Rule(
            SgmlLinkExtractor(restrict_xpaths=('/html/body/div/center[1]/a[contains(., "[NEXT]")]')),
            follow=True
        ),
    )

    def parse_formd(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//*[@id="formDiv"]/div/table/tr[3]/td[3]/a/@href').extract()
        for site in sites:
            yield Request(url=urlparse.urljoin(response.url, site), callback=self.parse_xml_document)

    def parse_xml_document(self, response):
        xxs = XmlXPathSelector(response)
        item = SecformD()
        item["stateOrCountryDescription"] = xxs.select('./primaryIssuer/issuerAddress/stateOrCountryDescription/text()').extract()[0]
        item["zipCode"] = xxs.select('./primaryIssuer/issuerAddress/zipCode/text()').extract()[0]
        item["issuerPhoneNumber"] = xxs.select('./primaryIssuer/issuerPhoneNumber/text()').extract()[0]
        for person in xxs.select('./relatedPersonsList//relatedPersonInfo'):
            #item = SecDform()

            item["firstName"] = person.select('./relatedPersonName/firstName/text()').extract()[0]
            item["middleName"] = person.select('./relatedPersonName/middleName/text()')
            if item["middleName"]:
                item["middleName"] = item["middleName"].extract()[0]
            else:
                item["middleName"] = "NA"
        return item

I extract the information to a .json file with the following command: scrapy crawl DFORM -o tes4.json -t json


1 Answer


Try something like this:

def parse_xml_document(self, response):

    xxs = XmlXPathSelector(response)

    items = []

    # common field values
    stateOrCountryDescription = xxs.select('./primaryIssuer/issuerAddress/stateOrCountryDescription/text()').extract()[0]
    zipCode = xxs.select('./primaryIssuer/issuerAddress/zipCode/text()').extract()[0]
    issuerPhoneNumber = xxs.select('./primaryIssuer/issuerPhoneNumber/text()').extract()[0]

    for person in xxs.select('./relatedPersonsList//relatedPersonInfo'):

        # instantiate one item per loop iteration
        item = SecformD()

        # save common parameters
        item["stateOrCountryDescription"] = stateOrCountryDescription
        item["zipCode"] = zipCode
        item["issuerPhoneNumber"] = issuerPhoneNumber

        item["firstName"] = person.select('./relatedPersonName/firstName/text()').extract()[0]
        item["middleName"] = person.select('./relatedPersonName/middleName/text()')
        if item["middleName"]:
            item["middleName"] = item["middleName"].extract()[0]
        else:
            item["middleName"] = "NA"

        items.append(item)

    return items
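The key point generalizes beyond Scrapy: instantiate a new item (or dict) on every loop iteration; otherwise each pass overwrites the fields of the same object and you end up with only one result. Here is a minimal, self-contained sketch of the same pattern using only the standard library's xml.etree.ElementTree (the second person is a made-up example, not from the original filing):

```python
import xml.etree.ElementTree as ET

XML = """<relatedPersonsList>
    <relatedPersonInfo>
        <relatedPersonName>
            <firstName>Mark</firstName>
            <middleName>E.</middleName>
            <lastName>Lucas</lastName>
        </relatedPersonName>
    </relatedPersonInfo>
    <relatedPersonInfo>
        <relatedPersonName>
            <firstName>Jane</firstName>
            <lastName>Doe</lastName>
        </relatedPersonName>
    </relatedPersonInfo>
</relatedPersonsList>"""

def parse_people(xml_text):
    root = ET.fromstring(xml_text)
    items = []
    for person in root.findall('./relatedPersonInfo'):
        # a NEW dict on every iteration -- reusing one object
        # would make every loop pass overwrite the previous person
        item = {}
        item["firstName"] = person.findtext('./relatedPersonName/firstName')
        # findtext() returns None when the element is missing
        item["middleName"] = person.findtext('./relatedPersonName/middleName') or "NA"
        items.append(item)
    return items
```

In current Scrapy versions you would also typically `yield` one item per loop iteration from the callback instead of building and returning a list, but the one-object-per-iteration rule is the same.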
answered 2013-09-03T18:58:05.293