
I am running into this error with the Scrapy framework. This is my dmoz.py in the spiders directory:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector

from dirbot.items import Website


class DmozSpider(BaseSpider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    f = open("links.csv")
    start_urls = [url.strip() for url in f.readlines()]
    f.close()
    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//ul/li')
        items = []

        for site in sites:
            item = Website()
            item['name'] = site.select('a/text()').extract()
            item['url'] = site.select('a/@href').extract()
            item['description'] = site.select('text()').extract()
            items.append(item)

        return items

When I run this code, I get this error:

<GET %22http://www.astate.edu/%22>: Unsupported URL scheme '': no handler available for that scheme in Scrapy

This is the content of my links.csv:

http://www.atsu.edu/
http://www.atsu.edu/
http://www.atsu.edu/
http://www.atsu.edu/
http://www.atsu.edu/
http://www.atsu.edu/
http://www.atsu.edu/
http://www.atsu.edu/
http://www.atsu.edu/
http://www.atsu.edu/
http://www.atsu.edu/
http://www.atsu.edu/
http://www.atsu.edu/
http://www.atsu.edu/
http://www.atsu.edu/
http://www.atsu.edu/

There are 80 URLs in links.csv. How can I fix this error?


1 Answer


%22 is a URL-encoded ". Your CSV file probably contains lines like this:

"http://example.com/"
  1. read the file with the csv module, or
  2. strip the "s.

Edit: as requested:

'"http://example.com/"'.strip('"')

Edit 2:

import csv
from io import StringIO

c = '"foo"\n"bar"\n"baz"\n'      # Since csv.reader needs a file-like object,
reader = csv.reader(StringIO(c)) # wrap c in a StringIO.
for line in reader:
    print(line[0])

Final edit:

import csv

with open("links.csv") as f:
    r = csv.reader(f)
    start_urls = [l[0] for l in r]
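As a quick self-contained check that both approaches clean up quoted lines the same way (the file contents below are made-up sample data, not the asker's actual links.csv):

```python
import csv
import io

# Simulated contents of a links.csv whose lines are wrapped in quotes
raw = '"http://www.atsu.edu/"\n"http://www.astate.edu/"\n'

# Option 1: let the csv module undo the quoting
with io.StringIO(raw) as f:
    via_csv = [row[0] for row in csv.reader(f)]

# Option 2: strip whitespace and the surrounding quotes by hand
via_strip = [line.strip().strip('"') for line in raw.splitlines()]

print(via_csv)    # ['http://www.atsu.edu/', 'http://www.astate.edu/']
print(via_strip)  # same result
```

Either list can then be assigned to start_urls; the csv variant is more robust if the file ever contains embedded commas or escaped quotes.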
answered 2012-11-08T09:45:15.777