
This question is an extension of one already answered here, viz. Crawling LinkedIn while authenticated with Scrapy, by @Gates.

I kept the base of the script the same, only adding my own session_key and session_password, and changing the start URL for my use case, as below.

class LinkedPySpider(InitSpider):
    name = 'Linkedin'
    allowed_domains = ['linkedin.com']
    login_page = 'https://www.linkedin.com/uas/login'
    start_urls = ["http://www.linkedin.com/nhome/"]

    # Also tried with this start URL:
    # start_urls = ["http://www.linkedin.com/profile/view?id=38210724&trk=nav_responsive_tab_profile"]

I also tried changing start_url to the second one (commented out above), to see whether I could start scraping from my own profile page, but that did not work either.

**Error that I get** - 
scrapy crawl Linkedin
**2013-07-29 11:37:10+0530 [Linkedin] DEBUG: Retrying <GET http://www.linkedin.com/nhome/> (failed 1 times): DNS lookup failed: address 'your.proxy.com' not found: [Errno -5] No address associated with hostname.**


**To see if the Name space was resolved, I tried -:**
nslookup www.linkedin.com #works
nslookup www.linkedin.com/uas/login # fails - I think a page path within a main website does not resolve, and that is normal, right?
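As an aside, nslookup resolves hostnames only, not full URLs, which is why the second lookup misbehaves. A small sketch (pure shell string handling, no network access needed) of stripping a URL down to its hostname before resolving it:

```shell
# nslookup takes a hostname, not a URL - strip the scheme and path first.
url="http://www.linkedin.com/uas/login"
host="${url#*://}"   # drop the scheme -> www.linkedin.com/uas/login
host="${host%%/*}"   # drop the path   -> www.linkedin.com
echo "$host"         # → www.linkedin.com
# nslookup "$host"   # now resolves the same as the bare domain
```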

Then, to check whether the error could be due to the name server not resolving, I appended nameservers as below.
echo $http_proxy #gives http://username:password@your.proxy.com:80
sudo vi /etc/resolv.conf
and appended the IP addresses of free, fast DNS nameservers to this file:
nameserver 208.67.222.222
nameserver 208.67.220.220
nameserver 202.51.5.52

I'm not well-versed in NS conflicts and DNS lookup failures, but could this be because I'm in a virtual machine? Although my other scraping projects seem to work just fine.

My basic use case is to extract the list of connections and the companies they work at, along with a bunch of other attributes. So I want to scrape/paginate through "Connections" (all) on the main profile page, which is not shown if I use a public profile in start_url, i.e. scrapy shell http://www.linkedin.com/in/ektagrover. Passing a legitimate XPath through hxs.select seems to work there, but not when I use it with the spider, since it does not meet my basic use case (just described).

Question: Is something wrong with my start_url, or is my assumption wrong that, post-authentication, the start page could be any page on the site, since I'm redirected after authenticating at https://www.linkedin.com/uas/login?

Working environment - I'm on an Oracle VM VirtualBox with Ubuntu 12.04 LTS, Python 2.7.3, and Scrapy 0.14.4.

**What worked / Answer** - It looks like my proxy server was wrongly set: echo $http_proxy gave http://username:password@your.proxy.com:80. [Unset the environment variable $http_proxy] I just did "http_proxy=", which cleared the proxy, then did echo $http_proxy, which printed nothing to confirm it. After that the LinkedIn crawl worked, passing through the authentication module. I'm stuck here and there with Selenium, but that's a different question. Thank you, @warwaruk.
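For reference, `http_proxy=` and `unset http_proxy` are subtly different: the first leaves the variable set but empty (enough to disable the proxy for most tools), while the second removes it entirely. A quick sketch:

```shell
export http_proxy="http://username:password@your.proxy.com:80"  # the broken setting

http_proxy=                    # variable still exists, but is empty
echo "now: '$http_proxy'"      # → now: ''

unset http_proxy               # variable is removed entirely
echo "${http_proxy-<unset>}"   # → <unset>
```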


1 Answer

echo $http_proxy #gives http://username:password@your.proxy.com:80

You have a proxy set: http://username:password@your.proxy.com:80.

Apparently, it does not exist on the Internet:

$ nslookup your.proxy.com
Server:         127.0.1.1
Address:        127.0.1.1#53

** server can't find your.proxy.com: NXDOMAIN

Unset the $http_proxy environment variable, or set up a working proxy and change the environment variable accordingly.
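If the bogus value keeps coming back in new shells, it is probably exported from a startup file. A sketch of hunting it down (the file list is an assumption for a typical Ubuntu setup; adjust for yours):

```shell
# Look for where the proxy is exported; the file list is a guess for Ubuntu.
grep -n "http_proxy" ~/.bashrc ~/.profile /etc/environment 2>/dev/null || true

# For the current shell session, just drop it:
unset http_proxy
[ -z "${http_proxy+set}" ] && echo "http_proxy is gone"   # → http_proxy is gone
```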

Answered 2013-07-29T06:37:49.887