
I want to log in to my Yahoo account from a script running on an Ubuntu server. I tried using Python with mechanize, but there is a flaw in my plan.

Here is the code I have so far.

import cookielib
import mechanize

loginurl = "https://login.yahoo.com/config/login"

br = mechanize.Browser()
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)

# Browser options
br.set_handle_equiv(True)
br.set_handle_gzip(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)
br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]

# Fetch the login page, fill in the first form and submit it
r = br.open(loginurl)
html = r.read()
br.select_form(nr=0)
br.form['login'] = '[mylogin]'
br.form['passwd'] = '[mypassword]'
br.submit()

print br.response().read()

The response I get back is the Yahoo login page with some bold red text saying "Javascript must be enabled on your browser", or something along those lines. The mechanize docs have a section about pages that set cookies with JavaScript, but the help page it points to returns HTTP 400 (just my luck).

Figuring out what the JavaScript does and then doing it manually sounds like a very difficult task. I am more than willing to switch to any tool/language, as long as it can run on an Ubuntu server, even if that means logging in with a different tool and then passing the login cookies back to my Python script. Any help/suggestions are appreciated.
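For the "pass the login cookies back to my Python script" part: cookielib can already write a cookie jar to disk and read it back, so whatever tool manages to log in only needs to leave a cookie file behind. A minimal sketch (cookies.txt is an arbitrary name, and the login step itself is still the open problem):

import cookielib
import mechanize

# In the process that manages to log in: persist the jar to disk.
cj = cookielib.LWPCookieJar()
br = mechanize.Browser()
br.set_cookiejar(cj)
# ... perform the login with br here ...
cj.save('cookies.txt', ignore_discard=True, ignore_expires=True)

# In the script that needs the authenticated session: load the same file.
cj2 = cookielib.LWPCookieJar()
cj2.load('cookies.txt', ignore_discard=True, ignore_expires=True)
br2 = mechanize.Browser()
br2.set_cookiejar(cj2)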

Update:

  • I don't want to use the Yahoo API

  • I have also tried Scrapy, but I think it will run into the same problem

My Scrapy script:

from scrapy.spider import BaseSpider
from scrapy.http import Request, FormRequest
from scrapy.selector import HtmlXPathSelector
from scrapy import log

class YahooSpider(BaseSpider):
    name = "yahoo"
    start_urls = [
        "https://login.yahoo.com/config/login?.intl=us&.lang=en-US&.partner=&.last=&.src=&.pd=_ver%3D0%26c%3D%26ivt%3D%26sg%3D&pkg=&stepid=&.done=http%3a//my.yahoo.com"
    ]

    def parse(self, response):
        x = HtmlXPathSelector(response)
        print x.select("//input/@value").extract()
        return [FormRequest.from_response(response,
                    formdata={'login': '[my username]', 'passwd': '[mypassword]'},
                    callback=self.after_login)]

    def after_login(self, response):
        # check login succeeded before going on
        if response.url == 'http://my.yahoo.com':
            return Request("[where i want to go next]",
                      callback=self.next_page, errback=self.error, dont_filter=True)
        else:
            print response.url
            self.log("Login failed.", level=log.CRITICAL)

    def next_page(self, response):
        x = HtmlXPathSelector(response)
        print x.select("//title/text()").extract()

The Scrapy script only prints "https://login.yahoo.com/config/login"... boo.


6 Answers


I am surprised that this works:

Python 2.6.6 (r266:84292, Dec 26 2010, 22:31:48)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from BeautifulSoup import BeautifulSoup as BS
>>> import requests
>>> r = requests.get('https://login.yahoo.com/')
>>> soup = BS(r.text)
>>> login_form = soup.find('form', attrs={'name':'login_form'})
>>> hiddens = login_form.findAll('input', attrs={'type':'hidden'})
>>> payload = {}
>>> for h in hiddens:
...     payload[str(h.get('name'))] = str(h.get('value'))
...
>>> payload['login'] = 'testtest481@yahoo.com'
>>> payload['passwd'] = '********'
>>> post_url = str(login_form.get('action'))
>>> r2 = requests.post(post_url, cookies=r.cookies, data=payload)
>>> r3 = requests.get('http://my.yahoo.com', cookies=r2.cookies)
>>> page = r3.text
>>> pos = page.find('testtest481')
>>> print page[ pos - 50 : pos + 300 ]
   You are signed in as: <span class="yuhead-yid">testtest481</span>        </li>    </ul></li><li id="yuhead-me-signout" class="yuhead-me"><a href="
http://login.yahoo.com/config/login?logout=1&.direct=2&.done=http://www.yahoo.com&amp;.src=my&amp;.intl=us&amp;.lang=en-US" target="_top" rel="nofoll
ow">            Sign Out        </a><img width='0' h
>>>

Please give this a try:

"""                                                                        
ylogin.py - how-to-login-to-yahoo-programatically-from-an-ubuntu-server    

http://stackoverflow.com/questions/11974478/                               
Test my.yahoo.com login using requests and BeautifulSoup.                  
"""                                                                        

from BeautifulSoup import BeautifulSoup as BS                              
import requests                                                            

CREDS = {'login': 'CHANGE ME',                                             
         'passwd': 'CHANGE ME'}                                            
URLS = {'login': 'https://login.yahoo.com/',                               
        'post': 'https://login.yahoo.com/config/login?',                   
        'home': 'http://my.yahoo.com/'}                                    

def test():                                                                
    cookies = get_logged_in_cookies()                                      
    req_with_logged_in_cookies = requests.get(URLS['home'], cookies=cookies)    
    assert 'You are signed in' in req_with_logged_in_cookies.text
    print "If you can see this message you must be logged in." 

def get_logged_in_cookies():                                               
    req = requests.get(URLS['login'])                                      
    hidden_inputs = BS(req.text).find('form', attrs={'name':'login_form'})\
                                .findAll('input', attrs={'type':'hidden'}) 
    data = dict(CREDS.items() + dict( (h.get('name'), h.get('value')) \    
                                         for h in hidden_inputs).items() ) 
    post_req = requests.post(URLS['post'], cookies=req.cookies, data=data) 
    return post_req.cookies                                                

test()                                                                     

Add error handling as needed.
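One possible shape for that error handling, reusing CREDS, URLS and BS from the script above (a sketch only, not verified against Yahoo's current pages):

import requests
from requests.exceptions import RequestException

def get_logged_in_cookies_checked():
    """Same flow as get_logged_in_cookies(), with basic failure checks added."""
    try:
        req = requests.get(URLS['login'])
        req.raise_for_status()                      # bail out on HTTP errors
        form = BS(req.text).find('form', attrs={'name': 'login_form'})
        if form is None:                            # page layout changed?
            raise ValueError("login_form not found on %s" % URLS['login'])
        hidden_inputs = form.findAll('input', attrs={'type': 'hidden'})
        data = dict(CREDS.items() +
                    dict((h.get('name'), h.get('value')) for h in hidden_inputs).items())
        post_req = requests.post(URLS['post'], cookies=req.cookies, data=data)
        post_req.raise_for_status()
        return post_req.cookies
    except RequestException, e:                     # connection / HTTP problems
        raise SystemExit("Yahoo login failed: %s" % e)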

Answered 2012-08-17T23:17:09.123

If the page uses JavaScript, you might consider using something like ghost.py rather than requests or mechanize. ghost.py hosts a WebKit client and should be able to handle these tricky cases with minimal effort.
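Roughly, a login with ghost.py would look like the sketch below. It is written from memory of ghost.py's documented fill()/fire_on() helpers, and the form selector and field names for Yahoo's page are assumptions taken from the HTML scraped above, so treat it as a starting point rather than working code:

from ghost import Ghost

ghost = Ghost()

# Open the login page in the headless WebKit client; its JavaScript runs here.
page, resources = ghost.open('https://login.yahoo.com/')

# Fill and submit the login form (selector and field names are assumptions).
ghost.fill('form[name=login_form]', {'login': '[mylogin]', 'passwd': '[mypassword]'})
page, resources = ghost.fire_on('form[name=login_form]', 'submit', expect_loading=True)

print ghost.content[:200]   # quick sanity check: should no longer be the login form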

Answered 2012-08-24T06:22:22.907

PhantomJS is a nice solution when JS needs to be enabled and no display is available; mind that it is JS, not Python :$

Answered 2012-08-23T21:36:26.540

Your Scrapy script works for me:

from scrapy.spider import BaseSpider
from scrapy.http import FormRequest
from scrapy.selector import HtmlXPathSelector

class YahooSpider(BaseSpider):
    name = "yahoo"
    start_urls = [
        "https://login.yahoo.com/config/login?.intl=us&.lang=en-US&.partner=&.last=&.src=&.pd=_ver%3D0%26c%3D%26ivt%3D%26sg%3D&pkg=&stepid=&.done=http%3a//my.yahoo.com"
    ]

    def parse(self, response):
        x = HtmlXPathSelector(response)
        print x.select("//input/@value").extract()
        return [FormRequest.from_response(response,
                    formdata={'login': '<username>', 'passwd': '<password>'},
                    callback=self.after_login)]

    def after_login(self, response):
        self.log('Login successful: %s' % response.url)

Output:

stav@maia:myproj$ scrapy crawl yahoo
2012-08-22 20:55:31-0500 [scrapy] INFO: Scrapy 0.15.1 started (bot: drzyahoo)
2012-08-22 20:55:31-0500 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2012-08-22 20:55:31-0500 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2012-08-22 20:55:31-0500 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2012-08-22 20:55:31-0500 [scrapy] DEBUG: Enabled item pipelines:
2012-08-22 20:55:31-0500 [yahoo] INFO: Spider opened
2012-08-22 20:55:31-0500 [yahoo] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2012-08-22 20:55:31-0500 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2012-08-22 20:55:31-0500 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2012-08-22 20:55:32-0500 [yahoo] DEBUG: Crawled (200) <GET https://login.yahoo.com/config/login?.intl=us&.lang=en-US&.partner=&.last=&.src=&.pd=_ver%3D0%26c%3D%26ivt%3D%26sg%3D&pkg=&stepid=&.done=http%3a//my.yahoo.com> (referer: None)
[u'1', u'', u'', u'', u'', u'', u'', u'us', u'en-US', u'', u'', u'93s42g583b3cg', u'0', u'L0iOlEQ1EbZ24TfLRpA43s5offgQ', u'', u'', u'', u'', u'', u'0', u'Y', u'http://my.yahoo.com', u'_ver=0&c=&ivt=&sg=', u'0', u'0', u'0', u'5', u'5', u'', u'y']
2012-08-22 20:55:32-0500 [yahoo] DEBUG: Redirecting (meta refresh) to <GET http://my.yahoo.com> from <POST https://login.yahoo.com/config/login>
2012-08-22 20:55:33-0500 [yahoo] DEBUG: Crawled (200) <GET http://my.yahoo.com> (referer: https://login.yahoo.com/config/login?.intl=us&.lang=en-US&.partner=&.last=&.src=&.pd=_ver%3D0%26c%3D%26ivt%3D%26sg%3D&pkg=&stepid=&.done=http%3a//my.yahoo.com)
2012-08-22 20:55:33-0500 [yahoo] DEBUG: Login successful: http://my.yahoo.com
2012-08-22 20:55:33-0500 [yahoo] INFO: Closing spider (finished)
2012-08-22 20:55:33-0500 [yahoo] INFO: Dumping spider stats:
    {'downloader/request_bytes': 2447,
     'downloader/request_count': 3,
     'downloader/request_method_count/GET': 2,
     'downloader/request_method_count/POST': 1,
     'downloader/response_bytes': 77766,
     'downloader/response_count': 3,
     'downloader/response_status_count/200': 3,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2012, 8, 23, 1, 55, 33, 837619),
     'request_depth_max': 1,
     'scheduler/memory_enqueued': 3,
     'start_time': datetime.datetime(2012, 8, 23, 1, 55, 31, 271262)}

Environment:

stav@maia:myproj$ scrapy version -v
Scrapy  : 0.15.1
lxml    : 2.3.2.0
libxml2 : 2.7.8
Twisted : 11.1.0
Python  : 2.7.3 (default, Aug  1 2012, 05:14:39) - [GCC 4.6.3]
Platform: Linux-3.2.0-29-generic-x86_64-with-Ubuntu-12.04-precise
Answered 2012-08-23T14:08:46.603

You can try PhantomJS, a headless WebKit with a JavaScript API (http://phantomjs.org/). It supports programmatic browsing with JavaScript enabled.
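Driving it from Python is just a subprocess call. A sketch, assuming a hypothetical login.js PhantomJS script (not shown here) that fills and submits the Yahoo form, with PhantomJS keeping its cookie store in a file between runs:

import subprocess

# login.js is a hypothetical PhantomJS script that performs the login;
# --cookies-file makes PhantomJS persist its cookies across invocations.
subprocess.check_call(['phantomjs', '--cookies-file=cookies.txt', 'login.js'])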

Answered 2012-08-29T16:21:09.793

Why not use FancyURLopener? It handles standard HTTP errors and has a prompt_user_passwd() function. From the linked docs:

When performing basic authentication, a FancyURLopener instance calls its prompt_user_passwd() method. The default implementation asks the user for the required information on the controlling terminal. A subclass may override this method to support more appropriate behavior if needed.
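A minimal sketch of the override the quoted docs describe (the URL is a placeholder); note that prompt_user_passwd() is only invoked for HTTP basic-authentication challenges, which is a different mechanism from Yahoo's form-plus-JavaScript login:

import urllib

class AutoLoginOpener(urllib.FancyURLopener):
    # Called when a server responds with an HTTP basic-auth challenge;
    # return the credentials instead of prompting on the terminal.
    def prompt_user_passwd(self, host, realm):
        return ('[mylogin]', '[mypassword]')

opener = AutoLoginOpener()
print opener.open('http://example.com/protected/').read()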

Answered 2012-08-29T17:06:22.597