
I am trying to extract links from a specific web page: http://www.directmirror.com/files/0GR7ZPCY

but it doesn't work like the example in the BS4 documentation. Can anyone point out the reason? My code is as follows:

import urllib2
from bs4 import BeautifulSoup
response = urllib2.urlopen('http://www.directmirror.com/files/0GR7ZPCY')
html = response.read()
sp = BeautifulSoup(html)
ll = sp.find_all('a')

The `ll` variable I get back is empty.


1 Answer


EDIT: The problem turned out to be with Ubuntu's installation of BS4; uninstalling and reinstalling with pip solved it.


This actually works for me in both cases (find_all for BS4 and the older findAll). Have you verified that you have content in your sp variable?
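If sp does look empty, one way to rule out BS4 entirely is to cross-check with Python's standard-library HTML parser; if that finds `<a>` tags but BeautifulSoup does not, the BS4 installation itself is suspect (which is what the edit above concluded). A minimal sketch, shown here on a literal snippet rather than the fetched page (and using the Python 3 module name; on Python 2 the import is `from HTMLParser import HTMLParser`):

```python
# Stdlib-only link extraction, independent of BeautifulSoup.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href attribute of every <a> tag seen."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            href = dict(attrs).get('href')
            if href is not None:
                self.links.append(href)

parser = LinkCollector()
parser.feed('<a href="/register.php">Register</a> <a href="/login.php">Login</a>')
print(parser.links)  # -> ['/register.php', '/login.php']
```

To check the real page, feed it the `html` string you already read from `urlopen` instead of the literal snippet.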

In [1]: import urllib2

In [2]: from bs4 import BeautifulSoup

In [3]: response = urllib2.urlopen('http://www.directmirror.com/files/0GR7ZPCY')

In [4]: html = response.read()

In [5]: sp = BeautifulSoup(html)

In [6]: ll = sp.find_all('a')

In [7]: ll
Out[7]:
[<a class="twitter-share-button" data-count="vertical" data-via="DirectMirror" href="http://twitter.com/share">Tweet</a>,
 <a href="/"><img alt="logo" border="0" src="/images/logo2.png"/></a>,
 <a href="/register.php" style="color:#ffffff">Register</a>,
 <a href="/login.php" style="color:#ffffff">Login</a>,
 # Continues...
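One caveat if you try this today: the snippet above is Python 2 only, since urllib2 was merged into urllib.request in Python 3, and newer BS4 versions warn unless you name a parser explicitly. A Python 3 sketch of the same idea, parsing a small literal snippet here instead of fetching the page:

```python
from bs4 import BeautifulSoup

# On Python 3 the fetch would be:
#   import urllib.request
#   html = urllib.request.urlopen(url).read()
html = '<a href="/register.php">Register</a><a href="/login.php">Login</a>'

# Naming the parser ('html.parser' is the stdlib one) avoids the
# "no parser was explicitly specified" warning in newer BS4 releases.
sp = BeautifulSoup(html, 'html.parser')
ll = sp.find_all('a')
print([a['href'] for a in ll])  # -> ['/register.php', '/login.php']
```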
answered 2012-10-23T03:06:21.670