
I have this code:

import urllib
import urlparse
from bs4 import BeautifulSoup

url = "http://www.downloadcrew.com/?act=search&cat=51"
pageHtml = urllib.urlopen(url)
soup = BeautifulSoup(pageHtml)

for a in soup.select("div.productListingTitle a[href]"):
    try:
        print (a["href"]).encode("utf-8","replace")
    except:
        print "no link"

        pass

But when I run it, I only get 20 links. The output should contain more than 20 links.

1 Answer

That's because you only downloaded the content of the first page.

Just download all the pages in a loop:

import urllib
import urlparse
from bs4 import BeautifulSoup

for i in xrange(3):
    url = "http://www.downloadcrew.com/?act=search&page=%d&cat=51" % i
    pageHtml = urllib.urlopen(url)
    soup = BeautifulSoup(pageHtml)

    for a in soup.select("div.productListingTitle a[href]"):
        try:
            print (a["href"]).encode("utf-8","replace")
        except:
            print "no link"

If you don't know how many pages there are, you can do:

import urllib
import urlparse
from bs4 import BeautifulSoup

i = 0
while 1:
    url = "http://www.downloadcrew.com/?act=search&page=%d&cat=51" % i
    pageHtml = urllib.urlopen(url)
    soup = BeautifulSoup(pageHtml)

    has_more = 0
    for a in soup.select("div.productListingTitle a[href]"):
        has_more = 1
        try:
            print (a["href"]).encode("utf-8","replace")
        except:
            print "no link"
    if has_more:
        i += 1
    else:
        break

I ran it on my machine and it got 60 links from the three pages.
Good luck~
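
The same stop-when-a-page-is-empty idea also reads well as a generator; this is just a sketch in Python 3 style, again assuming the URL pattern and markup used above:

# Python 3 sketch: stop paging as soon as a page yields no product links,
# expressed as a generator (URL pattern and markup assumed unchanged).
from urllib.request import urlopen
from bs4 import BeautifulSoup

def iter_links():
    i = 0
    while True:
        url = "http://www.downloadcrew.com/?act=search&page=%d&cat=51" % i
        soup = BeautifulSoup(urlopen(url), "html.parser")
        anchors = soup.select("div.productListingTitle a[href]")
        if not anchors:  # an empty page means there are no more results
            return
        for a in anchors:
            yield a["href"]
        i += 1

for href in iter_links():
    print(href)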

Answered on 2013-09-11T05:23:22.287