
So I want to get all of the images (the NBA team logos) on this page: http://www.cbssports.com/nba/draft/mock-draft

However, my code gives me a lot more than that. It gives me,

<a href="/nba/teams/page/ORL"><img src="http://sports.cbsimg.net/images/nba/logos/30x30/ORL.png" alt="Orlando Magic" width="30" height="30" border="0" /></a>

How can I shorten it so that it only gives me http://sports.cbsimg.net/images/nba/logos/30x30/ORL.png?

My code:

import urllib2
from BeautifulSoup import BeautifulSoup
# or if you're using BeautifulSoup4:
# from bs4 import BeautifulSoup

soup = BeautifulSoup(urllib2.urlopen('http://www.cbssports.com/nba/draft/mock-draft').read())

rows = soup.findAll("table", attrs = {'class': 'data borderTop'})[0].tbody.findAll("tr")[2:]

for row in rows:
  fields = row.findAll("td")
  if len(fields) >= 3:
    anchor = fields[1].find("a")
    if anchor:
      print anchor
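For reference, BeautifulSoup tags support dictionary-style attribute access, so something like `anchor.find("img")["src"]` would print just the URL. As a dependency-free sketch of the same extraction using only the standard library (Python 3 syntax here, unlike the urllib2 code above), run on the exact snippet from the question:

```python
from html.parser import HTMLParser

class ImgSrcExtractor(HTMLParser):
    """Collect the src attribute of every <img> tag seen while parsing."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "src" in attrs:
                self.srcs.append(attrs["src"])

snippet = ('<a href="/nba/teams/page/ORL">'
           '<img src="http://sports.cbsimg.net/images/nba/logos/30x30/ORL.png" '
           'alt="Orlando Magic" width="30" height="30" border="0" /></a>')
parser = ImgSrcExtractor()
parser.feed(snippet)
print(parser.srcs[0])
```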

3 Answers


To save all of the images at http://www.cbssports.com/nba/draft/mock-draft:

import urllib2
import os
from BeautifulSoup import BeautifulSoup
URL = "http://www.cbssports.com/nba/draft/mock-draft"
default_dir = os.path.join(os.path.expanduser("~"),"Pictures")
opener = urllib2.build_opener()
urllib2.install_opener(opener)
soup = BeautifulSoup(urllib2.urlopen(URL).read())
imgs = soup.findAll("img",{"alt":True, "src":True})
for img in imgs:
    img_url = img["src"]
    filename = os.path.join(default_dir, img_url.split("/")[-1])
    img_data = opener.open(img_url)
    f = open(filename,"wb")
    f.write(img_data.read())
    f.close()

To save any specific image at http://www.cbssports.com/nba/draft/mock-draft, use

soup.find("img",{"src":"image_name_from_source"})
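As a side note, here is a minimal sketch of how the loop above turns an image URL into a local file name (the `ORL.png` URL is just the example from the question, and `~/Pictures` mirrors `default_dir` above; Python 3 syntax):

```python
import os

img_url = "http://sports.cbsimg.net/images/nba/logos/30x30/ORL.png"
# The last path segment of the URL becomes the local file name.
name = img_url.split("/")[-1]
target = os.path.join(os.path.expanduser("~"), "Pictures", name)
print(name)
```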
Answered 2012-07-05T18:51:25.557

I know this may sound "traumatic", but for auto-generated pages like this, where you just want to grab the damn images and never come back, a quick-n-dirty regular expression for the desired pattern tends to be my choice (having no Beautiful Soup dependency is a big advantage):

import urllib, re

source = urllib.urlopen('http://www.cbssports.com/nba/draft/mock-draft').read()

## every image name is an abbreviation composed of capital letters, so...
## (note the escaped dots: an unescaped "." would match any character)
for link in re.findall(r'http://sports\.cbsimg\.net/images/nba/logos/30x30/[A-Z]+\.png', source):
    print link

    ## the code above just prints the link;
    ## if you want to actually download, set the flag below to True
    actually_download = False
    if actually_download:
        filename = link.split('/')[-1]
        urllib.urlretrieve(link, filename)
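A quick way to sanity-check this regex approach without hitting the network is to run it on an in-memory fragment (Python 3 syntax; the dots are escaped so they match the literal "." in the host name and extension):

```python
import re

# Two sample anchors in the same shape as the live page's markup.
sample = ('<a href="/nba/teams/page/ORL">'
          '<img src="http://sports.cbsimg.net/images/nba/logos/30x30/ORL.png"></a>'
          '<a href="/nba/teams/page/CHA">'
          '<img src="http://sports.cbsimg.net/images/nba/logos/30x30/CHA.png"></a>')
links = re.findall(r'http://sports\.cbsimg\.net/images/nba/logos/30x30/[A-Z]+\.png',
                   sample)
print(links)
```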

Hope this helps!

Answered 2012-07-05T19:16:40.517

You can use this function to get a list of all the image URLs found at a URL.

import re

import requests

#
# get_url_images_in_text()
#
# @param html - the HTML to extract image URLs from.
# @param protocol - the site's protocol, prepended to URLs that do not start with one.
#
# @return list of image URLs.
#
def get_url_images_in_text(html, protocol):
    urls = []
    all_urls = re.findall(r'((http\:|https\:)?\/\/[^"\' ]*?\.(png|jpg))', html, flags=re.IGNORECASE | re.MULTILINE | re.UNICODE)
    for url in all_urls:
        if not url[0].startswith("http"):
            urls.append(protocol + url[0])
        else:
            urls.append(url[0])

    return urls

#
# get_images_from_url()
#
# @param url - the URL to extract image URLs from.
#
# @return list of image URLs.
#
def get_images_from_url(url):
    # url.split('/')[0] is "http:" or "https:", which get_url_images_in_text
    # prepends to protocol-relative "//..." links.
    protocol = url.split('/')[0]
    resp = requests.get(url)
    return get_url_images_in_text(resp.text, protocol)
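For example, you can exercise the first helper on an in-memory fragment, with no network needed (the function is repeated here so the sketch is self-contained, and the fragment is made up for illustration):

```python
import re

def get_url_images_in_text(html, protocol):
    urls = []
    all_urls = re.findall(r'((http:|https:)?//[^"\' ]*?\.(png|jpg))', html,
                          flags=re.IGNORECASE)
    for url in all_urls:
        # findall returns one tuple per match; url[0] is the whole match.
        if not url[0].startswith("http"):
            urls.append(protocol + url[0])  # protocol-relative "//..." link
        else:
            urls.append(url[0])
    return urls

html = ('<img src="//sports.cbsimg.net/images/nba/logos/30x30/ORL.png">'
        '<img src="http://example.com/a.jpg">')
urls = get_url_images_in_text(html, "http:")
print(urls)
```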
Answered 2018-08-25T10:12:24.867