
Hello, I would like to reuse the output links and open each one as a new request against the linked website. I refresh the links from the RSS feed, and I need to process all of the output links. What code would make this possible?

import urllib2
import re

# Fetch the RSS feed and extract every <guid> entry from the raw XML
htmlfile = urllib2.urlopen('http://www.spiegel.de/schlagzeilen/tops/index.rss')
htmltext = htmlfile.read()
regex = '<guid>(.+?)</guid>'
pattern = re.compile(regex)
links = re.findall(pattern, htmltext)

# Join all links into one space-separated string
downloadlinks = ''
for i, link in enumerate(links):
    if i == 0:
        downloadlinks += link
    else:
        downloadlinks += ' ' + link

print(downloadlinks)

The Output is:

http://www.spiegel.de/panorama/leute/jennifer-lopez-singt-beim-geburtstag-von-turkmenistans-praesident-a-908601.html
http://www.spiegel.de/sport/sonst/tony-martin-setzt-tour-de-france-trotz-sturz-fort-a-908600.html
http://www.spiegel.de/politik/ausland/ecuador-schiebt-verantwortung-fuer-snowden-auf-russland-a-908595.html
http://www.spiegel.de/panorama/wetter-temperaturrekorde-im-westen-der-usa-a-908593.html
http://www.spiegel.de/politik/deutschland/polizei-raeumt-camp-hungerstreikender-fluechtinge-in-muenchen-a-908592.html
...
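Note that the space-joined string above cannot be passed to urlopen as if it were one URL; it first has to be split back into individual links. A minimal sketch of that step, using made-up placeholder URLs rather than the real feed output:

```python
# The space-joined string holds many URLs, so split it back into
# single links before opening each one on its own.
# (These example URLs are placeholders, not from the real feed.)
downloadlinks = ('http://www.example.com/a.html '
                 'http://www.example.com/b.html')

for url in downloadlinks.split(' '):
    print(url)  # each single url could now be passed to urllib2.urlopen(url)
```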

Another example:

import urllib2
import re

# Same approach for the kino.de feed, this time matching <link> tags
htmlfile = urllib2.urlopen('http://www.kino.de/rss/neu-im-kino/')
htmltext = htmlfile.read()
regex = '<link>(.+?)</link>'
pattern = re.compile(regex)
links = re.findall(pattern, htmltext)

# Join all links into one space-separated string
downloadlinks = ''
for i, link in enumerate(links):
    if i == 0:
        downloadlinks += link
    else:
        downloadlinks += ' ' + link

print(downloadlinks)

# --------------------------------------------------------------------------

htmlfile_2 = urllib2.urlopen(downloadlinks)
htmltext_2 = htmlfile_2.read()
regex_2 = '<meta itemprop="contentURL" content="(.+?)" />'
pattern_2 = re.compile(regex_2)
links_2 = re.findall(pattern_2,htmltext_2)
downloadlinks_2 = ''
for i, link in enumerate(links_2):
    if i == 0:
        downloadlinks_2 += link
    else:
        downloadlinks_2 += ' ' + link

print (downloadlinks_2)

The Output is:

http://www.kino.de/kinofilm/the-deep/130585
http://www.kino.de/kinofilm/englisch-fuer-anfaenger/145880
http://www.kino.de/kinofilm/the-grandmaster/147546 
http://www.kino.de/kinofilm/jets-helden-der-luefte/148993
http://www.kino.de/kinofilm/laurence-anyways/144027
http://www.kino.de/kinofilm/modest-reception-die-macht-des-geldes/142819
http://www.kino.de/kinofilm/papadopoulos-und-soehne/145922
http://www.kino.de/kinofilm/seitengaenge/132599
http://www.kino.de/kinofilm/a-silent-rockumentary/149048
http://www.kino.de/kinofilm/world-war-z/120130

I would like to have this:

htmlfile_2 = urllib2.urlopen('http://www.kino.de/kinofilm/the-deep/130585')

Then the output is:

http://flashvideo.kino.de/video/clipfile/627/000551627.mp4

1 Answer


Just loop over each original link and print out all of its sub-links.

import urllib2
import re

htmlfile = urllib2.urlopen('http://www.kino.de/rss/neu-im-kino/')
htmltext = htmlfile.read()
regex = '<link>(.+?)</link>'
pattern = re.compile(regex)
links = re.findall(pattern,htmltext)

print( ' '.join(links) ) # or print( '\n'.join(links) )


for link in links:
    htmlfile_2 = urllib2.urlopen(link)
    htmltext_2 = htmlfile_2.read()
    regex_2 = '<meta itemprop="contentURL" content="(.+?)" />'
    pattern_2 = re.compile(regex_2)
    links_2 = re.findall(pattern_2,htmltext_2)

    print( ' '.join(links_2) ) # or print( '\n'.join(links_2) )
answered 2013-06-30T15:44:35.187
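For reference, the same two-level extraction can be sketched in Python 3 (urllib2 is Python 2 only; on Python 3 the fetch would use urllib.request.urlopen). The snippet below runs against hardcoded sample markup so it works offline; the feed and page contents are made-up stand-ins for the real network responses:

```python
import re

# Stand-in for what urllib.request.urlopen(feed_url).read().decode()
# would return on Python 3 (sample markup, not the real kino.de feed).
sample_feed = """<rss><channel>
<item><link>http://www.example.com/film/1</link></item>
<item><link>http://www.example.com/film/2</link></item>
</channel></rss>"""

# Stand-in for one fetched film page containing the video URL.
sample_page = '<meta itemprop="contentURL" content="http://www.example.com/clip.mp4" />'

# First level: pull every <link> out of the feed
links = re.findall(r'<link>(.+?)</link>', sample_feed)
print('\n'.join(links))

# Second level: open each link and pull out the contentURL
for link in links:
    # Real script: htmltext_2 = urllib.request.urlopen(link).read().decode('utf-8')
    htmltext_2 = sample_page  # offline stand-in for the fetched page
    links_2 = re.findall(r'<meta itemprop="contentURL" content="(.+?)" />', htmltext_2)
    print('\n'.join(links_2))
```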