
I'm trying to first grab all the links from a page, then get the URL of the "next" button and keep looping until there are no more pages. I've been trying to get a nested loop to do this, but for some reason BeautifulSoup never parses the second page.. only the first page, then it stops..

It's hard to explain, but the code here should make it easier to see what I mean :)

# this site holds the first page that it should start looping on..
# from this page i want to reach page 2, 3, etc.
webpage = urlopen('www.first-page-with-urls-and-next-button.com').read()

soup = BeautifulSoup(webpage)

for tag in soup.findAll('a', { "class" : "next" }):

    print tag['href']
    print "\n--------------------\n"

    # next button is relative url so append it to main-url.com
    soup = BeautifulSoup('http://www.main-url.com/'+ re.sub(r'\s', '', tag['href']))

    # for some reason this variable only holds the tag['href']
    print soup

    for taggen in soup.findAll('a', { "class" : "homepage target-blank" }):
        print tag['href']

        # Read page found
        sidan = urlopen(taggen['href']).read()

        # get title
        Titeln = re.findall(patFinderTitle, sidan)

        print Titeln

Any ideas? Sorry about the poor English, I hope I won't get flamed for it :) Please ask if I haven't explained it well and I'll try to explain more. Oh, and I'm new to Python, as of today (as you probably figured out :)


2 Answers


If you call urlopen on the new URL and pass the resulting file object to BeautifulSoup, I think you'll be all set. That is:

webpage = urlopen('http://www.main-url.com/' + re.sub(r'\s', '', tag['href']))
soup = BeautifulSoup(webpage)
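
A minimal sketch of how that fix could slot into the original loop, assuming Python 2 with urllib2 and the old BeautifulSoup 3 import (the URLs and class names are just the placeholders from the question):

import re
from urllib2 import urlopen
from BeautifulSoup import BeautifulSoup

# parse the first page
webpage = urlopen('http://www.first-page-with-urls-and-next-button.com').read()
soup = BeautifulSoup(webpage)

for tag in soup.findAll('a', { "class" : "next" }):
    # fetch the next page over HTTP, then hand the response to BeautifulSoup
    nextpage = urlopen('http://www.main-url.com/' + re.sub(r'\s', '', tag['href']))
    nextsoup = BeautifulSoup(nextpage)

    for taggen in nextsoup.findAll('a', { "class" : "homepage target-blank" }):
        print taggen['href']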
answered 2012-04-26T20:05:32.730

For the line:

soup = BeautifulSoup('http://www.main-url.com/'+ re.sub(r'\s', '', tag['href']))

try:

webpage = urlopen('http://www.main-url.com/'+re.sub(r'\s','',tag['href'])).read()

soup = BeautifulSoup(webpage)
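
With that change, soup is built from the downloaded HTML rather than from the URL string, which is why the original print soup only ever showed the href: BeautifulSoup was parsing the URL text itself, not the page it points to.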

answered 2012-04-26T20:05:42.793