
I am trying to scrape Google News with the following code:

from bs4 import BeautifulSoup
import requests
import time
from random import randint


def scrape_news_summaries(s):
    time.sleep(randint(0, 2))  # relax and don't let google be angry
    # Let requests URL-encode the query, so terms with spaces or symbols work
    r = requests.get("http://www.google.co.uk/search", params={"q": s, "tbm": "nws"})
    content = r.text
    news_summaries = []
    soup = BeautifulSoup(content, "html.parser")
    st_divs = soup.find_all("div", {"class": "st"})  # find_all is the modern name for findAll
    for st_div in st_divs:
        news_summaries.append(st_div.text)
    return news_summaries


l = scrape_news_summaries("T-Notes")
#l = scrape_news_summaries("""T-Notes""")
for n in l:
    print(n)

Although this code used to work, I can't figure out why it no longer does. Is it possible that I've been banned by Google even though I only ran the code 3 or 4 times? (I also tried Bing News, with equally unfortunate results...)

Thanks.


1 Answer


I tried running the code and it works fine on my machine.

You can try printing the request's status code to see whether it is something other than 200.

from bs4 import BeautifulSoup
import requests
import time
from random import randint


def scrape_news_summaries(s):
    time.sleep(randint(0, 2))  # relax and don't let google be angry
    # Let requests URL-encode the query, so terms with spaces or symbols work
    r = requests.get("http://www.google.co.uk/search", params={"q": s, "tbm": "nws"})
    print(r.status_code)  # Print the status code
    content = r.text
    news_summaries = []
    soup = BeautifulSoup(content, "html.parser")
    st_divs = soup.find_all("div", {"class": "st"})  # find_all is the modern name for findAll
    for st_div in st_divs:
        news_summaries.append(st_div.text)
    return news_summaries


l = scrape_news_summaries("T-Notes")
#l = scrape_news_summaries("""T-Notes""")
for n in l:
    print(n)

See https://www.scrapehero.com/how-to-prevent-getting-blacklisted-while-scraping/ for a list of status codes that indicate you have been banned.
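As a follow-up to the status-code check, here is a small sketch of one way to react when the response suggests throttling: retry with exponential backoff instead of failing outright. The helper name and retry parameters are my own for illustration, not part of the original answer.

```python
import time
import requests


def fetch_with_backoff(url, max_retries=3):
    """Fetch url, retrying with exponential backoff on non-200 responses.

    Status codes like 429 (Too Many Requests) or 503 often signal rate
    limiting or a temporary ban, so waiting before retrying can help.
    """
    for attempt in range(max_retries):
        r = requests.get(url)
        if r.status_code == 200:
            return r.text
        time.sleep(2 ** attempt)  # back off: 1s, then 2s, then 4s
    return None  # still blocked after all retries
```

You could call `fetch_with_backoff` in place of the bare `requests.get` in `scrape_news_summaries` and bail out cleanly when it returns `None`.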

answered 2016-09-06T17:38:12.837