
There is a Python library, Newspaper3k, that makes it easier to fetch web page content. [newspaper][1]

For title retrieval:

from newspaper import Article

a = Article(url)  # url is assumed to be defined beforehand
a.download()      # the page must be downloaded and parsed before the title is available
a.parse()
print(a.title)

For content retrieval:

url = 'http://fox13now.com/2013/12/30/new-year-new-laws-obamacare-pot-guns-and-drones/'
article = Article(url)
article.download()
article.parse()
print(article.text)

I want to retrieve information about web pages (sometimes the title, sometimes the actual content). Here is my code for getting the content/text of a web page:

from newspaper import Article
import nltk
nltk.download('punkt')
fil = open("laborURLsml2.csv", "r")
# read every line in fil
Lines = fil.readlines()
for line in Lines:
    print(line)
    article = Article(line)
    article.download()
    article.html
    article.parse()
    print("[[[[[")
    print(article.text)
    print("]]]]]")

The contents of the "laborURLsml2.csv" file are: [laborURLsml2.csv][2]

My question is: my code reads the first URL and prints its content, but it fails to read the second URL.


1 Answer


I noticed that some of the URLs in your CSV file have trailing whitespace, which was causing the problem. I also noted that one of your links is unavailable, while the others are the same story distributed to affiliates for publication.
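
If it helps to see what is going on, a quick diagnostic (a minimal sketch, not part of the original answer) is to print each raw line with repr(), which makes trailing spaces and newline characters visible:

with open('laborURLsml2.csv', 'r') as file:
    for line in file:
        # repr() exposes hidden characters such as trailing spaces and '\n'
        print(repr(line))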

The code below handles the first two issues, but it does not deal with the data redundancy problem; a rough deduplication sketch follows after the code.

from newspaper import Config
from newspaper import Article
from newspaper import ArticleException

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'

config = Config()
config.browser_user_agent = USER_AGENT
config.request_timeout = 10

with open('laborURLsml2.csv', 'r') as file:
    csv_file = file.readlines()
    for url in csv_file:
        try:
            article = Article(url.strip(), config=config)
            article.download()
            article.parse()
            print(article.title)
            # the replace is used to remove newlines
            article_text = article.text.replace('\n', ' ')
            print(article_text)
        except ArticleException:
            print('***FAILED TO DOWNLOAD***', article.url)
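
The data redundancy issue mentioned above (the same story syndicated to several affiliate sites) is left unhandled by that code. A minimal sketch of one possible approach, assuming that syndicated copies share an identical headline, is to keep a set of titles already seen and skip repeats:

from newspaper import Config
from newspaper import Article
from newspaper import ArticleException

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'

config = Config()
config.browser_user_agent = USER_AGENT
config.request_timeout = 10

seen_titles = set()
with open('laborURLsml2.csv', 'r') as file:
    for url in file:
        try:
            article = Article(url.strip(), config=config)
            article.download()
            article.parse()
            # skip syndicated copies that share an identical title
            if article.title in seen_titles:
                continue
            seen_titles.add(article.title)
            print(article.title)
            print(article.text.replace('\n', ' '))
        except ArticleException:
            print('***FAILED TO DOWNLOAD***', url.strip())

In practice syndicated copies may differ slightly in title or boilerplate, so fuzzier matching (for example, comparing normalized titles or hashing the article text) may be needed; that goes beyond what the original answer covers.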

You may find the Newspaper3k overview document that I created and shared on my GitHub page useful.

Answered 2021-01-25T19:50:17.720