I would recommend looking at the newspaper overview document that I published on GitHub. That document has multiple extraction examples and other techniques that might be useful.
Regarding your question...
Newspaper3k will parse some websites almost perfectly. But there are plenty of websites where you have to examine a page's navigational structure to determine how to parse the article elements correctly.
For example, https://www.marketwatch.com stores individual article elements, such as the title, publish date and other items, in the meta tag section of the page.
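One way to see which elements a site exposes in its meta tags is to download a single article and print its meta_data dictionary. Below is a minimal sketch of that check; the article URL is only a placeholder, so substitute any live MarketWatch story URL:

import pprint
from newspaper import Article, Config

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'
config = Config()
config.browser_user_agent = USER_AGENT
config.request_timeout = 10

# placeholder URL -- replace with any article from the site you are inspecting
article = Article('https://www.marketwatch.com/story/some-article', config=config)
article.download()
article.parse()

# print every meta tag key/value that newspaper extracted, so you can see
# which keys (e.g. 'parsely-pub-date', 'parsely-author') are available
pprint.pprint(dict(article.meta_data))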
The newspaper example below will parse these elements correctly. I did note that you might need to do some data cleaning of the keyword or tag output; see the cleanup sketch after the example.
import newspaper
from newspaper import Config
from newspaper import Article
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'
config = Config()
config.browser_user_agent = USER_AGENT
config.request_timeout = 10
base_url = 'https://www.marketwatch.com'
article_urls = set()
marketwatch = newspaper.build(base_url, config=config, memoize_articles=False, language='en')
for sub_article in marketwatch.articles:
    article = Article(sub_article.url, config=config, memoize_articles=False, language='en')
    article.download()
    article.parse()
    if article.url not in article_urls:
        article_urls.add(article.url)

        # The majority of the article elements are located
        # within the meta data section of the page's
        # navigational structure
        article_meta_data = article.meta_data

        published_date = {value for (key, value) in article_meta_data.items() if key == 'parsely-pub-date'}
        article_published_date = " ".join(str(x) for x in published_date)

        authors = sorted({value for (key, value) in article_meta_data.items() if key == 'parsely-author'})
        article_author = ', '.join(authors)

        title = {value for (key, value) in article_meta_data.items() if key == 'parsely-title'}
        article_title = " ".join(str(x) for x in title)

        keywords = ''.join({value for (key, value) in article_meta_data.items() if key == 'keywords'})
        keywords_list = sorted(keywords.lower().split(','))
        article_keywords = ', '.join(keywords_list)

        tags = ''.join({value for (key, value) in article_meta_data.items() if key == 'parsely-tags'})
        tag_list = sorted(tags.lower().split(','))
        article_tags = ', '.join(tag_list)

        summary = {value for (key, value) in article_meta_data.items() if key == 'description'}
        article_summary = " ".join(str(x) for x in summary)

        # the replace is used to remove newlines
        article_text = article.text.replace('\n', '')
        print(article_text)
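As noted above, the keyword and tag output from this site may need additional cleaning. Here is a minimal sketch of one possible cleanup pass that could replace the keyword and tag lines inside the loop above; it assumes the problems are stray whitespace, duplicates and empty entries, so adjust it to whatever the site actually returns:

# strip whitespace, drop empty entries and de-duplicate before joining
keywords_list = sorted({k.strip() for k in keywords.lower().split(',') if k.strip()})
article_keywords = ', '.join(keywords_list)

tag_list = sorted({t.strip() for t in tags.lower().split(',') if t.strip()})
article_tags = ', '.join(tag_list)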
https://www.euronews.com is similar to https://www.marketwatch.com, except that some of the article elements are located in the main body of the page, while other items are within the meta tag section.
import newspaper
from newspaper import Config
from newspaper import Article
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'
config = Config()
config.browser_user_agent = USER_AGENT
config.request_timeout = 10
base_url = 'https://www.euronews.com'
article_urls = set()
euronews = newspaper.build(base_url, config=config, memoize_articles=False, language='en')
for sub_article in euronews.articles:
    if sub_article.url not in article_urls:
        article_urls.add(sub_article.url)
        article = Article(sub_article.url, config=config, memoize_articles=False, language='en')
        article.download()
        article.parse()

        # The majority of the article elements are located
        # within the meta data section of the page's
        # navigational structure
        article_meta_data = article.meta_data

        published_date = {value for (key, value) in article_meta_data.items() if key == 'date.created'}
        article_published_date = " ".join(str(x) for x in published_date)

        article_title = article.title

        summary = {value for (key, value) in article_meta_data.items() if key == 'description'}
        article_summary = " ".join(str(x) for x in summary)

        keywords = ''.join({value for (key, value) in article_meta_data.items() if key == 'keywords'})
        keywords_list = sorted(keywords.lower().split(','))
        article_keywords = ', '.join(keywords_list).strip()

        # the replace is used to remove newlines
        article_text = article.text.replace('\n', '')
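Neither example above stores the parsed fields anywhere; they only build local variables. If you want to keep the results, one option is to collect them into a list of dictionaries inside the loop. This is just a sketch; the articles list and the field names are my own choice, not anything newspaper provides:

articles = []   # create this once, before the for loop

# ...inside the loop, after the fields above have been built:
record = {
    'url': article.url,
    'published_date': article_published_date,
    'title': article_title,
    'summary': article_summary,
    'keywords': article_keywords,
    'text': article_text,
}
articles.append(record)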