
I want to scrape about 500 articles from the Al Jazeera website and collect 4 fields for each one, namely

  • URL
  • Title
  • Tags
  • Author

I have written a script that collects data from the home page, but it only picks up a handful of articles; the rest live under different categories. How can I iterate over 500 articles? Is there an efficient way to do this?

import pandas as pd
import requests
from bs4 import BeautifulSoup

# Fetch the Al Jazeera home page and parse it
page = requests.get('https://www.aljazeera.com/')
soup = BeautifulSoup(page.content, "html.parser")

# Only the "More top stories" block of the home page is covered here
article = soup.find(id='more-top-stories')
inside_articles = article.find_all(class_='mts-article mts-default-article')

article_title = [a.find(class_='mts-article-title').get_text() for a in inside_articles]
article_dec = [a.find(class_='mts-article-p').get_text() for a in inside_articles]
tag = [a.find(class_='mts-category').get_text() for a in inside_articles]
link = [a.find(class_='mts-article-title').find('a')['href'] for a in inside_articles]
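For context, a rough sketch of how I would combine these lists into one table (the column names are just placeholders):

# Sketch: assemble the scraped fields into a single DataFrame
df = pd.DataFrame({
    'url': link,
    'title': article_title,
    'tag': tag,
    'description': article_dec,
})
print(df.head())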

1 Answer


You can use Scrapy for this purpose. The spider below pulls the headline, author, URL and tags out of the JSON-LD metadata embedded in an article page:

import scrapy
import json

class BlogsSpider(scrapy.Spider):
    name = 'blogs'
    start_urls = [
        'https://www.aljazeera.com/news/2020/05/fbi-texas-naval-base-shooting-terrorism-related-200521211619145.html',
    ]

    def parse(self, response):
        for data in response.css('body'):
            # The article metadata is embedded as JSON-LD in a <script> tag;
            # pick the script that mentions 'mainEntityOfPage'
            current_script = data.xpath("//script[contains(., 'mainEntityOfPage')]/text()").extract_first()
            json_data = json.loads(current_script)
            yield {
                'name': json_data['headline'],
                'author': json_data['author']['name'],
                'url': json_data['mainEntityOfPage'],
                # The tag list sits at the bottom of the article body
                'tags': data.css('div.article-body-tags ul li a::text').getall(),
            }

Save this as file.py and run it with

$ scrapy crawl blogs -o output.json

But set up a Scrapy project structure first (e.g. with scrapy startproject) and place the spider in its spiders/ directory.
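The spider above only crawls the single URL in start_urls. To reach roughly 500 articles, one option is to start from a few listing pages and follow the article links found there. A rough sketch, where the section URLs, the link pattern and the hand-rolled counter are all assumptions rather than part of the answer above:

import scrapy
import json

class AljazeeraSpider(scrapy.Spider):
    name = 'aljazeera'
    # Assumed listing pages; any section or archive pages with article links would do
    start_urls = [
        'https://www.aljazeera.com/news/',
        'https://www.aljazeera.com/economy/',
    ]
    seen = 0
    limit = 500  # stop scheduling new articles after roughly this many

    def parse(self, response):
        # Follow every link that looks like an article URL (the pattern is an assumption)
        for href in response.css('a::attr(href)').getall():
            if self.seen >= self.limit:
                return
            if '/news/20' in href or '/economy/20' in href:
                self.seen += 1
                yield response.follow(href, callback=self.parse_article)

    def parse_article(self, response):
        # Same JSON-LD extraction as in the spider above
        current_script = response.xpath(
            "//script[contains(., 'mainEntityOfPage')]/text()").extract_first()
        if not current_script:
            return
        json_data = json.loads(current_script)
        author = json_data.get('author', {})
        yield {
            'name': json_data.get('headline'),
            'author': author.get('name') if isinstance(author, dict) else author,
            'url': json_data.get('mainEntityOfPage'),
            'tags': response.css('div.article-body-tags ul li a::text').getall(),
        }

Instead of counting items by hand, Scrapy's built-in CLOSESPIDER_ITEMCOUNT setting can also stop the crawl after a fixed number of scraped items.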

answered 2020-05-22T05:44:03.013