
I'm working on a project that extracts articles from gaming media websites, and I'm doing a basic test run. According to VSCode's debugger, it consistently hangs on two sites after I set up multithreaded extraction (changing the thread count doesn't help). I'm honestly not sure what I'm doing wrong here; I followed the examples that are already out there. One of the sites, Gamespot, is even used in someone's tutorial, and I tried removing the other site (Polygon), but that didn't seem to help. I've created a virtual environment and tried it under both Python 3.8 and 3.7. All dependencies appear to be satisfied; I also tested on repl.it and got the same hang.

I'd love to hear that I'm simply doing something wrong so that I can fix it; I really want to do some data science on these particular sites and their articles! But it seems that, at least for OS X users, there is some kind of bug in the multithreading. Here is my code:

#import system functions
import sys
import requests
sys.path.append('/usr/local/lib/python3.8/site-packages/')
#import basic HTTP handling processes
#import urllib
#from urllib.request import urlopen
#import scraping libraries

#import newspaper and BS dependencies

from bs4 import BeautifulSoup
import newspaper
from newspaper import Article 
from newspaper import Source 
from newspaper import news_pool

#import broad data libraries
import pandas as pd

#import gaming related news sources as newspapers
gamespot = newspaper.build('https://www.gamespot.com/news', memoize_articles=False)
polygon = newspaper.build('https://www.polygon.com/gaming', memoize_articles=False)

#organize the gaming related news sources using a list
gamingPress = [gamespot, polygon]
print("About to set the pool.")
#parallel process these articles using multithreading (store in mem)
news_pool.set(gamingPress, threads_per_source=4)
print("Setting the pool")
news_pool.join()
print("Pool set")
#create the interim pandas dataframe based on these sources
final_df = pd.DataFrame()

#limit the number of articles pulled per source; this test run caps each source at 10
limit = 10

for source in gamingPress:
    #these are temporary placeholder lists for elements to be extracted
    list_title = []
    list_text = []
    list_source = []

    count = 0

    for article_extract in source.articles:
        #stop once the per-source article limit is reached, before parsing
        if count > limit:
            break

        article_extract.parse()

        list_title.append(article_extract.title)
        list_text.append(article_extract.text)
        list_source.append(article_extract.source_url)

        print(count)
        count += 1 #progress the loop via count

    temp_df = pd.DataFrame({'Title': list_title, 'Text': list_text, 'Source': list_source})
    #Append this to the final DataFrame
    final_df = pd.concat([final_df, temp_df], ignore_index=True)

#export to CSV, placeholder for deeper analysis/more limited scope, may remain
final_df.to_csv('gaming_press.csv')

When I finally gave up and interrupted it at the console, this is what I got back:


About to set the pool.
Setting the pool
^X^X^CTraceback (most recent call last):
  File "scraper1.py", line 31, in <module>
    news_pool.join()
  File "/usr/local/lib/python3.8/site-packages/newspaper3k-0.3.0-py3.8.egg/newspaper/mthreading.py", line 103, in join
    self.pool.wait_completion()
  File "/usr/local/lib/python3.8/site-packages/newspaper3k-0.3.0-py3.8.egg/newspaper/mthreading.py", line 63, in wait_completion
    self.tasks.join()
  File "/usr/local/Cellar/python@3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/queue.py", line 89, in join
    self.all_tasks_done.wait()
  File "/usr/local/Cellar/python@3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/threading.py", line 302, in wait
    waiter.acquire()
KeyboardInterrupt

1 Answer


I decided to look into the Newspaper multithreading issue. I reviewed Newspaper's source code on GitHub and designed this answer around it. In my testing I was able to obtain the article titles.

This processing seems to be time-consuming, because it takes 6 minutes on average. After doing more research, it looks like the time delay is directly related to the articles being downloaded in the background. I'm not sure how to speed this up within Newspaper itself; one workaround is sketched after the code below.

import newspaper
from newspaper import Config
from newspaper import news_pool

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'

config = Config()
config.browser_user_agent = USER_AGENT
config.request_timeout = 10

gamespot = newspaper.build('https://www.gamespot.com/news', config=config, memoize_articles=False)
polygon = newspaper.build('https://www.polygon.com/gaming', config=config, memoize_articles=False)

gamingPress = [gamespot, polygon]

# this setting is adjustable 
news_pool.config.number_threads = 2

# this setting is adjustable 
news_pool.config.thread_timeout_seconds = 2

news_pool.set(gamingPress)
news_pool.join()

for source in gamingPress:
    for article_extract in source.articles:
        article_extract.parse()
        print(article_extract.title)
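
As a follow-up on the six-minute runtime: because the delay comes from downloading every discovered article in the background, one workaround is to truncate each source's article list before the news_pool.set() call. This is a minimal sketch under that assumption; ARTICLE_LIMIT is an arbitrary cap chosen for illustration, not a Newspaper setting.

# a minimal sketch: cap how many articles each source downloads;
# ARTICLE_LIMIT is a hypothetical value, not part of Newspaper's API
ARTICLE_LIMIT = 25

for source in gamingPress:
    # source.articles is a plain list of Article objects,
    # so slicing it shrinks the pool's download queue
    source.articles = source.articles[:ARTICLE_LIMIT]

# then hand the trimmed sources to the pool as before
news_pool.set(gamingPress)
news_pool.join()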

Honestly, I'm still trying to determine the benefit of using news_pool. Judging from the comments in the Newspaper source code, the main purpose of news_pool is related to connection rate limiting. I also noticed that several attempts have been made to improve the threading model, but those code updates haven't been pushed into the production code yet.

That being said... the answer below starts processing within 1 minute, and it does not use news_pool. More testing is needed to see whether the sources rate-limit the connections or other problems surface.

import newspaper
from newspaper import Config

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'

config = Config()
config.browser_user_agent = USER_AGENT
config.request_timeout = 10

gamespot = newspaper.build('https://www.gamespot.com/news', config=config, memoize_articles=False)
polygon = newspaper.build('https://www.polygon.com/gaming', config=config, memoize_articles=False)
gamingPress = [gamespot, polygon]
for source in gamingPress:
    source.download_articles()
    for article_extract in source.articles:
        article_extract.parse()
        print(article_extract.title)
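
To tie this back to the CSV goal in the question, the same loop can collect the parsed fields into a pandas DataFrame instead of printing the titles. Here is a minimal sketch under that assumption, reusing the three columns from the question's script:

import pandas as pd

rows = []

for source in gamingPress:
    source.download_articles()
    for article_extract in source.articles:
        article_extract.parse()
        # gather the same three fields the question's script extracts
        rows.append({'Title': article_extract.title,
                     'Text': article_extract.text,
                     'Source': article_extract.source_url})

final_df = pd.DataFrame(rows)
final_df.to_csv('gaming_press.csv', index=False)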

Concerning the news_pool code section: for some reason, in my limited testing against your target sources, I noticed duplicate article titles.
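
If those duplicates matter for your analysis, one option is to skip any article whose URL has already been seen. This is a minimal sketch, assuming the duplicates share the same article URL and that the articles have already been downloaded (via news_pool or download_articles); seen_urls is a hypothetical helper, not a Newspaper feature.

# a minimal sketch: de-duplicate articles by URL before parsing;
# seen_urls is a hypothetical helper, not part of Newspaper's API
seen_urls = set()

for source in gamingPress:
    for article_extract in source.articles:
        if article_extract.url in seen_urls:
            continue  # already processed this article
        seen_urls.add(article_extract.url)
        article_extract.parse()
        print(article_extract.title)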

answered 2020-10-13T04:40:38.243