
I have some code that uses newspaper to go through various media outlets and download articles from them. This has worked fine for a long time, but recently it has started throwing errors. I can see where the problem is, but since I'm new to Python I'm not sure of the best way to fix it. Basically (I think) I need to make a change so that the occasional malformed URL doesn't crash the whole script, but instead lets it give up on that URL and move on to the others.

The error originates when I try to download the articles with:

article.download()

Some articles (they change every day, obviously) throw the following error, but the script keeps running:

    Traceback (most recent call last):
      File "C:\Anaconda3\lib\encodings\idna.py", line 167, in encode
        raise UnicodeError("label too long")
    UnicodeError: label too long

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "C:\Anaconda3\lib\site-packages\newspaper\mthreading.py", line 38, in run
        func(*args, **kargs)
      File "C:\Anaconda3\lib\site-packages\newspaper\source.py", line 350, in download_articles
        html = network.get_html(url, config=self.config)
      File "C:\Anaconda3\lib\site-packages\newspaper\network.py", line 39, in get_html
        return get_html_2XX_only(url, config, response)
      File "C:\Anaconda3\lib\site-packages\newspaper\network.py", line 60, in get_html_2XX_only
        url=url, **get_request_kwargs(timeout, useragent))
      File "C:\Anaconda3\lib\site-packages\requests\api.py", line 72, in get
        return request('get', url, params=params, **kwargs)
      File "C:\Anaconda3\lib\site-packages\requests\api.py", line 58, in request
        return session.request(method=method, url=url, **kwargs)
      File "C:\Anaconda3\lib\site-packages\requests\sessions.py", line 502, in request
        resp = self.send(prep, **send_kwargs)
      File "C:\Anaconda3\lib\site-packages\requests\sessions.py", line 612, in send
        r = adapter.send(request, **kwargs)
      File "C:\Anaconda3\lib\site-packages\requests\adapters.py", line 440, in send
        timeout=timeout
      File "C:\Anaconda3\lib\site-packages\urllib3\connectionpool.py", line 600, in urlopen
        chunked=chunked)
      File "C:\Anaconda3\lib\site-packages\urllib3\connectionpool.py", line 356, in _make_request
        conn.request(method, url, **httplib_request_kw)
      File "C:\Anaconda3\lib\http\client.py", line 1107, in request
        self._send_request(method, url, body, headers)
      File "C:\Anaconda3\lib\http\client.py", line 1152, in _send_request
        self.endheaders(body)
      File "C:\Anaconda3\lib\http\client.py", line 1103, in endheaders
        self._send_output(message_body)
      File "C:\Anaconda3\lib\http\client.py", line 934, in _send_output
        self.send(msg)
      File "C:\Anaconda3\lib\http\client.py", line 877, in send
        self.connect()
      File "C:\Anaconda3\lib\site-packages\urllib3\connection.py", line 166, in connect
        conn = self._new_conn()
      File "C:\Anaconda3\lib\site-packages\urllib3\connection.py", line 141, in _new_conn
        (self.host, self.port), self.timeout, **extra_kw)
      File "C:\Anaconda3\lib\site-packages\urllib3\util\connection.py", line 60, in create_connection
        for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
      File "C:\Anaconda3\lib\socket.py", line 733, in getaddrinfo
        for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
    UnicodeError: encoding with 'idna' codec failed (UnicodeError: label too long)

Each article is then supposed to be parsed and run through NLP, with certain elements written to a dataframe, so I have:

for paper in papers:
    for article in paper.articles:
        article.parse()
        print(article.title)
        article.nlp()
        if article.publish_date is None:
            d = datetime.now().date()
        else:
            d = article.publish_date.date()
        stories.loc[i] = [paper.brand, d, datetime.now().date(), article.title, article.summary, article.keywords, article.url]
        i += 1

(This is probably also a bit sloppy, but that's a problem for another day.)

This runs fine until it reaches one of the URLs with the error; then it throws an article exception and the script crashes:

    C:\Anaconda3\lib\site-packages\PIL\TiffImagePlugin.py:709: UserWarning: Corrupt EXIF data.  Expecting to read 2 bytes but only got 0.
      warnings.warn(str(msg))

    ArticleException                          Traceback (most recent call last)
    <ipython-input-17-2106485c4bbb> in <module>()
          4 for paper in papers:
          5     for article in paper.articles:
    ----> 6         article.parse()
          7         print(article.title)
          8         article.nlp()

    C:\Anaconda3\lib\site-packages\newspaper\article.py in parse(self)
        183 
        184     def parse(self):
    --> 185         self.throw_if_not_downloaded_verbose()
        186 
        187         self.doc = self.config.get_parser().fromstring(self.html)

    C:\Anaconda3\lib\site-packages\newspaper\article.py in throw_if_not_downloaded_verbose(self)
        519         if self.download_state == ArticleDownloadState.NOT_STARTED:
        520             print('You must `download()` an article first!')
    --> 521             raise ArticleException()
        522         elif self.download_state == ArticleDownloadState.FAILED_RESPONSE:
        523             print('Article `download()` failed with %s on URL %s' %

    ArticleException: 

So what's the best way to keep this from killing my script? Should I address it at the download stage, where I get the unicode error, or at the parse stage, by telling it to ignore those bad addresses? And how would I go about implementing that fix?

Any advice would be greatly appreciated.


3 Answers


I ran into the same problem, and although using except: pass is generally discouraged, the following worked for me:

    try:
        a.parse()
        file.write(a.title + '\n')
    except:
        pass
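
If you want to know which URLs are being given up on rather than dropping them silently, a slight variation on the same idea (just a sketch; it assumes a is the newspaper Article being processed and file is your output file, as above) prints the failing URL before moving on:

    try:
        a.parse()
        file.write(a.title + '\n')
    except Exception as e:
        # note which article was skipped instead of failing silently
        print('Skipping {}: {}'.format(a.url, e))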
answered 2017-07-26T10:18:22

I found that Navid is right about this exact problem.

However, .parse() is only one of the calls that can trip you up. I wrap all of the calls in a try/except structure, like this:

word_list = []

for words in google_news.articles:
    try:
        words.download()
        words.parse()
        words.nlp()
    except:
        pass

    word_list.append(words.keywords)
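
One thing to be aware of: the append at the bottom still runs when the except branch fires, so an article that failed typically contributes an empty keyword list. If you would rather drop failed articles entirely, a small variation (just a sketch, using the same google_news source) can continue on failure instead:

    word_list = []

    for words in google_news.articles:
        try:
            words.download()
            words.parse()
            words.nlp()
        except Exception:
            # skip articles that could not be downloaded or parsed
            continue

        word_list.append(words.keywords)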
answered 2019-02-10T15:22:15

You can try catching the ArticleException. Don't forget to import the newspaper module.

try:
  article.download()
  article.parse()
except newspaper.article.ArticleException:
  # do something
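
Applied to the loop from the question, a minimal sketch (assuming papers, stories, and i are already set up as in the question) just skips any article whose download failed and carries on:

    import newspaper
    from datetime import datetime

    for paper in papers:
        for article in paper.articles:
            try:
                article.parse()
                article.nlp()
            except newspaper.article.ArticleException:
                # download() failed for this URL earlier, so give up on it and move on
                continue
            if article.publish_date is None:
                d = datetime.now().date()
            else:
                d = article.publish_date.date()
            stories.loc[i] = [paper.brand, d, datetime.now().date(), article.title,
                              article.summary, article.keywords, article.url]
            i += 1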
answered 2019-08-13T16:34:32