I am using the following Python code to scrape a news website and collect news articles:

import mechanize
import re
import time
from selenium import webdriver
from bs4 import BeautifulSoup


url = "http://www.thehindu.com/archive/web/2013/07/01/"

link_dictionary = {}
driver = webdriver.Firefox()
driver.get(url)
time.sleep(10)
soup = BeautifulSoup(driver.page_source)

for tag_li in soup.findAll('li', attrs={"data-section":"Op-Ed"}):
    for link in tag_li.findAll('a'):
        link_dictionary[link.string] = link.get('href')
        urlnew = link_dictionary[link.string]
        brnew =  mechanize.Browser()
        htmltextnew = brnew.open(urlnew).read()            
        articletext = ""
        soupnew = BeautifulSoup(htmltextnew)
        for tag in soupnew.findAll('p'):
            articletext += tag.text
        print "opinion " + re.sub('\s+', ' ', articletext, flags=re.M)
driver.close()

The code above is for a single day. When I ran it for a month or two, it consumed around 3 GB of space on my C:\ drive (I am using Windows 7).

I have no idea how or why it is consuming so much space. Can someone explain what is happening and help me get the lost disk space back? I am new to Python programming.

2 Answers

Do some disk cleanup. That should let you recover around 3-4 GB. To reclaim more disk space, you may have to delete some of your application data as well.
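
Much of that space is probably leftover temporary Firefox profiles: Selenium writes a fresh profile into your temp directory every time webdriver.Firefox() starts, and these can pile up across runs. As a minimal sketch for clearing them out (assuming the leftovers are the tmp* folders under %TEMP%; the naming can vary by Selenium version, so inspect the folders before deleting anything):

import os
import shutil
import tempfile

# Assumption: the leftover Selenium profiles are the tmp* directories
# in the user's temp folder, e.g. C:\Users\<you>\AppData\Local\Temp.
temp_dir = tempfile.gettempdir()
for name in os.listdir(temp_dir):
    path = os.path.join(temp_dir, name)
    if os.path.isdir(path) and name.startswith('tmp'):
        shutil.rmtree(path, ignore_errors=True)  # best-effort delete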

Answered 2013-11-13T11:05:42.253

link_dictionary = {} will keep growing.

You never read anything back out of it, and it doesn't appear to be needed.

Try this:

import mechanize
import re
import time
from selenium import webdriver
from bs4 import BeautifulSoup


url = "http://www.thehindu.com/archive/web/2013/07/01/"

driver = webdriver.Firefox()
driver.get(url)
time.sleep(10)
soup = BeautifulSoup(driver.page_source)

for tag_li in soup.findAll('li', attrs={"data-section":"Op-Ed"}):
    for link in tag_li.findAll('a'): 
        urlnew = link.get('href')
        brnew =  mechanize.Browser()
        htmltextnew = brnew.open(urlnew).read()            
        articletext = ""
        soupnew = BeautifulSoup(htmltextnew)
        for tag in soupnew.findAll('p'):
            articletext += tag.text
        print "opinion " + re.sub('\s+', ' ', articletext, flags=re.M)
driver.close()
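
One more guess about where the space is going: driver.close() only closes the browser window, while driver.quit() also ends the WebDriver session and deletes the temporary Firefox profile Selenium created for the run. If those profiles are what is filling your C:\ drive, ending the script with quit() should stop the growth:

driver.quit()  # ends the session and removes the temporary profile
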
Answered 2013-11-13T04:44:35.543