I am using the following Python code to scrape a news website and collect news articles:
import mechanize
import re
import time
from selenium import webdriver
from bs4 import BeautifulSoup

url = "http://www.thehindu.com/archive/web/2013/07/01/"
link_dictionary = {}

# Load the archive page in Firefox so JavaScript-rendered content is available.
driver = webdriver.Firefox()
driver.get(url)
time.sleep(10)
soup = BeautifulSoup(driver.page_source, "html.parser")

# Collect every Op-Ed link, then fetch each linked article and print its text.
for tag_li in soup.findAll('li', attrs={"data-section": "Op-Ed"}):
    for link in tag_li.findAll('a'):
        link_dictionary[link.string] = link.get('href')
        urlnew = link_dictionary[link.string]
        brnew = mechanize.Browser()
        htmltextnew = brnew.open(urlnew).read()
        articletext = ""
        soupnew = BeautifulSoup(htmltextnew, "html.parser")
        for tag in soupnew.findAll('p'):
            articletext += tag.text
        print "opinion " + re.sub(r'\s+', ' ', articletext, flags=re.M)

driver.close()
The code above handles a single day's archive. When I ran it over a month or two of archive pages, it consumed roughly 3 GB of space on my C:\ drive (I am on Windows 7).
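For context, running it over a month or two just means repeating the same per-day logic while stepping the date in the archive URL. A minimal sketch of that outer loop, assuming the URL pattern holds for every day; the fetch_day helper here is a hypothetical stand-in for the single-day code above:

from datetime import date, timedelta

def fetch_day(day_url):
    # Hypothetical placeholder: the single-day scraping logic shown above,
    # with day_url in place of the hard-coded url.
    pass

start = date(2013, 7, 1)
end = date(2013, 8, 31)  # roughly two months of archives
day = start
while day <= end:
    day_url = "http://www.thehindu.com/archive/web/%04d/%02d/%02d/" % (
        day.year, day.month, day.day)
    fetch_day(day_url)
    day += timedelta(days=1)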
I don't understand how or why the script is eating so much disk space. Can someone explain what is happening and help me get that space back? I am new to Python programming.