I'm trying to monitor an element on a website that is generated via javascript. I've dealt with downloading a javascript-modified page before, and borrowed the code below to solve the problem with PyQt.
But when I set this code to run every 20 seconds, my network traffic averages 70 KB/s down and 5 KB/s up. The page actually saved is only about 80 KB, but it is heavy with javascript.
6 GB a day is not reasonable; my ISP has a data cap and I've already run into it.
Is there a way to modify this code so that, for example, it only executes the javascript corresponding to a specific element on the page? If so, how would I figure out what I need to execute? And would that make a significant difference in the network traffic I'm seeing?
Alternatively, what should I do instead? I've considered writing a Chrome extension, since Chrome already handles the javascript for me, but then I'd have to figure out how to integrate it with the rest of my project, which is completely new territory for me. If there's a better way, I'd rather do that.
# Borrowed from http://stackoverflow.com/questions/19161737/cannot-add-custom-request-headers-in-pyqt4
# which is borrowed from http://blog.motane.lu/2009/07/07/downloading-a-pages-content-with-python-and-webkit/
import sys, signal
from PyQt4.QtCore import QUrl
from PyQt4.QtGui import QApplication
from PyQt4.QtWebKit import QWebPage
from PyQt4.QtNetwork import QNetworkAccessManager, QNetworkRequest, QNetworkReply

cookie = ''  # snipped, the cookie I have to send is about as long as this bit of code...

class MyNetworkAccessManager(QNetworkAccessManager):
    def __init__(self, url):
        QNetworkAccessManager.__init__(self)
        request = QNetworkRequest(QUrl(url))
        self.reply = self.get(request)

    def createRequest(self, operation, request, data):
        # Attach the headers the site requires to every outgoing request.
        request.setRawHeader('User-Agent', 'Mozilla/5.0')
        request.setRawHeader('Cookie', cookie)
        return QNetworkAccessManager.createRequest(self, operation, request, data)

class Crawler(QWebPage):
    def __init__(self, url, file):
        QWebPage.__init__(self)
        self._url = url
        self._file = file
        self.manager = MyNetworkAccessManager(url)
        self.setNetworkAccessManager(self.manager)

    def crawl(self):
        signal.signal(signal.SIGINT, signal.SIG_DFL)
        self.loadFinished.connect(self._finished_loading)
        self.mainFrame().load(QUrl(self._url))

    def _finished_loading(self, result):
        # Save the post-javascript DOM, then quit.
        with open(self._file, 'w') as f:
            f.write(self.mainFrame().toHtml())
        sys.exit(0)

def main(url, file):
    app = QApplication([url, file])
    crawler = Crawler(url, file)
    crawler.crawl()
    sys.exit(app.exec_())
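One way to cut the traffic, since you already subclass QNetworkAccessManager, is to block requests for resources you don't need (images, fonts, media) in createRequest. This is only a sketch: the suffix list below is an example and should_block is a hypothetical helper; you would tune both to whatever your target page actually loads. The helper itself is plain Python, with the PyQt4 wiring shown in comments.

```python
# Example extension list -- adjust to the resources your page loads.
BLOCKED_SUFFIXES = ('.png', '.jpg', '.jpeg', '.gif', '.svg', '.ico',
                    '.woff', '.ttf', '.mp4')

def should_block(url, blocked=BLOCKED_SUFFIXES):
    """Return True if the URL's path ends with a blocked extension.

    Strips the query string and fragment first, so 'a.png?v=2' is
    still recognized as an image.
    """
    path = url.split('?', 1)[0].split('#', 1)[0]
    return path.lower().endswith(blocked)

# Wiring into the existing subclass (PyQt4):
# class MyNetworkAccessManager(QNetworkAccessManager):
#     def createRequest(self, operation, request, data):
#         if should_block(str(request.url().toString())):
#             # Swap the request for an empty page so nothing is
#             # actually downloaded for this resource.
#             request = QNetworkRequest(QUrl('about:blank'))
#         else:
#             request.setRawHeader('User-Agent', 'Mozilla/5.0')
#             request.setRawHeader('Cookie', cookie)
#         return QNetworkAccessManager.createRequest(self, operation, request, data)
```

Whether this helps much depends on where the 70 KB/s is actually going; it won't stop the javascript files themselves from being fetched if the page needs them to render your element.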
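Another option that keeps the javascript running but avoids re-downloading the same heavy assets every 20 seconds is to give the QNetworkAccessManager a disk cache and prefer cached copies. A sketch, assuming the site serves cacheable responses (if it sends no-cache headers this won't help); cache_dir is a hypothetical helper and the 100 MB limit is an arbitrary example. The Qt wiring is in comments because the cache object needs a running Qt application.

```python
import os, tempfile

def cache_dir(app_name='page_monitor'):
    # Hypothetical helper: pick a writable per-user cache directory.
    # 'page_monitor' is a made-up name; use your own.
    return os.path.join(tempfile.gettempdir(), app_name + '_cache')

# PyQt4 wiring, e.g. inside MyNetworkAccessManager.__init__:
# from PyQt4.QtNetwork import QNetworkDiskCache
# cache = QNetworkDiskCache(self)
# cache.setCacheDirectory(cache_dir())
# cache.setMaximumCacheSize(100 * 1024 * 1024)  # 100 MB, example limit
# self.setCache(cache)
#
# And in createRequest, tell Qt to prefer the cached copy when valid:
# request.setAttribute(QNetworkRequest.CacheLoadControlAttribute,
#                      QNetworkRequest.PreferCache)
```

After the first fetch, unchanged scripts and stylesheets should then be served from disk, so each 20-second poll only transfers what actually changed.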