I'm writing a script in Python that will scrape some pages from my web server and put them in a file. I'm using the mechanize module (specifically mechanize.Browser()) for this particular task.
However, I've found that working through a single instance of mechanize.Browser() is rather slow, since it fetches pages one at a time. Is there a way I could relatively painlessly use multithreading/multiprocessing (i.e. issue several GET requests at once)?