
I'm processing 100k domains into a CSV based on results fetched from Siteadvisor using urllib (I know, not the best method). However, my current script creates too many threads and Python runs into errors. Is there a way I can "chunk" this script to do X domains at a time (say 10-20) to prevent these errors? Thanks in advance.

import threading
import urllib

class Resolver(threading.Thread):
    def __init__(self, address, result_dict):
        threading.Thread.__init__(self)
        self.address = address
        self.result_dict = result_dict

    def run(self):
        try:
            content = urllib.urlopen("http://www.siteadvisor.com/sites/" + self.address).read(12000)
            search1 = content.find("didn't find any significant problems.")
            search2 = content.find('yellow')
            search3 = content.find('web reputation analysis found potential security')
            search4 = content.find("don't have the results yet.")

            if search1 != -1:
                result = "safe"
            elif search2 != -1:
                result = "caution"
            elif search3 != -1:
                result = "warning"
            elif search4 != -1:
                result = "unknown"
            else:
                result = ""

            self.result_dict[self.address] = result

        except:
            pass


def main():
    infile = open("domainslist", "r")
    intext = infile.readlines()
    threads = []
    results = {}
    for address in [address.strip() for address in intext if address.strip()]:
        resolver_thread = Resolver(address, results)
        threads.append(resolver_thread)
        resolver_thread.start()

    for thread in threads:
        thread.join()

    outfile = open('final.csv', 'w')
    outfile.write("\n".join("%s,%s" % (address, result) for address, result in results.iteritems()))
    outfile.close()

if __name__ == '__main__':
    main()
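For illustration, the behaviour I'm after is roughly this sketch - the batch size and helper names are just placeholders, not working code from my script:

```python
def batches(items, size):
    """Yield successive fixed-size slices of items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Start at most `size` Resolver threads (the class above), wait for
# all of them to finish, then move on to the next batch.
def process_in_batches(addresses, results, size=20):
    for batch in batches(addresses, size):
        threads = [Resolver(address, results) for address in batch]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
```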

Edit: new version, based on andyortlieb's suggestions.

import threading
import urllib
import time

class Resolver(threading.Thread):
    def __init__(self, address, result_dict, threads):
        threading.Thread.__init__(self)
        self.address = address
        self.result_dict = result_dict
        self.threads = threads
    def run(self):
        try:
            content = urllib.urlopen("http://www.siteadvisor.com/sites/" + self.address).read(12000)
            search1 = content.find("didn't find any significant problems.")
            search2 = content.find('yellow')
            search3 = content.find('web reputation analysis found potential security')
            search4 = content.find("don't have the results yet.")

            if search1 != -1:
                result = "safe"
            elif search2 != -1:
                result = "caution"
            elif search3 != -1:
                result = "warning"
            elif search4 != -1:
                result = "unknown"
            else:
                result = ""

            self.result_dict[self.address] = result

            outfile = open('final.csv', 'a')
            outfile.write(self.address + "," + result + "\n")
            outfile.close()
            print self.address + result

            self.threads.remove(self)  # free a slot so main can start another thread
        except:
            pass


def main():
    infile = open("domainslist", "r")
    intext = infile.readlines()
    threads = []
    results = {}

    for address in [address.strip() for address in intext if address.strip()]:
        loop=True
        while loop:
            if len(threads) < 20:
                resolver_thread = Resolver(address, results, threads)
                threads.append(resolver_thread)
                resolver_thread.start()
                loop=False
            else:
                time.sleep(.25)


    for thread in threads:
        thread.join()

#    removed so I can track the progress of the script
#    outfile = open('final.csv', 'w')
#    outfile.write("\n".join("%s,%s" % (address, ip) for address, ip in results.iteritems()))
#    outfile.close()

if __name__ == '__main__':
    main()

2 Answers


Your existing code will work fine - just modify your `__init__` method inside `Resolver` to take a list of addresses instead of a single one, so that instead of one thread per address you have one thread for every 10 (for example). That way you won't overload on threads.

You'll obviously also need to modify `run` slightly, so that it loops over the array of addresses instead of the single `self.address`.

I can write up a quick example if you'd like, but from the quality of your code I feel you'll be able to handle it easily.

Hope this helps!

Edit: example below, as requested. Note that you'll have to modify `main` to send your `Resolver` instances lists of addresses instead of a single address - I can't handle this part for you without knowing more about the file format and how the addresses are stored. Note - you could do the `run` method with a helper function, but I thought this might be more understandable as an example.

class Resolver(threading.Thread):
    def __init__(self, addresses, result_dict):
        threading.Thread.__init__(self)
        self.addresses = addresses  # Now takes in a list of multiple addresses
        self.result_dict = result_dict

    def run(self):
        for address in self.addresses: # do your existing code for every address in the list
            try:
                content = urllib.urlopen("http://www.siteadvisor.com/sites/" + address).read(12000)
                search1 = content.find("didn't find any significant problems.")
                search2 = content.find('yellow')
                search3 = content.find('web reputation analysis found potential security')
                search4 = content.find("don't have the results yet.")

                if search1 != -1:
                    result = "safe"
                elif search2 != -1:
                    result = "caution"
                elif search3 != -1:
                    result = "warning"
                elif search4 != -1:
                    result = "unknown"
                else:
                    result = ""

                self.result_dict[address] = result
            except:
                pass
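For completeness, the matching change in `main` could look something like this sketch - the chunk size of 10 and the `chunked` helper are my own choices for illustration, not part of the answer above:

```python
def chunked(seq, size):
    """Yield successive fixed-size slices of seq."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def main():
    with open("domainslist") as infile:
        addresses = [line.strip() for line in infile if line.strip()]

    threads = []
    results = {}
    # One Resolver thread per chunk of 10 addresses, instead of one per address.
    for chunk in chunked(addresses, 10):
        resolver_thread = Resolver(chunk, results)  # Resolver as modified above
        threads.append(resolver_thread)
        resolver_thread.start()

    for thread in threads:
        thread.join()
```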

answered 2010-06-28T18:26:24.137

This may be kind of rigid, but you could pass `threads` to `Resolver`, so that when `Resolver.run` finishes, it can call `threads.remove(self)`.

Then you can nest some conditions so that threads are only created when there's room, and when there isn't, they wait until there is.

for address in [address.strip() for address in intext if address.strip()]:
        loop=True
        while loop:
            if len(threads)<20:
                resolver_thread = Resolver(address, results, threads)
                threads.append(resolver_thread)
                resolver_thread.start()
                loop=False
            else: 
                time.sleep(.25)
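The polling loop above works; the same 20-thread cap can also be enforced without sleeping by using a `threading.Semaphore`. A minimal self-contained sketch of that variant - the dummy `resolve` function and the generated domain names stand in for the real Siteadvisor lookup:

```python
import threading

MAX_THREADS = 20
slots = threading.Semaphore(MAX_THREADS)  # counts free thread slots
results = {}
results_lock = threading.Lock()

def resolve(address):
    """Stand-in for the Siteadvisor lookup; replace with the real fetch."""
    try:
        with results_lock:
            results[address] = "safe"  # dummy result
    finally:
        slots.release()  # free the slot even if the lookup fails

threads = []
for address in ["example%d.com" % i for i in range(100)]:
    slots.acquire()  # blocks while MAX_THREADS resolvers are running
    t = threading.Thread(target=resolve, args=(address,))
    threads.append(t)
    t.start()

for t in threads:
    t.join()
```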

answered 2010-06-28T18:42:10.187