
I am using threads and a queue to fetch urls and store results in a database.
I want just one thread to do the storing work.
So I wrote the following code:

import threading
import time

import Queue

site_count = 10

fetch_thread_count = 2

site_queue = Queue.Queue()
proxy_array=[]        


class FetchThread(threading.Thread):
    def __init__(self,site_queue,proxy_array):
        threading.Thread.__init__(self)
        self.site_queue = site_queue
        self.proxy_array = proxy_array
    def run(self):
        while True:
            index = self.site_queue.get()
            self.get_proxy_one_website(index)
            self.site_queue.task_done()
    def get_proxy_one_website(self,index):
        print '{0} fetched site :{1}\n'.format(self.name,index)
        self.proxy_array.append(index)


def save():
    while True:
        if site_queue.qsize() > 0:
            if len(proxy_array) > 10:
                print 'save :{0}  to database\n'.format(proxy_array.pop())

            else:
                time.sleep(1)
        elif len(proxy_array) > 0:
            print 'save :{0} to database\n'.format(proxy_array.pop())

        elif len(proxy_array) == 0:
            print 'break'
            break
        else:
            print 'continue'
            continue

def start_crawl():
    global site_count,fetch_thread_count,site_queue,proxy_array
    print 'init'
    for i in range(fetch_thread_count):
        ft = FetchThread(site_queue,proxy_array)
        ft.setDaemon(True)
        ft.start()

    print 'put site_queue'
    for i in range(site_count):
        site_queue.put(i)

    save()

    print 'start site_queue join'
    site_queue.join()
    print 'finish'

start_crawl()

The output:

init
put site_queue
Thread-1 fetched site :0

Thread-2 fetched site :1

Thread-1 fetched site :2

Thread-2 fetched site :3

Thread-1 fetched site :4

Thread-2 fetched site :5

Thread-1 fetched site :6

Thread-2 fetched site :7

Thread-1 fetched site :8

Thread-2 fetched site :9

save :9 to database

save :8 to database

save :7 to database

save :6 to database

save :5 to database

save :4 to database

save :3 to database

save :2 to database

save :1 to database

save :0 to database

break
start site_queue join
finish
[Finished in 1.2s]

Why does save() only start saving after all the sites have been fetched? site_queue.join() is written after save().
I also tried replacing save() with a thread function, but that didn't work either.
Does this mean I have to change proxy_array = [] to proxy_queue = Queue.Queue() before I can use threading to store the data?
I only want one thread to do this, and no other thread will take data from proxy_array, so why would I need a Queue? Using a Queue here seems strange.
Is there a better solution?

UPDATE:
I don't want to wait until all the FetchThreads have finished their work; I want to save data while fetching, which would be much faster. I would like the result to look something like the following (since I use proxy_array.pop(), "save :0" may appear later; this is just an easy-to-understand example):

Thread-2 fetched site :1

Thread-1 fetched site :2

save :0 to database

Thread-2 fetched site :3

Thread-1 fetched site :4

save :2 to database

save :3 to database


Thread-2 fetched site :5

Thread-1 fetched site :6

save :4 to database
.......

UPDATE2, for anyone who has the same questions:

Question:
As I said above, no other thread will take data from proxy_array.
I just could not see why that would break thread safety.

Answer:
The producer-consumer problem linked in misha's answer; after reading it carefully, I understood.


Question:
One more question: can the program's main thread work as a consumer alongside the FetchThreads (in other words, without creating a StoreThread)?

That is what I have not figured out yet; I will update after I find the answer.
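
A minimal sketch of that idea, assuming the fetch threads push their results into a proxy_queue (the names proxy_queue and save_to_database are placeholders, not from the answers below): the main thread blocks on proxy_queue.get() once per expected result, so no StoreThread is needed.

import threading
import Queue

site_count = 10
site_queue = Queue.Queue()
proxy_queue = Queue.Queue()

def fetch():
    while True:
        index = site_queue.get()
        proxy_queue.put(index)       # hand each result to the main thread
        site_queue.task_done()

def save_to_database(data):          # placeholder for the real storing work
    print 'save :{0} to database'.format(data)

for i in range(2):
    t = threading.Thread(target=fetch)
    t.setDaemon(True)
    t.start()

for i in range(site_count):
    site_queue.put(i)

# the main thread is the consumer: one blocking get() per expected result,
# so the loop ends exactly when every site has been fetched and saved
for i in range(site_count):
    save_to_database(proxy_queue.get())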


2 Answers


I had to come up with something similar, producer-consumer style. The producer generates an 'id' and the consumer uses that id to fetch a url and process it. Here is my skeleton code:


    import Queue
    import random
    import threading
    import time
    import sys

    data_queue = Queue.Queue()
    lock = threading.Lock()

    def gcd(a, b):
        # Euclid's algorithm: when b reaches 0, a holds the gcd
        while b != 0:
            a, b = b, a % b
        return a

    def consumer(idnum):
        while True:
            try:
                data = data_queue.get(block=False)
            except Queue.Empty:
                # nothing queued yet; back off briefly and retry
                time.sleep(1)
            else:
                with lock:
                    print('\t consumer %d: computed gcd(%d, %d) = %d' % (idnum, data[0], data[1], gcd(data[0], data[1])))
                time.sleep(1)
                # task_done() must pair with each successful get()
                data_queue.task_done()

    def producer(idnum, count):
        for i in range(count):
            a,b = random.randint(1, sys.maxint), random.randint(1, sys.maxint)
            with lock:
                print('\t producer %d: generated (%d, %d)'% (idnum, a, b))
            data_queue.put((a,b))
            time.sleep(0.5)

    if __name__ == '__main__':
        num_producers = 1
        num_consumers = 2
        num_integer_pairs = 10

        for i in range(num_consumers):
            t = threading.Thread(target=consumer, args=(i,))
            t.daemon = True
            t.start()

        threads = []
        for ii in range(num_producers):
            thread = threading.Thread(target=producer, args=(ii, num_integer_pairs))
            threads.append(thread)
            thread.start()

        # wait for the producers threads to finish
        for thread in threads:
            thread.join()
        print 'done with producer threads'

        # wait till all the jobs are done in the queue
        data_queue.join()

        with lock:
            print 'all consumer threads finished'

        with lock:
            print 'main thread exited'
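
Note the shutdown choreography in this skeleton: the consumers are daemon threads, so they die with the main thread instead of being joined; the producers are joined explicitly; and data_queue.join() only returns once task_done() has been called for every item that was put, which is why task_done() must pair with each successful get().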
Answered 2014-02-04T01:14:35.917

I suggest you read about the producer-consumer problem. Your producers are the fetch threads. Your consumer is the save function. If I understand correctly, you want the consumer to save the fetched results as soon as they are available. For that to work, the producers and the consumer must be able to communicate through some thread-safe channel, such as a queue.

Basically, you need another queue, and it replaces proxy_array. Your save function will look something like this:

while True:
    try:
        data = fetch_data_from_output_queue()
        save_to_database(data)
    except EmptyQueue:
        if stop_flag.is_set():
            # the fetch threads are done and nothing is left to save
            break
        time.sleep(1)
        continue

This save function needs to run in its own thread. stop_flag is an Event that you set after joining the fetch threads.

At a high level, your application will look like this:

input_queue = initialize_input_queue()
output_queue = initialize_output_queue()

stop_flag = Event()
create_and_start_save_thread(output_queue) # read from output queue, save to DB
create_and_start_fetch_threads(input_queue, output_queue) # get sites to crawl from input queue, push crawled results to output_queue
join_fetch_threads() # this will block until the fetch threads have gone through everything in the input_queue
stop_flag.set() # this will inform the save thread that we are done
join_save_thread() # wait for all the saving to complete
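
To make the outline concrete, here is a minimal runnable sketch (Python 2, to match the question; crawl_site, the thread count, and the site count are placeholder choices, and input_queue.join() plays the role of join_fetch_threads() because the fetch workers are daemons that loop forever):

import threading
import time
import Queue

input_queue = Queue.Queue()
output_queue = Queue.Queue()
stop_flag = threading.Event()

def crawl_site(site):                  # stand-in for the real fetch logic
    return 'proxy-from-site-%d' % site

def fetch_worker():
    while True:
        site = input_queue.get()
        output_queue.put(crawl_site(site))
        input_queue.task_done()

def save_worker():
    while True:
        try:
            data = output_queue.get(block=False)
            print 'save :{0} to database'.format(data)
        except Queue.Empty:
            if stop_flag.is_set():
                break                  # fetching is done and the queue is drained
            time.sleep(1)

save_thread = threading.Thread(target=save_worker)
save_thread.start()

for i in range(2):
    t = threading.Thread(target=fetch_worker)
    t.setDaemon(True)
    t.start()

for site in range(10):
    input_queue.put(site)

input_queue.join()                     # block until the fetch threads have processed everything
stop_flag.set()                        # tell the save thread we are done
save_thread.join()                     # wait for all the saving to complete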
Answered 2013-09-25T07:26:17.287