161

This has probably been asked before in a similar context, but I couldn't find an answer after about 20 minutes of searching, so I will ask.

I have written a Python script (say: scriptA.py) and another script (say: scriptB.py).

In scriptB I want to call scriptA multiple times with different arguments. Each run takes about an hour (it's a huge script that does a lot of things... don't worry about it). I want to be able to run scriptA with all the different arguments simultaneously, but I need to wait until all of them are done before continuing. My code:

import subprocess

#setup
do_setup()

#run scriptA
subprocess.call(scriptA + argumentsA)
subprocess.call(scriptA + argumentsB)
subprocess.call(scriptA + argumentsC)

#finish
do_finish()

I want to run all the subprocess.call() calls at the same time, and then wait until they have all finished. How do I do that?

I tried to use threading, like the example here:

from threading import Thread
import subprocess

def call_script(args):
    subprocess.call(args)

#run scriptA   
t1 = Thread(target=call_script, args=(scriptA + argumentsA,))
t2 = Thread(target=call_script, args=(scriptA + argumentsB,))
t3 = Thread(target=call_script, args=(scriptA + argumentsC,))
t1.start()
t2.start()
t3.start()

But I don't think this is right.

How do I know that they have all finished running before moving on to do_finish()?


8 Answers

219

Put the threads in a list and then use the join method:

threads = []

t = Thread(...)
threads.append(t)

...repeat as often as necessary...

# Start all threads
for x in threads:
    x.start()

# Wait for all of them to finish
for x in threads:
    x.join()
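Applied to the question's scenario, a minimal runnable sketch of this pattern (with time.sleep standing in for the hour-long subprocess.call, and hypothetical placeholder argument lists in place of scriptA + argumentsA etc.):

```python
import threading
import time

results = []  # appended to by the worker threads (list.append is thread-safe in CPython)

def call_script(args):
    # stand-in for subprocess.call(args): simulate work, then record the call
    time.sleep(0.1)
    results.append(args)

# hypothetical placeholder argument lists for the three runs of scriptA
jobs = [["scriptA.py", "A"], ["scriptA.py", "B"], ["scriptA.py", "C"]]

threads = []
for job in jobs:
    # note the trailing comma: args must be a tuple containing the argument list
    t = threading.Thread(target=call_script, args=(job,))
    threads.append(t)

for t in threads:
    t.start()

for t in threads:
    t.join()

# at this point all three calls are guaranteed to have finished
print(len(results))
```

Only after the second loop completes is it safe to call do_finish().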
Answered 2012-08-15T12:00:03.550
191

You need to use the join method of the Thread objects at the end of the script.

t1 = Thread(target=call_script, args=(scriptA + argumentsA,))
t2 = Thread(target=call_script, args=(scriptA + argumentsB,))
t3 = Thread(target=call_script, args=(scriptA + argumentsC,))

t1.start()
t2.start()
t3.start()

t1.join()
t2.join()
t3.join()

Thus the main thread will wait until t1, t2 and t3 finish execution.

Answered 2012-08-15T11:54:27.147
51

In Python 3, since Python 3.2 there has been a new approach to reach the same result, which I personally prefer to the traditional thread creation/start/join: the concurrent.futures package: https://docs.python.org/3/library/concurrent.futures.html

Using a ThreadPoolExecutor, the code would be:

from concurrent.futures import ThreadPoolExecutor
import time

def call_script(ordinal, arg):
    print('Thread', ordinal, 'argument:', arg)
    time.sleep(2)
    print('Thread', ordinal, 'Finished')

args = ['argumentsA', 'argumentsB', 'argumentsC']

with ThreadPoolExecutor(max_workers=2) as executor:
    ordinal = 1
    for arg in args:
        executor.submit(call_script, ordinal, arg)
        ordinal += 1
print('All tasks have been finished')

The output of the preceding code will be something like:

Thread 1 argument: argumentsA
Thread 2 argument: argumentsB
Thread 1 Finished
Thread 2 Finished
Thread 3 argument: argumentsC
Thread 3 Finished
All tasks have been finished

One of the advantages is that you can control the throughput by setting the maximum number of concurrent workers.
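If you also need the return values of the tasks, the futures returned by submit can be collected and waited on explicitly. A small sketch using as_completed (with a trivial stand-in function, not the original scripts):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def call_script(arg):
    # stand-in worker; a real version might run subprocess.call here
    return arg.upper()

args = ['argumentsA', 'argumentsB', 'argumentsC']

results = []
with ThreadPoolExecutor(max_workers=2) as executor:
    futures = [executor.submit(call_script, a) for a in args]
    for fut in as_completed(futures):  # yields each future as it finishes
        results.append(fut.result())

# leaving the with-block alone already guarantees all tasks are done here
print(sorted(results))
```

Note that exiting the with-block waits for all submitted tasks even if you never touch the futures; as_completed is only needed when you want the results (or exceptions) as they arrive.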

Answered 2016-05-20T08:02:14.377
36

I prefer to use list comprehensions based on an input list:

inputs = [scriptA + argumentsA, scriptA + argumentsB, ...]
threads = [Thread(target=call_script, args=(i,)) for i in inputs]
[t.start() for t in threads]
[t.join() for t in threads]
Answered 2015-08-05T11:19:32.630
7

You can have a class like the one below, to which you can add any number of functions or console scripts you want to execute in parallel, then start the execution and wait for all the jobs to finish.

from multiprocessing import Process

class ProcessParallel(object):
    """
    Run the given functions in parallel.
    """
    def __init__(self, *jobs):
        self.jobs = jobs
        self.processes = []

    def fork_processes(self):
        """
        Creates the process objects for the given function delegates.
        """
        for job in self.jobs:
            proc = Process(target=job)
            self.processes.append(proc)

    def start_all(self):
        """
        Starts all the function processes together.
        """
        for proc in self.processes:
            proc.start()

    def join_all(self):
        """
        Waits until all the functions have finished executing.
        """
        for proc in self.processes:
            proc.join()


def two_sum(a=2, b=2):
    return a + b

def multiply(a=2, b=2):
    return a * b


# How to run:
if __name__ == '__main__':
    # note: two_sum and multiply can be replaced with any Python console
    # scripts you want to run in parallel
    procs = ProcessParallel(two_sum, multiply)
    # Create all the process objects
    procs.fork_processes()
    # Start process execution
    procs.start_all()
    # Wait until all the processes have finished
    procs.join_all()
Answered 2013-04-30T15:07:45.410
3

I just came across the same problem, where I needed to wait for all the threads created with a for loop. I tried out the following piece of code. It may not be the perfect solution, but I thought it would be a simple one to test:

for t in threading.enumerate():
    try:
        t.join()
    except RuntimeError as err:
        if 'cannot join current thread' in err.args[0]:
            continue
        else:
            raise
Answered 2018-03-14T13:35:32.590
3

From the threading module documentation:

There is a "main thread" object; this corresponds to the initial thread of control in the Python program. It is not a daemon thread.

There is the possibility that "dummy thread objects" are created. These are thread objects corresponding to "alien threads", which are threads of control started outside the threading module, such as directly from C code. Dummy thread objects have limited functionality; they are always considered alive and daemonic, and cannot be join()ed. They are never deleted, since it is impossible to detect the termination of alien threads.

So, to catch those two cases when you are not interested in keeping a list of the threads you create:

import threading as thrd


def alter_data(data, index):
    data[index] *= 2


data = [0, 2, 6, 20]

for i, value in enumerate(data):
    thrd.Thread(target=alter_data, args=[data, i]).start()

for thread in thrd.enumerate():
    if thread.daemon:
        continue
    try:
        thread.join()
    except RuntimeError as err:
        if 'cannot join current thread' in err.args[0]:
            # catches the main thread
            continue
        else:
            raise

Whereupon:

>>> print(data)
[0, 4, 12, 40]
Answered 2018-07-10T09:55:24.220
2

Maybe something like:

for t in threading.enumerate():
    if t.daemon:
        t.join()
Answered 2017-06-06T12:31:22.750