
I want to know if there is a way to run some code in the child process when the parent process tries to terminate it. Is there a way we could write an Exception for this, maybe?

My code looks like this:

main_process.py

from multiprocessing import Process
from time import sleep

def main():
    p1 = Process(target=child, args=(arg1,))
    p1.daemon = True   # must be set before start()
    p1.start()
    #blah blah blah code here
    sleep(5)
    p1.terminate()

def child(arg1):
    #blah blah blah
    itemToSend = {}
    #more blah blah
    snmpEngine.transportDispatcher.jobStarted(1) # this job would never finish
    try:
        snmpEngine.transportDispatcher.runDispatcher()
    except:
        snmpEngine.transportDispatcher.closeDispatcher()
        raise

Since the job never finishes, the child process keeps running. I have to terminate it from the parent process because it will never terminate on its own. However, I want to send itemToSend to the parent process before the child is terminated. Can I somehow return it to the parent process?
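A possible direction (a minimal, untested sketch: it assumes a POSIX system, where terminate() delivers SIGTERM, and the pipe plumbing here is only for illustration) would be to trap the signal inside the child and push itemToSend through a pipe before exiting:

import signal
import sys
import time
from multiprocessing import Pipe, Process

def child(conn):
    itemToSend = {}

    def on_terminate(signum, frame):
        # runs inside the child when the parent calls terminate() (SIGTERM on POSIX)
        conn.send(itemToSend)
        conn.close()
        sys.exit(0)

    signal.signal(signal.SIGTERM, on_terminate)
    while True:            # stands in for the never-ending runDispatcher() call
        time.sleep(1)

if __name__ == "__main__":
    parent_conn, child_conn = Pipe(duplex=False)
    p1 = Process(target=child, args=(child_conn,))
    p1.start()
    time.sleep(2)                # give the child time to install its handler
    p1.terminate()
    print(parent_conn.recv())    # receives the dict sent from the SIGTERM handler
    p1.join()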

Update: Let me explain how runDispatcher() from the pysnmp module works:

def runDispatcher():
    while jobsArePending():  # jobs are always pending because of jobStarted() function
        loop()

def jobStarted(jobId):
    if jobId in jobs:        #This way there's always 1 job remaining
        jobs[jobId] = jobs[jobId] + 1

This is very frustrating. Instead of going through all of this, is it possible to write an SNMP trap listener myself? Can you point me to the right resources?
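(As a rough starting point, and only a sketch of the transport side: the standard trap port is UDP 162, so a plain socket can already receive the raw datagrams; actually decoding the BER-encoded SNMP payload would still need something like pyasn1/pysnmp.)

import socket

# minimal sketch: receive raw SNMP trap datagrams on the standard trap port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('0.0.0.0', 162))    # binding to port 162 usually requires root privileges

while True:
    data, addr = sock.recvfrom(65535)
    print('received %d bytes from %s' % (len(data), addr))
    # the BER/ASN.1 decoding of `data` is not shown here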


2 Answers


The .runDispatcher() method effectively invokes the main loop of the asynchronous I/O engine (asyncore/twisted), which terminates as soon as no active pysnmp 'jobs' are pending.

You can make the pysnmp dispatcher cooperate with the rest of your application by registering your own timer callback function, which the main loop will invoke periodically. In that callback you can check whether a termination event has arrived and finish the pysnmp 'job', which will make the pysnmp main loop complete:

def timerCb(timeNow):
    if terminationRequestedFlag:  # this flag is raised by an event from parent process
        # use the same jobId as in jobStarted()
        snmpEngine.transportDispatcher.jobFinished(1)  

snmpEngine.transportDispatcher.registerTimerCbFun(timerCb)

Those pysnmp 'jobs' are just flags (like the '1' in your code) that tell the I/O core that an asynchronous application still needs it to keep running and serving it. Once the last of the potentially many applications loses interest in the I/O core's operation, the main loop terminates.
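Putting this together with the code from the question, here is a hedged sketch of how the child might wire a multiprocessing.Event into the timer callback so that runDispatcher() returns and itemToSend can be sent back over a pipe (the snmpEngine setup is omitted and assumed to be exactly as in the question; the event/pipe plumbing is illustrative, not part of pysnmp):

from multiprocessing import Event, Pipe, Process

def child(conn, stop_event):
    itemToSend = {}
    # ... set up snmpEngine and fill itemToSend as in the question ...

    def timerCb(timeNow):
        if stop_event.is_set():
            # same jobId as in jobStarted(); lets runDispatcher() return
            snmpEngine.transportDispatcher.jobFinished(1)

    snmpEngine.transportDispatcher.registerTimerCbFun(timerCb)
    snmpEngine.transportDispatcher.jobStarted(1)
    try:
        snmpEngine.transportDispatcher.runDispatcher()
    finally:
        snmpEngine.transportDispatcher.closeDispatcher()
        conn.send(itemToSend)   # hand the result back instead of being terminate()d
        conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe(duplex=False)
    stop_event = Event()
    p1 = Process(target=child, args=(child_conn, stop_event))
    p1.start()
    # ... parent-side work here ...
    stop_event.set()             # ask the child to wind down instead of calling terminate()
    print(parent_conn.recv())    # itemToSend arrives once the dispatcher has stopped
    p1.join()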

Answered 2014-03-13T19:55:49.467

If the child process can cooperate, you can use a multiprocessing.Event to notify the child that it should exit, and a multiprocessing.Pipe to send itemToSend to the parent process:

#!/usr/bin/env python
import logging
import multiprocessing as mp
from threading import Timer

def child(stopped_event, conn):
    while not stopped_event.wait(1):
        pass
    mp.get_logger().info("sending")
    conn.send({'tosend': 'from child'})
    conn.close()

def terminate(process, stopped_event, conn):
    stopped_event.set() # nudge child process
    Timer(5, do_terminate, [process]).start()
    try:
        print(conn.recv())  # get value from the child
        mp.get_logger().info("received")
    except EOFError:
        mp.get_logger().info("eof")

def do_terminate(process):
    if process.is_alive():
        mp.get_logger().info("terminating")
        process.terminate()

if __name__ == "__main__":
    mp.log_to_stderr().setLevel(logging.DEBUG)
    parent_conn, child_conn = mp.Pipe(duplex=False)
    event = mp.Event()
    p = mp.Process(target=child, args=[event, child_conn])
    p.start()
    child_conn.close() # child must be the only one with it opened
    Timer(3, terminate, [p, event, parent_conn]).start()

Output:

[DEBUG/MainProcess] created semlock with handle 139845842845696
[DEBUG/MainProcess] created semlock with handle 139845842841600
[DEBUG/MainProcess] created semlock with handle 139845842837504
[DEBUG/MainProcess] created semlock with handle 139845842833408
[DEBUG/MainProcess] created semlock with handle 139845842829312
[INFO/Process-1] child process calling self.run()
[INFO/Process-1] sending
{'tosend': 'from child'}
[INFO/Process-1] process shutting down
[DEBUG/Process-1] running all "atexit" finalizers with priority >= 0
[DEBUG/Process-1] running the remaining "atexit" finalizers
[INFO/MainProcess] received
[INFO/Process-1] process exiting with exitcode 0
[INFO/MainProcess] process shutting down
[DEBUG/MainProcess] running all "atexit" finalizers with priority >= 0
[DEBUG/MainProcess] running the remaining "atexit" finalizers
Answered 2014-03-12T22:17:54.723