Terminating child processes in Python multiprocessing

I need to use multiple processes in Python, but I've found that neither multiprocessing.pool.Pool nor concurrent.futures.ProcessPoolExecutor offers a way to terminate a running child process. I urgently need this capability. How can I implement it?

I've read the ProcessPoolExecutor source. Roughly, submit() wraps the call as a work item and puts it onto call_queue, and _process_worker then takes each call_item off the queue and runs it. It doesn't look like termination can be hacked in from the outside either...

Here is part of the core source of ProcessPoolExecutor:

def _process_worker(call_queue, result_queue):
    """Evaluates calls from call_queue and places the results in result_queue.

    This worker is run in a separate process.

    Args:
        call_queue: A multiprocessing.Queue of _CallItems that will be read and
            evaluated by the worker.
        result_queue: A multiprocessing.Queue of _ResultItems that will be
            written to by the worker.
        shutdown: A multiprocessing.Event that will be set as a signal to the
            worker that it should exit when call_queue is empty.
    """
    while True:
        call_item = call_queue.get(block=True)
        if call_item is None:
            # Wake up queue management thread
            result_queue.put(os.getpid())
            return
        try:
            r = call_item.fn(*call_item.args, **call_item.kwargs)
        except BaseException as e:
            exc = _ExceptionWithTraceback(e, e.__traceback__)
            result_queue.put(_ResultItem(call_item.work_id, exception=exc))
        else:
            result_queue.put(_ResultItem(call_item.work_id,
                                         result=r))
3 Answers

For multiprocessing.pool.Pool there are two options: .close() stops the pool gracefully (no new tasks are accepted, and the workers exit once the queued work is done), while .terminate() kills the worker processes immediately.
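
A minimal sketch of the difference, using an illustrative slow_task workload (the function and timings are placeholders, not part of the original answer):

import time
from multiprocessing import Pool

def slow_task(n):
    # Illustrative workload: sleep to simulate a long-running job.
    time.sleep(n)
    return n

if __name__ == '__main__':
    pool = Pool(processes=2)
    results = [pool.apply_async(slow_task, (10,)) for _ in range(4)]
    time.sleep(1)
    # pool.close(); pool.join() would wait for all queued tasks to finish.
    # terminate() instead kills the workers right away, abandoning queued work.
    pool.terminate()
    pool.join()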

I don't know what use case makes you need this explicitly. I've never needed it when using concurrent.futures: I just wrap the executor in a with statement, and once the tasks are processed everything shuts down on its own.
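
For reference, a minimal sketch of that pattern (square is just a placeholder task):

from concurrent.futures import ProcessPoolExecutor

def square(n):
    return n * n

if __name__ == '__main__':
    # Leaving the with block calls executor.shutdown(), which waits for all
    # submitted tasks to complete before the worker processes exit.
    with ProcessPoolExecutor(max_workers=2) as executor:
        futures = [executor.submit(square, i) for i in range(4)]
        print([f.result() for f in futures])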

### Example
import multiprocessing
import os
import signal
import time

def handle_sigterm(signum, frame):
    # Do cleanup here (flush buffers, release locks, ...), then exit at once.
    os._exit(0)

def worker():
    # Runs in the child process: install the handler, then do the real work.
    signal.signal(signal.SIGTERM, handle_sigterm)
    while True:
        time.sleep(1)

if __name__ == '__main__':
    p = multiprocessing.Process(target=worker)
    p.start()
    # In the parent, wherever you decide the child must stop:
    os.kill(p.pid, signal.SIGTERM)
    p.join()
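
One caveat: this relies on POSIX signal semantics. On Windows, os.kill() with SIGTERM terminates the target unconditionally (via TerminateProcess), so the Python handler never runs and the cleanup code is skipped.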