Hello there,
according to the asyncio docs, the correct way to handle SIGINT/SIGTERM signals in order to "cleanly" shut down the IO loop is to register handlers with loop.add_signal_handler(), as done in main() below.
This worked well for me as long as I didn't introduce executors.
Note: I expressly decided to use ProcessPoolExecutor instead of ThreadPoolExecutor in order to be able to terminate the workers and exit sooner:
import asyncio
import functools
import time
import concurrent.futures
import signal

loop = asyncio.get_event_loop()
executor = concurrent.futures.ProcessPoolExecutor()

def long_running_fun():
    for x in range(5):
        print("loop %s" % x)
        time.sleep(1000)

@asyncio.coroutine
def infinite_loop():
    while True:
        try:
            fut = loop.run_in_executor(None, long_running_fun)
            yield from asyncio.wait_for(fut, None)
        finally:
            yield from asyncio.sleep(1)

def ask_exit(signame):
    print("got signal %s: exit" % signame)
    loop.stop()
    executor.shutdown()

def main():
    loop.set_default_executor(executor)
    asyncio.async(infinite_loop())
    for signame in ('SIGINT', 'SIGTERM'):
        loop.add_signal_handler(getattr(signal, signame),
                                functools.partial(ask_exit, signame))
    loop.run_forever()

if __name__ == '__main__':
    main()
The problem with this code is that every time I hit CTRL+C, "time.sleep()" returns immediately and the "for" loop keeps looping until it's exhausted.
This is the output:
$ python3.4 foo.py
loop 0
^Cloop 1
got signal SIGINT: exit
^Cloop 2
^Cloop 3
^Cloop 4
^CException ignored in: <generator object infinite_loop at 0x7fb0c0760cf0>
RuntimeError: generator ignored GeneratorExit
$
I've also tried other solutions, such as terminating the processes returned by multiprocessing.active_children(), but it has the same effect.
Apparently the only effective strategy is to use SIGKILL.
Basically I'm looking for a way to cleanly shut down the IO loop and all of its pending workers. If there's a "blessed" strategy for doing that, it would probably make sense to mention it in the docs, because it's not obvious.
--