Cleanly shutting down a streams server


coles...@gmail.com

May 5, 2015, 5:23:53 PM
to python...@googlegroups.com
When trying to shut down a streams-based server I often get a "Task was
destroyed but it is pending!" error and haven't been able to find a way
to fix it.

I've been able to reproduce this using the "TCP echo server using
streams" example in the Python docs
<https://docs.python.org/3/library/asyncio-stream.html#tcp-echo-server-using-streams>
by establishing a connection (nc localhost 8888) and then interrupting
the server before sending any data:

Python-3.5.0a3$ PYTHONASYNCIODEBUG=1 ./python /tmp/tcp_echo_using_streams.py
Serving on ('127.0.0.1', 8888)
^CTask was destroyed but it is pending!
source_traceback: Object created at (most recent call last):
  File "/tmp/tcp_echo_using_streams.py", line 24, in <module>
    loop.run_forever()
  File "/home/dcoles/src/Python-3.5.0a3/Lib/asyncio/base_events.py", line 276, in run_forever
    self._run_once()
  File "/home/dcoles/src/Python-3.5.0a3/Lib/asyncio/base_events.py", line 1164, in _run_once
    handle._run()
  File "/home/dcoles/src/Python-3.5.0a3/Lib/asyncio/events.py", line 120, in _run
    self._callback(*self._args)
  File "/home/dcoles/src/Python-3.5.0a3/Lib/asyncio/streams.py", line 227, in connection_made
    self._loop.create_task(res)
task: <Task pending coro=<handle_echo() done, defined at /tmp/tcp_echo_using_streams.py:3> wait_for=<Future pending cb=[Task._wakeup()] created at /home/dcoles/src/Python-3.5.0a3/Lib/asyncio/streams.py:392> created at /home/dcoles/src/Python-3.5.0a3/Lib/asyncio/streams.py:227>
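
For reference, /tmp/tcp_echo_using_streams.py is essentially the docs
example, trimmed slightly here:

import asyncio

@asyncio.coroutine
def handle_echo(reader, writer):
    data = yield from reader.read(100)
    writer.write(data)
    yield from writer.drain()
    writer.close()

loop = asyncio.get_event_loop()
coro = asyncio.start_server(handle_echo, '127.0.0.1', 8888, loop=loop)
server = loop.run_until_complete(coro)
print('Serving on {}'.format(server.sockets[0].getsockname()))

try:
    loop.run_forever()
except KeyboardInterrupt:
    pass

# Close the listening sockets and wait for the server to finish closing.
server.close()
loop.run_until_complete(server.wait_closed())
loop.close()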

It appears that the issue here is that, since there is still a
connected client socket, Server.close() leaves the connected socket
untouched, meaning the client_connected_cb task remains active.

PEP-3156 specifically states that Server.wait_closed is "A coroutine
that blocks until the service is closed and all accepted requests have
been handled", so I'm surprised that wait_closed doesn't block until
the connection is closed. Additionally there doesn't seem to be any
way of forcing all connections to close (short of adding this logic to
the coroutine).

Should Server.wait_closed block if there are remaining connections? Is
it possible to force these connections closed?
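
In case it helps clarify what I mean by forcing them closed, here is a
rough (untested) sketch of the only workaround I can see: tracking the
client writers myself and closing them by hand. The clients set is
purely illustrative, not anything asyncio provides.

import asyncio

clients = set()

@asyncio.coroutine
def handle_echo(reader, writer):
    clients.add(writer)
    try:
        data = yield from reader.read(100)
        if data:
            writer.write(data)
            yield from writer.drain()
    finally:
        writer.close()
        clients.discard(writer)

loop = asyncio.get_event_loop()
server = loop.run_until_complete(
    asyncio.start_server(handle_echo, '127.0.0.1', 8888, loop=loop))

try:
    loop.run_forever()
except KeyboardInterrupt:
    pass

server.close()
# Force-close any clients that are still connected; their reads see EOF,
# so the handler coroutines get a chance to finish.
for writer in clients:
    writer.close()
loop.run_until_complete(server.wait_closed())
loop.close()

Even then I'm not sure the handler coroutines are guaranteed to have
finished before loop.close(), which is why I'd prefer wait_closed() to
cover this.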

Cheers,
David

Guido van Rossum

May 5, 2015, 5:57:29 PM
to coles...@gmail.com, python-tulip
The problem here is most likely due to the way ^C is handled -- it raises KeyboardInterrupt which inherits from BaseException but not from Exception. There are some places in the asyncio code that catch only Exception. I think this is probably a bug we should fix -- our original reasoning was that we should never catch BaseException because it is too severe, but in practice many programs recover at some higher level (e.g. the interpreter top level) from a BaseException and then you get the behavior you observe.
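
To illustrate (this is just the general pattern, not code from asyncio):

import time

try:
    time.sleep(10)  # hit ^C while this is sleeping
except Exception:
    print("handled")  # never reached for ^C
# KeyboardInterrupt (like SystemExit) derives from BaseException, not
# Exception, so it propagates straight past a handler like the one above.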

Possibly you could get a handle on the issue by explicitly searching the asyncio code base for the "except Exception" clauses that match your traceback.
--
--Guido van Rossum (python.org/~guido)

coles...@gmail.com

May 5, 2015, 6:13:00 PM
to gu...@python.org, python-tulip
I saw some reports about KeyboardInterrupt not being handled well with
asyncio, so I tried adding an explicit signal handler with
`loop.add_signal_handler(signal.SIGINT, lambda: loop.stop())` to break
out of the run_forever loop, but that still results in the "Task was
destroyed but it is pending!" warning on exit.

My understanding is that this should avoid any BaseException handling
issues since it just stops the event loop and doesn't interfere with
any running tasks. Is this correct?
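
For reference, the overall shape of what I'm trying looks roughly like
the following. The task-cancellation block at the end is only a guess
at a workaround on my part, not something I've seen documented:

import asyncio
import signal

@asyncio.coroutine
def handle_echo(reader, writer):
    data = yield from reader.read(100)
    writer.write(data)
    yield from writer.drain()
    writer.close()

loop = asyncio.get_event_loop()
server = loop.run_until_complete(
    asyncio.start_server(handle_echo, '127.0.0.1', 8888, loop=loop))

# Stop the loop on SIGINT instead of letting KeyboardInterrupt propagate.
loop.add_signal_handler(signal.SIGINT, lambda: loop.stop())

loop.run_forever()

server.close()
loop.run_until_complete(server.wait_closed())

# Guess: cancel any handler tasks that are still pending and let them
# unwind before the loop is closed, so they aren't destroyed mid-flight.
pending = asyncio.Task.all_tasks(loop=loop)
for task in pending:
    task.cancel()
loop.run_until_complete(asyncio.gather(*pending, return_exceptions=True))
loop.close()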

Guido van Rossum

May 5, 2015, 6:45:37 PM
to coles...@gmail.com, python-tulip
I think there's a bug in streams.py or perhaps elsewhere, but I don't have time to research it. Using PYTHONASYNCIODEBUG=1 I get a different traceback that may lead you to the problem. Thanks for reporting!

Ludovic Gasc

May 11, 2015, 4:11:58 PM
to python...@googlegroups.com
I may have something interesting for you.

I don't remember the exact issue, but I had a similar one in API-Hour with the worker.
I took some ideas/source code from the gunicorn aiohttp worker and the aiohttp worker to do that: