asyncio.wait_for and Condition


David Coles

Nov 20, 2014, 3:37:45 AM
To: python...@googlegroups.com
Hi,

I've been experimenting with asyncio in Python 3.4.2 and ran into some interesting behaviour when attempting to use asyncio.wait_for with a Condition:

import asyncio

loop = asyncio.get_event_loop()
cond = asyncio.Condition()

@asyncio.coroutine
def foo():
    # with (yield from cond):
    yield from cond.acquire()
    try:
        try:
            # Wait for condition with timeout
            yield from asyncio.wait_for(cond.wait(), 1)
        except asyncio.TimeoutError:
            print("Timeout")
    finally:
        # XXX: Raises RuntimeError: Lock is not acquired.
        cond.release()

loop.run_until_complete(foo())


Taking a look around with a debugger, I can see that the cond.wait() coroutine task receives a CancelledError as expected and attempts to reacquire the lock associated with the condition (yield from self.acquire()). Unfortunately, because asyncio.wait_for(...) raises TimeoutError immediately, that reacquisition happens too late: our main task has already gone ahead and called cond.release() on a lock that isn't currently held, causing the runtime error.

My current workaround is to roll my own condition timeout-cancellation logic, but it really seems like wait_for and Condition should be able to play nicer together.

@asyncio.coroutine
def cond_wait_timeout(condition, timeout):
    # Start the wait as its own task and cancel it if the timeout elapses.
    wait_task = asyncio.async(condition.wait())
    loop.call_later(timeout, wait_task.cancel)
    try:
        yield from wait_task
        return True
    except asyncio.CancelledError:
        # The wait was cancelled by the timer, i.e. we timed out.
        return False
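
Usage would then look something like this (a sketch of foo() from the first example, with the helper returning False on timeout instead of raising):

@asyncio.coroutine
def foo():
    yield from cond.acquire()
    try:
        # False means the wait was cancelled by the timer, i.e. it timed out.
        notified = yield from cond_wait_timeout(cond, 1)
        if not notified:
            print("Timeout")
    finally:
        cond.release()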


Any thoughts?

Guido van Rossum

Nov 20, 2014, 9:34:59 PM
To: David Coles, python-tulip
Hi David,

I've confirmed the issue and I agree with your diagnosis. There are two tasks, one representing foo() and one representing cond.wait(). When the timeout happens, both become runnable. Due to the way scheduling works (I haven't carefully analyzed this yet) the task representing foo() is resumed first, and fails because it is supposed to be the other task's job to re-acquire the lock.

You can see this more clearly by surrounding the release() call in your test program as well as the acquire() call in locks.py with something like the following:

print('before', __file__)
try:
    <the call>
finally:
    print('after', __file__)

If you print different strings in each case you'll see that the test file runs before locks.py.
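
For example, the instrumented release() in the test program might look something like this (the locks.py side is analogous, wrapping the yield from self.acquire() in Condition.wait()):

print('before release (test file)')
try:
    cond.release()
finally:
    print('after release (test file)')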

I wonder if there's a way to influence the order in which the tasks are resumed...
--
--Guido van Rossum (python.org/~guido)

David Coles

Nov 21, 2014, 12:40:43 AM
To: python...@googlegroups.com, coles...@gmail.com, gu...@python.org
Hi Guido,

My understanding is that a wait_for timeout effectively unchains the two tasks, which makes it tricky to ensure consistency.

One option would be changing wait_for to always wait for the target task (cond.wait()) to complete, even on a timeout. This would at least guarantee that cleanup actions would have consistent ordering.
The downside is that tasks that choose to ignore or excessively delay cancellation would prevent the calling task from resuming. Perhaps adding a dont_wait/return_immediately flag to wait_for would help in cases where you know a Task might ignore cancellation. To me, this behaviour would be less surprising and would leave the decision of whether to unchain the tasks to the caller.

Sadly, I can't think of any other way of ensuring the order than having something the calling task can yield on.
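
Roughly what I have in mind, as a sketch (the helper name is made up, and this assumes the wrapped coroutine does eventually finish once cancelled):

import asyncio

@asyncio.coroutine
def wait_for_and_sync(fut, timeout, *, loop=None):
    # Sketch: like wait_for(), but on timeout it waits for the cancelled
    # task to actually finish before re-raising, so any cleanup (such as
    # re-acquiring a Condition's lock) has run by the time the caller resumes.
    if loop is None:
        loop = asyncio.get_event_loop()
    task = asyncio.async(fut, loop=loop)
    try:
        return (yield from asyncio.wait_for(task, timeout, loop=loop))
    except asyncio.TimeoutError:
        # wait_for() has already cancelled the task; let that cancellation
        # be fully processed before propagating the timeout.
        yield from asyncio.wait([task], loop=loop)
        raise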

Cheers,
David

coles...@gmail.com

Nov 22, 2014, 7:16:09 AM
To: python...@googlegroups.com
On Thu, Nov 20, 2014 at 5:37 PM, David Coles <coles...@gmail.com> wrote:
>
> My current workaround is to roll my own condition timeout-cancellation logic, but it really seems like wait_for and Condition should be able to play nicer together.

Seems like this isn't quite sufficient by itself. There's a race
condition that can occur if the waiter task is blocked on lock
reacquisition after notification (a good example would be multiple
waiters that cause contention on the lock or just having the notifier
yield between notify and release).

If the waiter is cancelled at this time, it will also cause cond.wait()
to return without the lock and cause another RuntimeError in the
future. I don't think it's possible to detect this in my
cond_wait_timeout, since cond.locked() may be True because of another
task.

Something like this in asyncio.locks would seem sufficient:

finally:
    # Must reacquire lock even if wait is cancelled
    while True:
        try:
            yield from self.acquire()
            break
        except CancelledError:
            pass