Explicit vs Implicit event loop

Andrew Svetlov

Oct 31, 2016, 11:16:56 AM
to python-tulip
Hi.

As people know, I prefer an explicit asyncio loop and pass it everywhere.
That's how I build libraries like aiohttp and aiopg, and how I write private code.
I promote this style of writing code at conferences etc.
Put `asyncio.set_event_loop(None)` at the very beginning of your code to avoid mysterious bugs!

But I feel the temptation of using the implicit loop -- and users of my libraries do it very often.

Now, after four years of working with asyncio, I almost agree with it -- as long as the implicit loop is used *from a coroutine*.

But outside of a *task context* (every asyncio coroutine is executed by a task, as you know) it's still sometimes dangerous.

Let's take a close look at the following example:

    import motor.motor_asyncio

    class A:
        client = motor.motor_asyncio.AsyncIOMotorClient()

What's wrong with it? The code is untestable. AsyncIOMotorClient accepts an io_loop parameter which should be an event loop instance or None. If the parameter is None (as in our case), the default event loop is used.

Then we run a test suite. Every test should have its own loop for the sake of test isolation. In practice this means that every test tool creates an event loop, runs the test with it, and closes the loop after the test finishes. It's the right strategy for testing async code; I know of no alternatives.
But our client was coupled to the *default* loop created at module import time.
As a consequence, an `await client['db'].find(...)` call will never finish because the client's loop is not iterated by the test runner.
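
To make the failure mode concrete, here is roughly what a per-test loop does to the example above (a sketch only; the module name `app` and the query are made up, and the failing call is shown commented out because it would never return):

    import asyncio
    import app  # hypothetical module containing class A from above

    def test_find():
        loop = asyncio.new_event_loop()   # fresh loop for test isolation
        asyncio.set_event_loop(loop)
        try:
            # A.client is bound to the old default loop captured at import
            # time, so this call would never complete -- the loop it
            # schedules its I/O on is not the loop we run here.
            # loop.run_until_complete(app.A.client['db']['coll'].find_one({}))
            pass
        finally:
            loop.close()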

This is not specific to Motor (the async MongoDB client for Tornado and asyncio) -- aioes.Client, aiohttp.ClientSession and many others do the same.

In contrast, aiopg, asyncpg and aiomysql don't have this problem -- their DB connections are created by an `await connect(params)` coroutine call. `await` cannot be used in a global/class namespace, so the user will always call it from a task with the proper loop.
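
For example, a coroutine-first API forces the binding to happen under a running loop. A rough sketch (the connection parameters are placeholders):

    import asyncio
    import asyncpg

    async def main():
        # The connection is created inside a running task, so it is bound
        # to the loop that actually drives the program -- it cannot be
        # coupled to a stale default loop at import time.
        conn = await asyncpg.connect(user='postgres', database='test')
        try:
            print(await conn.fetchval('SELECT 1'))
        finally:
            await conn.close()

    asyncio.get_event_loop().run_until_complete(main())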

Sure, I could forbid aiohttp.ClientSession creation outside of a running task and make usage of this particular class safe (it would break backward compatibility for careless users, but it's better to break code early, right?).

asyncio could introduce an `asyncio.get_current_loop()` function which would raise an exception if an implicit loop exists but there is no running task. Then third-party libraries could switch to this function wherever a loop is required. asyncio itself could do the same.
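
A rough sketch of what such a helper could look like on top of the existing API (hypothetical, not part of asyncio):

    import asyncio

    def get_current_loop():
        # Hypothetical helper: only hand out the loop when we are actually
        # running inside a task; otherwise fail loudly instead of silently
        # returning a default loop that nobody iterates.
        loop = asyncio.get_event_loop()
        if asyncio.Task.current_task(loop=loop) is None:
            raise RuntimeError(
                'get_current_loop() called outside of a running task')
        return loop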

To sum up:
Accessing the implicit event loop is very handy, but it's only safe inside a running task.
Getting the loop in a global namespace may lead to very strange and cryptic errors (most likely the subsequent usage will just hang).

Thoughts?

Guido van Rossum

Oct 31, 2016, 12:27:57 PM
to Andrew Svetlov, python-tulip
It's an interesting problem. I would like to rephrase your conclusion:
the implicit loop should only be used when the object you are creating
has a shorter (or equal) lifetime than the loop. I would also think
that the real problem with the code example is that it creates a
variable with global lifetime. That's really an antipattern,
especially when it comes to network connections. But I'm not sure we
can blame asyncio or motor for that -- it's really class A that's to
blame.

I would suggest different guidelines for libraries than for
applications: Libraries should be robust and always store their own
loop. This is how asyncio itself works and how aiohttp (and other
libraries you name) work. Their test suite (like asyncio's) should
enforce this.

But applications (and fledgling libraries may do this too) should be
allowed to assume a simpler model of the world: use coroutines (async
def, await) for everything, never store the loop, happily use
get_event_loop() when you really need it. And really, you should only
need it for run_in_executor(). If you find yourself using call_later()
you probably haven't quite figured out how to use coroutines properly.

Regarding testability of class A, I presume tests could just set a new
default loop and then re-assign A.client =
motor.motor_asyncio.AsyncIOMotorClient()? Or is that class under the
hood a singleton that just wraps more state with global lifetime? Then
that would be a problem.

There are always exceptions to such rules, but they should all fall in
the category of advanced usage, guarding against rare failures,
"productionizing" (which I would recommend against doing in too early
a stage).

--Guido
--
--Guido van Rossum (python.org/~guido)

Martin Richard

Oct 31, 2016, 2:04:24 PM
to Guido van Rossum, Andrew Svetlov, python-tulip
Hi,

Indeed, for applications, I believe that we should rather depend on defining the right EventLoopPolicy (or use the default one in most cases) and call get_event_loop() when required, rather than passing the loop explicitly. Most of the time a policy is sufficient to describe how your application is designed, and it allows you to require the loop object as late as possible (which can be very convenient). In fact, I believe that if an application is not using multiple processes, there is probably no reason to use more than one loop, especially when taking the GIL into account.

Regarding the tests, asynctest.TestCase initializes a new loop for each test and sets it as the default loop at the beginning of the test. You can set TestCase.forbid_get_event_loop to enforce explicit loop passing in your tests (http://asynctest.readthedocs.io/en/latest/asynctest.case.html#asynctest.TestCase.forbid_get_event_loop).
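
For example (a minimal sketch; fetch_data stands in for the code under test):

    import asyncio
    import asynctest

    async def fetch_data(*, loop):
        # stand-in for library code that takes an explicit loop
        await asyncio.sleep(0, loop=loop)
        return 42

    class TestFetch(asynctest.TestCase):
        # fail if anything falls back to asyncio.get_event_loop()
        forbid_get_event_loop = True

        async def test_fetch(self):
            # self.loop is the fresh loop created for this test
            self.assertEqual(await fetch_data(loop=self.loop), 42)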

TestCase also performs some checks after a test case has run to assess the state of the loop: for instance, whether it ran during the test (it won't have run if your test case is a generator function without the @coroutine decorator), or whether scheduled calls were invoked, etc. (http://asynctest.readthedocs.io/en/latest/asynctest.case.html#asynctest.case.asynctest.fail_on).

--
Martin Richard
www.martiusweb.net

Yury Selivanov

Oct 31, 2016, 4:30:15 PM
to gu...@python.org, Andrew Svetlov, python-tulip
Guido,

I’m with Andrew on this one.

> On Oct 31, 2016, at 10:27 AM, Guido van Rossum <gu...@python.org> wrote:
>
> I would suggest different guidelines for libraries than for
> applications: Libraries should be robust and always store their own
> loop. This is how asyncio itself works and how aiohttp (and other
> libraries you name) work. Their test suite (like asyncio's) should
> enforce this.

> But applications (and fledgling libraries may do this too) should be
> allowed to assume a simpler model of the world: use coroutines (async
> def, await) for everything, never store the loop, happily use
> get_event_loop() when you really need it. And really, you should only
> need it for run_in_executor(). If you find yourself using call_later()
> you probably haven't quite figured out how to use coroutines properly.

I don’t think people will follow this advice, because any application code can one day be refactored to become reusable (a library). That’s why I write my asyncio code defensively, passing the loop explicitly. This is probably the only aspect about asyncio I don’t like. This is the main complaint about asyncio that we hear *repeatedly* from all kinds of users.

The core reason causing all these problems is that get_event_loop is weakly defined. It doesn’t always return the current loop when called from a coroutine. It might return the wrong one, that just happens to be the “default”.

I know that you defend the current behaviour by saying that people should only have one loop per process. But some people have many in one process. That’s why we design libraries to handle the loop explicitly — that ensures that the library will work regardless of what get_event_loop returns.

The result is that all asyncio libraries accept the “loop” parameter in their public APIs. All of them are architected to pass the loop explicitly internally. And then, the asyncio end users aren’t sure how to use asyncio. Python docs recommend (sometimes indirectly) to pass the loop explicitly. Libraries and frameworks recommend that as well.

The end result of this is that asyncio programs always care “too much” about the loop: manage it, store references to it, explicitly pass it to coroutines. This makes asyncio code harder to understand for both advanced and beginner users.

My opinion on this:

1. We need to fix “get_event_loop” to return the *current* event loop when called from within a coroutine:

    async def coro():
        loop = asyncio.get_event_loop()

This is something we can easily implement, because coroutines are *always* running under some event loop.

2. We should explain to asyncio library authors that it is better to “hide” the loop from the high-level API. The high-level API should only consist of coroutines that don’t have a “loop” parameter at all. Because get_event_loop is guaranteed to always return the correct loop, it will be safe to use it.

So instead of:

    async def main(loop):
        db = asyncpg.connect(…, loop=loop)

    loop = asyncio.get_event_loop()
    loop.run_until_complete(main(loop))

people will always write this:

    async def main():
        db = asyncpg.connect(…)

    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())

3. Docs should say that the event loop is a low-level API. As you said, if you call “loop.call_soon” from your application code, then something is wrong.

If we can shift asyncio libraries to be designed around coroutine-first API, we can safely start caring much less about the loop. We will only use it to run the main() coroutine of the program.

This is something that curio does right — the event loop is what runs the program, but the end user knows pretty much nothing about it. Coroutines just work, because curio provides a *reliable* way for getting the *current* loop from within a running coroutine context.

This is a fully backwards compatible change. We even have a PR to do this: https://github.com/python/asyncio/pull/355. That PR might need another review pass, but the idea is there.

Thank you,
Yury

Guido van Rossum

Oct 31, 2016, 4:44:05 PM
to Yury Selivanov, Andrew Svetlov, python-tulip
On Mon, Oct 31, 2016 at 1:30 PM, Yury Selivanov <yseli...@gmail.com> wrote:
Guido,

I’m with Andrew on this one.

And the reason is the misbehavior of get_event_loop(), right? (See below.)
 
> On Oct 31, 2016, at 10:27 AM, Guido van Rossum <gu...@python.org> wrote:
>
> I would suggest different guidelines for libraries than for
> applications: Libraries should be robust and always store their own
> loop. This is how asyncio itself works and how aiohttp (and other
> libraries you name) work. Their test suite (like asyncio's) should
> enforce this.

>  But applications (and fledgling libraries may do this too) should be
> allowed to assume a simpler model of the world: use coroutines (async
> def, await) for everything, never store the loop, happily use
> get_event_loop() when you really need it. And really, you should only
> need it for run_in_executor(). If you find yourself using call_later()
> you probably haven't quite figured out how to use coroutines properly.

I don’t think people will follow this advice, because any application code can one day be refactored to become reusable (a library).  That’s why I write my asyncio code defensively, passing the loop explicitly.  This is probably the only aspect about asyncio I don’t like.  This is the main complaint about asyncio that we hear *repeatedly* from all kinds of users.

The core reason causing all these problems is that get_event_loop is weakly defined.  It doesn’t always return the current loop when called from a coroutine.  It might return the wrong one, that just happens to be the “default”.

OK, I think I am seeing the problem now.
 
I know that you defend the current behaviour by saying that people should only have one loop per process.  But some people have many in one process.  That’s why we design libraries to handle the loop explicitly — that ensures that the library will work regardless of what get_event_loop returns.

The result is that all asyncio libraries accept the “loop” parameter in their public APIs. All of them are architected to pass the loop explicitly internally.  And then, the asyncio end users aren’t sure how to use asyncio.  Python docs recommend (sometimes indirectly) to pass the loop explicitly.   Libraries and frameworks recommend that as well.

That's still a docs problem (many stdlib functions/methods have optional parameters that are almost always left None).
 
The end result of this is that asyncio programs always care “too much” about the loop: manage it, store references to it, explicitly pass it to coroutines.  This makes asyncio code harder to understand for both advanced and beginner users.

Right.
 
My opinion on this:

1. We need to fix “get_event_loop” to return the *current* event loop when called from within a coroutine:

  async def coro():
     loop = asyncio.get_event_loop()

This is something we can easily implement, because coroutines are *always* running under some event loop.

I now agree.
 
2. We should explain asyncio library authors that it is better to “hide” the loop from the high-level API.  The high level API should only consist of coroutines, that don’t have a “loop” parameter at all.  Because get_event_loop is guaranteed to always return the correct loop, it will be safe to use it.

So instead of:

  async def main(loop):
     db = asyncpg.connect(…, loop=loop)
  loop = asyncio.get_event_loop()
  loop.run_until_complete(main(loop))

people will always write this:

  async def main():
     db = asyncpg.connect(…)
  loop = asyncio.get_event_loop()
  loop.run_until_complete(main())

I think my (feeble) point is that we could recommend this even without changing get_event_loop(), because the situation where get_event_loop() returns the wrong loop is perverse. Unfortunately we've gone down this path too long, so e.g. testing habits have developed that require explicitly passing the loop. :-( So I agree we should fix #1.
 
3. Docs should say that the event loop is a low-level API.  As you said, if you call “loop.call_soon” from your application code, then something is wrong.

If we can shift asyncio libraries to be designed around coroutine-first API, we can safely start caring much less about the loop.  We will only use it to run the main() coroutine of the program.

This is something that curio does right — the event loop is what runs the program, but the end user knows pretty much nothing about it. Coroutines just work, because curio provides a *reliable* way for getting the *current* loop from within a running coroutine context.

This is a fully backwards compatible change.  We even have a PR to do this: https://github.com/python/asyncio/pull/355.  That PR might need another review pass, but the idea is there.

Thank you,
Yury


Thanks! We can fix this in 3.6b4.

Yury Selivanov

Oct 31, 2016, 4:49:48 PM
to gu...@python.org, Andrew Svetlov, python-tulip

> On Oct 31, 2016, at 2:43 PM, Guido van Rossum <gu...@python.org> wrote:
>
> Thanks! We can fix this in 3.6b4.


Awesome! I’ll reopen that PR in a couple of days.

Thank you,
Yury

Andrew Svetlov

Nov 1, 2016, 1:46:12 PM
to python-tulip, andrew....@gmail.com, gu...@python.org


On Monday, October 31, 2016 at 6:27:57 PM UTC+2, Guido van Rossum wrote:
It's an interesting problem. I would like to rephrase your conclusion:
the implicit loop should only be used when the object you are creating
has a shorter (or equal) lifetime than the loop. I would also think
that the real problem with the code example is that it creates a
variable with global lifetime. That's really an antipattern,
especially when it comes to network connections. But I'm not sure we
can blame asyncio or motor for that -- it's really class A that's to
blame.

Yes, the lifetime of an object created with the (implicit) loop should be shorter than the loop's.
Unfortunately it's a problem on both sides: the user writes silly code but will blame the library author, because the library (Motor in my example) hangs silently.

I could add a warning to aiohttp.ClientSession to prevent this behavior, but covering the issue in asyncio itself would help newbies and library writers.
The same problem is present in asyncio's own classes: Lock, Queue and streams can be created with global lifetime and will hang if used from a different loop.
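
For instance (a sketch of the failure mode; the problematic call is left commented out because it would hang, or raise RuntimeError on recent asyncio):

    import asyncio

    lock = asyncio.Lock()   # captures the *default* loop at import time

    async def worker():
        async with lock:
            await asyncio.sleep(0)   # let the other worker contend

    test_loop = asyncio.new_event_loop()
    # With two contending workers, the second one waits on an internal
    # future bound to the default loop, so under test_loop it never wakes
    # up (or a cross-loop check raises RuntimeError):
    # test_loop.run_until_complete(
    #     asyncio.gather(worker(), worker(), loop=test_loop))
    test_loop.close()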

Regarding testability of class A, I presume tests could just set a new
default loop and then re-assign A.client =
motor.motor_asyncio.AsyncIOMotorClient()? Or is that class under the
hood a singleton that just wraps more state with global lifetime? Then
that would be a problem.

It has a smell of monkey-patching. What we really need is to prevent creation of classes like AsyncIOMotorClient when no loop is running.

Yury Selivanov

Nov 1, 2016, 3:07:44 PM
to Andrew Svetlov, python-tulip, gu...@python.org

On Nov 1, 2016, at 10:46 AM, Andrew Svetlov <andrew....@gmail.com> wrote:

The same problem is present in asyncio's own classes: Lock, Queue and streams can be created with global lifetime and will hang if used from a different loop.

Once we fix get_event_loop we can guard against this in debug mode.

Yury

Vincent Michel

Nov 4, 2016, 6:57:29 AM
to python-tulip, andrew....@gmail.com


On Tuesday, November 1, 2016 at 6:46:12 PM UTC+1, Andrew Svetlov wrote:
The same problem is present in asyncio's own classes: Lock, Queue and streams can be created with global lifetime and will hang if used from a different loop.

 Since PR #303 (merged last year), awaiting a future attached to a different loop raises an exception (RuntimeError).

Martin Richard

Nov 8, 2016, 11:07:54 AM
to python-tulip, gu...@python.org, andrew....@gmail.com


On Monday, October 31, 2016 at 9:49:48 PM UTC+1, Yury Selivanov wrote:

Awesome!  I’ll reopen that PR in a couple of days.

Thank you,
Yury


Hi,

now that the PR is merged, I would like to update asynctest for Python 3.6, and I'd like your advice about how get_event_loop() should now be handled.

In a nutshell, asynctest.TestCase creates a loop for each test, and adds an option which, when activated, sets a policy which forbids a call to current_policy.get_event_loop() (hence asyncio.get_event_loop()) in the test.
The goal of this feature is to assert that a library author will not rely on get_event_loop() when explicit loop passing is required. Now asyncio.get_event_loop() returns the running loop in a callback or coroutine, and this change redefines the recommended practice about explicit loop passing.

I'd like to be sure asynctest enforces the right practice, hence, three options:

- the feature is left as it is: a library author should no longer have to deal with the loop from a coroutine/a callback, and this is how asyncio libraries should be written.
- the feature should be updated, because explicit loop passing everywhere is the only way to write safe asyncio libraries, and we don't know if someone will ever want to make several loops collaborate (= schedule things from one loop to be run on another),
- the feature can be deprecated, as there is no reason for someone to still force explicit loop passing.

The question can also be: what is now the recommended usage of asyncio.get_event_loop() for asyncio library authors?

Thanks for your input!

Guido van Rossum

Nov 8, 2016, 11:29:09 AM
to Martin Richard, python-tulip, Andrew Svetlov
I think there's a whole new recommendation for library authors. They should just rely on get_event_loop() except under two circumstances:

- When you need a loop but your loop is not yet running. Note that this should be very rare -- e.g. when you're calling a coroutine function like asyncio.sleep() and passing it to another loop (or to run_until_complete()) you do *not* need the loop because the body of the coroutine doesn't start running until it is first scheduled (see the sketch below).

- When run inside an application, framework or test that explicitly sets the global event loop to None. This should only be a backwards compatibility concern at this point: we should recommend that such apps, frameworks and tests switch to allowing get_event_loop(), *unless* they are bound by a backward-compatibility requirement with the stdlib asyncio in Python 3.4 or 3.5.3 (or old asyncio installations on 3.3).
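
A minimal sketch of the first case: creating the coroutine object does not touch the loop at all; only scheduling/awaiting it does.

    import asyncio

    async def work():
        # The body only runs once the coroutine is scheduled, so by the
        # time we get here a loop is running and (with the fix) it is the
        # right one.
        await asyncio.sleep(0.01)
        return 42

    coro = work()                      # no loop needed yet; nothing executes
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    print(loop.run_until_complete(coro))
    loop.close()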

--Guido

Yury Selivanov

Nov 8, 2016, 1:19:02 PM
to python...@googlegroups.com


On 2016-11-08 11:07 AM, Martin Richard wrote:
> - the feature is left as it is: a library author should no longer have to
> deal with the loop from a coroutine/a callback, and this is how asyncio
> libraries should be written.
>

^- this. To add to what Guido said, I think we should promote
coroutine-centric APIs for libraries.

I.e. a library should expose its API as coroutines that don't even accept
a loop parameter.

Yury

Martin Richard

Nov 8, 2016, 1:40:19 PM
to Yury Selivanov, python-tulip
I fully agree about coroutines; one thing though: I often see functions returning awaitables documented as "coroutines", and I see that as a problem since it gives the impression that the body won't be executed until it is processed by the loop.

For instance, asynctest can't identify them as coroutines, so they won't get mocked correctly: this used to be the case with aiohttp (until the wrapper class was added to the asyncio.COROUTINE_TYPES list for Python 3.5): https://github.com/Martiusweb/asynctest/issues/23

In asyncio, this is still the case for some primitives of the loop (run_in_executor(), getaddrinfo()); since they are methods of the loop instance, I guess it's fine.
--
Martin Richard
www.martiusweb.net

Yury Selivanov

Nov 8, 2016, 1:51:36 PM
to Martin Richard, python-tulip
Martin,

> On Nov 8, 2016, at 1:39 PM, Martin Richard <mar...@martiusweb.net> wrote:
>
> I fully agree about coroutines, one thing though: I often see functions returning awaitables documented as "coroutines", and I see that as a problem since it gives the assumption that it won't be executed until processed by the loop.
>
> For instance, asynctest can't identify them as coroutines, they won't get mocked correctly: it used to be the case with aiohttp (until the wrapper class was added to the asyncio.COROUTINE_TYPES list for python 3.5): https://github.com/Martiusweb/asynctest/issues/23
>
> In asyncio, it is still the case from some primitives of the loop (run_in_executor(), getaddrinfo()), since they are methods of the loop instance, I guess it's fine.

I re-read this email a few times, but I still don’t fully understand the problem you’re trying to describe. Maybe you can describe it in more detail?

Yury

Guido van Rossum

Nov 8, 2016, 3:29:42 PM
to Yury Selivanov, Martin Richard, python-tulip
I think the problem is that it's hard to tell the difference between these two:

async def sleep1():
    await asyncio.sleep(1)

def sleep1():
    return Task(asyncio.sleep(1))

since both may be documented as being a "coroutine" but the latter references the loop when you call it, while the former only references the loop when it's scheduled/awaited (because asyncio.sleep() is a generator).

And this is in turn because with a generator or coroutine, the body doesn't execute when you call it -- but with something that returns a Future (or Task), the body *does* execute in the call's context.

We should strive to make more things coroutines (though I'm not sure how to turn gather() into a coroutine -- I recall it was complicated to write, with the variant behaviors and possible timeouts or cancellations).

Yury Selivanov

Nov 8, 2016, 3:43:27 PM
to python...@googlegroups.com


On 2016-11-08 3:29 PM, Guido van Rossum wrote:
> I think the problem is that it's hard to tell the difference between these
> two:
>
> async def sleep1():
> await asyncio.sleep(1)
>
> def sleep1():
> return Task(asyncio.sleep(1))
>
> since both may be documented as being a "coroutine" but the latter
> references the loop when you call it, while the former only references the
> loop when it's scheduled/awaited (because asyncio.sleep() is a generator).
>
> And this is in turn because with a generator or coroutine, the body doesn't
> execute when you call it -- but with something that returns a Future (or
> Task), the body *does* execute in the call's context.

Right, but this is in part a documentation issue. If a library exposes
an API that documents some methods as coroutines, when in fact they
return Future/Tasks, it means that the user is supposed to call this API
from a coroutine. In which case the updated get_event_loop() will do the
trick.

We should probably tell library authors to avoid returning tasks and
futures directly, instead preferring to wrap them in an actual
coroutine. What do you think about that?

I wanted Martin to clarify what he does in his asyncio unittest
library -- what he tries to mock and why.

>
> We should strive to make more things coroutines (though I'm not sure how to
> turn gather() into a coroutine -- I recall it was complicated to write,
> with the variant behaviors and possible timeouts or cancellations).

Can we rename asyncio.gather to asyncio._gather and wrap it into a
coroutine?
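
Something along these lines (a rough sketch of the idea, not an actual patch):

    import asyncio

    _gather = asyncio.gather   # stands in for a renamed asyncio._gather

    async def gather(*coros_or_futures, loop=None, return_exceptions=False):
        # Being a coroutine, the wrapper only touches the loop once it is
        # awaited/scheduled, never at call time.
        return await _gather(*coros_or_futures, loop=loop,
                             return_exceptions=return_exceptions)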

Yury

Guido van Rossum

Nov 8, 2016, 4:00:45 PM
to Yury Selivanov, python-tulip
On Tue, Nov 8, 2016 at 12:43 PM, Yury Selivanov <yseliv...@gmail.com> wrote:


On 2016-11-08 3:29 PM, Guido van Rossum wrote:
I think the problem is that it's hard to tell the difference between these
two:

async def sleep1():
     await asyncio.sleep(1)

def sleep1():
     return Task(asyncio.sleep(1))

since both may be documented as being a "coroutine" but the latter
references the loop when you call it, while the former only references the
loop when it's scheduled/awaited (because asyncio.sleep() is a generator).

And this is in turn because with a generator or coroutine, the body doesn't
execute when you call it -- but with something that returns a Future (or
Task), the body *does* execute in the call's context.

Right, but this is in part a documentation issue. If a library exposes an API that documents some methods to be coroutines, when in fact they return Future/Tasks, it means that the user is supposed to call this API from a coroutine. In which case the updated get_event_loop() will do the trick.

But they could also call it before a loop is active and then call run_until_complete() on the thing. E.g.

f = sleep1()  # definition from my previous post
asyncio.get_event_loop().run_until_complete(f)

Assuming that's the only get_event_loop() call in the program, this will work if sleep1() is an actual coroutine or generator, but not if it returns a Future or Task.
 
We should probably tell library authors to avoid returning tasks and futures directly, instead preferring to wrap them in an actual coroutine.  What do you think about that?

I wanted Martin to clarify what he does in his asyncio unittest library -- what he tries to mock and why.


We should strive to make more things coroutines (though I'm not sure how to
turn gather() into a coroutine -- I recall it was complicated to write,
with the variant behaviors and possible timeouts or cancellations).

Can we rename asyncio.gather to asyncio._gather and wrap it into a coroutine?

I suppose, though that just complexifies it more. :-( It will solve the immediate issue being discussed here though.

Martin Richard

Nov 8, 2016, 6:12:06 PM
to Guido van Rossum, Yury Selivanov, python-tulip
2016-11-08 22:00 GMT+01:00 Guido van Rossum <gu...@python.org>:
But they could also call it before a loop is active and then call run_until_complete() on the thing. E.g.

f = sleep1()  # definition from my previous post
asyncio.get_event_loop().run_until_complete(f)

Assuming that's the only get_event_loop() call in the program, this will work if sleep1() is an actual coroutine or generator, but not if it returns a Future or Task.

Yes, this is it. The example I had in mind was a wrapper function creating a lock and returning a coroutine instance.

I wanted Martin to clarify what he does in his asyncio unittest library -- what he tries to mock and why.

Asynctest uses introspection to mock coroutine functions and generators correctly:

mocked_coroutine_function = asynctest.Mock(asyncio.sleep)
await mocked_coroutine_function()

works because asynctest.Mock returns an awaitable when called if the spec is a coroutine function. While:
mocked_coro_wrapper = asynctest.Mock(sleep1)
await mocked_coro_wrapper()  # fails with TypeError: the returned mock is not awaitable.

We should strive to make more things coroutines (though I'm not sure how to
turn gather() into a coroutine -- I recall it was complicated to write,
with the variant behaviors and possible timeouts or cancellations).

Can we rename asyncio.gather to asyncio._gather and wrap it into a coroutine?

I suppose, though that just complexifies it more. :-( It will solve the immediate issue being discussed here though.

Why not await the future instead of returning it?

@asyncio.coroutine
def gather(...):
   # ...
   return (yield from outer)


--
--Guido van Rossum (python.org/~guido)


--
Martin Richard
www.martiusweb.net

Guido van Rossum

Nov 8, 2016, 8:43:19 PM
to Martin Richard, Yury Selivanov, python-tulip
On Tue, Nov 8, 2016 at 3:11 PM, Martin Richard <mar...@martiusweb.net> wrote:
The example I had in mind was a wrapper function creating a lock and returning a coroutine instance.

I suppose we could change the Lock class (and other classes like it -- probably many) to look up the loop lazily, when first needed (i.e. in acquire or wait).
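
A sketch of that idea (illustrative only, not asyncio's actual Lock implementation; it uses loop.create_future(), available since 3.5.2):

    import asyncio
    import collections

    class LazyLock:
        # Don't touch get_event_loop() in __init__; resolve the loop on
        # first acquire(), when we are guaranteed to be running under it.
        def __init__(self):
            self._locked = False
            self._waiters = collections.deque()
            self._loop = None

        async def acquire(self):
            if self._loop is None:
                self._loop = asyncio.get_event_loop()
            while self._locked:
                fut = self._loop.create_future()
                self._waiters.append(fut)
                try:
                    await fut
                finally:
                    self._waiters.remove(fut)
            self._locked = True
            return True

        def release(self):
            self._locked = False
            for fut in self._waiters:
                if not fut.done():
                    fut.set_result(True)
                    break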
 

Why not await the future instead of returning it?

@asyncio.coroutine
def gather(...):
   # ...
   return (yield from outer)

That's a clever idea. The downside (of this or of my original thought) is that if gather() is called in a context that requires a future, ensure_future() will wrap it in another layer of Task, and we just get an endless layering of tasks and coroutines. I'd rather avoid that (unless someone can show that this is an unlikely scenario when gather() is involved).

Interestingly, the gather() implementation in curio does this the other way around: curio.gather() is a coroutine, but its argument must be a list of tasks, not coroutines.
 