[Python-Dev] Type annotations, PEP 649 and PEP 563


Thomas Wouters

Oct 20, 2021, 9:22:09 AM10/20/21
to Python-Dev
(For visibility, posted both to python-dev and Discourse.)

Over the last couple of months, ever since delaying PEP 563’s default change in 3.10, the Steering Council has been discussing and deliberating over PEP 563 (Postponed Evaluation of Annotations), PEP 649 (Deferred Evaluation Of Annotations Using Descriptors), and type annotations in general. We haven’t made a decision yet, but we want to give everyone an update on where we’re at, what we’re thinking and why, see if there’s consensus on any of this, and perhaps restart the discussion around the options.


First off, as Barry already mentioned in a different thread, the SC does not want to see type annotations as separate from the Python language. We don’t think it would be good to have the syntax or the semantics diverge, primarily because we don’t think users would see them as separate. Any divergence would be hard to explain in documentation, hard to reason about when reading code, and hard to delineate, to describe what is allowed where. There’s a lot of nuance in this position (it doesn’t necessarily mean that all valid syntax for typing uses has to have sensible semantics for non-typing uses), but we’ll be working on something to clarify that more, later.


We also believe that the runtime uses of type annotations, which PEP 563 didn’t really anticipate, are valid uses that Python should support. If function annotations and type annotations had evolved differently, such as being strings from the start, PEP 563 might have been sufficient. It’s clear runtime uses of type annotations serve a real, sensible purpose, and Python benefits from supporting them.


By and large, the SC views PEP 649 as a better way forward. If PEP 563 had never come along, it would be a fairly easy decision to accept PEP 649. We are still inclined to accept PEP 649. That would leave the consideration about what to do with PEP 563 and existing from __future__ import annotations directives. As far as we can tell, there are two reasons for code to want to use PEP 563: being able to conveniently refer to names that aren’t available until later in the code (i.e. forward references), and reducing the overhead of type annotations. If PEP 649 satisfies all of the objectives of PEP 563, is there a reason to keep supporting PEP 563’s stringified annotations?  Are there any practical, real uses of stringified annotations that would not be served by PEP 649's deferred annotations? 


If we no longer need to support PEP 563, can we simply make from __future__ import annotations enable PEP 649? We still may want a new future import for PEP 649 as a transitory measure, but we could make PEP 563's future import mean the same thing, without actually stringifying annotations. (The reason to do this would be to reduce the combinatorial growth of the support matrix, and to simplify the implementation of the parser.) This would affect code that expects annotations to always be strings, but such code would have to be poking directly at function objects (the __annotations__ attribute), instead of using the advertised ways of getting at annotations (like typing.get_type_hints()). This question in particular is one in which the SC isn't yet of one mind.
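
For illustration, the difference we mean (a minimal example, not tied to any particular library):

from __future__ import annotations  # PEP 563 semantics
import typing

def greet(name: str) -> bool: ...

print(greet.__annotations__)          # {'name': 'str', 'return': 'bool'} -- raw strings
print(typing.get_type_hints(greet))   # {'name': <class 'str'>, 'return': <class 'bool'>}

Code poking directly at __annotations__ sees the strings; code going through typing.get_type_hints() gets resolved objects either way.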


Keeping the future import and stringified annotations around is certainly an option, but we’re worried about the cost of the implementation, the support cost, and the confusion for users (specifically, it is a future import that will never become the future). If we do keep them, how long would we keep them around? Should we warn about their use? If we warn about the future import, is the noise and confusion this generates going to be worth it? If we don't warn about them, how will we ever be able to turn them off?


One thing we’re thinking of specifically for the future import, and for other deprecations in Python, is to revisit the deprecation and warning policy. We think it’s pretty clear that the policy we have right now doesn’t exactly work. We used to have noisy DeprecationWarnings, which were confusing to end users when they were not in direct control of the code. We now have silent-by-default DeprecationWarnings, where the expectation is that test frameworks surface these warnings. This avoids the problem of end users being confused, but leaves the problem of the code’s dependencies triggering the warning, and thus still warns users (developers) not necessarily in a position to fix the problem, which in turn leads to them silencing the warning and moving on. We need a better way to reach the users in a position to update the code.


One idea is to rely on linters and IDEs to provide this signal, possibly with a clear upgrade path for the code (e.g. a 2to3-like fixer for a specific deprecation). Support for deprecations happened to be brought up on the typing-sig mailing list not too long ago, as an addition to the pytype type checker and hopefully others (full disclosure, Yilei is a team-mate of Thomas’s at Google).


This sounds like a reasonably user-friendly approach, but it would require buy-in from linter/IDE developers, or an officially supported “Python linter” project that we control. There’s also the question of support timelines: most tooling supports a wider range of Python versions than just the two years that we use in our deprecation policy. Perhaps we need to revisit the policy, and consider deprecation timelines based on how many Python versions library developers usually want to support.


The SC continues to discuss the following open questions, and we welcome your input on them:

  1. Is it indeed safe to assume PEP 649 satisfies all reasonable uses of PEP 563? Are there cases of type annotations for static checking or runtime use that PEP 563 enables, which would break with PEP 649?

  2. Is it safe to assume very little code would be poking directly at __annotations__ attributes of function objects; effectively, to declare them implementation details and let them not be strings even in code that currently has the annotations future import?

  3. Is the performance of PEP 649 and PEP 563 similar enough that we can outright discount it as a concern? Does anyone actually care about the overhead of type annotations anymore? Are there other options to alleviate this potential issue (like a process-wide switch to turn off annotations)?

  4. If we do not need to keep PEP 563 support, which would be a lot easier on code maintenance and our support matrix, do we need to warn about the semantics change? Can we silently accept (and ignore) the future import once PEP 649 is in effect?

  5. If we do need a warning, how loud, and how long should it be around? At the end of the deprecation period, should the future import be an error, or simply be ignored?

  6. Are there other options we haven’t thought of for dealing with deprecations like this one?


Like I said, the SC isn’t done deliberating on any of this. The only decisions we’ve made so far is that we don’t see the typing language as separate from Python (and thus won’t be blanket delegating typing PEPs to a separate authority), and we don’t see type annotations as purely for static analysis use.


For the whole SC,

Thomas.


--
Thomas Wouters <tho...@python.org>

Hi! I'm an email virus! Think twice before sending your email to help me spread!

Sebastian Rittau

Oct 20, 2021, 12:13:56 PM10/20/21
to pytho...@python.org
On 20.10.21 at 15:18, Thomas Wouters wrote:

By and large, the SC views PEP 649 as a better way forward. If PEP 563 had never come along, it would be a fairly easy decision to accept PEP 649. We are still inclined to accept PEP 649. That would leave the consideration about what to do with PEP 563 and existing from __future__ import annotations directives. As far as we can tell, there are two reasons for code to want to use PEP 563: being able to conveniently refer to names that aren’t available until later in the code (i.e. forward references), and reducing the overhead of type annotations. If PEP 649 satisfies all of the objectives of PEP 563, is there a reason to keep supporting PEP 563’s stringified annotations?  Are there any practical, real uses of stringified annotations that would not be served by PEP 649's deferred annotations?

These are my reasons for using from __future__ import annotations:

  • Forward references and cyclic imports (with if TYPE_CHECKING: from foo import Bar) while avoiding quotes; see the sketch after this list.
  • Using typing features from future Python versions. This becomes less important the more typing stabilizes.
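
A minimal sketch of that first pattern (the function and module names are made up):

from __future__ import annotations
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Imported only for the type checker; avoids the import cycle at runtime.
    from foo import Bar

def frobnicate(bar: Bar) -> None:  # no quotes needed; never evaluated at runtime
    ...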

 - Sebastian

Sebastian Rittau

Oct 20, 2021, 12:16:32 PM10/20/21
to pytho...@python.org
I'm sorry, I sent my last mail too early. Here are the rest of my thoughts.

On 20.10.21 at 15:18, Thomas Wouters wrote:

Keeping the future import and stringified annotations around is certainly an option, but we’re worried about the cost of the implementation, the support cost, and the confusion for users (specifically, it is a future import that will never become the future). If we do keep them, how long would we keep them around? Should we warn about their use? If we warn about the future import, is the noise and confusion this generates going to be worth it? If we don't warn about them, how will we ever be able to turn them off?

Personally, I think that stringified annotations should be deprecated and eventually be removed. This opens the design space to use quotes for different purposes, for example getting rid of the cumbersome need to use Literal for literals.

One thing we’re thinking of specifically for the future import, and for other deprecations in Python, is to revisit the deprecation and warning policy. We think it’s pretty clear that the policy we have right now doesn’t exactly work. We used to have noisy DeprecationWarnings, which were confusing to end users when they were not in direct control of the code. We now have silent-by-default DeprecationWarnings, where the expectation is that test frameworks surface these warnings. This avoids the problem of end users being confused, but leaves the problem of the code’s dependencies triggering the warning, and thus still warns users (developers) not necessarily in a position to fix the problem, which in turn leads to them silencing the warning and moving on. We need a better way to reach the users in a position to update the code.

+1


  1. If we do need a warning, how loud, and how long should it be around? At the end of the deprecation period, should the future import be an error, or simply be ignored?

The future import should remain active and should continue to work as it does now (without warning) as long as there are supported Python versions that have not implemented PEP 649, i.e. Python 3.10 and earlier. Otherwise, existing code that provides support for these older versions (especially libraries) would have to regress its typing support. For example, a library supporting 3.8+ could have the following code:

from __future__ import annotations

def foo(x: int | str): pass

If newer versions stop supporting the future import or warn about it, the library would have to go back to using typing.Union here. Other constructs would even be impossible to use.

After that, I would recommend starting the normal "warn"/"remove" cycle for the future import. Don't keep it around if it's doing nothing.

 - Sebastian

Christopher Barker

Oct 20, 2021, 5:38:09 PM10/20/21
to Thomas Wouters, Python-Dev
Thanks to the SC for such a thoughtful note. I really like where this is going.

One thought.

On Wed, Oct 20, 2021 at 6:21 AM Thomas Wouters <tho...@python.org> wrote:
  1. Is the performance of PEP 649 and PEP 563 similar enough that we can outright discount it as a concern? Does anyone actually care about the overhead of type annotations anymore? Are there other options to alleviate this potential issue (like a process-wide switch to turn off annotations)?


Annotations are used at runtime by at least one std lib module: dataclasses, and who knows how many third party libs. So that may not be practical.
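
For example, this works purely off the annotations with nothing but the stdlib:

from dataclasses import dataclass, fields

@dataclass
class Point:
    x: int       # dataclasses reads these annotations when building the class
    y: int = 0

print([(f.name, f.type) for f in fields(Point)])
# [('x', <class 'int'>), ('y', <class 'int'>)] -- or ('x', 'int') etc. with the
# annotations future import, since Field.type is just the raw annotation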

-CHB 
--
Christopher Barker, PhD (Chris)

Python Language Consulting
  - Teaching
  - Scientific Software Development
  - Desktop GUI and Web Development
  - wxPython, numpy, scipy, Cython

Inada Naoki

Oct 20, 2021, 9:28:17 PM10/20/21
to Christopher Barker, Thomas Wouters, Python-Dev
On Thu, Oct 21, 2021 at 6:38 AM Christopher Barker <pyth...@gmail.com> wrote:
>
> Thanks to the SC for such a thoughtful note. I really like where this is going.
>
> One thought.
>
> On Wed, Oct 20, 2021 at 6:21 AM Thomas Wouters <tho...@python.org> wrote:
>>
>> Is the performance of PEP 649 and PEP 563 similar enough that we can outright discount it as a concern? Does anyone actually care about the overhead of type annotations anymore? Are there other options to alleviate this potential issue (like a process-wide switch to turn off annotations)?
>
> Annotations are used at runtime by at least one std lib module: dataclasses, and who knows how many third party libs. So that may not be practical.
>

This is similar to docstrings. Some tools that use docstrings (e.g. docopt)
prevent using the -OO option.
Although some libraries (e.g. SQLAlchemy) have huge docstrings, we can not
use -OO if even a tiny set of modules depends on docstrings or assertions.

So I think we need some mechanism to disable optimizations like
dropping assertions, docstrings, and annotations per module.

--
Inada Naoki <songof...@gmail.com>

Henry Fredrick Schreiner

Oct 21, 2021, 12:09:31 AM10/21/21
to pytho...@python.org
typing features from future Python versions

I second both of these uses, but especially this one (which seems to be missing from the original post): it’s been by far the main reason I’ve used this mode and seen it used, and it is the main feature to look forward to when dropping Python 3.7 support. The new features coming to typing make static typing much easier, but no library can drop Python 3.7/3.8/3.9 support yet; static typing itself, however, doesn’t need the old versions.

When talking about speed, one of the important things to consider here is the difference between the two proposals. PEP 649 was about the same as the current performance, but PEP 563 was significantly faster, since it doesn’t instantiate or deal with objects at all, which both the current default and PEP 563 do. You could even protect imports with TYPE_CHECKING with PEP 563, and further reduce the runtime cost of adding types - which could be seen as a reason to avoid adding types. To the best of my knowledge, it hasn’t been a blocker for packages, but something to include.

Also, one of the original points for static typing is that strings can be substituted for objects. “Type” is identical, from a static typing perspective, to Type. You can swap one for the other, and for a Python 3.6+ codebase, using something like “A | B” (with the quotes) is a valid way to have static types in 3.6 that pass MyPy (though they are not usable at runtime, obviously, but that’s often not a requirement). NumPy, for example, makes heavy use of unions and other newer additions in static typing.
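
Concretely, I mean something like this (MyPy accepts the quoted union even on old Pythons):

from typing import List, Optional

def parse(raw: "bytes | str") -> Optional[List[int]]:
    ...

# At runtime on 3.6 the quoted string is never evaluated, so nothing breaks;
# only something like typing.get_type_hints(parse) would fail there, and
# that's often not a requirement.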

Inada Naoki

Oct 21, 2021, 1:07:36 AM10/21/21
to Henry Fredrick Schreiner, pytho...@python.org
On Thu, Oct 21, 2021 at 1:08 PM Henry Fredrick Schreiner
<henry.fredri...@cern.ch> wrote:
>
> > typing features from future Python versions
>
> I second both of these uses, but especially this (which seems to be missing from the original post), it’s been by far the main reason I’ve used this mode and I’ve seen this used, and is the main feature to look forward to when dropping Python 3.7 support. The new features coming to typing make static typing much easier, but no libraries can drop Python 3.7/3.8/3.9 support; but static typing doesn’t need old versions.
>

I agree with this point. We shouldn't emit a DeprecationWarning for
`from __future__ import annotations` for at least 3 versions (3.11 ~ 3.13).

> When talking about speed, one of the important things to consider here is the difference between the two proposals. PEP 649 was about the same as the current performance, but PEP 563 was significantly faster, since it doesn’t instantiate or deal with objects at all, which both the current default and PEP 563 do. You could even protect imports with TYPE_CHECKING with PEP 563, and further reduce the runtime cost of adding types - which could be seen as a reason to avoid adding types. To the best of my knowledge, it hasn’t been a blocker for packages, but something to include.
>
> Also, one of the original points for static typing is that strings can be substituted for objects. “Type” is identical, from a static typing perspective, to Type. You can swap one for the other, and for a Python 3.6+ codebase, using something like “A | B” (with the quotes) is a valid way to have static types in 3.6 that pass MyPy (though are not usable at runtime, obviously, but that’s often not a requirement). NumPy, for example, makes heavy usage of unions and other newer additions in static typing.
>

We may be able to provide a tool to rewrite Python sources, like 2to3:

* Remove `from __future__ import annotations`
* Stringify annotations that are not already constants (e.g. `None`, `42`,
`"foo"` are not rewritten).

This tool could ease the transition from PEP 563 to 649, and solve the
performance issues too.
PEP 649 can have the same performance as PEP 563 if all annotations are constants.

--
Inada Naoki <songof...@gmail.com>

Larry Hastings

Oct 21, 2021, 10:27:31 AM10/21/21
to pytho...@python.org


On 10/21/21 5:01 AM, Henry Fredrick Schreiner wrote:
PEP 649 was about the same as the current performance, but PEP 563 was significantly faster, since it doesn’t instantiate or deal with objects at all, which both the current default and PEP 563 do.

I don't understand what you're saying about how PEP 563 both does and doesn't instantiate objects.

PEP 649, and the current implementation of PEP 563, are definitely both faster than stock behavior when you don't examine annotations; neither approach "instantiates or deals with objects" unless you examine the annotations.  PEP 649 is roughly the same as stock when you do examine annotations.  PEP 563 is faster if you only ever examine the annotations as strings, but becomes enormously slower if you examine the annotations as actual Python values.

The way I remember it, most of the negative feedback about PEP 649's performance concerned its memory consumption.  I've partially addressed that by always lazy-creating the function object.  But, again, I suggest that performance is a distraction at this stage.  The important thing is to figure out what semantics we want for the language.  We have so many clever people working on CPython, I'm sure this team will make whatever semantics we choose lean and performant.


/arry

Damian Shaw

Oct 21, 2021, 12:46:23 PM10/21/21
to Henry Fredrick Schreiner, Python-Dev
Sorry for the naive question but why doesn't "TYPE_CHECKING" work under PEP 649?

I think I've seen others mention this, but since the code object isn't executed until inspected, if you are just using annotations for type hints it should work fine?

Is the use case wanting to use annotations for type hints and real time inspection but you also don't want to import the objects at run time?

If that's really such a strong use case, couldn't PEP 649 be modified to return a repr of the code object when it gets a NameError? Either by attaching it to the NameError exception or as part of a ForwardRef style object, if that's how PEP 649 ends up getting implemented?

Damian (he/him)


Carl Meyer

Oct 21, 2021, 1:30:51 PM10/21/21
to Damian Shaw, Henry Fredrick Schreiner, Python-Dev
On Thu, Oct 21, 2021 at 10:44 AM Damian Shaw
<damian.p...@gmail.com> wrote:
> Sorry for the naive question but why doesn't "TYPE_CHECKING" work under PEP 649?
>
> I think I've seen others mention this but as the code object isn't executed until inspected then if you are just using annotations for type hints it should work fine?
>
> Is the use case wanting to use annotations for type hints and real time inspection but you also don't want to import the objects at run time?

Yes, you're right. And I don't think PEP 649 and PEP 563 are really
all that different in this regard: if you have an annotation using a
non-imported name, you'll be fine as long as you don't introspect it
at runtime. If you do, you'll get a NameError. And with either PEP you
can work around this if you need to by ensuring you do the imports
first if you're going to need the runtime introspection of the
annotations.

The difference is that PEP 563 makes it easy to introspect the
annotation _as a string_ without triggering NameError, and PEP 649
takes that away, but I haven't seen anyone describe a really
compelling use case for that.
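
Concretely (module names made up; the behaviour is the same with either PEP):

# widgets.py
from __future__ import annotations
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from heavy_dep import Widget

def build(spec: Widget) -> None: ...

# elsewhere:
import typing, widgets
typing.get_type_hints(widgets.build)   # NameError: name 'Widget' is not defined

# workaround: make the name resolvable in the module before introspecting
import heavy_dep
widgets.Widget = heavy_dep.Widget
typing.get_type_hints(widgets.build)   # now resolves

The only thing PEP 563 adds is that widgets.build.__annotations__['spec'] is
at least the string 'Widget' instead of an error.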

> If that's really such a strong use cause couldn't PEP 649 be modified to return a repr of the code object when it gets a NameError? Either by attaching it to the NameError exception or as part of a ForwardRef style object if that's how PEP 649 ends up getting implemented?

It could, but this makes evaluation of annotations oddly different
from evaluation of any other code, which is something that it's
reasonable to try to avoid if possible.

Carl

Larry Hastings

Oct 21, 2021, 1:37:08 PM10/21/21
to pytho...@python.org


On 10/21/21 5:42 PM, Damian Shaw wrote:
Sorry for the naive question but why doesn't "TYPE_CHECKING" work under PEP 649?

I think I've seen others mention this but as the code object isn't executed until inspected then if you are just using annotations for type hints it should work fine?

Yes, it works fine in that case.


Is the use case wanting to use annotations for type hints and real time inspection but you also don't want to import the objects at run time?

If that's really such a strong use cause couldn't PEP 649 be modified to return a repr of the code object when it gets a NameError? Either by attaching it to the NameError exception or as part of a ForwardRef style object if that's how PEP 649 ends up getting implemented?

That's the use case.

Your proposal is one of several suggesting that type annotations are special enough to break the rules.  I don't like this idea.  But you'll be pleased to know there are a lot of folks in the "suppress the NameError" faction, including Guido (IIUC).

See also this PR against co_annotations, proposing returning a new AnnotationName object when evaluating the annotations raises a NameError.

https://github.com/larryhastings/co_annotations/pull/3


Cheers,


/arry

Jelle Zijlstra

Oct 21, 2021, 2:09:16 PM10/21/21
to Carl Meyer, Henry Fredrick Schreiner, Python-Dev
El jue, 21 oct 2021 a las 10:31, Carl Meyer (<ca...@oddbird.net>) escribió:
On Thu, Oct 21, 2021 at 10:44 AM Damian Shaw
<damian.p...@gmail.com> wrote:
> Sorry for the naive question but why doesn't "TYPE_CHECKING" work under PEP 649?
>
> I think I've seen others mention this but as the code object isn't executed until inspected then if you are just using annotations for type hints it should work fine?
>
> Is the use case wanting to use annotations for type hints and real time inspection but you also don't want to import the objects at run time?

Yes, you're right. And I don't think PEP 649 and PEP 563 are really
all that different in this regard: if you have an annotation using a
non-imported name, you'll be fine as long as you don't introspect it
at runtime. If you do, you'll get a NameError. And with either PEP you
can work around this if you need to by ensuring you do the imports
first if you're going to need the runtime introspection of the
annotations.

The difference is that PEP 563 makes it easy to introspect the
annotation _as a string_ without triggering NameError, and PEP 649
takes that away, but I haven't seen anyone describe a really
compelling use case for that.

I would want this for my type checker, pyanalyze. I'd want to get the raw annotation and turn it into a type. For example, if the annotation is `List[SomeType]` and `SomeType` is imported in `if TYPE_CHECKING`, I'd at least still be able to extract `List[Any]` instead of bailing out completely. With some extra analysis work on the module I could even get the real type out of the `if TYPE_CHECKING` block.

Carl Meyer

Oct 21, 2021, 2:19:48 PM10/21/21
to Jelle Zijlstra, Henry Fredrick Schreiner, Python-Dev
On Thu, Oct 21, 2021 at 12:06 PM Jelle Zijlstra
<jelle.z...@gmail.com> wrote:

> I would want this for my type checker, pyanalyze. I'd want to get the raw annotation and turn it into a type. For example, if the annotation is `List[SomeType]` and `SomeType` is imported in `if TYPE_CHECKING`, I'd at least still be able to extract `List[Any]` instead of bailing out completely. With some extra analysis work on the module I could even get the real type out of the `if TYPE_CHECKING` block.

If you're up for the extra analysis on the module, wouldn't it be just
as possible to front-load this analysis instead of doing it after the
fact, and perform the imports in the `if TYPE_CHECKING` block prior to
accessing the annotation data? Or perform a fake version of the
imports that just sets every name imported in `if TYPE_CHECKING` to
`Any`?
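
A rough sketch of that fallback idea (purely hypothetical, not pyanalyze's actual code):

import typing
from typing import Any

def hints_with_any_fallback(func):
    # Resolve the (stringified) annotations, substituting Any for any name
    # that isn't resolvable at runtime, e.g. names bound only under TYPE_CHECKING.
    ns = dict(func.__globals__)
    while True:
        try:
            return typing.get_type_hints(func, globalns=ns)
        except NameError as exc:
            ns[exc.name] = Any   # NameError.name exists on 3.10+

That would turn `List[SomeType]` into `List[Any]` instead of bailing out.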

Carl

Guido van Rossum

Oct 21, 2021, 3:04:35 PM10/21/21
to Larry Hastings, Python-Dev
On Thu, Oct 21, 2021 at 10:35 AM Larry Hastings <la...@hastings.org> wrote:
.

Your proposal is one of several suggesting that type annotations are special enough to break the rules.  I don't like this idea.  But you'll be pleased to know there are a lot of folks in the "suppress the NameError" faction, including Guido (IIUC).


Yes, I still want this part of your PEP changed. I find your characterization of my position misleading -- there is no rule to break here(*), just an API.

(*) The Zen of Python does not have rules.

--
--Guido van Rossum (python.org/~guido)

Łukasz Langa

Oct 21, 2021, 4:10:02 PM10/21/21
to Thomas Wouters, Python-Dev

On 20 Oct 2021, at 15:18, Thomas Wouters <tho...@python.org> wrote:

(For visibility, posted both to python-dev and Discourse.)

Since this got split into Discourse and python-dev, I wrote my take on the issue on the blog so that I could post it in both spaces with proper formatting and everything:

https://lukasz.langa.pl/61df599c-d9d8-4938-868b-36b67fdb4448/

- Ł

Larry Hastings

Oct 21, 2021, 4:51:49 PM10/21/21
to gu...@python.org, Python-Dev


It's certainly not my goal to be misleading.  Here's my perspective.

In Python, if you evaluate an undefined name, Python raises a NameError.  This is so consistent I'm willing to call it a "rule".  Various folks have proposed an exception to this "rule": evaluating an undefined name in a PEP 649 delayed annotation wouldn't raise NameError, instead evaluating to some yet-to-be-determined value (ForwardRef, AnnotationName, etc).  I don't think annotations are special enough to "break the rules" in this way.

Certainly this has the potential to be irritating for code using annotations at runtime, e.g. Pydantic.  Instead of catching the exception, it'd have to check for this substitute value.   I'm not sure if the idea is to substitute for the entire annotation, or for just the value that raised NameError; if the latter, Pydantic et al would have to iterate over every value in an annotation to look for this special value.

As a consumer of annotations at runtime, I'd definitely prefer that they raise NameError rather than silently substitute in this alternative value.


/arry

Steven D'Aprano

Oct 21, 2021, 8:22:33 PM10/21/21
to pytho...@python.org
On Thu, Oct 21, 2021 at 04:48:28PM -0400, Larry Hastings wrote:

> In Python, if you evaluate an undefined name, Python raises a
> NameError.  This is so consistent I'm willing to call it a "rule". 
> Various folks have proposed an exception to this "rule": evaluating an
> undefined name in an PEP 649 delayed annotation wouldn't raise
> NameError, instead evaluating to some yet-to-be-determined value
> (ForwardRef, AnnotationName, etc).  I don't think annotations are
> special enough to "break the rules" in this way.

Can we have a code snippet illustrating that? I think this is what you
mean. Please correct me if I have anything wrong.

If I have this:

from typing import Any
def function(arg:Spam) -> Any: ...

then we have four sets of (actual or proposed) behaviour:

1. The original, current and standard behaviour is that Spam raises a
NameError at function definition time, just as it would in any other
context where the name Spam is undefined, e.g. `obj = Spam()`.

2. Under PEP 563 (string annotations), there is no NameError, as the
annotations stay as strings until you attempt to explicitly resolve them
using eval. Only then would it raise NameError.

3. Under PEP 649 (descriptor annotations), there is no NameError at
function definition time, as the code that resolves the name Spam (and
Any for that matter) is buried in a descriptor. It is only on inspecting
`function.__annotations__` at runtime that the code in the descriptor is
run and the name Spam will generate a NameError.

4. Guido would(?) like PEP 649 to be revised so that inspecting the
annotations at runtime would not generate a NameError. Since Spam is
unresolvable, some sort of proxy or ForwardRef (or maybe just a
string?) would have to be returned instead of raising.

Am I close?

My initial thought was to agree with Larry about special cases, but
perhaps we could borrow part of PEP 563 and return a string if the name
Spam is unresolvable.

Runtime type checkers already have to deal with forward refs that are
strings, as this is legal, and always will be:

def function(arg:'Spam') -> Any: ...

so we're not putting any extra burden on them. And we had already
agreed to implicitly use strings for annotations.

So if I have understood the options correctly, I like the idea of a
hybrid descriptor + stringy annotations solution.

- defer evaluation of the annotations using descriptors (PEP 649);

- on runtime evaluation, if a name does not resolve, stringify it (as
PEP 563 would have done implicitly);

- anyone who really wants to force a NameError can eval the string.

More practically, folks will more likely delay evaluating the string
until Spam has been created/imported and will resolve.
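
To make that concrete, here is roughly what the hybrid would give you, emulated
on top of today's stringified annotations (illustration only, not an implementation):

from __future__ import annotations
from typing import Any

def function(arg: Spam) -> Any: ...   # Spam deliberately undefined

def resolve_or_string(func):
    # Resolve each annotation if its names are importable; otherwise keep the string.
    hints = {}
    for name, ann in func.__annotations__.items():
        try:
            hints[name] = eval(ann, func.__globals__)
        except NameError:
            hints[name] = ann
    return hints

print(resolve_or_string(function))   # {'arg': 'Spam', 'return': typing.Any}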



--
Steve

Larry Hastings

Oct 21, 2021, 9:29:31 PM10/21/21
to pytho...@python.org

Your description of the four behaviors is basically correct.


So if I have understood the options correctly, I like the idea of a 
hybrid descriptor + stringy annotations solution.

- defer evaluation of the annotations using descriptors (PEP 649);

- on runtime evaluation, if a name does not resolve, stringify it (as 
PEP 563 would have done implicitly);

- anyone who really wants to force a NameError can eval the string.


You might also be interested in my "Great Compromise" proposal from back in April:

https://mail.python.org/archives/list/pytho...@python.org/thread/WUZGTGE43T7XV3EUGT6AN2N52OD3U7AE/

Naturally I'd prefer PEP 649 as written.  The "compromise" I described would have the same scoping limitations as stringized annotations, one area where PEP 649 is a definite improvement.


Cheers,


/arry

Christopher Barker

Oct 22, 2021, 12:40:03 AM10/22/21
to Steven D'Aprano, Python Dev
On Thu, Oct 21, 2021 at 5:24 PM Steven D'Aprano <st...@pearwood.info> wrote:
Runtime type checkers already have to deal with forward refs that are
strings, as this is legal, and always will be:

    def function(arg:'Spam') -> Any: ...

so we're not putting any extra burden on them. And we had already
agreed to implicitly use strings for annotations.

I'll take your word for it. However, other runtime uses for annotations may not already need to support strings as types.

Pydantic is the classic example. I'm not entirely sure how it does its thing, but I have code built on dataclasses that is likely similar -- it expects the annotation to be an actual type object -- an actual class, one that can be instantiated, not even the types that are in the typing module, and certainly not a string that needs to be evaluated.
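
Something along these lines (heavily simplified from what I actually have):

from dataclasses import fields

def from_strings(cls, raw):
    # Assumes every field's annotation is an actual class that can be called.
    kwargs = {f.name: f.type(raw[f.name]) for f in fields(cls)}
    return cls(**kwargs)

This works while the annotations are real classes; once they are the string 'int' etc. (as under PEP 563), f.type(...) becomes 'int'(...) and blows up.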

So having them sometimes (maybe) be strings would be a big pain in the @$$.

On the other hand, having them be some odd "ForwardReference" type, rather than a NameError might be OK -- as long as SOME exception was raised as soon as I tried to use it. Though it would make for far more confusing error messages :-(

My first choice would be to get a NameError at module load time, like we do now. Second would be a NameError as soon as it is accessed. Getting a special value is OK, though; now that I'm thinking about it, I could probably put that special-case code in one place and provide a nice error message.

-CHB

Jim J. Jewett

Oct 22, 2021, 4:44:15 PM10/22/21
to pytho...@python.org
Larry Hastings wrote:

> In Python, if you evaluate an undefined name, Python raises a
> NameError.  This is so consistent I'm willing to call it a "rule". 

Would it help to think of the function creation as catching that exception, and then finishing construction with its own version of NaN?

Bluenix

Oct 22, 2021, 5:00:43 PM10/22/21
to pytho...@python.org
Hello!

> This would affect code that expects annotations to always be strings, but such
> code would have to be poking directly at function objects (the
> __annotations__ attribute), instead of using the advertised ways of getting
> at annotations (like typing.get_type_hints()).

I would speculate that this is somewhat common to find. What I don't think is common
enough to worry about is code that directly relies on the annotations being strings. The
majority of code will be trying to retrieve the actual underlying types.

The one potential script out there in the solar system that relies on this can just wrap the
result in str(...) to ensure it's in fact a string.

> Keeping the future import and stringified annotations around is certainly
> an option, but we’re worried about the cost of the implementation, the
> support cost, and the confusion for users (specifically, it is a future
> import that will never become the future). If we do keep them, how long
> would we keep them around? Should we warn about their use? If we warn about
> the future import, is the noise and confusion this generates going to be
> worth it? If we don't warn about them, how will we ever be able to turn
> them off?

I quite particularly like what Łukasz Langa brought up under the header "A better possible future"
in their blog post here: https://lukasz.langa.pl/61df599c-d9d8-4938-868b-36b67fdb4448/
(which has been listed in another reply here).

I will quote part of the paragraph below (read 2021-10-22; last updated 2021-10-21):

> if the library author is fine with DeprecationWarnings, they could keep using
> `from __future__ import annotations` until Python 3.13 is released (October 2023). [...].
> So I would advise marking the old future-import as “pending deprecation” until 3.13
> and only fully deprecate it then for two releases, so it would
> be removed in Python 3.14 (𝛑) in October 2025 when Python 3.9 goes EOL.

This seems like the kind of compromise I would like to see.

> One idea is to rely on linters and IDEs to provide this signal, possibly
> with a clear upgrade path for the code (e.g. a 2to3-like fixer for a
> specific deprecation).
> [...]
> This sounds like a reasonably user-friendly approach, but it would require
> buy-in from linter/IDE developers, or an officially supported “Python
> linter” project that we control.

I think this is a nice step in the right direction; as you mention, test frameworks
have had this role, so I don't see the issue in also extending it to static analysis
tools (static type checkers, linters and IDEs). Couldn't the "Python linter" simply
be the current behaviour when you opt into it?

> Is it safe to assume very little code would be poking directly at
> __annotations__ attributes of function objects; effectively, to declare them
> implementation details and let them not be strings even in code that
> currently has the annotations future import?

I believe so; see my comments further up in this reply. It was never safe to assume
that all annotations were strings (at least not in well-written code), so yes,
I believe this would be a non-breaking change and the attribute could be considered an
implementation detail.

> Is the performance of PEP 649 and PEP 563 similar enough that we can
> outright discount it as a concern? Does anyone actually care about the
> overhead of type annotations anymore? Are there other options to alleviate
> this potential issue (like a process-wide switch to turn off annotations)?

In my opinion this shouldn't warrant any concern, as these costs are only
incurred at Python startup. The difference is not enough for me to care, at least.

> If we do not need to keep PEP 563 support, which would be a lot easier
> on code maintenance and our support matrix, do we need to warn about the
> semantics change? Can we silently accept (and ignore) the future import
> once PEP 649 is in effect?

In terms of semver I don't think this causes a breaking change that warrants
a major bump. That said, this change should be communicated to the broader
Python community so that it reaches those who are affected by it.

Have a good weekend

Steven D'Aprano

Oct 22, 2021, 8:55:01 PM10/22/21
to pytho...@python.org
On Thu, Oct 21, 2021 at 09:36:20PM -0700, Christopher Barker wrote:
> On Thu, Oct 21, 2021 at 5:24 PM Steven D'Aprano <st...@pearwood.info> wrote:
>
> > Runtime type checkers already have to deal with forward refs that are
> > strings, as this is legal, and always will be:
> >
> > def function(arg:'Spam') -> Any: ...
> >
> > so we're not putting any extra burden on them. And we had already
> > agreed to implicitly use strings for annotations.
> >
>
> I'll take your word for it. However, other runtime uses for annotations may
> not already need to support strings as types.
>
> Pydantic is the classic example.

Pydantic supports stringified annotations.

https://pydantic-docs.helpmanual.io/usage/postponed_annotations/

Any other runtime annotation tool has to support strings, otherwise the
"from __future__ import annotations" directive will have already broken
it. If the tool does type-checking, then it should support stringified
annotations. They have been a standard part of type-hinting since 2014
and Python 3.5:

https://www.python.org/dev/peps/pep-0484/#forward-references

Any type-checking tool which does not already support stringified
references right now is broken.


--
Steve

Christopher Barker

Oct 23, 2021, 12:32:27 AM10/23/21
to Steven D'Aprano, pytho...@python.org
Any other runtime annotation tool has to support strings, otherwise the
"from __future__ import annotations" directive will have already broken
it.

Exactly: isn’t that why it was deferred in 3.10, and may never become the default? I’ll leave it to the Pydantic developers to discuss that, but they were pretty clear that PEP 563 would break things.

If the tool does type-checking, then it should support stringified
annotations.

I guess I wasn’t clear — my point was there are uses for annotations that are NOT type checking. 

And my understanding of what the SC said was that they want those uses to continue to be supported.

-CHB


Inada Naoki

Oct 23, 2021, 4:18:30 AM10/23/21
to Bluenix, Python-Dev
On Sat, Oct 23, 2021 at 5:55 AM Bluenix <bluen...@gmail.com> wrote:
>
>
> > Is the performance of PEP 649 and PEP 563 similar enough that we can
> > outright discount it as a concern? Does anyone actually care about the
> > overhead of type annotations anymore? Are there other options to alleviate
> > this potential issue (like a process-wide switch to turn off annotations)?
>
> In my opinion this shouldn't warrant any concern as these costs are only
> on startup of Python. The difference is not enough for me to care at least.
>

The costs are not only at startup time.
Memory consumption is a cost over the whole process lifetime.
And GC time gets longer every time a full GC happens.

So performance is a problem both for short-lifecycle applications
like CLI tools and for long-lifecycle applications like web apps.

Regards,
--
Inada Naoki <songof...@gmail.com>

Larry Hastings

Oct 23, 2021, 9:53:43 AM10/23/21
to pytho...@python.org


On 10/22/21 1:45 AM, Steven D'Aprano wrote:
Any other runtime annotation tool has to support strings, otherwise the 
"from __future__ import annotations" directive will have already broken 
it. If the tool does type-checking, then it should support stringified 
annotations. They have been a standard part of type-hinting since 2014 
and Python 3.5:

https://www.python.org/dev/peps/pep-0484/#forward-references

Any type-checking tool which does not already support stringified 
references right now is broken.


It's a debatable point since "from future" behavior is always off by default.  I'd certainly agree that libraries should support stringized annotations by now, considering they were nearly on by default in 3.10.  But I wouldn't say stringized annotations were a "standard" part of Python, yet.  As yet they are optional.  Optional things aren't standard, and standard things aren't optional.


/arry

Steven D'Aprano

Oct 23, 2021, 10:40:53 AM10/23/21
to pytho...@python.org
On Sat, Oct 23, 2021 at 09:49:10AM -0400, Larry Hastings wrote:

> It's an debatable point since "from future" behavior is always off by
> default.  I'd certainly agree that libraries /should/ support stringized
> annotations by now, considering they were nearly on by default in 3.10. 
> But I wouldn't say stringized annotations were a "standard" part of
> Python, yet.  As yet they are optional.  Optional things aren't
> standard, and standard things aren't optional.

You misunderstand me. I'm not referring to PEP 563, which is still
optional and requires the user to opt-in with a future import. I'm
referring to the *manual* use of strings for forward references, which
has been part of type-hinting since PEP 484 way back in 2014.

https://www.python.org/dev/peps/pep-0484/#forward-references

I expect that people were using strings for forward references before
PEP 484, but it was 484 that made it official.



--
Steve

Guido van Rossum

Oct 23, 2021, 10:52:34 AM10/23/21
to Steven D'Aprano, pytho...@python.org
(Off-topic)

On Sat, Oct 23, 2021 at 07:42 Steven D'Aprano <st...@pearwood.info> wrote:
I expect that people were using strings for forward references before
PEP 484, but it was 484 that made it official.

I doubt it. We invented that specifically for mypy. I am not aware of any prior art. 

—Guido

--
--Guido (mobile)

Bluenix

Oct 23, 2021, 5:08:45 PM10/23/21
to pytho...@python.org
Hmm, I thought I responded to this on Gmail but it hasn't appeared here on the archive so I'll send it again..

Is it known how much more/less the annotations impact performance compared to function defaults?

--
Blue

Inada Naoki

Oct 23, 2021, 10:12:59 PM10/23/21
to Bluenix, Python-Dev
On Sun, Oct 24, 2021 at 6:03 AM Bluenix <bluen...@gmail.com> wrote:
>
> Hmm, I thought I responded to this on Gmail but it hasn't appeared here on the archive so I'll send it again..
>
> Is it known how much more/less the annotations impact performance compared to function defaults?
>

Basically, PEP 563 overhead is the same as function defaults.

* Function annotations are one constant tuple per function.
* Functions having the same signature share the same annotation tuple.
* No GC overhead because it is constant (not tracked by GC).


On the other hand, it is difficult to predict PEP 649 overhead.

When namespace is not kept:

* Annotation is just a constant tuple too.
* No GC overhead too.
* The constant tuple will be slightly bigger than PEP 563. (It can be
the same as PEP 563 when all annotations are constant.)
* Although they are constant tuples, they may have their own names and
firstlineno. So it is difficult to share whole annotation data between
functions having the same signature. (*)

(*) I proposed to drop firstlineno and names to share more
annotations. See
https://github.com/larryhastings/co_annotations/pull/9

When namespace is kept:

* Annotations will be GC tracked objects.
* They will be heavier than in the "namespace is not kept" case, in both
startup time and memory consumption.
* They have some GC overhead.

To answer "how much more", we need a target application. But I don't
have a good target application for now.

Additionally, even if we have a good target application, it is still
difficult to estimate real future impact.

For example, SQLAlchemy has very heavy docstrings so the `-OO` option
has a big impact.
But SQLAlchemy doesn't use function annotations yet. So stock/PEP
563/PEP 649 have the same (zero) cost. Yay!
But if SQLAlchemy starts using annotations, *all applications* using
SQLAlchemy will be impacted in the future.

Regards,
--
Inada Naoki <songof...@gmail.com>

Bluenix

Oct 26, 2021, 12:26:10 PM10/26/21
to pytho...@python.org
> * Functions having the same signature share the same annotation tuple.

Is this true with code that has a mutable default?

>>> def a(arg = []):
... arg.append('a')
... return arg
...
>>> def b(arg = []):
... arg.append('b')
... return arg
...
>>> a()
['a']
>>> a()
['a', 'a']
>>> a()
['a', 'a', 'a']
>>> b()
['b']
>>> b()
['b', 'b']
>>> b()
['b', 'b', 'b']

Larry Hastings

Oct 26, 2021, 1:43:11 PM10/26/21
to pytho...@python.org


On 10/26/21 5:22 PM, Bluenix wrote:
* Functions having the same signature share the same annotation tuple.
Is this true with code that have a mutable default?
[... examples deleted...]


You're confusing two disjoint concepts.

First of all, all your examples experiment with default values which are unrelated to their annotations.  None of your examples use or examine annotations.

Second, Inada-san was referring to the tuple of strings used to initialize the annotations for a function when PEP 563 (stringized annotations) is active.  This is a clever implementation tweak that first shipped with Python 3.10, which makes stringized annotations very efficient.  Since all the names and annotations are strings, rather than creating the dictionary at function binding time, they're stored in a tuple, and the dictionary is created on demand.  This tuple is a constant object, and marshalling a module automatically collapses duplicate constants into the same constant.  So identical PEP 563 annotation tuples are collapsed into the same tuple.  Very nice!
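
To spell out the difference (with the annotations future import active):

from __future__ import annotations

def f(items: list = []):   # the [] default is evaluated and stored right now
    ...

print(f.__defaults__)      # ([],)             -- a real object, created at definition time
print(f.__annotations__)   # {'items': 'list'} -- built on demand from a constant tuple of strings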


/arry

asafs...@gmail.com

Oct 31, 2021, 3:03:58 PM10/31/21
to pytho...@python.org
One use case which seems significant to me but I don't think has been explicitly mentioned is annotations using a package with stubs where the stubbed typing API is slightly different from the runtime API.
For example, sometimes additional type aliases are defined for convenience in stubs without a corresponding runtime name, or a class is generic in the stubs but has not yet been made to inherit typing.Generic (or just made a subscriptable class) at runtime. These situations are described in the mypy docs: https://mypy.readthedocs.io/en/latest/runtime_troubles.html#using-classes-that-are-generic-in-stubs-but-not-at-runtime.
These are easy to write without problems using PEP 563, but I am not sure how they would work with PEP 649.

I believe this pattern may be useful in complex existing libraries when typing is added, as it may be difficult to convert an existing class to generic. For example, with numpy, the core ndarray class was made generic in stubs to support indicating the shape and data type. You could only write e.g. ndarray[Any, np.int64] using PEP 563. A while later a workaround was added by defining __class_getitem__.
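
A small made-up example of the pattern (the library and class are hypothetical):

from __future__ import annotations   # PEP 563: the annotations below are never evaluated
from somelib import Matrix           # generic as Matrix[T] in the stubs, plain class at runtime

def total(m: Matrix[float]) -> float: ...

# Fine under PEP 563. Under PEP 649 the definition is fine too, but evaluating the
# annotation later (e.g. typing.get_type_hints(total)) would raise TypeError unless
# Matrix grows __class_getitem__, as numpy's ndarray eventually did.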