Over the last couple of months, ever since delaying PEP 563’s default change in 3.10, the Steering Council has been discussing and deliberating over PEP 563 (Postponed Evaluation of Annotations), PEP 649 (Deferred Evaluation Of Annotations Using Descriptors), and type annotations in general. We haven’t made a decision yet, but we want to give everyone an update on where we’re at, what we’re thinking and why, see if there’s consensus on any of this, and perhaps restart the discussion around the options.
First off, as Barry already mentioned in a different thread, the SC does not want to see type annotations as separate from the Python language. We don’t think it would be good to have the syntax or the semantics diverge, primarily because we don’t think users would see them as separate. Any divergence would be hard to explain in documentation, hard to reason about when reading code, and hard to delineate, to describe what is allowed where. There’s a lot of nuance in this position (it doesn’t necessarily mean that all valid syntax for typing uses has to have sensible semantics for non-typing uses), but we’ll be working on something to clarify that more, later.
We also believe that the runtime uses of type annotations, which PEP 563 didn’t really anticipate, are valid uses that Python should support. If function annotations and type annotations had evolved differently, such as being strings from the start, PEP 563 might have been sufficient. It’s clear runtime uses of type annotations serve a real, sensible purpose, and Python benefits from supporting them.
By and large, the SC views PEP 649 as a better way forward. If PEP 563 had never come along, it would be a fairly easy decision to accept PEP 649. We are still inclined to accept PEP 649. That would leave the consideration about what to do with PEP 563 and existing from __future__ import annotations directives. As far as we can tell, there are two reasons for code to want to use PEP 563: being able to conveniently refer to names that aren’t available until later in the code (i.e. forward references), and reducing the overhead of type annotations. If PEP 649 satisfies all of the objectives of PEP 563, is there a reason to keep supporting PEP 563’s stringified annotations? Are there any practical, real uses of stringified annotations that would not be served by PEP 649's deferred annotations?
If we no longer need to support PEP 563, can we simply make from __future__ import annotations enable PEP 649? We still may want a new future import for PEP 649 as a transitory measure, but we could make PEP 563's future import mean the same thing, without actually stringifying annotations. (The reason to do this would be to reduce the combinatorial growth of the support matrix, and to simplify the implementation of the parser.) This would affect code that expects annotations to always be strings, but such code would have to be poking directly at function objects (the __annotations__ attribute), instead of using the advertised ways of getting at annotations (like typing.get_type_hints()). This question in particular is one in which the SC isn't yet of one mind.
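The distinction between poking at the attribute and using the advertised API can be sketched like this (a minimal, hypothetical example; `greet` is made up):

```python
# Under PEP 563's future import, the raw __annotations__ attribute
# holds strings, while typing.get_type_hints() returns the evaluated
# annotation objects either way.
from __future__ import annotations
from typing import get_type_hints

def greet(name: str) -> str:
    return f"hello {name}"

print(greet.__annotations__)   # raw attribute: string values
print(get_type_hints(greet))   # advertised API: evaluated types
```

Code using `get_type_hints()` is insulated from whichever storage scheme the compiler picks; only code reading `__annotations__` directly would notice a change.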
Keeping the future import and stringified annotations around is certainly an option, but we’re worried about the cost of the implementation, the support cost, and the confusion for users (specifically, it is a future import that will never become the future). If we do keep them, how long would we keep them around? Should we warn about their use? If we warn about the future import, is the noise and confusion this generates going to be worth it? If we don't warn about them, how will we ever be able to turn them off?
One thing we’re thinking of specifically for the future import, and for other deprecations in Python, is to revisit the deprecation and warning policy. We think it’s pretty clear that the policy we have right now doesn’t exactly work. We used to have noisy DeprecationWarnings, which were confusing to end users when they were not in direct control of the code. We now have silent-by-default DeprecationWarnings, where the expectation is that test frameworks surface these warnings. This avoids the problem of end users being confused, but leaves the problem of the code’s dependencies triggering the warning, and thus still warns users (developers) not necessarily in a position to fix the problem, which in turn leads to them silencing the warning and moving on. We need a better way to reach the users in a position to update the code.
One idea is to rely on linters and IDEs to provide this signal, possibly with a clear upgrade path for the code (e.g. a 2to3-like fixer for a specific deprecation). Support for deprecations happened to be brought up on the typing-sig mailing list not too long ago, as an addition to the pytype type checker and hopefully others (full disclosure, Yilei is a team-mate of Thomas’s at Google).
This sounds like a reasonably user-friendly approach, but it would require buy-in from linter/IDE developers, or an officially supported “Python linter” project that we control. There’s also the question of support timelines: most tooling supports a wider range of Python versions than just the two years that we use in our deprecation policy. Perhaps we need to revisit the policy, and consider deprecation timelines based on how many Python versions library developers usually want to support.
The SC continues to discuss the following open questions, and we welcome your input on them:
Is it indeed safe to assume PEP 649 satisfies all reasonable uses of PEP 563? Are there cases of type annotations for static checking or runtime use that PEP 563 enables, which would break with PEP 649?
Is it safe to assume very little code would be poking directly at __annotations__ attributes of function objects; effectively, to declare them implementation details and let them not be strings even in code that currently has the annotations future import?
Is the performance of PEP 649 and PEP 563 similar enough that we can outright discount it as a concern? Does anyone actually care about the overhead of type annotations anymore? Are there other options to alleviate this potential issue (like a process-wide switch to turn off annotations)?
If we do not need to keep PEP 563 support, which would be a lot easier on code maintenance and our support matrix, do we need to warn about the semantics change? Can we silently accept (and ignore) the future import once PEP 649 is in effect?
If we do need a warning, how loud, and how long should it be around? At the end of the deprecation period, should the future import be an error, or simply be ignored?
Are there other options we haven’t thought of for dealing with deprecations like this one?
Like I said, the SC isn’t done deliberating on any of this. The only decisions we’ve made so far are that we don’t see the typing language as separate from Python (and thus won’t be blanket delegating typing PEPs to a separate authority), and we don’t see type annotations as purely for static analysis use.
For the whole SC,
Thomas.
> If PEP 649 satisfies all of the objectives of PEP 563, is there a reason to keep supporting PEP 563’s stringified annotations? Are there any practical, real uses of stringified annotations that would not be served by PEP 649's deferred annotations?
These are my reasons for using from __future__ import annotations:
- Sebastian
Personally, I think that stringified annotations should be deprecated and eventually removed. This opens the design space to use quotes for different purposes, for example getting rid of the cumbersome need to use Literal for literals.
> If we do need a warning, how loud, and how long should it be around? At the end of the deprecation period, should the future import be an error, or simply be ignored?
The future import should remain active and should continue to work as it does now (without warning) as long as there are supported Python versions that have not implemented PEP 649, i.e. Python 3.10 and earlier. Otherwise, existing code that supports these older versions (especially libraries) would have to regress its typing support. For example, a library supporting 3.8+ could have the following code:
from __future__ import annotations
def foo(x: int | str): pass
If newer versions stop supporting the future import or warn about it, the library would have to go back to using typing.Union here. Other constructs would even be impossible to use.
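A sketch of the regression being described: without the future import, a library targeting 3.8+ would have to fall back to typing constructs that exist as runtime objects on the oldest supported version.

```python
# Fallback for 3.8+ without "from __future__ import annotations":
# the | syntax for unions isn't evaluatable before 3.10, so the
# library must spell it with typing.Union instead.
from typing import Union

def foo(x: Union[int, str]):
    pass
```

Constructs with no runtime equivalent on the old versions would have no such fallback, which is the "impossible to use" case mentioned above.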
After that, I would recommend to start the normal "warn"/"remove"
cycle for the future import. Don't keep it around if it's doing
nothing.
> Is the performance of PEP 649 and PEP 563 similar enough that we can outright discount it as a concern? Does anyone actually care about the overhead of type annotations anymore? Are there other options to alleviate this potential issue (like a process-wide switch to turn off annotations)?
PEP 649 was about the same as the current performance, but PEP 563 was significantly faster, since it doesn’t instantiate or deal with objects at all, which both the current default and PEP 649 do.
I don't understand what you're saying about how PEP 563 both does and doesn't instantiate objects.
PEP 649, and the current implementation of PEP 563, are definitely both faster than the stock behavior when you don't examine annotations; neither approach "instantiates or deals with objects" unless you examine the annotations. PEP 649 is roughly the same as stock when you do examine annotations. PEP 563 is faster if you only ever examine the annotations as strings, but becomes enormously slower if you examine the annotations as actual Python values.
The way I remember it, most of the negative feedback about PEP
649's performance concerned its memory consumption. I've
partially addressed that by always lazy-creating the function
object. But, again, I suggest that performance is a distraction
at this stage. The important thing is to figure out what
semantics we want for the language. We have so many clever people
working on CPython, I'm sure this team will make whatever
semantics we choose lean and performant.
/arry
Sorry for the naive question but why doesn't "TYPE_CHECKING" work under PEP 649?
I think I've seen others mention this, but since the code object isn't executed until inspected, if you are just using annotations for type hints it should work fine?
Yes, it works fine in that case.
Is the use case wanting to use annotations for type hints and real time inspection but you also don't want to import the objects at run time?
If that's really such a strong use case, couldn't PEP 649 be modified to return a repr of the code object when it gets a NameError? Either by attaching it to the NameError exception or as part of a ForwardRef style object, if that's how PEP 649 ends up getting implemented?
That's the use case.
Your proposal is one of several suggesting that type annotations are special enough to break the rules. I don't like this idea. But you'll be pleased to know there are a lot of folks in the "suppress the NameError" faction, including Guido (IIUC).
See also this PR against co_annotations, proposing returning a new AnnotationName object when evaluating the annotations raises a NameError.
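The use case under discussion can be sketched as follows (a hypothetical example; `decimal.Decimal` stands in for a module you only want to import for static analysis):

```python
# PEP 563 semantics: annotations become strings, so the function can
# be defined even though Decimal is never imported at runtime.
from __future__ import annotations
from typing import TYPE_CHECKING, get_type_hints

if TYPE_CHECKING:
    from decimal import Decimal  # only seen by static checkers

def price(p: Decimal) -> Decimal:  # defining this is fine...
    return p

# ...but resolving the annotations at runtime raises NameError,
# because Decimal doesn't exist in the module namespace:
try:
    get_type_hints(price)
except NameError as exc:
    print("unresolvable annotation:", exc)
```

This is exactly the point of contention: whether that NameError should propagate, or be suppressed in favor of some substitute value.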
Cheers,
/arry
On Thu, Oct 21, 2021 at 10:44 AM Damian Shaw
<damian.p...@gmail.com> wrote:
> Sorry for the naive question but why doesn't "TYPE_CHECKING" work under PEP 649?
>
> I think I've seen others mention this but as the code object isn't executed until inspected then if you are just using annotations for type hints it should work fine?
>
> Is the use case wanting to use annotations for type hints and real time inspection but you also don't want to import the objects at run time?
Yes, you're right. And I don't think PEP 649 and PEP 563 are really
all that different in this regard: if you have an annotation using a
non-imported name, you'll be fine as long as you don't introspect it
at runtime. If you do, you'll get a NameError. And with either PEP you
can work around this if you need to by ensuring you do the imports
first if you're going to need the runtime introspection of the
annotations.
The difference is that PEP 563 makes it easy to introspect the
annotation _as a string_ without triggering NameError, and PEP 649
takes that away, but I haven't seen anyone describe a really
compelling use case for that.
> Your proposal is one of several suggesting that type annotations are special enough to break the rules. I don't like this idea. But you'll be pleased to know there are a lot of folks in the "suppress the NameError" faction, including Guido (IIUC).
On 20 Oct 2021, at 15:18, Thomas Wouters <tho...@python.org> wrote: (For visibility, posted both to python-dev and Discourse.)
It's certainly not my goal to be misleading. Here's my
perspective.
In Python, if you evaluate an undefined name, Python raises a NameError. This is so consistent I'm willing to call it a "rule". Various folks have proposed an exception to this "rule": evaluating an undefined name in a PEP 649 delayed annotation wouldn't raise NameError, instead evaluating to some yet-to-be-determined value (ForwardRef, AnnotationName, etc). I don't think annotations are special enough to "break the rules" in this way.
Certainly this has the potential to be irritating for code using annotations at runtime, e.g. Pydantic. Instead of catching the exception, it'd have to check for this substitute value. I'm not sure if the idea is to substitute for the entire annotation, or for just the value that raised NameError; if the latter, Pydantic et al would have to iterate over every value in an annotation to look for this special value.
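The difference for a runtime consumer can be sketched like this (a hypothetical example; `ForwardRef` stands in for whatever sentinel the "suppress the NameError" proposals would substitute):

```python
from typing import ForwardRef, get_type_hints

def resolve_or_skip(func):
    # "Raise" behavior: an undefined name is a single except block.
    try:
        return get_type_hints(func)
    except NameError:
        return None

def find_unresolved(hints):
    # "Substitute" behavior: the consumer must walk every annotation
    # looking for the sentinel value.
    return [k for k, v in hints.items() if isinstance(v, ForwardRef)]

def f(x: "Missing"):  # 'Missing' is never defined
    pass

print(resolve_or_skip(f))  # the NameError was caught
```

Under the substitute behavior, `resolve_or_skip` would silently succeed and every consumer would need something like `find_unresolved` instead.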
As a consumer of annotations at runtime, I'd definitely prefer
that they raise NameError rather than silently substitute in this
alternative value.
/arry
Your description of the four behaviors is basically correct.
So if I have understood the options correctly, I like the idea of a hybrid descriptor + stringy annotations solution:
- defer evaluation of the annotations using descriptors (PEP 649);
- on runtime evaluation, if a name does not resolve, stringify it (as PEP 563 would have done implicitly);
- anyone who really wants to force a NameError can eval the string.
You might also be interested in my "Great Compromise" proposal from back in April:
https://mail.python.org/archives/list/pytho...@python.org/thread/WUZGTGE43T7XV3EUGT6AN2N52OD3U7AE/
Naturally I'd prefer PEP 649 as written. The "compromise" I described would have the same scoping limitations as stringized annotations, one area where PEP 649 is a definite improvement.
Cheers,
/arry
Runtime type checkers already have to deal with forward refs that are
strings, as this is legal, and always will be:
def function(arg: 'Spam') -> Any: ...
so we're not putting any extra burden on them. And we had already
agreed to implicitly use strings for annotations.
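The forward reference in question can be sketched in full (a minimal example; the return annotation is changed to None to keep it self-contained):

```python
# 'Spam' is written as a string before the class exists; runtime
# checkers resolve it with typing.get_type_hints() once Spam has
# been defined.
from typing import get_type_hints

def function(arg: 'Spam') -> None: ...

class Spam: ...

print(get_type_hints(function)['arg'])  # resolves to the Spam class
```

This is the resolution step every runtime type checker already performs today, with or without the future import.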
Any other runtime annotation tool has to support strings, otherwise the
"from __future__ import annotations" directive will have already broken
it.
If the tool does type-checking, then it should support stringified
annotations.
They have been a standard part of type-hinting since 2014 and Python 3.5: https://www.python.org/dev/peps/pep-0484/#forward-references Any type-checking tool which does not already support stringified references right now is broken.
It's a debatable point since "from future" behavior is always
off by default. I'd certainly agree that libraries should
support stringized annotations by now, considering they were
nearly on by default in 3.10. But I wouldn't say stringized
annotations were a "standard" part of Python, yet. As yet they
are optional. Optional things aren't standard, and standard
things aren't optional.
/arry
I expect that people were using strings for forward references before
PEP 484, but it was 484 that made it official.
* Functions having the same signature share the same annotation tuple.
Is this true for code that has a mutable default? [... examples deleted...]
You're confusing two disjoint concepts.
First of all, all your examples experiment with default values
which are unrelated to their annotations. None of your examples
use or examine annotations.
Second, Inada-san was referring to the tuple of strings used to
initialize the annotations for a function when PEP 563 (stringized
annotations) is active. This is a clever implementation tweak
that first shipped with Python 3.10, which makes stringized
annotations very efficient. Since all the names and annotations
are strings, rather than creating the dictionary at function
binding time, they're stored in a tuple, and the dictionary is
created on demand. This tuple is a constant object, and
marshalling a module automatically collapses duplicate constants
into the same constant. So identical PEP 563 annotation tuples
are collapsed into the same tuple. Very nice!
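As a minimal sketch of the stringized annotations being described (the lazy dict creation and tuple sharing are internal CPython details, not directly observable from Python code):

```python
# With the future import active, both functions carry the same
# stringified annotations; CPython 3.10 builds each __annotations__
# dict on demand from a shared constant tuple.
from __future__ import annotations

def f(x: int, y: str) -> bool: ...
def g(x: int, y: str) -> bool: ...

print(f.__annotations__)  # string values, identical for f and g
```

Only the string contents are observable here; the sharing of the underlying tuple is what makes the representation cheap.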
/arry