[Python-Dev] Please reject or postpone PEP 526


Mark Shannon

Sep 2, 2016, 9:51:45 AM
to Python Dev
Hi everyone,

I think we should reject, or at least postpone PEP 526.

PEP 526 represents a major change to the language; however, there are, I
believe, a number of technical flaws in the PEP.

It is probable that with significant revisions it can be a worthwhile
addition to the language, but that cannot happen in time for 3.6 beta 1
(in 11 days).

PEP 526 claims to be an extension of PEP 484, but I don't think that is
entirely correct.
PEP 484 was primarily about defining the semantics of pre-existing
syntax. PEP 526 is about adding new syntax.
Historically the bar for adding new syntax has been set very high. I
don't think that PEP 526, in its current form, reaches that bar.

Below is a list of the issues I have with the PEP as it stands.

In many cases it makes it more effort than type comments
========================================================

Type hints should be as easy to use as possible, and that means pushing
as much work as possible onto the checker, and not burdening the programmer.

Attaching type hints to variables, rather than expressions, reduces the
potential for inference. This makes it harder for the programmer, but easier
for the checker, which is the wrong way around.

For example, given a function:

def spam(x: Optional[List[int]])->None: ...

With type comments, this is intuitively correct and should type check:

def eggs(cond:bool):
    if cond:
        x = None
    else:
        x = []  # type: List[int]
    spam(x)  # Here we can infer the type of x

With PEP 526 we lose the ability to infer types.

def eggs(cond:bool):
    if cond:
        x = None  # Not legal due to type declaration below
    else:
        x: List[int] = []
    spam(x)

So we need to use a more complex type:

def eggs(cond:bool):
    x: Optional[List[int]]
    if cond:
        x = None  # Now legal
    else:
        x = []
    spam(x)

I don't think this improves readability.
Whether this is an acceptable change is debatable, but it does need some
debate.

It limits the use of variables
==============================

In Python a name (variable) is just a binding that refers to an object.
A name only exists in a meaningful sense once an object has been
assigned to it. Any attempt to use that name, without an object bound to
it, will result in a NameError.

PEP 526 makes variables more than just bindings, as any rebinding must
conform to the given type. This loses us some of the dynamism for which
we all love Python.

Quoting from the PEP:
```
a: int
a: str # Static type checker will warn about this.
```
In other words, it is illegal for a checker to split up the variable,
even though it is straightforward to do so.

However, without the type declarations,
```
a = 1
a = "Hi"
```
is just fine. Useless, but fine.

We should be free to add extra variables, whenever we choose, for
clarity. For example,
total = foo() - bar()
should not be treated differently from:
revenue = foo()
tax = bar()
total = revenue - tax

If types are inferred, there is no problem.
However, if they must be declared, then the use of meaningfully named
variables is discouraged.

[A note about type-inference:
Type inference is not a universal panacea, but it can make life a lot
easier for programmers in statically typed languages.
Languages like C# use local type inference extensively and it means that
many variables often do not need their type declared. We should take
care not to limit the ability of checkers to infer values and types and
make programmers' lives easier.
Within a function, type inference is near perfect, failing only
occasionally for some generic types.
One place where type inference definitely breaks down is across calls,
which is why PEP 484 is necessary.
]
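
A minimal sketch of that last point (the function names here are made up
purely for illustration):

def scale(values, factor):
    # With no annotation, a checker cannot know what this returns
    # without whole-program analysis.
    return [v * factor for v in values]

def shift(values, offset):
    shifted = [v + offset for v in values]  # inferred locally, no help needed
    result = scale(shifted, 2)              # opaque across the call boundary
    return result

Annotating scale() in PEP 484 style restores that information at every
call site.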

It is premature
===============

There are still plenty of issues to iron out w.r.t. PEP 484 types. I
don't think we should be adding more syntax, until we have a *precise*
idea of what is required.

PEP 484 states:
"If type hinting proves useful in general, a syntax for typing variables
may be provided in a future Python version."
Has it proved useful in general? I don't think it has. Maybe it will in
future, but it hasn't yet.

It seems confused about class attributes and instance attributes
================================================================

The PEP also includes a section on how to define class attributes and
instance attributes. It seems that everything needs to be defined in the
class scope, even if it is not an attribute of the class, but of its
instances. This seems confusing, both to the human reader and the machine analyser.

Example from PEP 526:

class Starship:

    captain: str = 'Picard'
    damage: int
    stats: ClassVar[Dict[str, int]] = {}

    def __init__(self, damage: int, captain: str = None):
        self.damage = damage
        if captain:
            self.captain = captain  # Else keep the default

With type hints as they currently exist, the same code is shorter and
doesn't contaminate the class namespace with the 'damage' attribute.

class Starship:

    captain = 'Picard'
    stats = {}  # type: Dict[str, int]

    def __init__(self, damage: int, captain: str = None):
        self.damage = damage  # Can infer type as int
        if captain:
            self.captain = captain  # Can infer type as str


This isn't an argument against adding type syntax for attributes in
general, just that the form suggested in PEP 526 doesn't seem to follow
Python semantics.

One could imagine applying minimal PEP 526 style hints, with standard
Python semantics and relying on type inference, as follows:

class Starship:

    captain = 'Picard'
    stats: Dict[str, int] = {}

    def __init__(self, damage: int, captain: str = None):
        self.damage = damage
        if captain:
            self.captain = captain

The PEP overstates the existing use of static typing in Python
==============================================================

Finally, in the rejected proposal section, under "Should we introduce
variable annotations at all?" it states that "Variable annotations have
already been around for almost two years in the form of type comments,
sanctioned by PEP 484."
I don't think that this is entirely true.
PEP 484 was about the syntax for types, declaring parameter and return
types, and declaring custom types to be generic.
PEP 484 does include a description of type comments, but they are always
annotations on assignment statements and were primarily intended for use
in stub files.



Please don't turn Python into some sort of inferior Java.
There is potential in this PEP, but in its current form I think it
should be rejected.

Cheers,
Mark.



Ryan Gonzalez

Sep 2, 2016, 11:05:39 AM
to Mark Shannon, Python Dev

But isn't that the same way with type comments? Except uglier?

I think we're fine; there aren't any `AbstractAccountManagerInterfacePageSQLDatabaseConnPipeOrientedNewVersionProtocol`s.

> There is potential in this PEP, but in its current form I think it should be rejected.
>
> Cheers,
> Mark.
>
>

--
Ryan
[ERROR]: Your autotools build scripts are 200 lines longer than your program. Something’s wrong.
http://kirbyfan64.github.io/

Guido van Rossum

Sep 2, 2016, 12:03:53 PM
to Mark Shannon, Python-Dev

It looks like you're misinterpreting the intent of the PEP. It does not mean to legislate the behavior of the type checker in this way. In mypy, the first example is already rejected because it wants the annotation on the first occurrence. The plan is for mypy not to change its behavior -- the old form

TARGET = VALUE # type: TYPE

will be treated the same way as the new form

TARGET: TYPE = VALUE

(If you have a beef with what this means in mypy you should probably take it up with mypy, not with PEP 526.)

> It limits the use of variables
> ==============================
>
> In Python a name (variable) is just a binding that refers to an object.
> A name only exists in a meaningful sense once an object has been assigned to
> it. Any attempt to use that name, without an object bound to it, will result
> in a NameError.

(Or UnboundLocalError, if the compiler knows there is an assignment to the name anywhere in the same (function) scope.)
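
A minimal illustration of that distinction (the names are made up):

def local_case():
    print(x)  # UnboundLocalError: x is assigned later in this scope,
              # so the compiler treats it as a local name
    x = 1

def global_case():
    print(y)  # NameError: y is not bound anywhere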

> PEP 526 makes variables more than just bindings, as any rebinding must
> conform to the given type. This loses us some of the dynamism for which we
> all love Python.

Thanks for catching this; that's not the intent.

> Quoting from the PEP:
> ```
> a: int
> a: str # Static type checker will warn about this.
> ```
> In other words, it is illegal for a checker to split up the variable, even
> though it is straightforward to do so.

One of my co-authors has gone too far here. The intent is not to legislate what should happen in this case but to leave it to the checker. In mypy, the equivalent syntax using type comments is currently indeed rejected, but we're considering a change here (https://github.com/python/mypy/issues/1174). The PEP 526 syntax will not make a difference here.

> However, without the type declarations,
> ```
> a = 1
> a = "Hi"
> ```
> is just fine. Useless, but fine.

And believe me, I want to keep it this way. I will amend the example and clarify the intent in the text.

> We should be free to add extra variables, whenever we choose, for clarity.
> For example,
> total = foo() - bar()
> should not be treated differently from:
> revenue = foo()
> tax = bar()
> total = revenue - tax
>
> If types are inferred, there is no problem.
> However, if they must be declared, then the use of meaningfully named
> variables is discouraged.

There is no mandate to declare variables! I actually see the main use of variable annotations in class bodies where they can create a lot of clarity around which instance variables exist and what they mean.

> [A note about type-inference:
> Type inference is not a universal panacea, but it can make life a lot easier
> for programmers in statically typed languages.
> Languages like C# use local type inference extensively and it means that
> many variables often do not need their type declared. We should take care
> not to limit the ability of checkers to infer values and types and make
> programmers' lives easier.
> Within a function, type inference is near perfect, failing only occasionally
> for some generic types.
> One place where type inference definitely breaks down is across calls, which
> is why PEP 484 is necessary.
> ]

Totally agreed. But type annotations are not *just* for the checker. I am regularly delighted when I find function annotations in code that I have to read for the first time, because it helps my understanding. Many others at Dropbox (where we have been doing a fairly large-scale experiment with the introduction of mypy) agree.

> It is premature
> ===============
>
> There are still plenty of issues to iron out w.r.t. PEP 484 types. I don't
> think we should be adding more syntax, until we have a *precise* idea of
> what is required.
>
> PEP 484 states:
> "If type hinting proves useful in general, a syntax for typing variables may
> be provided in a future Python version."
> Has it proved useful in general? I don't think it has. Maybe it will in
> future, but it hasn't yet.

PEP 526 does not alter this situation. It doesn't define new types, only new places where types can be used syntactically, and it is careful to give them the same syntactic description as PEP 484 (it's an expression). Practical use of mypy has shown that we use `# type` comments on variables with some regularity and not having them in the AST has been a problem for tools. For example, we have had to teach flake8 and pylint about type comments (so that we could continue to benefit from their "unused import" and "undefined variable" tests), and in both cases it was a gross hack.
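
A minimal sketch of the kind of false positive meant here, assuming a
linter that does not parse comments:

from typing import List  # a naive linter reports this import as unused...

def totals(rows):
    acc = []  # type: List[int]
    # ...because the only reference to List is inside a comment, which
    # never shows up in the AST the linter walks.
    for row in rows:
        acc.append(sum(row))
    return acc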

> It seems confused about class attributes and instance attributes
> ================================================================
>
> The PEP also includes a section on how to define class attributes and
> instance attributes. It seems that everything needs to be defined in the
> class scope, even if it is not an attribute of the class, but of its instances.
> This seems confusing, both to the human reader and the machine analyser.

And yet of course most other OO languages, like C++ and Java, also let you define instance and class variables in the class body (and again with a default of "instance", IIRC you have to use "static" in both languages for class variables).

> Example from PEP 526:
>
> class Starship:
>
>     captain: str = 'Picard'
>     damage: int
>     stats: ClassVar[Dict[str, int]] = {}
>
>     def __init__(self, damage: int, captain: str = None):
>         self.damage = damage
>         if captain:
>             self.captain = captain # Else keep the default
>
> With type hints as they currently exist, the same code is shorter and
> doesn't contaminate the class namespace with the 'damage' attribute.
>
> class Starship:
>
>     captain = 'Picard'
>     stats = {} # type: Dict[str, int]
>
>     def __init__(self, damage: int, captain: str = None):
>         self.damage = damage # Can infer type as int
>         if captain:
>             self.captain = captain # Can infer type as str

It is also harder for the human reader to discover that there's a `damage` instance attribute. (Many classes I've reviewed at Dropbox have pages and pages of __init__ code, defining dozens of instance variables, sometimes using init-helper methods.)

For example, if you look at the code for asyncio's Future class you'll see this block at the class level:
```
class Future:
    """..."""

    # Class variables serving as defaults for instance variables.
    _state = _PENDING
    _result = None
    _exception = None
    _loop = None
    _source_traceback = None

    _blocking = False  # proper use of future (yield vs yield from)

    _log_traceback = False  # Used for Python 3.4 and later
    _tb_logger = None  # Used for Python 3.3 only

    def __init__(self, *, loop=None):
        ...

```
The terminology here is actually somewhat confused, but these are all default values for instance variables. Because the defaults here are all immutable, the assignments are put here instead of in __init__ to save a little space in the instance dict and to make __init__ shorter (it only has to set those instance variables that have mutable values).

There's also a highly technical reason for preferring that some instance variables are given a default value in the class -- that way if an exception happens in __init__ and there is recovery code that tries to use some instance variable that __init__ hadn't initialized yet (e.g. in an attempt to log the object) it avoids AttributeErrors for those variables that have defaults set on the class. This has happened to me often enough that it is now a standard idiom in my head.
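
A minimal sketch of that idiom (the class and the failing call are made up
for illustration):

import socket

class Connection:
    # Class-level default: present even if __init__ never gets that far.
    _sock = None

    def __init__(self, host):
        self.host = host
        self._sock = socket.create_connection((host, 80))  # may raise

    def __repr__(self):
        # Recovery or logging code can repr() a half-initialized instance:
        # if __init__ raised before assigning self._sock, the lookup falls
        # back to the class-level default instead of an AttributeError.
        return '<Connection %r sock=%r>' % (self.host, self._sock)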

> This isn't an argument against adding type syntax for attributes in general,
> just that the form suggested in PEP 526 doesn't seem to follow Python
> semantics.

I'm happy to weaken the semantics as mandated in the PEP. In fact I had thought it already doesn't mandate any semantics (apart from storing certain forms of annotations in __annotations__, to match PEP 3107 and PEP 484), although I agree some examples have crept in that may appear more normative than we meant them.

> One could imagine applying minimal PEP 526 style hints, with standard Python
> semantics and relying on type inference, as follows:
>
> class Starship:
>
> captain = 'Picard'
> stats: Dict[str, int] = {}
>
> def __init__(self, damage: int, captain: str = None):
> self.damage = damage
> if captain:
> self.captain = captain
>
> The PEP overstates the existing use of static typing in Python
> ==============================================================
>
> Finally, in the rejected proposal section, under "Should we introduce
> variable annotations at all?" it states that "Variable annotations have
> already been around for almost two years in the form of type comments,
> sanctioned by PEP 484."
> I don't think that this is entirely true.
> PEP 484 was about the syntax for types, declaring parameter and return
> types, and declaring custom types to be generic.
> PEP 484 does include a description of type comments, but they are always
> annotations on assignment statements and were primarily intended for use in
> stub files.

That is a mis-characterization of the intent of type comments in PEP 484; they are not primarily meant for stubs (the only thing I find tying the two together is the use of "..." as the initial value in stubs).

> Please don't turn Python into some sort of inferior Java.
> There is potential in this PEP, but in its current form I think it should be
> rejected.

Thanks for your feedback. We will be sure not to turn Python into Java! But I am unconvinced that your objections are reason to reject the PEP -- you seem to be fine with the general *syntax* proposed; your concerns are about the specific rules to be used by a type checker. I expect we'll be arguing about those for years to come -- maybe one day a PEP will come along that ties the semantics of types down, but PEP 526 is not it.

--
--Guido van Rossum (python.org/~guido)

Steven D'Aprano

Sep 2, 2016, 12:42:31 PM
to pytho...@python.org
Hi Mark,

I'm going to trim your post drastically, down to the bare essentials, in
order to keep this already long post down to a manageable size.


On Fri, Sep 02, 2016 at 02:47:00PM +0100, Mark Shannon wrote:

[...]
> With type comments, this is intuitively correct and should type check:
> def eggs(cond:bool):
>     if cond:
>         x = None
>     else:
>         x = [] # type: List[int]
>     spam(x) # Here we can infer the type of x

It isn't correct. You've declared something to be a list of ints, but
assigned a value None to it! How is that not an error?

The error is more obvious if you swap the order of assignments:

if cond:
    x = []  # type: List[int]
else:
    x = None


MyPy currently requires the experimental --strict-optional flag to
detect this error:

[steve@ando ~]$ mypy --strict-optional test.py
test.py: note: In function "eggs":
test.py:10: error: Incompatible types in assignment (expression has type
None, variable has type List[int])


Changing that from comment syntax to (proposed) Python syntax will not
change that. There is no semantic difference to the type checker
between

x = [] #type: List[int]

and

x: List[int] = []

and any type checker will have to treat them identically.


> With PEP 526 we lose the ability to infer types.

On re-reading the PEP, I have just noticed that nowhere does it
explicitly state that checkers are still expected to perform type
inference. However, this is the companion to PEP 484, which states:

Type comments

No first-class syntax support for explicitly marking variables
as being of a specific type is added by this PEP. TO HELP WITH
TYPE INFERENCE IN COMPLEX CASES, a comment of the following
format may be used: [...]

(Emphasis added.) So regardless of whether you write a type annotation as
a comment or with the proposed syntax, it is intended to *help* the
checker perform inference, not to replace it.

Perhaps this PEP should include an explicit note to that end.



[...]
> So we need to use a more complex type
> def eggs(cond:bool):
>     x: Optional[List[int]]
>     if cond:
>         x = None # Now legal
>     else:
>         x = []
>     spam(x)
>
> I don't think this improves readability.

Maybe not, but it sure improves *correctness*.

A human reader might be able to logically deduce that x = None and
x = [] are both valid, given that spam() takes either a list or None,
but I'm not sure if that level of intelligence is currently possible in
type inference. (Maybe it is, and MyPy simply doesn't support it yet.)
So it may be that this is a case where you do have to apply an explicit
type hint, using *either* a type comment or this new proposed syntax:

x: Optional[List[int]]
if cond:
    x = None
else:
    x = []

should be precisely the same as:

if cond:
    x = None  # type: Optional[List[int]]
else:
    x = []


> Quoting from the PEP:
> ```
> a: int
> a: str # Static type checker will warn about this.
> ```
> In other words, it is illegal for a checker to split up the variable,
> even though it is straightforward to do so.

No, it is a *warning*, not an error.

Remember, the Python interpreter itself won't care. The type checker is
optional and not part of the interpreter. You can still write code like:

a = 1
do_something(a)
a = "string"
do_another(a)

and Python will be happy. But if you do run a type checker, it should
warn you that you're changing types, as that suggests the possibility of
a type error. (And it also goes against a lot of people's style
guidelines.)


> We should be free to add extra variables, whenever we choose, for
> clarity. For example,
> total = foo() - bar()
> should not be treated differently from:
> revenue = foo()
> tax = bar()
> total = revenue - tax
>
> If types are inferred, there is no problem.
> However, if they must be declared, then the use of meaningfully named
> variables is discouraged.

Was there something in the PEP that led you to believe that they "must"
be declared? Under "Non-goals", the PEP states in bold text:

"the authors have no desire to ever make type hints mandatory"

so I'm not sure why you think that types must be declared.

Perhaps the PEP should make it more obvious that type hints on variables
are *in addition to* and not a substitute for type inference.


> PEP 484 states:
> "If type hinting proves useful in general, a syntax for typing variables
> may be provided in a future Python version."
> Has it proved useful in general? I don't think it has.

According to the PEP, it has proved useful in typeshed.


> It seems confused about class attributes and instance attributes
> ================================================================
>
> The PEP also includes a section on how to define class attributes and
> instance attributes. It seems that everything needs to be defined in the
> class scope, even if it is not an attribute of the class, but of its
> instances.

Quoting from the PEP:

"As a matter of convenience, instance attributes can be
annotated in __init__ or other methods, rather than in class"


Perhaps the PEP could do with a little more justification for why
we would want to declare instance attributes in the class rather than in
__init__.



> Example from PEP 526:
>
> class Starship:
>
>     captain: str = 'Picard'
>     damage: int
>     stats: ClassVar[Dict[str, int]] = {}
>
>     def __init__(self, damage: int, captain: str = None):
>         self.damage = damage
>         if captain:
>             self.captain = captain # Else keep the default

On re-reading this, I too wonder why damage is being declared in the
class body. Can the type checker not infer that self.damage has the same
type as damage in the __init__ method?



> Finally, in the rejected proposal section, under "Should we introduce
> variable annotations at all?" it states that "Variable annotations have
> already been around for almost two years in the form of type comments,
> sanctioned by PEP 484."
> I don't think that this is entirely true.

PEP 484 itself was created almost two years ago (Sept 2014) and although
it doesn't list prior art for type comments, I seem to recall that it
copied the idea from MyPy. I expect that MyPy (and maybe even linters
like PyLint, PyFlakes, etc.) have been using type comments for "almost
two years", if not longer.



> PEP 484 was about the syntax for types, declaring parameter and return
> types, and declaring custom types to be generic.
> PEP 484 does include a description of type comments, but they are always
> annotations on assignment statements and were primarily intended for use
> in stub files.

I'm not seeing what distinction you think you are making here. What
distinction do you see between:

x: int = func(value)

and

x = func(value) #type: int





--
Steve

Sven R. Kunze

Sep 2, 2016, 12:48:28 PM
to pytho...@python.org
Hi Mark,

I agree with you about postponing.

Not so much because of the issues you mentioned. Those all seem
resolvable to me and mostly concern type checkers, linters and coding
styles, not Python itself. However, I also don't like the rushing through
as if this beta were the only chance to get it into Python.

Python has gone without variable annotations for decades, and Python
will still be there in 10 decades or so. Thus, 2 more years to wait and
to hammer out the details does not seem much compared to the entire
lifetime of this language.

The PEP also remains silent about when to use annotations. For recent
additions like f-strings or async, it's completely clear when to use
them. However, for variable annotations it's not (all variables? the most
used variables? the least used variables?). So I also agree with you that
improving type checkers is a better way forward than adding static type
annotations all over the place. Python is dynamic, and types should
be as dynamic and redundancy-free as possible. Thus, some guidance would
be great here.

Cheers,
Sven

Koos Zevenhoven

Sep 2, 2016, 1:12:21 PM
to Mark Shannon, Python Dev
I agree about some concerns and disagree about several. I see most use
for class/instance attribute annotations, both for type checking and
for other uses. I'm least sure about their syntax and about
annotations in functions or at module level in the proposed form.

Below some comments:

On Fri, Sep 2, 2016 at 4:47 PM, Mark Shannon <ma...@hotpy.org> wrote:
> Hi everyone,
>
> I think we should reject, or at least postpone PEP 526.
>
> PEP 526 represents a major change to the language, however there are, I
> believe, a number of technical flaws with the PEP.

[...]
>
> In many cases it makes it more effort than type comments
> ========================================================
>
> Type hints should be as easy to use as possible, and that means pushing as
> much work as possible onto the checker, and not burdening the programmer.
>
> Attaching type hints to variables, rather than expressions, reduces the
> potential for inference. This makes it harder for the programmer, but easier for
> the checker, which is the wrong way around.

The more you infer types, the less you check them. It's up to the
programmers to choose the amount of annotation.

> For example, given a function:
> def spam(x: Optional[List[int]])->None: ...
>
> With type comments, this is intuitively correct and should type check:
> def eggs(cond:bool):
>     if cond:
>         x = None
>     else:
>         x = [] # type: List[int]
>     spam(x) # Here we can infer the type of x
>
> With PEP 526 we lose the ability to infer types.
> def eggs(cond:bool):
>     if cond:
>         x = None # Not legal due to type declaration below
>     else:
>         x: List[int] = []
>     spam(x)

I'm also a little worried about not being able to reannotate a name.

>
> So we need to use a more complex type:
> def eggs(cond:bool):
>     x: Optional[List[int]]
>     if cond:
>         x = None # Now legal
>     else:
>         x = []
>     spam(x)
>

A good checker should be able to infer that x is a union type at the
point that it's passed to spam, even without the type annotation. For
example:

def eggs(cond:bool):
    if cond:
        x = 1
    else:
        x = 1.5
    spam(x)  # a good type checker infers that x is of type Union[int, float]

Or with annotations:

def eggs(cond:bool):
    if cond:
        x: int = foo()    # foo may not have a return type hint
    else:
        x: float = bar()  # bar may not have a return type hint
    spam(x)  # a good type checker infers that x is of type Union[int, float]


[...]
> It limits the use of variables
> ==============================
>
> In Python a name (variable) is just a binding that refers to an object.
> A name only exists in a meaningful sense once an object has been assigned to
> it. Any attempt to use that name, without an object bound to it, will result
> in a NameError.
>

IIUC, that would still be the case after PEP 526.

[...]
>
> We should be free to add extra variables, whenever we choose, for clarity.
> For example,
> total = foo() - bar()
> should not be treated differently from:
> revenue = foo()
> tax = bar()
> total = revenue - tax
>
> If types are inferred, there is no problem.
> However, if they must be declared, then the use of meaningfully named
> variables is discouraged.
>

Who says they *must* be declared?

[...]
> It is premature
> ===============
>
> There are still plenty of issues to iron out w.r.t. PEP 484 types. I don't
> think we should be adding more syntax, until we have a *precise* idea of
> what is required.
>
> PEP 484 states:
> "If type hinting proves useful in general, a syntax for typing variables may
> be provided in a future Python version."
> Has it proved useful in general? I don't think it has. Maybe it will in
> future, but it hasn't yet.
>

Yeah, I hope someone has enough experience to know whether this is the
right thing for Python as a whole.

> It seems confused about class attributes and instance attributes
> ================================================================
>
> The PEP also includes a section on how to define class attributes and
> instance attributes. It seems that everything needs to be defined in the
> class scope, even if it is not an attribute of the class, but of its instances.
> This seems confusing, both to the human reader and the machine analyser.

I don't see the problem here, isn't that how it's usually done in
strongly typed languages? And methods are defined in the class scope
too (well yes, they do also exist in the class namespace, but
anyway...).

But I agree in the sense that the proposed syntax is far from explicit
about these being instance attributes by default.

> Example from PEP 526:
>
> class Starship:
>
>     captain: str = 'Picard'
>     damage: int
>     stats: ClassVar[Dict[str, int]] = {}
>
>     def __init__(self, damage: int, captain: str = None):
>         self.damage = damage
>         if captain:
>             self.captain = captain # Else keep the default
>
> With type hints as they currently exist, the same code is shorter and
> doesn't contaminate the class namespace with the 'damage' attribute.

IIUC, 'damage' will not be in the class namespace according to PEP 526.

> class Starship:
>
>     captain = 'Picard'
>     stats = {} # type: Dict[str, int]
>
>     def __init__(self, damage: int, captain: str = None):
>         self.damage = damage # Can infer type as int
>         if captain:
>             self.captain = captain # Can infer type as str
>

And that's one of the reasons why there should be annotations without
setting a type hint (as I wrote in the other thread).

>
> This isn't an argument against adding type syntax for attributes in general,
> just that the form suggested in PEP 526 doesn't seem to follow Python
> semantics.
>
> One could imagine applying minimal PEP 526 style hints, with standard Python
> semantics and relying on type inference, as follows:
>
> class Starship:
>
>     captain = 'Picard'
>     stats: Dict[str, int] = {}
>
>     def __init__(self, damage: int, captain: str = None):
>         self.damage = damage
>         if captain:
>             self.captain = captain

I don't like this, because some of the attributes are introduced at
class level and some inside __init__, so it is easy to miss that there
is such a thing as 'damage' (at least in more complicated examples). I
keep repeating myself, but again this is where we need non-type-hinting
attribute declarations.

-- Koos

>
> The PEP overstates the existing use of static typing in Python
> ==============================================================
>
[...]
> Please don't turn Python into some sort of inferior Java.
> There is potential in this PEP, but in its current form I think it should be
> rejected.
>
> Cheers,
> Mark.
>
>


--
+ Koos Zevenhoven + http://twitter.com/k7hoven +

Steve Dower

Sep 2, 2016, 1:52:13 PM
to Steven D'Aprano, pytho...@python.org
"I'm not seeing what distinction you think you are making here. What
distinction do you see between:

    x: int = func(value)

and

    x = func(value)  #type: int"

Not sure whether I agree with Mark on this particular point, but the difference I see here is that the first describes what types x may ever contain, while the latter describes what type is being assigned to x right here. So one is a variable annotation while the other is an expression annotation.

Personally, I prefer expression annotations over variable annotations, as there are many other languages I'd prefer if I wanted variables to have fixed types (e.g. C++, where I actually enjoy doing horrible things with implicit casting ;) ).

Variable annotations appear to be inherently restrictive, so either we need serious clarification as to why they are not, or they actually are and we ought to be more sure that it's the direction we want the language to go.

Cheers,
Steve

Top-posted from my Windows Phone

From: Steven D'Aprano
Sent: 9/2/2016 9:43
To: pytho...@python.org
Subject: Re: [Python-Dev] Please reject or postpone PEP 526

Ivan Levkivskyi

Sep 2, 2016, 1:54:49 PM
to Guido van Rossum, Python-Dev
On 2 September 2016 at 17:59, Guido van Rossum <gu...@python.org> wrote:
> On Fri, Sep 2, 2016 at 6:47 AM, Mark Shannon <ma...@hotpy.org> wrote:
> > Quoting from the PEP:
> > ```
> > a: int
> > a: str # Static type checker will warn about this.
> > ```
> > In other words, it is illegal for a checker to split up the variable, even
> > though it is straightforward to do so.
>
> One of my co-authors has gone too far here. The intent is not to legislate what should happen in this case but to leave it to the checker. In mypy, the equivalent syntax using type comments is currently indeed rejected, but we're considering a change here (https://github.com/python/mypy/issues/1174). The PEP 526 syntax will not make a difference here.

If I remember correctly, I added this example. At that time the intention was to propose to "loosen" the behaviour of type checkers (note it is a warning, not an error like in mypy). But now I agree with Guido that we should be even more liberal. We could leave this to the type checker to decide what to do (they could even have options like -Werror or -Wignore).

--
Ivan

Steven D'Aprano

Sep 2, 2016, 2:05:53 PM
to pytho...@python.org
On Fri, Sep 02, 2016 at 08:10:24PM +0300, Koos Zevenhoven wrote:

> A good checker should be able to infer that x is a union type at the
> point that it's passed to spam, even without the type annotation. For
> example:
>
> def eggs(cond:bool):
>     if cond:
>         x = 1
>     else:
>         x = 1.5
>     spam(x) # a good type checker infers that x is of type Union[int, float]

Oh I really hope not. I wouldn't call that a *good* type checker. I
would call that a type checker that is overly permissive.

Maybe you think that it's okay because ints and floats are somewhat
compatible. But suppose I wrote:

if cond:
    x = HTTPServer(*args)
else:
    x = 1.5

Would you want the checker to infer Union[HTTPServer, float]? I
wouldn't. I would want the checker to complain that the two branches of
the `if` result in different types for x. If I really mean it, then I
can give a type-hint.

In any case, this PEP isn't about specifying when to declare variable
types, it is for picking syntax. Do you have a better idea for variable
syntax?



--
Steve

Steven D'Aprano

Sep 2, 2016, 2:20:57 PM
to pytho...@python.org
On Fri, Sep 02, 2016 at 10:47:41AM -0700, Steve Dower wrote:
> "I'm not seeing what distinction you think you are making here. What
> distinction do you see between:
>
> x: int = func(value)
>
> and
>
> x = func(value) #type: int"
>
> Not sure whether I agree with Mark on this particular point, but the
> difference I see here is that the first describes what types x may
> ever contain, while the latter describes what type is being assigned
> to x right here. So one is a variable annotation while the other is an
> expression annotation.

Ultimately Python is a dynamically typed language, and that's not
changing. This means types are fundamentally associated with *values*,
not *variables* (names). But in practice, you can go a long way by
pretending that it is the variable that carries the type. That's the
point of the static type checker: if you see that x holds an int here,
then assume (unless told differently) that x should always be an int.
Because in practice, most exceptions to that are due to bugs, or at
least sloppy code.

Of course, it is up to the type checker to decide how strict it wants to
be, whether to treat violations as a warning or a error, whether to
offer the user a flag to set the behaviour, etc. None of this is
relevant to the PEP. The PEP only specifies the syntax, leaving
enforcement or non-enforcement to the checker, and it says:

PEP 484 introduced type hints, a.k.a. type annotations. While its
main focus was function annotations, it also introduced the notion
of type comments to annotate VARIABLES [emphasis added]

not expressions. And:

This PEP aims at adding syntax to Python for annotating the types
of variables and attributes, instead of expressing them through
comments

which to me obviously implies that the two ways (type comment, and
variable type hint) are intended to be absolutely identical in
semantics, at least as far as the type-checker is concerned.

(They will have different semantics at runtime: the comment is just a
comment, while the type hint will set an __annotations__ mapping.)

But perhaps the PEP needs to make it explicit that they are to be
treated exactly the same.

Stefan Krah

Sep 2, 2016, 2:39:24 PM
to pytho...@python.org
[Replying to Steve Dower]
On Sat, Sep 03, 2016 at 04:19:13AM +1000, Steven D'Aprano wrote:
> On Fri, Sep 02, 2016 at 10:47:41AM -0700, Steve Dower wrote:
> > "I'm not seeing what distinction you think you are making here. What
> > distinction do you see between:
> >
> > x: int = func(value)
> >
> > and
> >
> > x = func(value) #type: int"
> >
> > Not sure whether I agree with Mark on this particular point, but the
> > difference I see here is that the first describes what types x may
> > ever contain, while the latter describes what type is being assigned
> > to x right here. So one is a variable annotation while the other is an
> > expression annotation.

I see it differently, but I'm quite used to OCaml:

# let f () =
    let x : int = 10 in
    let x : float = 320.0 in
    x;;
Warning 26: unused variable x.
val f : unit -> float = <fun>
# f();;
- : float = 320.


Like in Python, in OCaml variables can be rebound and indeed have different
types with different explicit type constraints.


Expressions can also be annotated, but require parentheses (silly example):

# let x = (10 * 20 : int);;
val x : int = 200



So I'm quite happy with the proposed syntax in the PEP; perhaps the
parenthesized expression annotations could also be added. But these
are only very rarely needed.
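
For comparison, the closest thing Python currently offers to an expression
annotation is probably typing.cast(), which only the checker pays attention to:

from typing import cast

x = cast(int, 10 * 20)  # tells the checker the expression is an int;
                        # at runtime cast() simply returns its second argument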


Stefan Krah

Guido van Rossum

Sep 2, 2016, 3:35:52 PM
to Steve Dower, Python-Dev
On Fri, Sep 2, 2016 at 10:47 AM, Steve Dower <steve...@python.org> wrote:
> "I'm not seeing what distinction you think you are making here. What
> distinction do you see between:
>
> x: int = func(value)
>
> and
>
> x = func(value) # type: int"
>
> Not sure whether I agree with Mark on this particular point, but the
> difference I see here is that the first describes what types x may ever
> contain, while the latter describes what type is being assigned to x right
> here. So one is a variable annotation while the other is an expression
> annotation.

But that's not what type comments mean! They don't annotate the
expression. They annotate the variable. The text in PEP 484 that
introduces them is clear about this (it never mentions expressions,
only variables).

> Personally, I prefer expression annotations over variable annotations, as
> there are many other languages I'd prefer if I wanted variables to have fixed types (e.g.
> C++, where I actually enjoy doing horrible things with implicit casting ;)
> ).
>
> Variable annotations appear to be inherently restrictive, so either we need
> serious clarification as to why they are not, or they actually are and we
> ought to be more sure that it's the direction we want the language to go.

At runtime the variable annotations are ignored. And a type checker
will only ask for them when it cannot infer the type. So I think we'll
be fine.

--
--Guido van Rossum (python.org/~guido)

Mark Shannon

Sep 4, 2016, 6:17:26 AM
to pytho...@python.org

On 02/09/16 19:04, Steven D'Aprano wrote:
> On Fri, Sep 02, 2016 at 08:10:24PM +0300, Koos Zevenhoven wrote:
>
>> A good checker should be able to infer that x is a union type at the
>> point that it's passed to spam, even without the type annotation. For
>> example:
>>
>> def eggs(cond:bool):
>> if cond:
>> x = 1
>> else:
>> x = 1.5
>> spam(x) # a good type checker infers that x is of type Union[int, float]
>
> Oh I really hope not. I wouldn't call that a *good* type checker. I
> would call that a type checker that is overly permissive.
Why would that be overly permissive? It infers the most precise type
possible.

>
> Maybe you think that it's okay because ints and floats are somewhat
> compatible. But suppose I wrote:
>
> if cond:
>     x = HTTPServer(*args)
> else:
>     x = 1.5
>
> Would you want the checker to infer Union[HTTPServer, float]? I
> wouldn't. I would want the checker to complain that the two branches of
> the `if` result in different types for x. If I really mean it, then I
> can give a type-hint.
Yes, the checker would infer that the type of x (strictly, all uses of x
that are defined by these definitions) is Union[HTTPServer, float].

Your example is incomplete: what do you do with x?
If you pass x to a function that takes Union[HTTPServer, float] then
there is no error.
If you pass it to a function that takes a number then you get an error:
"Cannot use HTTPServer (from line 2) as Number (line ...)"
as one would expect.

When it comes to checkers, people hate false positives. Flagging correct
code as erroneous because it is bad 'style' is really unpopular.

>
> In any case, this PEP isn't about specifying when to declare variable
> types, it is for picking syntax. Do you have a better idea for variable
> syntax?

No. I think that defining the type of variables, rather than expressions,
is a bad idea.

Cheers,
Mark.

Mark Shannon

Sep 4, 2016, 6:54:21 AM
to Python-Dev
On 02/09/16 19:19, Steven D'Aprano wrote:
> On Fri, Sep 02, 2016 at 10:47:41AM -0700, Steve Dower wrote:
>> "I'm not seeing what distinction you think you are making here. What
>> distinction do you see between:
>>
>> x: int = func(value)
>>
>> and
>>
>> x = func(value) #type: int"
>>
>> Not sure whether I agree with Mark on this particular point, but the
>> difference I see here is that the first describes what types x may
>> ever contain, while the latter describes what type is being assigned
>> to x right here. So one is a variable annotation while the other is an
>> expression annotation.
>
> Ultimately Python is a dynamically typed language, and that's not
> changing. This means types are fundamentally associated with *values*,
> not *variables* (names). But in practice, you can go a long way by
> pretending that it is the variable that carries the type. That's the
> point of the static type checker: if you see that x holds an int here,
> then assume (unless told differently) that x should always be an int.
> Because in practice, most exceptions to that are due to bugs, or at
> least sloppy code.
>
> Of course, it is up to the type checker to decide how strict it wants to
> be, whether to treat violations as a warning or a error, whether to
> offer the user a flag to set the behaviour, etc. None of this is
> relevant to the PEP. The PEP only specifies the syntax, leaving
> enforcement or non-enforcement to the checker, and it says:
>
> PEP 484 introduced type hints, a.k.a. type annotations. While its
> main focus was function annotations, it also introduced the notion
> of type comments to annotate VARIABLES [emphasis added]

If I recall, Guido and I agreed to differ on that point. We still do, it
seems. We did manage to agree on the syntax though.

>
> not expressions. And:
>
> This PEP aims at adding syntax to Python for annotating the types
> of variables and attributes, instead of expressing them through
> comments
>
> which to me obviously implies that the two ways (type comment, and
> variable type hint) are intended to be absolutely identical in
> semantics, at least as far as the type-checker is concerned.

The key difference is in placement.
PEP 484 style
variable = value # annotation

Which reads to me as if the annotation refers to the value.
PEP 526
variable: annotation = value

Which reads very much as if the annotation refers to the variable.
That is a change in terms of semantics and a change for the worse, in
terms of expressibility.

Chris Angelico

Sep 4, 2016, 6:59:24 AM
to Python-Dev
On Sun, Sep 4, 2016 at 8:52 PM, Mark Shannon <ma...@hotpy.org> wrote:
> The key difference is in placement.
> PEP 484 style
> variable = value # annotation
>
> Which reads to me as if the annotation refers to the value.
> PEP 526
> variable: annotation = value
>
> Which reads very much as if the annotation refers to the variable.
> That is a change in terms of semantics and a change for the worse, in terms
> of expressibility.

So what you have is actually a change in *implication* (since the PEP
doesn't stipulate semantics); and the old way (the comment) implies
something contrary to the semantics of at least one of the type
checkers that uses it (MyPy). Are there any current type checkers that
actually do associate that with the value? That is, to have "variable
= func() # str" indicate the same type check as "def func() -> str"?
If not, this is a strong argument in favour of the PEP, since it would
synchronize the syntax with the current best-of-breed checkers.

ChrisA

Mark Shannon

Sep 4, 2016, 7:33:22 AM
to pytho...@python.org


On 02/09/16 20:33, Guido van Rossum wrote:
> On Fri, Sep 2, 2016 at 10:47 AM, Steve Dower <steve...@python.org> wrote:
>> "I'm not seeing what distinction you think you are making here. What
>> distinction do you see between:
>>
>> x: int = func(value)
>>
>> and
>>
>> x = func(value) # type: int"
>>
>> Not sure whether I agree with Mark on this particular point, but the
>> difference I see here is that the first describes what types x may ever
>> contain, while the latter describes what type is being assigned to x right
>> here. So one is a variable annotation while the other is an expression
>> annotation.
>
> But that's not what type comments mean! They don't annotate the
> expression. They annotate the variable. The text in PEP 484 that
> introduces them is clear about this (it never mentions expressions,
> only variables).

In PEP 484, the section on type comments says:
(Quoting verbatim)
"""
No first-class syntax support for explicitly marking variables as being
of a specific type is added by this PEP. To help with type inference in
complex cases, a comment of the following format may be used...
"""

Some mentions of the type of a variable are made in other places in the
PEP, but those were all added *after* I had approved the PEP.

In other words PEP 484 specifically states that annotations are to help
with type inference. As defined in PEP 526, I think that type
annotations become a hindrance to type inference.

Cheers,
Mark.

>
>> Personally, I prefer expression annotations over variable annotations, as
>> there are many other languages I'd prefer if I wanted variables to have fixed types (e.g.
>> C++, where I actually enjoy doing horrible things with implicit casting ;)
>> ).
>>
>> Variable annotations appear to be inherently restrictive, so either we need
>> serious clarification as to why they are not, or they actually are and we
>> ought to be more sure that it's the direction we want the language to go.
>
> At runtime the variable annotations are ignored. And a type checker
> will only ask for them when it cannot infer the type. So I think we'll
> be fine.
>

Ivan Levkivskyi

Sep 4, 2016, 7:36:49 AM
to Mark Shannon, Python-Dev


On 4 September 2016 at 12:52, Mark Shannon <ma...@hotpy.org> wrote:
> The key difference is in placement.
> PEP 484 style
> variable = value # annotation
>
> Which reads to me as if the annotation refers to the value.
> PEP 526
> variable: annotation = value
>
> Which reads very much as if the annotation refers to the variable.
> That is a change in terms of semantics and a change for the worse, in terms of expressibility.

I still think it is better to leave the decision to type checkers.
The proposed syntax allows two forms:

variable: annotation = value

and

variable: annotation

The first form could still be interpreted by type checkers
as an annotation on the value (a cast to a more precise type):

variable = cast(annotation, value) # visually also looks similar

and the PEP says that annotations "are intended to help with type inference in complex cases".
Such an interpretation could be useful for function local variables
(the implementation is also optimised for that use case).
The second form (without a value)
indeed looks like an annotation of the variable, and is equivalent to:

__annotations__['variable'] = annotation

This form could be useful for annotating instance/class variables.
Or just for documentation (I really like this).

In addition, expression annotations are allowed

expression: annotation [= value]

This form is not used by 3rd party tools (as far as I know) but one of the possible use cases
could be "check-points" for values:

somedict[somefunc(somevar)]: annotation # type checker could flag this if something went wrong.

Finally, I would like to reiterate that both interpretations (annotating the value vs annotating the variable)
are possible, and we (OK, at least me, but it looks like Guido also agrees)
don't want to force 3rd party tools to use only one of them.

--
Ivan

Mark Shannon

Sep 4, 2016, 7:44:59 AM
to Chris Angelico, Python-Dev


On 04/09/16 11:57, Chris Angelico wrote:
> On Sun, Sep 4, 2016 at 8:52 PM, Mark Shannon <ma...@hotpy.org> wrote:
>> The key difference is in placement.
>> PEP 484 style
>> variable = value # annotation
>>
>> Which reads to me as if the annotation refers to the value.
>> PEP 526
>> variable: annotation = value
>>
>> Which reads very much as if the annotation refers to the variable.
>> That is a change in terms of semantics and a change for the worse, in terms
>> of expressibility.
>
> So what you have is actually a change in *implication* (since the PEP
> doesn't stipulate semantics); and the old way (the comment) implies
> something contrary to the semantics of at least one of the type
> checkers that uses it (MyPy).
Do we really want to make a major, irrevocable change to the language
just because MyPy does something? MyPy is very far from complete (it
doesn't even support Optional types yet).

> Are there any current type checkers that
> actually do associate that with the value? That is, to have "variable
> = func() # str" indicate the same type check as "def func() -> str"?
> If not, this is a strong argument in favour of the PEP, since it would
> synchronize the syntax with the current best-of-breed checkers.
I believe pytype uses value, rather than variable, tracking. It is thus
more precise. Of course, it is more of an inferencer than a checker.
We (semmle.com) do precise value tracking to infer a lot of "type"
errors in unannotated code (as the vast majority of Python code is).

It would be a real shame if PEP 526 mandates against checkers doing as
good a job as possible. Forcing all uses of a variable to have the same
type is a major and, IMO crippling, limitation.

E.g.
def foo(x:Optional[int])->int:
    if x is None:
        return -1
    return x + 1

If the type of the *variable* 'x' is Optional[int] then 'return x + 1'
doesn't type check. If the type of the *parameter* 'x' is Optional[int]
then a checker can readily verify the above code.

I want a checker to check my code and, with minimal annotations, give me
confidence that my code is correct.


Cheers,
Mark.

>
> ChrisA

Steven D'Aprano

Sep 4, 2016, 7:53:52 AM
to pytho...@python.org
On Sun, Sep 04, 2016 at 12:31:26PM +0100, Mark Shannon wrote:

> In other words PEP 484 specifically states that annotations are to help
> with type inference. As defined in PEP 526, I think that type
> annotations become a hindrance to type inference.

I'm pretty sure that they don't.

Have you used any languages with type inference? Any type-checkers? If
so, can you give some actual concrete examples of how annotating a
variable hinders type inference? It sounds like you are spreading FUD at
the moment.

The whole point of type annotations is that you use them to deliberately
over-ride what the checker would infer (if it infers the wrong thing, or
cannot infer anything). I cannot see how you conclude from this that
type annotations will be a hindrance to type inference.

If you don't want to declare the type of a variable, simply DON'T
declare the type, and let the checker infer whatever it can (which may
be nothing, or may be the wrong type).



--
Steve

Ivan Levkivskyi

Sep 4, 2016, 7:58:36 AM
to Mark Shannon, Python-Dev
On 4 September 2016 at 13:30, Mark Shannon <ma...@hotpy.org> wrote:
> It would be a real shame if PEP 526 mandates against checkers doing as good a job as possible. Forcing all uses of a variable to have the same type is a major and, IMO crippling, limitation.
>
> E.g.
> def foo(x:Optional[int])->int:
>     if x is None:
>         return -1
>     return x + 1
>
> If the type of the *variable* 'x' is Optional[int] then 'return x + 1' doesn't type check. If the type of the *parameter* 'x' is Optional[int] then a checker can readily verify the above code.

Mark,

First, in addition to the quote from my previous e-mail, I would like to show another quote from PEP 526
"This PEP does not require type checkers to change their type checking rules. It merely provides a more readable syntax to replace type comments"

Second, an example almost exactly like yours has been added to PEP 484:

class Reason(Enum):
    timeout = 1
    error = 2

def process(response: Union[str, Reason] = '') -> str:
    if response is Reason.timeout:
        return 'TIMEOUT'
    elif response is Reason.error:
        return 'ERROR'
    else:
        # response can be only str, all other possible values exhausted
        return 'PROCESSED: ' + response
I think mypy either already supports this or will support it very soon (and the same for Optional).

--
Ivan

Koos Zevenhoven

Sep 4, 2016, 8:10:36 AM
to Mark Shannon, Python-Dev
On Sun, Sep 4, 2016 at 1:52 PM, Mark Shannon <ma...@hotpy.org> wrote:
[...]
>
> The key difference is in placement.
> PEP 484 style
> variable = value # annotation
>
> Which reads to me as if the annotation refers to the value.
> PEP 526
> variable: annotation = value
>
> Which reads very much as if the annotation refers to the variable.
> That is a change in terms of semantics and a change for the worse, in terms
> of expressibility.
>

You have probably noticed this already, but in the semantics which I
have now explained more precisely on python-ideas

https://mail.python.org/pipermail/python-ideas/2016-September/042076.html

an annotation like

variable: annotation = value

is a little closer to an expression annotation. I.e. it does not say
that 'variable' should *always* have the type given by 'annotation'.

-- Koos

>
> Cheers,
> Mark.
>

Steven D'Aprano

Sep 4, 2016, 8:45:49 AM
to pytho...@python.org
On Sun, Sep 04, 2016 at 12:30:18PM +0100, Mark Shannon wrote:

> It would be a real shame if PEP 526 mandates against checkers doing as
> good a job as possible. Forcing all uses of a variable to have the same
> type is a major and, IMO, crippling limitation.

This is approaching FUD.

Guido has already stated that the section of the PEP which implied that
*any* change of type of a variable would be a warning (not an error) is
too strong:

https://mail.python.org/pipermail/python-dev/2016-September/146064.html

and indeed the relevant section of the PEP has already been changed:

Duplicate type annotations will be ignored. However, static type
checkers may issue a warning for annotations of the same variable
by a different type:

a: int
a: str # Static type checker may or may not warn about this.


This PEP does not mandate any behaviour for type-checkers. It describes
the syntax for type annotations in Python code. What type-checkers do
with that information is up to them.



> E.g.
> def foo(x:Optional[int])->int:
>     if x is None:
>         return -1
>     return x + 1
>
> If the type of the *variable* 'x' is Optional[int] then 'return x + 1'
> doesn't type check.

That makes no sense. Why wouldn't it type check?

It may be that some simple-minded type-checkers are incapable of
checking that code because it is too complex. If so, that's a limitation
of that specific checker, not of type-checkers in general. MyPy already
can type-check that code. See below.


> If the type of the *parameter* 'x' is Optional[int]
> then a checker can readily verify the above code.

This makes even less sense, since the parameter "x" is, of course,
precisely the same as the variable "x".

Here's MyPy in action, successfully checking code that you state can't
be checked:

[steve@ando ~]$ cat test.py
from typing import Optional

def foo(x:Optional[int])->int:
    if x is None:
        return -1
    return x + 1

def bar(x:Optional[int])->int:
    y = x # the type of y must be inferred
    if y is None:
        return y + 1
    return len(y)

[steve@ando ~]$ mypy --strict-optional test.py
test.py: note: In function "bar":
test.py:11: error: Unsupported operand types for + (None and "int")
test.py:12: error: Argument 1 to "len" has incompatible type "int"; expected "Sized"


foo passes the type check; bar fails.


> I want a checker to check my code and, with minimal annotations, give me
> confidence that my code is correct.

Don't we all.



--
Steve

Steven D'Aprano

unread,
Sep 4, 2016, 9:09:35 AM9/4/16
to pytho...@python.org
Referring to the alternative syntax forms:

# Proposed
x: int = func(value)

# Already accepted
x = func(value) #type: int


On Sun, Sep 04, 2016 at 11:52:24AM +0100, Mark Shannon wrote:

> The key difference is in placement.
> PEP 484 style
> variable = value # annotation
>
> Which reads to me as if the annotation refers to the value.

Both Guido and the PEP have stated that it doesn't refer to the value,
but to the variable.

But what does it even mean to say that it refers to the value in the
context of *static type-checking*? I know what it means in the context
of dynamic type-checking, but I don't see how that has any relevance to
a static checker.


I have seen a number of people commenting that the comment annotation
"applies to the expression", but I don't understand what this is
supposed to mean. How is that different from applying it to the
variable? (That's not a rhetorical question.) Suppose I write this:

mylist = []
x = False or None or (mylist + [1]) #type: List[int]
pass # stand-in for arbitrary code
x.append("definitely not an int")

Should the type-checker flag the call to x.append as an error? I hope we
all agree that it should.

But it can only do that if it knows the type of the variable `x`. This
is a *static* type-checker, it doesn't know what value x *actually* has
at run-time because it isn't running at run-time. As far as the static
checker is concerned, it can only flag that append as an error if it
knows that `x` must be a list of ints.

If you distinguish the two cases:

"the expression `False or None or (mylist + [1])` is List[int]"

versus:

"the variable `x` is List[int]"

I don't even see what the first case could possibly mean. But whatever
it means, if it is different from the second case, then the type-checker
is pretty limited in what it can do.
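
To make the contrast concrete (purely illustrative; what actually
happens depends on the checker):

from typing import List

y = [1, 2, 3]   # type: List[int]
y = "hello"     # flagged if the annotation is attached to the variable
                # `y`; invisible if it only described the list expression

If the annotation only describes the expression, the rebinding on the
second line says nothing about `y` at all, and the checker is back to
pure inference for it.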



> PEP 526
> variable: annotation = value
>
> Which reads very much as if the annotation refers to the variable.

Since the PEP makes it clear that the two forms are to be treated the
same, I think that whatever difference you think they have is not
relevant. They are *defined* to mean the same thing.




--
Steve

Ivan Levkivskyi

unread,
Sep 4, 2016, 9:24:46 AM9/4/16
to Steven D'Aprano, Python-Dev
On 4 September 2016 at 15:07, Steven D'Aprano <st...@pearwood.info> wrote:
> PEP 526
> variable: annotation = value
>
> Which reads very much as if the annotation refers to the variable.

Since the PEP makes it clear that the two forms are to be treated the
same, I think that whatever difference you think they have is not
relevant. They are *defined* to mean the same thing.

Steve,
This has been discussed in the python/typing tracker. When you say "mean"
you are talking about semantics, but:

Me:
"""
The title of the PEP contains "Syntax", not "Semantics" because we don't want to impose any new type semantics (apart from addition of ClassVar)
"""

Guido:
"""
I have nothing to add to what @ilevkivskyi said about the semantics of redefinition -- that is to be worked out between type checkers.
(Much like PEP 484 doesn't specify how type checkers should behave -- while it gives examples of suggested behavior, those examples are not normative,
and there are huge gray areas where the PEP doesn't give any guidance. It's the same for PEP 526.)
"""

--
Ivan

Koos Zevenhoven

unread,
Sep 4, 2016, 10:36:37 AM9/4/16
to Steven D'Aprano, Python Dev
On Sun, Sep 4, 2016 at 3:43 PM, Steven D'Aprano <st...@pearwood.info> wrote:
[...]
> [steve@ando ~]$ cat test.py
> from typing import Optional
>
> def foo(x:Optional[int])->int:
>     if x is None:
>         return -1
>     return x + 1
>
> def bar(x:Optional[int])->int:
>     y = x # the type of y must be inferred
>     if y is None:
>         return y + 1
>     return len(y)
>
> [steve@ando ~]$ mypy --strict-optional test.py
> test.py: note: In function "bar":
> test.py:11: error: Unsupported operand types for + (None and "int")
> test.py:12: error: Argument 1 to "len" has incompatible type "int"; expected "Sized"
>
>
> foo passes the type check; bar fails.
>

That's great. While mypy has nice features, these examples have little
to do with PEP 526 as they don't have variable annotations, not even
using comments.

For some reason, pip install --upgrade mypy fails for me at the
moment, but at least mypy version 0.4.1 does not allow this:

from typing import Callable

def foo(cond: bool, bar : Callable, baz : Callable) -> float:
    if cond:
        x = bar() # type: int
    else:
        x = baz() # type: float
    return x / 2

and complains that:

test.py:7: error: Name 'x' already defined

Maybe someone can confirm this with a newer version.

Here,

def foo(cond: bool) -> float:
    if cond:
        x = 1
    else:
        x = 1.5
    return x / 2

you get a different error:

test.py:5: error: Incompatible types in assignment (expression has type "float", variable has type "int")

Maybe someone can confirm this with a newer version, but IIUC this is
still the case.

>> I want a checker to check my code and, with minimal annotations, give me
>> confidence that my code is correct
>
> Don't we all.
>

I would add *with minimal restrictions on how the code is supposed to
be written* for type checking to work. It's not at all obvious that
everyone thinks that way. Hence, the "Semantics for type checking"
thread on python-ideas.

-- Koos

>
>
> --
> Steve

Nick Coghlan

unread,
Sep 4, 2016, 12:45:09 PM9/4/16
to Ivan Levkivskyi, Python-Dev
On 4 September 2016 at 21:32, Ivan Levkivskyi <levki...@gmail.com> wrote:
> The first form still could be interpreted by type checkers
> as annotation for value (a cast to more precise type):
>
> variable = cast(annotation, value) # visually also looks similar

I think a function based spelling needs to be discussed further, as it
seems to me that at least some of the goals of the PEP could be met
with a suitable definition of "cast" and "declare", with no syntactic
changes to Python. Specifically, consider:

    def cast(value, annotation):
        return value

    def declare(annotation):
        return object()

The idea here is that "cast" would be used as a hint to type checkers
to annotate an *expression*, with the runtime semantic impact being
exactly nil - the value just gets passed through unmodified.

Annotated initialisations would then look like:

    from typing import cast
    primes = cast([], List[int])

    class Starship:
        stats = cast({}, ClassVar[Dict[str, int]])

This preserves the relative annotation order of current type hints,
where the item being annotated (parameter, function declaration,
assignment statement) is on the left, and the annotation is on the
right.

In cases where the typechecker is able to infer a type for the
expression, it may complain here when there's a mismatch between the
type inference and the explicit declaration, so these would also be a
form of type assertion.

That leaves the case of declarations, where the aim is to provide a
preemptive declaration that all assignments to a particular variable
will include an implicit casting of the RHS. That would look like:

from typing import declare

captain = declare(str)

Until it left the scope, or saw a new target declaration, a
typechecker would then interpret future assignments to "captain" as if
they had been written:

captain = cast(RHS, str)

With the above definition, this would have the runtime consequence of
setting "captain" to a unique object() instance until the first
assignment took place. Both that assignment, and the runtime overhead
of evaluating the declaration, can be avoided by moving the
declaration into otherwise dead code:

if 0: captain = declare(str)
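
Pulling those pieces together into a single runnable sketch (note that
these are the hypothetical cast/declare defined above, not the existing
typing.cast, whose argument order differs):

    from typing import ClassVar, Dict, List

    def cast(value, annotation):
        # hint for a static checker only; at runtime the value just
        # passes through unmodified
        return value

    def declare(annotation):
        # stand-in for "this name will be bound later"; each call
        # returns a fresh placeholder object
        return object()

    primes = cast([], List[int])

    class Starship:
        stats = cast({}, ClassVar[Dict[str, int]])

    if 0:
        captain = declare(str)  # dead code: visible to static analysis only

    captain = "Picard"          # checked as if it were cast("Picard", str)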

Considering the goals and problems listed in the PEP, this would be
sufficient to address many of them:

* These are real expressions, and hence will be highlighted appropriately
* declare() allows annotations of undefined variables (sort of)
* These annotations will be in the AST, just as function arguments,
rather than as custom nodes
* In situations where normal comments and type comments are used
together, it is difficult to distinguish them

For the other goals, the function-based approach may not help:

* For conditional branches, it's only arguably an improvement

    if 0: my_var = declare(Logger)
    if some_value:
        my_var = function()
    else:
        my_var = another_function()

* Readability for typeshed might improve for module level and class
level declarations, but functions would likely want the leading "if
0:" noise for performance reasons

* Since the code generator remains uninvolved, this still wouldn't
allow annotation access at runtime (unless typing implemented some
sys._getframe() hacks in declare() and cast())

However, exploring this possibility still seems like a good idea to
me, as it should allow many of the currently thorny semantic questions
to be resolved, and a future syntax-only PEP for 3.7+ can just be
about defining syntactic sugar for semantics that can (by then)
already be expressed via appropriate initialisers.

Regards,
Nick.

--
Nick Coghlan | ncog...@gmail.com | Brisbane, Australia

Koos Zevenhoven

unread,
Sep 4, 2016, 1:01:04 PM9/4/16
to Nick Coghlan, Python-Dev
On Sun, Sep 4, 2016 at 7:43 PM, Nick Coghlan <ncog...@gmail.com> wrote:
> On 4 September 2016 at 21:32, Ivan Levkivskyi <levki...@gmail.com> wrote:
>> The first form still could be interpreted by type checkers
>> as annotation for value (a cast to more precise type):
>>
>> variable = cast(annotation, value) # visually also looks similar
>
> I think a function based spelling needs to be discussed further, as it
> seems to me that at least some of the goals of the PEP could be met
> with a suitable definition of "cast" and "declare", with no syntactic
> changes to Python. Specifically, consider:
>
> def cast(value, annotation):
>     return value
>

typing.cast already exists.

-- Koos

Ivan Levkivskyi

unread,
Sep 4, 2016, 1:25:19 PM9/4/16
to Nick Coghlan, Python-Dev
On 4 September 2016 at 18:43, Nick Coghlan <ncog...@gmail.com> wrote:
On 4 September 2016 at 21:32, Ivan Levkivskyi <levki...@gmail.com> wrote:
> The first form still could be interpreted by type checkers
> as annotation for value (a cast to more precise type):
>
> variable = cast(annotation, value) # visually also looks similar

I think a function based spelling needs to be discussed further, as it
seems to me that at least some of the goals of the PEP could be met
with a suitable definition of "cast" and "declare", with no syntactic
changes to Python. Specifically, consider:

    def cast(value, annotation):
        return value

    def declare(annotation):
        return object()
 
Nick, If I understand you correctly, this idea is very similar to Undefined.
It was proposed a year and half ago, when PEP 484 was discussed.

At that time it was abandoned, it reappeared during the discussion
of this PEP, but many people (including me) didn't like this,
so that we decided to put it in the list of rejected ideas to this PEP.

Some downsides of this approach:

* People will start to expect Undefined (or whatever is returned by declare())
everywhere (as in Javascript) even if we prohibit this.

* Some runtime overhead is still present: annotation gets evaluated
at every call to cast, and many annotations involve creation of
class objects (especially generics) that are very costly.
Because of this overhead, such use of generics was prohibited in PEP 484:

x = Node[int]() # prohibited by PEP 484
x = Node() # type: Node[int] # this was allowed

* Readability will probably be even worse than with comments:
many types already have brackets and parens, so two more from cast()
is not good. Plus some noise from the "if 0:" that you mentioned, plus
"cast" everywhere.

However, exploring this possibility still seems like a good idea to
me, as it should allow many of the currently thorny semantic questions
to be resolved, and a future syntax-only PEP for 3.7+ can just be
about defining syntactic sugar for semantics that can (by then)
already be expressed via appropriate initialisers.

I think that motivation of the PEP is exactly opposite, this is why it has
"Syntax" not "Semantics" in title. Also quoting Guido:

> But I'm not in a hurry for that -- I'm only hoping to get the basic
> syntax accepted by Python 3.6 beta 1 so that we can start using this
> in 5 years from now rather than 7 years from now.

I also think that semantics should be up to the type checkers.
Maybe it is not a perfect comparison, but prohibiting all type semantics
except one is like prohibiting all Python web frameworks except one.

--
Ivan

Nick Coghlan

unread,
Sep 4, 2016, 1:31:46 PM9/4/16
to Steven D'Aprano, pytho...@python.org
On 4 September 2016 at 21:51, Steven D'Aprano <st...@pearwood.info> wrote:
> On Sun, Sep 04, 2016 at 12:31:26PM +0100, Mark Shannon wrote:
>
>> In other words PEP 484 specifically states that annotations are to help
>> with type inference. As defined in PEP 526, I think that type
>> annotations become a hindrance to type inference.
>
> I'm pretty sure that they don't.
>
> Have you used any languages with type inference? Any type-checkers? If
> so, can you give some actual concrete examples of how annotating a
> variable hinders type inference? It sounds like you are spreading FUD at
> the moment.

Steven, this kind of credential checking is uncalled for - Mark is
significantly more qualified than most of us to comment on this topic,
since he actually works on a shipping software quality analysis
product that does type inference on Python code (hence
https://semmle.com/semmle-analysis-now-includes-python/ ), and was
nominated as the BDFL-Delegate for PEP 484 because Guido trusted him
to critically review the proposal and keep any insurmountable problems
from getting through.

Getting accused of spreading FUD when a topic is just plain hard to
explain (due to the large expert/novice gap that needs to be bridged)
is one of the reasons python-dev and python-ideas can end up feeling
hostile to domain experts. We have the SIG system to help mitigate
that problem, but it's vastly preferable if such folks also feel their
expertise is welcomed on the main lists, rather than having it be
rejected as an inconvenient complication.

> The whole point of type annotations is that you use them to deliberately
> over-ride what the checker would infer (if it infers the wrong thing, or
> cannot infer anything). I cannot see how you conclude from this that
> type annotations will be a hindrance to type inference.

The problem arises for the "bare annotation" case, as that looks a
*lot* like traditional declarations in languages where initialisation
(which can specify a type) and assignment (which can't) are different
operations.

Consider this case:

    if arg is not None:
        x = list(arg)
        # Type of "x" is inferred as List[Any] or something more specific here
        if other_arg is not None:
            # This is fine, we know "x" is a list at this point
            x.extend(other_arg)
    else:
        x = None
        # Type of "x" is inferred as type(None) here
    # Type of "x" is inferred as Optional[List[Any]] from here on out

Now, consider that case with PEP 526:

    x: Optional[List[Any]]
    # Oops, this is the type of "x" *after* the if statement, not *during* it
    if arg is not None:
        x = list(arg)
        if other_arg is not None:
            # If we believe the type declaration here, this code will
            # (incorrectly) be flagged (as None has no "extend" method)
            x.extend(other_arg)
    else:
        x = None

The "pre-declaration as documentation" proposal turns out to be
problematic in this case, as it misses the fact that different
branches of the if statement contribute different types to what
ultimately becomes a Union type for the rest of the code.

In order to cover the entire code block, we have to make the
pre-declaration match the state after the if statement, but then it's
overly broad for any *particular* branch.

So in this case, attempting to entirely defer specification of the
semantics creates a significant risk of type checkers written on the
assumption of C++ or Java style type declarations actively inhibiting
the dynamism of Python code, suggesting that the PEP would be well
advised to declare not only that the PEP 484 semantics are unchanged,
but also that a typechecker that flags the example above as unsafe is
wrong to do so.

Cheers,
Nick.

--
Nick Coghlan | ncog...@gmail.com | Brisbane, Australia

Ivan Levkivskyi

unread,
Sep 4, 2016, 2:01:44 PM9/4/16
to Nick Coghlan, pytho...@python.org
On 4 September 2016 at 19:29, Nick Coghlan <ncog...@gmail.com> wrote:
So in this case, attempting to entirely defer specification of the
semantics creates a significant risk of type checkers written on the
assumption of C++ or Java style type declarations actively inhibiting
the dynamism of Python code, suggesting that the PEP would be well
advised to declare not only that the PEP 484 semantics are unchanged,
but also that a typechecker that flags the example above as unsafe is
wrong to do so.

I don't think that a dedicated syntax will increase the
risk more than the existing type comment syntax. Moreover,
mainstream type checkers (mypy, pytype, etc) are far
from C++ or Java, and as far as I know they are not going
to change semantics.

As I understand it, Mark's main point is that such syntax visually
suggests a variable annotation more than a value annotation.
However, I think that the current behavior of type checkers will
have more influence on people's perception than the visual
appearance of the annotation.

Anyway, I think it is worth adding an explicit statement to the PEP
that both interpretations are possible (maybe even add that value
semantics is inherent to Python). But I don't think that we should
*prohibit* something in the PEP.

--
Ivan

Nick Coghlan

unread,
Sep 4, 2016, 2:01:47 PM9/4/16
to Ivan Levkivskyi, Python-Dev
On 5 September 2016 at 03:23, Ivan Levkivskyi <levki...@gmail.com> wrote:
> On 4 September 2016 at 18:43, Nick Coghlan <ncog...@gmail.com> wrote:
>>
>> On 4 September 2016 at 21:32, Ivan Levkivskyi <levki...@gmail.com>
>> wrote:
>> > The first form still could be interpreted by type checkers
>> > as annotation for value (a cast to more precise type):
>> >
>> > variable = cast(annotation, value) # visually also looks similar
>>
>> I think a function based spelling needs to be discussed further, as it
>> seems to me that at least some of the goals of the PEP could be met
>> with a suitable definition of "cast" and "declare", with no syntactic
>> changes to Python. Specifically, consider:
>>
>> def cast(value, annotation):
>>     return value
>>
>> def declare(annotation):
>>     return object()
>
>
> Nick, If I understand you correctly, this idea is very similar to Undefined.
> It was proposed a year and half ago, when PEP 484 was discussed.

Not quite, as it deliberately doesn't create a singleton, precisely to
avoid the problems a new singleton creates - if you use declare() as
written, there's no way to a priori check for it at runtime (since
each call produces a new object), so you have to either get the
TypeError when you try to use it as whatever type it's supposed to be,
or else use a static checker to find cases where you try to use it
without initialising it properly first.

Folks can also put it behind an "if 0:" or "if typing.TYPE_CHECKING"
guard so it doesn't actually execute at runtime, and is only visible
to static analysis.

> At that time it was abandoned, it reappeared during the discussion
> of this PEP, but many people (including me) didn't like this,
> so that we decided to put it in the list of rejected ideas to this PEP.
>
> Some downsides of this approach:
>
> * People will start to expect Undefined (or whatever is returned by
> declare())
> everywhere (as in Javascript) even if we prohibit this.

Hence why I didn't use a singleton.

> * Some runtime overhead is still present: annotation gets evaluated
> at every call to cast, and many annotations involve creation of
> class objects (especially generics) that are very costly.

Hence the commentary about using an explicit guard to prevent
execution ("if 0:" in my post for the dead code elimination, although
"if typing.TYPE_CHECKING:" would be more self-explanatory).

> * Readability will be probably even worse than with comments:
> many types already have brackets and parens, so that two more form cast()
> is not good. Plus some noise of the if 0: that you mentioned, plus
> "cast" everywhere.

I mostly agree, but the PEP still needs to address the fact that it's
only a subset of the benefits that actually require new syntax, since
it's that subset which provides the rationale for rejecting the use of
a function based approach, while the rest provided the incentive to
start looking for a way to replace the type comments.

>> However, exploring this possibility still seems like a good idea to
>> me, as it should allow many of the currently thorny semantic questions
>> to be resolved, and a future syntax-only PEP for 3.7+ can just be
>> about defining syntactic sugar for semantics that can (by then)
>> already be expressed via appropriate initialisers.
>
> I think that motivation of the PEP is exactly opposite, this is why it has
> "Syntax" not "Semantics" in title. Also quoting Guido:
>
>> But I'm not in a hurry for that -- I'm only hoping to get the basic
>> syntax accepted by Python 3.6 beta 1 so that we can start using this
>> in 5 years from now rather than 7 years from now.
>
> I also think that semantics should be up to the type checkers.
> Maybe it is not a perfect comparison, but prohibiting all type semantics
> except one is like prohibiting all Python web frameworks except one.

It's the semantics that worry people though, and it's easy for folks
actively working on typecheckers to think it's just as easy for the
rest of us to make plausible assumptions about the kind of code that
well-behaved typecheckers are going to allow as it is for you. That's
not the case, which means folks get concerned, especially those
accustomed to institutional environments where decisions about tool
use are still made by folks a long way removed from the day to day
experience of software development, rather than being delegated to the
engineering teams themselves.

I suspect you'll have an easier time of it on that front if you
include some examples of dynamically typed code that a well-behaved
type-checker *must* report as correct Python code, such as:

    x: Optional[List[Any]]
    # This is the type of "x" *after* the if statement, not *during* it
    if arg is not None:
        x = list(arg)
        if other_arg is not None:
            # A well-behaved typechecker should allow this due to
            # the more specific initialisation in this particular branch
            x.extend(other_arg)
    else:
        x = None

A typechecker could abide by that restriction by ignoring variable
declarations entirely and operating solely on its own type inference
from expressions, so any existing PEP 484 typechecker is likely to be
well-behaved in that regard.

Similarly, it would be reasonable to say that these three snippets
should all be equivalent from a typechecking perspective:

x = None # type: Optional[T]

x: Optional[T] = None

x: Optional[T]
x = None

This more explicitly spells out what it means for PEP 526 to say that
it's purely about syntax and doesn't define any new semantics beyond
those already defined in PEP 484.

Cheers,

Ivan Levkivskyi

unread,
Sep 4, 2016, 2:15:06 PM9/4/16
to Nick Coghlan, Python-Dev
On 4 September 2016 at 19:59, Nick Coghlan <ncog...@gmail.com> wrote:

Nick,

Thank you for good suggestions.
 
I mostly agree, but the PEP still needs to address the fact that it's
only a subset of the benefits that actually require new syntax, since
it's that subset which provides the rationale for rejecting the use of
a function based approach, while the rest provided the incentive to
start looking for a way to replace the type comments.

I think I agree.
 
I suspect you'll have an easier time of it on that front if you
include some examples of dynamically typed code that a well-behaved
type-checker *must* report as correct Python code, such as:

    x: Optional[List[Any]]
    # This is the type of "x" *after* the if statement, not *during* it
    if arg is not None:
        x = list(arg)
        if other_arg is not None:
            # A well-behaved typechecker should allow this due to
            # the more specific initialisation in this particular branch
            x.extend(other_arg)
    else:
        x = None

There are very similar examples in PEP 484 (section on singletons in unions),
we could just copy those or use this example,
but I am sure Guido will not agree to the word "must" (although "should" may be possible :-)
 
Similarly, it would be reasonable to say that these three snippets
should all be equivalent from a typechecking perspective:

    x = None # type: Optional[T]

    x: Optional[T] = None

    x: Optional[T]
    x = None

Nice idea, explicit is better than implicit.

--
Ivan

Nick Coghlan

unread,
Sep 4, 2016, 2:33:48 PM9/4/16
to Ivan Levkivskyi, Python-Dev
On 5 September 2016 at 04:13, Ivan Levkivskyi <levki...@gmail.com> wrote:
> On 4 September 2016 at 19:59, Nick Coghlan <ncog...@gmail.com> wrote:
>> I suspect you'll have an easier time of it on that front if you
>> include some examples of dynamically typed code that a well-behaved
>> type-checker *must* report as correct Python code, such as:
>>
>> x: Optional[List[Any]]
>> # This is the type of "x" *after* the if statement, not *during* it
>> if arg is not None:
>>     x = list(arg)
>>     if other_arg is not None:
>>         # A well-behaved typechecker should allow this due to
>>         # the more specific initialisation in this particular branch
>>         x.extend(other_arg)
>> else:
>>     x = None
>
> There are very similar examples in PEP 484 (section on singletons in
> unions),
> we could just copy those or use this example,
> but I am sure Guido will not agree to the word "must" (although "should"
> may be possible :-)

"Should" would be fine by me :)

Koos Zevenhoven

unread,
Sep 4, 2016, 2:42:59 PM9/4/16
to Ivan Levkivskyi, Nick Coghlan, Python-Dev
On Sun, Sep 4, 2016 at 9:13 PM, Ivan Levkivskyi <levki...@gmail.com> wrote:
> On 4 September 2016 at 19:59, Nick Coghlan <ncog...@gmail.com> wrote:
[...]
>>
>> Similarly, it would be reasonable to say that these three snippets
>> should all be equivalent from a typechecking perspective:
>>
>> x = None # type: Optional[T]
>>
>> x: Optional[T] = None
>>
>> x: Optional[T]
>> x = None
>
>
> Nice idea, explicit is better than implicit.
>

How is it going to help that these are equivalent within one checker,
if the meaning may differ across checkers?

-- Koos


> --
> Ivan
>

Guido van Rossum

unread,
Sep 4, 2016, 4:18:28 PM9/4/16
to Ivan Levkivskyi, Nick Coghlan, Python-Dev
Everybody please stop panicking. PEP 526 does not make a stand on the
behavior of type checkers (other than deferring to PEP 484). If you
want to start a discussion about constraining type checkers please do
it over at python-ideas. There is no rush as type checkers are not
affected by the feature freeze.

--
--Guido van Rossum (python.org/~guido)

Greg Ewing

unread,
Sep 4, 2016, 6:01:06 PM9/4/16
to pytho...@python.org
> On Sun, Sep 04, 2016 at 12:31:26PM +0100, Mark Shannon wrote:
>
>> As defined in PEP 526, I think that type
>>annotations become a hindrance to type inference.

In Haskell-like languages, type annotations have no
ability to influence whether types can be inferred.
The compiler infers a type for everything, whether
you annotate or not. The annotations serve as
assertions about what the inferred types should be.
If they don't match, it means the programmer has
made a mistake somewhere.

I don't think it's possible for an annotation to
prevent the compiler from being able to infer a
type where it could have inferred one without the
annotation.

--
Greg

Nick Coghlan

unread,
Sep 4, 2016, 10:23:28 PM9/4/16
to Koos Zevenhoven, Python-Dev
On 5 September 2016 at 04:40, Koos Zevenhoven <k7h...@gmail.com> wrote:
> On Sun, Sep 4, 2016 at 9:13 PM, Ivan Levkivskyi <levki...@gmail.com> wrote:
>> On 4 September 2016 at 19:59, Nick Coghlan <ncog...@gmail.com> wrote:
> [...]
>>>
>>> Similarly, it would be reasonable to say that these three snippets
>>> should all be equivalent from a typechecking perspective:
>>>
>>> x = None # type: Optional[T]
>>>
>>> x: Optional[T] = None
>>>
>>> x: Optional[T]
>>> x = None
>>
>>
>> Nice idea, explicit is better than implicit.
>
> How is it going to help that these are equivalent within one checker,
> if the meaning may differ across checkers?

For typechecker developers, it provides absolute clarity that the
semantics of the new annotations should match the behaviour of
existing type comments when there's an initialiser present, or of a
parameter annotation when there's no initialiser present.

For folks following along without necessarily keeping up with all the
nuances, it makes it more explicit what Guido means when he says "PEP
526 does not make a stand on the
behavior of type checkers (other than deferring to PEP 484)."

For example, the formulation of the divergent initialisation case
where I think the preferred semantics are already implied by PEP 484
can be looked at this way:

    x = None # type: Optional[List[T]]
    if arg is not None:
        x = list(arg)
        if other_arg is not None:
            x.extend(other_arg)

It would be a strange typechecker indeed that handled that case
differently from the new spellings made possible by PEP 526:

    x: Optional[List[T]] = None
    if arg is not None:
        x = list(arg)
        if other_arg is not None:
            x.extend(other_arg)

    x: Optional[List[T]]
    if arg is None:
        x = None
    else:
        x = list(arg)
        if other_arg is not None:
            x.extend(other_arg)

    x: Optional[List[T]]
    if arg is not None:
        x = list(arg)
        if other_arg is not None:
            x.extend(other_arg)
    else:
        x = None

Or from the semantics of PEP 484 parameter annotations:

    def my_func(arg:Optional[List[T]], other_arg=None):
        # other_arg is implicitly Optional[Any]
        if arg is not None and other_arg is not None:
            # Here, "arg" can be assumed to be List[T]
            # while "other_arg" is Any
            arg.extend(other_arg)

A self-consistent typechecker will either allow all of the above, or
prohibit all of the above, while a typechecker that *isn't*
self-consistent would be incredibly hard to use.

Cheers,
Nick.

--
Nick Coghlan | ncog...@gmail.com | Brisbane, Australia

Koos Zevenhoven

unread,
Sep 5, 2016, 4:21:40 AM9/5/16
to Nick Coghlan, Python-Dev
On Mon, Sep 5, 2016 at 5:21 AM, Nick Coghlan <ncog...@gmail.com> wrote:
> On 5 September 2016 at 04:40, Koos Zevenhoven <k7h...@gmail.com> wrote:
>> On Sun, Sep 4, 2016 at 9:13 PM, Ivan Levkivskyi <levki...@gmail.com> wrote:
>>> On 4 September 2016 at 19:59, Nick Coghlan <ncog...@gmail.com> wrote:
>> [...]
>>>>
>>>> Similarly, it would be reasonable to say that these three snippets
>>>> should all be equivalent from a typechecking perspective:
>>>>
>>>> x = None # type: Optional[T]
>>>>
>>>> x: Optional[T] = None
>>>>
>>>> x: Optional[T]
>>>> x = None
>>>
>>>
>>> Nice idea, explicit is better than implicit.
>>
>> How is it going to help that these are equivalent within one checker,
>> if the meaning may differ across checkers?
>
> For typechecker developers, it provides absolute clarity that the
> semantics of the new annotations should match the behaviour of
> existing type comments when there's an initialiser present,

I understood that, but what's the benefit? I hope there will be a type
checker that breaks this "rule".

> or of a
> parameter annotation when there's no initialiser present.

No, your suggested addition does not provide any reference to this.
(...luckily, because that would have been worse.)

> For folks following along without necessarily keeping up with all the
> nuances, it makes it more explicit what Guido means when he says "PEP
> 526 does not make a stand on the
> behavior of type checkers (other than deferring to PEP 484)."

What you are proposing is exactly "making a stand on the behavior of
type checkers", and the examples you provide below are all variations
of the same situation and provide no justification for a general rule.

Here's a general rule:

The closer it gets to the end of drafting a PEP [1],
the more carefully you have to justify changes.

Justification is left as an exercise ;-).

--Koos

[1] or any document (or anything, I suppose)
--
+ Koos Zevenhoven + http://twitter.com/k7hoven +

Nick Coghlan

unread,
Sep 5, 2016, 6:05:54 AM9/5/16
to Koos Zevenhoven, Python-Dev
On 5 September 2016 at 18:19, Koos Zevenhoven <k7h...@gmail.com> wrote:
> On Mon, Sep 5, 2016 at 5:21 AM, Nick Coghlan <ncog...@gmail.com> wrote:
>> On 5 September 2016 at 04:40, Koos Zevenhoven <k7h...@gmail.com> wrote:
>>> On Sun, Sep 4, 2016 at 9:13 PM, Ivan Levkivskyi <levki...@gmail.com> wrote:
>>>> On 4 September 2016 at 19:59, Nick Coghlan <ncog...@gmail.com> wrote:
>>> [...]
>>>>>
>>>>> Similarly, it would be reasonable to say that these three snippets
>>>>> should all be equivalent from a typechecking perspective:
>>>>>
>>>>> x = None # type: Optional[T]
>>>>>
>>>>> x: Optional[T] = None
>>>>>
>>>>> x: Optional[T]
>>>>> x = None
>>>>
>>>>
>>>> Nice idea, explicit is better than implicit.
>>>
>>> How is it going to help that these are equivalent within one checker,
>>> if the meaning may differ across checkers?
>>
>> For typechecker developers, it provides absolute clarity that the
>> semantics of the new annotations should match the behaviour of
>> existing type comments when there's an initialiser present,
>
> I understood that, but what's the benefit? I hope there will be a type
> checker that breaks this "rule".

Such a typechecker means you're not writing Python anymore, you're
writing Java/C++/C# in a language that isn't designed to be used that
way.

Fortunately, none of the current typecheckers have made that mistake,
nor does anyone appear to be promoting this mindset outside this
particular discussion.

Cheers,
Nick.

--
Nick Coghlan | ncog...@gmail.com | Brisbane, Australia

Koos Zevenhoven

unread,
Sep 5, 2016, 7:48:04 AM9/5/16
to Nick Coghlan, Python-Dev
On Mon, Sep 5, 2016 at 1:04 PM, Nick Coghlan <ncog...@gmail.com> wrote:
> On 5 September 2016 at 18:19, Koos Zevenhoven <k7h...@gmail.com> wrote:
>> On Mon, Sep 5, 2016 at 5:21 AM, Nick Coghlan <ncog...@gmail.com> wrote:
>>> On 5 September 2016 at 04:40, Koos Zevenhoven <k7h...@gmail.com> wrote:
>>>> On Sun, Sep 4, 2016 at 9:13 PM, Ivan Levkivskyi <levki...@gmail.com> wrote:
>>>>> On 4 September 2016 at 19:59, Nick Coghlan <ncog...@gmail.com> wrote:
>>>> [...]
>>>>>>
>>>>>> Similarly, it would be reasonable to say that these three snippets
>>>>>> should all be equivalent from a typechecking perspective:
>>>>>>
>>>>>> x = None # type: Optional[T]
>>>>>>
>>>>>> x: Optional[T] = None
>>>>>>
>>>>>> x: Optional[T]
>>>>>> x = None
>>>>>
>>>>>
>>>>> Nice idea, explicit is better than implicit.
>>>>
>>>> How is it going to help that these are equivalent within one checker,
>>>> if the meaning may differ across checkers?
>>>
>>> For typechecker developers, it provides absolute clarity that the
>>> semantics of the new annotations should match the behaviour of
>>> existing type comments when there's an initialiser present,
>>
>> I understood that, but what's the benefit? I hope there will be a type
>> checker that breaks this "rule".
>
> Such a typechecker means you're not writing Python anymore, you're
> writing Java/C++/C# in a language that isn't designed to be used that
> way.

I'm glad those are all the languages you accuse me of. The list could
have been a lot worse. I actually have some good memories of Java. It
felt kind of cool at that age, and it taught me many things about
understanding the structure of large and complicated programs after I
had been programming for years in other languages, including C++. It
also taught me to value simplicity instead, so here we are.

> Fortunately, none of the current typecheckers have made that mistake,
> nor does anyone appear to be promoting this mindset outside this
> particular discussion.

The thing I'm promoting here is to not add anything to PEP 526 that
says what a type checker is supposed to do with type annotations.
Quite the opposite of Java/C++/C#, I would say.

We can, of course, speculate about the future of type checkers and the
implications of PEP 526 on it. That's what I'm trying to do on
python-ideas, speculate about the best kind of type checking
(achievable with PEP 526 annotations) [1].

--Koos


[1] https://mail.python.org/pipermail/python-ideas/2016-September/042076.html

>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncog...@gmail.com | Brisbane, Australia



--
+ Koos Zevenhoven + http://twitter.com/k7hoven +

Steven D'Aprano

unread,
Sep 5, 2016, 8:12:30 AM9/5/16
to pytho...@python.org
On Mon, Sep 05, 2016 at 11:19:38AM +0300, Koos Zevenhoven wrote:
> On Mon, Sep 5, 2016 at 5:21 AM, Nick Coghlan <ncog...@gmail.com> wrote:
[...]
> > On 5 September 2016 at 04:40, Koos Zevenhoven <k7h...@gmail.com> wrote:
> >> On Sun, Sep 4, 2016 at 9:13 PM, Ivan Levkivskyi <levki...@gmail.com> wrote:
> >>> On 4 September 2016 at 19:59, Nick Coghlan <ncog...@gmail.com> wrote:
> >> [...]


[Ivan Levkivskyi]
> >>>> Similarly, it would be reasonable to say that these three snippets
> >>>> should all be equivalent from a typechecking perspective:
> >>>>
> >>>> x = None # type: Optional[T]
> >>>>
> >>>> x: Optional[T] = None
> >>>>
> >>>> x: Optional[T]
> >>>> x = None
[...]

[Koos Zevenhoven]
> >> How is it going to help that these are equivalent within one checker,
> >> if the meaning may differ across checkers?

Before I can give an answer to your [Koos'] question, I have to
understand what you see as the problem here.

I *think* that you are worried that two different checkers will disagree
on what counts as a type error. That given the same chunk of code:

x: Optional[T] = None
if x:
    spam(x)
else:
    x.eggs()

two checkers will disagree as to whether or not the code is safe. Is
that your concern? If not, can you explain in more detail what your
concern is?


[Nick Coghlan]
> > For typechecker developers, it provides absolute clarity that the
> > semantics of the new annotations should match the behaviour of
> > existing type comments when there's an initialiser present,

[Koos]
> I understood that, but what's the benefit?

Are you asking what is the benefit of having three forms of syntax for
the same thing?

The type comment syntax is required for Python 2 and backwards-
compatibility. That's a given.

The variable annotation syntax is required because the type comment
syntax is (according to the PEP) very much a second-best solution. See
the PEP:

https://www.python.org/dev/peps/pep-0526/#id4

So this is a proposal to create a *better* syntax for something which
already exists. The old version, using comments, cannot be deprecated or
removed, as it is required for Python 3.5 and older.

Once we allow

x: T = value

then there is benefit in also allowing:

x: T
x = value

since this supports some of the use cases that aren't well supported by
type comments or one-line variable annotations. E.g. very long or deeply
indented lines, situations where the assignment to x is inside an
if...else branch, or any other time you wish to declare the type of the
variable before actually setting the variable.
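
For example (a small sketch, with a made-up cache file name):

import json
from typing import List

x: List[int]
try:
    with open('cached_values.json') as f:
        x = json.load(f)
except FileNotFoundError:
    x = []

Here the annotation documents the intended type up front, even though
the actual assignment only happens further down, inside the try block.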



[Koos]
> I hope there will be a type checker that breaks this "rule".

I don't understand. Do you mean that you want three different behaviours
for these type annotations? What would they do differently?

To me, all three are clear and obvious ways of declaring the type of a
variable. Whether I write `x: T = expr` or `x = expr #type:T`, it
should be clear that I intend `x` to be treated as T. What would you do
differently?


[Nick]
> > or of a
> > parameter annotation when there's no initialiser present.

[Koos]
> No, your suggested addition does not provide any reference to this.
> (...luckily, because that would have been worse.)

I'm sorry, I don't follow you. Are you suggesting that we should have
the syntax `name:T = value` mean something different inside and outside
of a function parameter list?

def func(x:T = v):
    y:T = v


The first line declares x as type T with default value v; the second
line declares y as type T with initial value v. You say this is
"worse"... worse than what? What behaviour would you prefer to see?



[Nick]
> > For folks following along without necessarily keeping up with all the
> > nuances, it makes it more explicit what Guido means when he says "PEP
> > 526 does not make a stand on the
> > behavior of type checkers (other than deferring to PEP 484)."

[Koos]
> What you are proposing is exactly "making a stand on the behavior of
> type checkers", and the examples you provide below are all variations
> of the same situation and provide no justification for a general rule.

I'm sorry, I don't understand this objection. The closest I can get to
an answer would be:

A general rule is better than a large number of unconnected, arbitrary,
special cases.

Does that help?


--
Steve

Koos Zevenhoven

unread,
Sep 5, 2016, 9:42:13 AM9/5/16
to Steven D'Aprano, Python Dev
It looks like you are trying to make sense of this, but unfortunately
there's some added mess and copy&paste-like errors regarding who said
what. I think no such errors remain in what I quote below:

On Mon, Sep 5, 2016 at 3:10 PM, Steven D'Aprano <st...@pearwood.info> wrote:
>
> [Koos Zevenhoven]
>> >> How is it going to help that these are equivalent within one checker,
>> >> if the meaning may differ across checkers?
>
> Before I can give an answer to your [Koos'] question, I have to
> understand what you see as the problem here.

The problem was that suggested restrictive addition into PEP 526 with
no proper justification, especially since the PEP was not supposed to
restrict the semantics of type checking. I was asking how it would
help to add that restriction. Very simple. Maybe some people got
confused because I did want to *discuss* best practices for type
checking elsewhere.

> I *think* that you are worried that two different checkers will disagree
> on what counts as a type error. That given the same chunk of code:

In the long term, I'm worried about that, but there's nothing that PEP
526 can do about it at this point.

> [Nick Coghlan]
>> > For typechecker developers, it provides absolute clarity that the
>> > semantics of the new annotations should match the behaviour of
>> > existing type comments when there's an initialiser present,
>
> [Koos]
>> I understood that, but what's the benefit?
>
> Are you asking what is the benefit of having three forms of syntax for
> the same thing?

No, still the same thing: What is the benefit of that particular
restriction, when there are no other restrictions? Better just leave
it out.

> The type comment syntax is required for Python 2 and backwards-
> compatibility. That's a given.

Sure, but not all type checkers will have to care about Python 2.

> The variable annotation syntax is required because the type comment
> syntax is (according to the PEP) very much a second-best solution. See
> the PEP:
>
> https://www.python.org/dev/peps/pep-0526/#id4
>
> So this is a proposal to create a *better* syntax for something which
> already exists. The old version, using comments, cannot be deprecated or
> removed, as it is required for Python 3.5 and older.

Right.

> Once we allow
>
> x: T = value
>
> then there is benefit in also allowing:
>
> x: T
> x = value
>
> since this supports some of the use cases that aren't well supported by
> type comments or one-line variable annotations. E.g. very long or deeply
> indented lines, situations where the assignment to x is inside an
> if...else branch, or any other time you wish to declare the type of the
> variable before actually setting the variable.

Sure.

> [Nick]
>> > For folks following along without necessarily keeping up with all the
>> > nuances, it makes it more explicit what Guido means when he says "PEP
>> > 526 does not make a stand on the
>> > behavior of type checkers (other than deferring to PEP 484)."
>
> [Koos]
>> What you are proposing is exactly "making a stand on the behavior of
>> type checkers", and the examples you provide below are all variations
>> of the same situation and provide no justification for a general rule.
>
> I'm sorry, I don't understand this objection. The closest I can get to
> an answer would be:
>
> A general rule is better than a large number of unconnected, arbitrary,
> special cases.

A general rule that does not solve a problem is worse than no rule.


-- Koos


>
> --
> Steve



--
+ Koos Zevenhoven + http://twitter.com/k7hoven +

Nick Coghlan

unread,
Sep 5, 2016, 9:48:43 AM9/5/16
to Koos Zevenhoven, Python-Dev
On 5 September 2016 at 21:46, Koos Zevenhoven <k7h...@gmail.com> wrote:
> The thing I'm promoting here is to not add anything to PEP 526 that
> says what a type checker is supposed to do with type annotations.

PEP 526 says it doesn't intend to expand the scope of typechecking
semantics beyond what PEP 484 already supports. For that to be true,
it needs to be able to define expected equivalencies between the
existing semantics of PEP 484 and the new syntax in PEP 526.

If those equivalencies can't be defined, then Mark's concerns are
valid, and the PEP either needs to be deferred as inadvertently
introducing new semantics while intending to only introduce new
syntax, or else the intended semantics need to be spelled out as they
were in PEP 484 so folks can judge the proposal accurately, rather
than attempting to judge it based on an invalid premise.

For initialised variables, the equivalence between the two PEPs is
straightforward: "x: T = expr" is equivalent to "x = expr # type: T"

If PEP 526 always required an initialiser, and didn't introduce
ClassVar, there'd be no controversy, and we'd already be done.

However, the question of "Does this new syntax necessarily imply the
introduction of new semantics?" gets a lot murkier for uninitialised
variables.

A strict "no new semantics beyond PEP 484" interpretation would mean
that these need to be interpreted the same way as parameter
annotations: as a type hint on the outcome of the code executed up to
that point, rather than as a type constraint on assignment statements
in the code *following* that point.

Consider:

    def simple_appender(base: List[T], value: T) -> None:
        base.append(value)

This will typecheck fine - lists have append methods, and the value
appended conforms to what our list expects.

The parameter annotations mainly act as constraints on how this
function is *called*, with the following all being problematic:

simple_appender([1, 2, 3], "hello") # Container/value type mismatch
simple_appender([1, 2, 3], None) # Value is not optional
simple_appender((1, 2, 3), 4) # A tuple is not a list

However, because of the way name binding in Python works, the
annotations in *no way* constrain assignments inside the function
body:

    def not_so_simple_appender(base: List[T], value: T) -> None:
        other_ref = base
        base = value
        other_ref.append(base)

From a dynamic typechecking perspective, that's just as valid as the
original implementation, since the "List[T]" type of "other_ref" is
inferred from the original type of "base" before it gets rebound to
value and has its new type inferred as "T".

This freedom to rebind an annotated name without a typechecker
complaining is what Mark is referring to when he says that PEP 484
attaches annotations to expressions rather than variables.

Under such "parameter annotation like" semantics, uninitialised
variable annotations would only make sense as a new form of
post-initialisation assertion, and perhaps as some form of
Eiffel-style class invariant documentation syntax.

The usage to help ensure code correctness in multi-branch
initialisation cases would then look something like this:

    if case1:
        x = ...
    elif case2:
        x = ...
    else:
        x = ...
    assert x : List[T]  # If we get to here without x being List[T],
                        # something's wrong

The interpreter could then optimise type assertions out entirely at
function level (even in __debug__ mode), and turn them into
annotations at module and class level (with typecheckers then deciding
how to process them).

That's not what the PEP proposes for uninitialised variables though:
it proposes processing them *before* a series of assignment
statements, which *only makes sense* if you plan to use them to
constrain those assignments in some way.

If you wanted to write something like that under a type assertion
spelling, then you could enlist the aid of the "all" builtin:

    assert all(x) : List[T]  # All local assignments to "x" must abide
                             # by this constraint
    if case1:
        x = ...
    elif case2:
        x = ...
    else:
        x = ...

So I've come around to the point of view of being a solid -1 on the
PEP as written - despite the best of intentions, it strongly
encourages "assert all(x): List[T]" as the default interpretation of
uninitialised variable annotations, and doesn't provide an easy way to
do arbitrary inline type assertions to statically check the
correctness of the preceding code the way we can with runtime
assertions and as would happen if the code in question was factored
out to an annotated function.
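
For contrast, the "factor it out into an annotated function" spelling
that already gets checked today might look like this (a rough sketch
with placeholder values):

    from typing import List

    def choose_x(case1: bool, case2: bool) -> List[int]:
        if case1:
            return [1]
        elif case2:
            return [2]
        return [3]

    x = choose_x(case1=True, case2=False)
    # the return annotation plays the role of the inline
    # "assert x : List[T]" assertion sketched above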

Stick the "assert" keyword in front of them though, call them type
assertions rather than type declarations, and require all() when you
want to constrain all assignments later in the function (or until the
next relevant type assertion), and I'm a solid +1.

Cheers,
Nick.

--
Nick Coghlan | ncog...@gmail.com | Brisbane, Australia

Koos Zevenhoven

unread,
Sep 5, 2016, 10:01:05 AM9/5/16
to Nick Coghlan, Python-Dev
Sorry, I don't have time to read emails of this length now, and
perhaps I'm interpreting your emails more literally than you write
them, anyway.

If PEP 484 introduces unnecessary restrictions at this point, that's a
separate issue. I see no need to copy those into PEP 526. I'll be
posting my own remaining concerns regarding PEP 526 when I find the
time.

-- Koos
--
+ Koos Zevenhoven + http://twitter.com/k7hoven +

Nick Coghlan

unread,
Sep 5, 2016, 10:03:57 AM9/5/16
to Koos Zevenhoven, Python-Dev
On 5 September 2016 at 23:46, Nick Coghlan <ncog...@gmail.com> wrote:
> Under such "parameter annotation like" semantics, uninitialised
> variable annotations would only make sense as a new form of
> post-initialisation assertion, and perhaps as some form of
> Eiffel-style class invariant documentation syntax.

Thinking further about the latter half of that comment, I realised
that the PEP 484 equivalence I'd like to see for variable annotations
in a class body is how they would relate to a property definition
using the existing PEP 484 syntax.

For example, consider:

class AnnotatedProperty:

    @property
    def x(self) -> int:
        ...

    @x.setter
    def x(self, value: int) -> None:
        ...

    @x.deleter
    def x(self) -> None:
        ...

It would be rather surprising if that typechecked differently from:

class AnnotatedVariable:

    x: int

For ClassVar, you'd similarly want:


class AnnotatedClassVariable:

    x: ClassVar[int]

to typecheck like "x" was declared as an annotated property on the metaclass.
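
Roughly this shape, as an illustrative sketch rather than anything the
PEP actually spells out:

class AnnotatedMeta(type):

    @property
    def x(cls) -> int:
        ...

class AnnotatedClassVariable(metaclass=AnnotatedMeta):
    ...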

Mark Shannon

unread,
Sep 5, 2016, 10:21:21 AM9/5/16
to pytho...@python.org


On 04/09/16 21:16, Guido van Rossum wrote:
> Everybody please stop panicking. PEP 526 does not make a stand on the
> behavior of type checkers (other than deferring to PEP 484). If you
> want to start a discussion about constraining type checkers please do
> it over at python-ideas. There is no rush as type checkers are not
> affected by the feature freeze.
>

Indeed, we shouldn't panic. We should take our time, review this
carefully and make sure that the version of typehints that lands in 3.7
is one that most of us are happy with and all of us can at least
tolerate.

Cheers,
Mark.

Koos Zevenhoven

unread,
Sep 5, 2016, 10:26:21 AM9/5/16
to Nick Coghlan, Python-Dev
On Mon, Sep 5, 2016 at 5:02 PM, Nick Coghlan <ncog...@gmail.com> wrote:
> On 5 September 2016 at 23:46, Nick Coghlan <ncog...@gmail.com> wrote:
>> Under such "parameter annotation like" semantics, uninitialised
>> variable annotations would only make sense as a new form of
>> post-initialisation assertion,

Why not discuss this in the python-ideas thread where I quote myself
from last Friday regarding the notion of annotations as assertions?

>> and perhaps as some form of
>> Eiffel-style class invariant documentation syntax.

I hope this is simpler than it sounds :-)

> Thinking further about the latter half of that comment, I realised
> that the PEP 484 equivalence I'd like to see for variable annotations
> in a class body is how they would relate to a property definition
> using the existing PEP 484 syntax.
>
> For example, consider:
>
> class AnnotatedProperty:
>
>     @property
>     def x(self) -> int:
>         ...
>
>     @x.setter
>     def x(self, value: int) -> None:
>         ...
>
>     @x.deleter
>     def x(self) -> None:
>         ...
>
> It would be rather surprising if that typechecked differently from:
>
> class AnnotatedVariable:
>
>     x: int
>

How about just using the latter way? That's much clearer. I doubt this
needs a change in the PEP.

> For ClassVar, you'd similarly want:
>
>
> class AnnotatedClassVariable:
>
>     x: ClassVar[int]
>
> to typecheck like "x" was declared as an annotated property on the metaclass.
>

Sure, there are many things that one may consider equivalent. I doubt
you'll be able to list them all in a way that everyone agrees on. And
I hope you don't take this as a challenge -- I'm in the don't-panic
camp :).


-- Koos


> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncog...@gmail.com | Brisbane, Australia



--
+ Koos Zevenhoven + http://twitter.com/k7hoven +

Guido van Rossum

unread,
Sep 5, 2016, 11:36:36 AM9/5/16
to Mark Shannon, Python-Dev
On Mon, Sep 5, 2016 at 7:19 AM, Mark Shannon <ma...@hotpy.org> wrote:
> On 04/09/16 21:16, Guido van Rossum wrote:
>>
>> Everybody please stop panicking. PEP 526 does not make a stand on the
>> behavior of type checkers (other than deferring to PEP 484). If you
>> want to start a discussion about constraining type checkers please do
>> it over at python-ideas. There is no rush as type checkers are not
>> affected by the feature freeze.
>>
>
> Indeed, we shouldn't panic. We should take our time, review this carefully
> and make sure that the version of type hints that lands in 3.7 is one that
> most of us are happy with and all of us can at least tolerate.

Right, we want the best possible version to land in 3.7. And in order
to make that possible, I have to accept it *provisionally* for 3.6 and
Ivan's implementation will go into 3.6b1. We will then have until 3.7
to experiment with it and tweak it as necessary.

Maybe ClassVar will turn out to be pointless. Maybe we'll decide that
we want to have a syntax for quickly annotating several variables with
the same type (x, y, z: T). Maybe we'll change the rules for how or
when __annotations__ is updated. Maybe we'll adjust whether we allow
annotating complex assignment targets like x[f()].

But without starting the experiment now we won't be able to evaluate
any of those things. Waiting until 3.7 is just going to cause the
exact same discussions that are going on now 18 months from now.

Regarding how type checkers should use the new syntax, PEP 526 itself
gives barely more guidance than PEP 3107, except that we now have PEP
484 to tell us what types ought to look like, *if* you want to use an
external type checker.

I hope that you and others will help write another PEP
(informational?) to guide type checkers and their users. Given my own
experience at Dropbox (much of it vicariously through the eyes of the
many Dropbox engineers annotating their own code) I am *very*
reluctant to try and specify the behavior of a type checker formally
myself. As anyone who has used mypy on a sizeable project knows, there
are a lot more details to sort out than how to handle branches that
assign different values to the same variable.

For people who want to read about what it is like to use mypy
seriously, I can recommend the series of three blog posts by Daniel
Moisset starting here:
http://www.machinalis.com/blog/a-day-with-mypy-part-1/

If you want to see a large open source code base that's annotated for
mypy (with 97% coverage), I recommend looking at Zulip:
https://github.com/zulip/zulip
Try digging through the history and looking for commits mentioning
mypy; a Google Summer of Code student did most of the work over the
summer. (The syntax used is the Python-2-compatible version, but
that's hardly relevant -- the important things to observe include how
they use types and how they had to change their code to pacify mypy.)

--
--Guido van Rossum (python.org/~guido)

Steven D'Aprano

unread,
Sep 5, 2016, 12:20:40 PM9/5/16
to pytho...@python.org
On Mon, Sep 05, 2016 at 04:40:08PM +0300, Koos Zevenhoven wrote:

> On Mon, Sep 5, 2016 at 3:10 PM, Steven D'Aprano <st...@pearwood.info> wrote:
> >
> > [Koos Zevenhoven]
> >> >> How is it going to help that these are equivalent within one checker,
> >> >> if the meaning may differ across checkers?
> >
> > Before I can give an answer to your [Koos'] question, I have to
> > understand what you see as the problem here.
>
> The problem was the suggested restrictive addition to PEP 526 with
> no proper justification, especially since the PEP was not supposed to
> restrict the semantics of type checking.

What "suggested restrictive addition into PEP 526" are you referring to?

Please be specific.


> I was asking how it would
> help to add that restriction. Very simple. Maybe some people got
> confused because I did want to *discuss* best practices for type
> checking elsewhere.

I still can't answer your question, because I don't understand what
restriction you are talking about. Unless you mean the restriction that
variable annotations are to mean the same thing whether they are written
as `x:T = v` or `x = v #type: T`. I don't see this as a restriction.



> > The type comment syntax is required for Python 2 and backwards-
> > compatibility. That's a given.
>
> Sure, but not all type checkers will have to care about Python 2.

They will have to care about type comments until such time as they are
ready to abandon all versions of Python older than 3.6.

And even then, there will probably be code still written with type
comments until Python 4000.


--
Steve

Chris Angelico

unread,
Sep 5, 2016, 12:35:32 PM9/5/16
to python-dev
On Tue, Sep 6, 2016 at 2:17 AM, Steven D'Aprano <st...@pearwood.info> wrote:
>> > The type comment syntax is required for Python 2 and backwards-
>> > compatibility. That's a given.
>>
>> Sure, but not all type checkers will have to care about Python 2.
>
> They will have to care about type comments until such time as they are
> ready to abandon all versions of Python older than 3.6.

More specifically, until *the code they check* can abandon all <3.6.
If the checker itself depends on new features (say, an improved AST
parser that retains inline comments for subsequent evaluation), you
could say "You must have Python 3.6 or better to use this checker",
but the application itself would still be able to run on older
versions. That's another reason not to delay this PEP until 3.7, as
it'd push _everything_ another 18 months (or more) into the future.

ChrisA

Ethan Furman

unread,
Sep 5, 2016, 1:25:29 PM9/5/16
to pytho...@python.org
On 09/05/2016 06:46 AM, Nick Coghlan wrote:

[an easy to understand explanation for those of us who aren't type-inferring gurus]

Thanks, Nick. I think I finally have a grip on what Mark was talking about, and about how these things should work.

Much appreciated!

--
~Ethan~

Guido van Rossum

unread,
Sep 5, 2016, 2:17:23 PM9/5/16
to Ethan Furman, Nick Coghlan, Python-Dev
On Mon, Sep 5, 2016 at 10:24 AM, Ethan Furman <et...@stoneleaf.us> wrote:
> On 09/05/2016 06:46 AM, Nick Coghlan wrote:
>
> [an easy to understand explanation for those of us who aren't type-inferring
> gurus]
>
> Thanks, Nick. I think I finally have a grip on what Mark was talking about,
> and about how these things should work.
>
> Much appreciated!

There must be some misunderstanding. The message from Nick with that
timestamp (https://mail.python.org/pipermail/python-dev/2016-September/146200.html)
hinges on an incorrect understanding of the intention of annotations
without value (e.g. `x: Optional[int]`), leading to a -1 on the PEP.

I can't tell if this is an honest misunderstanding or a strawman, but
I want to set the intention straight.

First of all, the PEP does not require the type checker to interpret
anything in a particular way; it intentionally shies away from
prescribing semantics (other than the runtime semantics of updating
__annotations__ or verifying that the target appears assignable).

But there appears to be considerable fear about what expectations the PEP
has of a reasonable type checker. In response to this I'll try to
sketch how I think this should be implemented in mypy.

There are actually at least two separate cases: if x is a local
variable, the intention of `x: <type>` is quite different from when x
occurs in a class.

- When found in a class, all *uses* (which may appear in modules far
away from the definition) must be considered to conform to the stated
type -- as must all assignments to it, but I believe that's never been
in doubt. There are just too many edge cases to consider to make
stricter assumptions (e.g. threading, exceptions, signals), so that
even after seeing `self.x = 42; use(self.x)` the call to use() cannot
assume that self.x is still 42.

- But when found inside a function referring to a local variable, mypy
should treat the annotation as a restriction on assignment, and use
its own inference engine to type-check *uses* of that variable. So
that in this example (after Mark's):

def bar() -> Optional[int]: ...

def foo() -> int:
    x: Optional[int]
    x = bar()
    if x is None:
        return -1
    return x

there should not be an error on `return x` because mypy is smart
enough to know it cannot be None at that point.

I am at a loss how to modify the PEP to avoid this misunderstanding,
since it appears it is entirely in the reader's mind. The PEP is not a
tutorial but a spec for the implementation, and as a spec it is quite
clear that it leaves the type-checking semantics up to individual type
checkers. And I think that is the right thing to do -- in practice
there are many other ways to write the above example, and mypy will
understand some of them, but not others, while other type checkers may
understand a different subset of examples. I can't possibly prescribe
how type checkers should behave in each case -- I can't even tell
which cases are important to distinguish.
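
For instance, here are two other common ways of spelling the same
narrowing; whether a given checker handles each of them is exactly the
kind of quality-of-implementation detail I mean (the function names are
just for illustration):

from typing import Optional

def bar() -> Optional[int]: ...

def foo_ternary() -> int:
    x = bar()
    return -1 if x is None else x   # narrowing via a conditional expression

def foo_assert() -> int:
    x = bar()
    assert x is not None            # runtime assert that also narrows the type
    return x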

So writing down "the type checker should not report an error in the
following case" in the PEP is not going to be helpful for anyone (in
contrast, I think discussing examples on a mailing list *is* useful).
Like a linter, a type checker has limited intelligence, and it will be
a quality of implementation issue as to how useful a type checker will
be in practice. But that's not the topic of PEP 526.

--
--Guido van Rossum (python.org/~guido)

Terry Reedy

unread,
Sep 5, 2016, 2:59:57 PM9/5/16
to pytho...@python.org
On 9/5/2016 11:34 AM, Guido van Rossum wrote:
> On Mon, Sep 5, 2016 at 7:19 AM, Mark Shannon <ma...@hotpy.org> wrote:

>> Indeed, we shouldn't panic. We should take our time, review this carefully
>> and make sure that the version of type hints that lands in 3.7 is one that
>> most of us are happy with and all of us can at least tolerate.
>
> Right, we want the best possible version to land in 3.7. And in order
> to make that possible, I have to accept it *provisionally* for 3.6 and

Until now, the 'provisional' part has not been clear to me, and
presumably to others who have written as if acceptance meant 'set in
stone'. We have had provisional modules, but not, as far as I can recall,
syntax that remains provisional past the x.y.0 release.

> Ivan's implementation will go into 3.6b1. We will then have until 3.7
> to experiment with it and tweak it as necessary.

New syntax is usually implemented within Python itself, and can be fully
experimented with during the alpha and beta releases. In this case, the
effective implementation will be in third-party checkers, and
experimentation will take longer.

--
Terry Jan Reedy

Ivan Levkivskyi

unread,
Sep 5, 2016, 3:03:20 PM9/5/16
to Guido van Rossum, Nick Coghlan, Python-Dev
On 5 September 2016 at 20:15, Guido van Rossum <gu...@python.org> wrote:
> There are actually at least two separate cases: if x is a local
> variable, the intention of `x: <type>` is quite different from when x
> occurs in a class.

If I understand you correctly, this also matches my mental model.
In local scope

x: ann = value

acts like a filter, allowing only something compatible to be assigned
at this point (and/or casting to a more precise type). In a class or
module, by contrast, it is part of an "API specification" for that class/module.
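
A tiny sketch of the two readings (the names are made up, and the
comments describe how I would expect a reasonable checker to behave,
not something the PEP requires):

from typing import Optional

def local_use() -> None:
    x: Optional[int] = None   # local scope: constrains what may be bound to x,
    x = 3                     # but a checker can still narrow x to int here
    x = "three"               # and should reject an incompatible assignment

class Record:
    id: int                   # class scope: part of Record's declared API;
                              # code far from this definition may rely on it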
 
> I am at a loss how to modify the PEP to avoid this misunderstanding,
> since it appears it is entirely in the reader's mind. The PEP is not a
> tutorial but a spec for the implementation, ...

I was thinking about changing the terminology to "name annotations", but that will
not solve the problem. The PEP mentions a separate document (guidelines) that will be published.
I think a real solution will be to make a separate PEP that explains in detail
what the preferred meaning of types is and what people and machines could do with types.

Is anyone interested in going in this direction? I would especially like to invite Mark;
you have a lot of experience with type inference that would be very helpful
(and it seems to me that you are concerned about this).

--
Ivan

Nick Coghlan

unread,
Sep 5, 2016, 11:53:51 PM9/5/16
to Guido van Rossum, Python-Dev
On 6 September 2016 at 04:15, Guido van Rossum <gu...@python.org> wrote:
> On Mon, Sep 5, 2016 at 10:24 AM, Ethan Furman <et...@stoneleaf.us> wrote:
>> On 09/05/2016 06:46 AM, Nick Coghlan wrote:
>>
>> [an easy to understand explanation for those of us who aren't type-inferring
>> gurus]
>>
>> Thanks, Nick. I think I finally have a grip on what Mark was talking about,
>> and about how these things should work.
>>
>> Much appreciated!
>
> There must be some misunderstanding. The message from Nick with that
> timestamp (https://mail.python.org/pipermail/python-dev/2016-September/146200.html)
> hinges on an incorrect understanding of the intention of annotations
> without value (e.g. `x: Optional[int]`), leading to a -1 on the PEP.

Short version of below: after sleeping on it, I'd be OK with the PEP
again if it just *added* the explicit type assertions, such that the
shorthand notation could be described in those terms.

Specifically, "x: T = expr" would be syntactic sugar for:

x = expr
assert x: T

While the bare "x: T" would be syntactic sugar for:

assert all(x): T

which in turn would imply that all future bindings of that assignment
target should be accompanied by a type assertion (and typecheckers may
differ in how they define "all future bindings").

Even if everyone always writes the short forms, the explicit
assertions become a useful aid in explaining what those short forms
mean.

The main exploratory question pushed back to the typechecking
community to answer by 3.7 would then be to resolve precisely what
"assert all(TARGET): ANNOTATION" means for different kinds of target
and for different scopes (e.g. constraining nonlocal name rebindings
in closures, constraining attribute rebinding in modules, classes, and
instances).
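
To make those cases concrete (written with the PEP's own syntax, since
what a checker should do with each of them is precisely the open
question; the names are purely illustrative):

def make_counter():
    count: int = 0
    def bump() -> None:
        nonlocal count
        count = "oops"     # should a checker flag this nonlocal rebinding?
    return bump

class Box:
    content: str = ""

box = Box()
box.content = 42           # and this attribute rebinding on an instance?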

> I can't tell if this is an honest misunderstanding or a strawman, but
> I want to set the intention straight.

I'm pretty sure I understand your intentions (and broadly agree with
them), I just also agree with Mark that people are going to need some
pretty strong hints that these are not Java/C/C++/C# style type
declarations, and am suggesting a different way of getting there by
being more prescriptive about your intended semantics.

Specifically:

* for 3.6, push everything into a new form of assert statement and
define those assertions as syntactic sugar for PEP 484 constructs
* for 3.7 (and provisionally in 3.6), consider blessing some of those
assertions with the bare annotation syntax

Folks are already comfortable with the notion of assertions not
necessarily being executed at runtime, and they're also comfortable
with them as a way of doing embedded correctness testing inline with
the code.

> First of all, the PEP does not require the type checker to interpret
> anything in a particular way; it intentionally shies away from
> prescribing semantics (other than the runtime semantics of updating
> __annotations__ or verifying that the target appears assignable).

Unfortunately, the ordering problem the PEP introduces means it pushes
very heavily in a particular direction, such that I think we're going
to be better off if you actually specify draft semantics in the PEP
(in terms of existing PEP 484 annotations), rather than leaving it
completely open to interpretation. It's still provisional so you can
change your mind later, but the notion of describing a not yet bound
name is novel enough that I think more guidance (even if it's
provisional) is needed here than was needed in the case of function
annotations.

(I realise you already understand most of the background I go through
below - I'm spelling out my reasoning so you can hopefully figure out
where I'm diverging from your point of view)

If we look at PEP 484, all uses of annotations exist between two
pieces of code: one that produces a value, and one that binds the
value to a reference.

As such, they act as type assertions:

- on parameters, they assert "I am expecting this argument to be of this type"
- on assignments, they assert "I am expecting this initialiser to be
of this type"

Typecheckers can then use those assertions in two ways: as a
constraint on the value producer, and as a more precise hint if type
inference either isn't possible (e.g. function parameters,
initialisation to None), or gives an overly broad answer (e.g. empty
containers).

The "x: T = expr" syntax is entirely conformant with that system - all
it does is change the spelling of the existing type hint comments.

Allowing "assert x: T" would permit that existing kind of type
assertion to be inserted at arbitrary points in the code without
otherwise affecting control flow or type inference, as if you had
written:

# PEP 484
def is_T(arg: T) -> None:
    pass

is_T(x)

Or:

# PEP 526
x: T = x

By contrast, bare annotations on new assignment targets without an
initialiser can't be interpreted that way, as there is no *preceding
value to constrain*.

That inability to interpret them in the same sense as existing
annotations means that there's really only one plausible way to
interpret them if a typechecker is going to help ensure that the type
assertion is actually true in a given codebase: as a constraint on
*future* bindings to that particular target.

Typecheckers may differ in how they enforce that constraint, and how
the declared constraint influences the type inference process, but
that "explicit declaration of implicit future type assertions" is core
to the notion of bare variable annotations making any sense at all.

That's a genuinely new concept to introduce into the language, and the
PEP quite clearly intends bare annotations to be used that way given
its discussion of class invariants and the distinction between
instance variables with a class level default and class variables that
shouldn't be shadowed on instances.

> But there appears to be considerable fear about what expectations the PEP
> has of a reasonable type checker. In response to this I'll try to
> sketch how I think this should be implemented in mypy.
>
> There are actually at least two separate cases: if x is a local
> variable, the intention of `x: <type>` is quite different from when x
> occurs in a class.

This is where I think the "assert all(x): T" notation is useful, as it
changes that core semantic question to "What does 'all' mean for a
type assertion?"

Based on your stated intentions for mypy, it provisionally means:

* for a local variable, "all future bindings in the current scope".

* for a class or module variable, "all future bindings in the current
scope, and all future bindings via attribute access".

Both initialised and bare variable annotations can then be defined as
syntactic sugar for explicit type assertions:

# Initialised annotation
x: T = expr

x = expr
assert x: T # Equivalent type assertion

# Bare annotation
x: T
x = expr

assert all(x): T # Equivalent type assertion
x = expr
assert x: T # Assertion implied by all(x) above

(A full expansion would also show setting __annotations__, but that's
not my main concern here)
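
(For completeness, the runtime side the PEP does pin down is easy to
demonstrate; the class and names below are purely illustrative:)

class Config:
    retries: int           # bare annotation: recorded, but no attribute is created
    timeout: float = 5.0   # initialised annotation: recorded and assigned

print(Config.__annotations__)      # {'retries': <class 'int'>, 'timeout': <class 'float'>}
print(hasattr(Config, 'retries'))  # False
print(Config.timeout)              # 5.0

def f() -> None:
    x: int                 # in a function body the annotation is not evaluated
                           # at runtime, and x stays unbound until assigned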

> I am at a loss how to modify the PEP to avoid this misunderstanding,
> since it appears it is entirely in the reader's mind. The PEP is not a
> tutorial but a spec for the implementation, and as a spec it is quite
> clear that it leaves the type-checking semantics up to individual type
> checkers. And I think that is the right thing to do -- in practice
> there are many other ways to write the above example, and mypy will
> understand some of them, but not others, while other type checkers may
> understand a different subset of examples. I can't possibly prescribe
> how type checkers should behave in each case -- I can't even tell
> which cases are important to distinguish.

Providing an easier path to decomposing the new syntax into
pre-existing PEP 484 semantics would definitely help me, and I suspect
it would help other folks as well.

Recapping:

* Introduce "assert TARGET: ANNOTATION" as a new noop-at-runtime
syntactic primitive that typechecks as semantically equivalent to:

def _conforms_to_type(x: ANNOTATION) -> None:
    pass
_conforms_to_type(TARGET)

* Introduce "assert all(TARGET): ANNOTATION" as a way to declaratively
annotate future assignments to a particular target

* Define variable annotations in terms of those two new primitives

* Make it clear that there's currently still room for semantic
variation between typecheckers in defining precisely what "assert
all(TARGET): ANNOTATION" means

> So writing down "the type checker should not report an error in the
> following case" in the PEP is not going to be helpful for anyone (in
> contrast, I think discussing examples on a mailing list *is* useful).

Yeah, I've come around to agreeing with you on that point.

Cheers,
Nick.

--
Nick Coghlan | ncog...@gmail.com | Brisbane, Australia

Guido van Rossum

unread,
Sep 6, 2016, 12:07:12 AM9/6/16
to Nick Coghlan, Python-Dev
I'm sorry, but we're not going to invent new syntax this late in the
game. The syntax proposed by the PEP has been on my mind ever since
PEP 484 with very minor variations; I first proposed it seriously on
python-ideas over a month ago, we've been debating the details since
then, and it's got a solid implementation based on those debates by
Ivan Levkivskyi. In contrast, it looks like you just made the "assert
x: T" syntax up last night in response to the worries expressed by
Mark Shannon, and "assert" sounds a lot like a run-time constraint to
me.

Instead, I encourage you to participate in the writing of a separate
PEP explaining how type checkers are expected to work (since PEP 526
doesn't specify that). Ivan is also interested in such a PEP and we
hope Mark will also lend us his expertise.
--
--Guido van Rossum (python.org/~guido)

Ian Foote

unread,
Sep 6, 2016, 11:38:33 AM9/6/16
to pytho...@python.org
On 05/09/16 14:46, Nick Coghlan wrote:
> That's not what the PEP proposes for uninitialised variables though:
> it proposes processing them *before* a series of assignment
> statements, which *only makes sense* if you plan to use them to
> constrain those assignments in some way.
>
> If you wanted to write something like that under a type assertion
> spelling, then you could enlist the aid of the "all" builtin:
>
>     assert all(x) : List[T]  # All local assignments to "x" must abide
>                              # by this constraint
>     if case1:
>         x = ...
>     elif case2:
>         x = ...
>     else:
>         x = ...
>

Would the `assert all(x)` be executed at runtime as well or would this
be syntax only for type checkers? I think this particular spelling at
least is potentially confusing.

Regards,
Ian F


Nick Coghlan

unread,
Sep 6, 2016, 12:08:30 PM9/6/16
to Guido van Rossum, Python-Dev
On 6 September 2016 at 14:04, Guido van Rossum <gu...@python.org> wrote:
> I'm sorry, but we're not going to invent new syntax this late in the
> game. The syntax proposed by the PEP has been on my mind ever since
> PEP 484 with very minor variations; I first proposed it seriously on
> python-ideas over a month ago, we've been debating the details since
> then, and it's got a solid implementation based on those debates by
> Ivan Levkivskyi. In contrast, it looks like you just made the "assert
> x: T" syntax up last night in response to the worries expressed by
> Mark Shannon, and "assert" sounds a lot like a run-time constraint to
> me.

That's a fair description, but the notation also helped me a lot in
articulating the concepts I was concerned about without having to put
dummy annotated functions everywhere :)

> Instead, I encourage you to participate in the writing of a separate
> PEP explaining how type checkers are expected to work (since PEP 526
> doesn't specify that). Ivan is also interested in such a PEP and we
> hope Mark will also lend us his expertise.

Aye, I'd be happy to help with that - I think everything proposed can
be described in terms of existing PEP 484 primitives and the
descriptor protocol, so the requirements on typecheckers would just be
for them to be self-consistent, rather than defining fundamentally new
behaviours.

Cheers,
Nick.

--
Nick Coghlan | ncog...@gmail.com | Brisbane, Australia

Nick Coghlan

unread,
Sep 6, 2016, 12:15:28 PM9/6/16
to Ian Foote, pytho...@python.org

Only for typecheckers, same as the plans for function level bare
annotations. Otherwise it wouldn't work, since you'd be calling
"all()" on a non-iterable :)

Guido doesn't like the syntax though, so the only place it would ever
appear is explanatory notes describing the purpose of the new syntax,
and hence can be replaced by something like:

# After all future assignments to x, check that x conforms to T

Cheers,
Nick.

P.S. Or, if you're particularly fond of mathematical notation, and we
take type categories as sets:

# ∀x: x ∈ T

That would be a singularly unhelpful explanatory comment for the vast
majority of folks, though :)

Guido van Rossum

unread,
Sep 6, 2016, 12:24:59 PM9/6/16
to Nick Coghlan, Python-Dev
On Tue, Sep 6, 2016 at 9:00 AM, Nick Coghlan <ncog...@gmail.com> wrote:
> On 6 September 2016 at 14:04, Guido van Rossum <gu...@python.org> wrote:
>> I'm sorry, but we're not going to invent new syntax this late in the
>> game. The syntax proposed by the PEP has been on my mind ever since
>> PEP 484 with very minor variations; I first proposed it seriously on
>> python-ideas over a month ago, we've been debating the details since
>> then, and it's got a solid implementation based on those debates by
>> Ivan Levkivskyi. In contrast, it looks like you just made the "assert
>> x: T" syntax up last night in response to the worries expressed by
>> Mark Shannon, and "assert" sounds a lot like a run-time constraint to
>> me.
>
> That's a fair description, but the notation also helped me a lot in
> articulating the concepts I was concerned about without having to put
> dummy annotated functions everywhere :)

Thanks Nick! It seems your writing has helped some others (e.g.
Ethan) understand PEP 526.

>> Instead, I encourage you to participate in the writing of a separate
>> PEP explaining how type checkers are expected to work (since PEP 526
>> doesn't specify that). Ivan is also interested in such a PEP and we
>> hope Mark will also lend us his expertise.
>
> Aye, I'd be happy to help with that - I think everything proposed can
> be described in terms of existing PEP 484 primitives and the
> descriptor protocol, so the requirements on typecheckers would just be
> for them to be self-consistent, rather than defining fundamentally new
> behaviours.

Beware that there are by now some major type checkers that already
claim conformance to PEP 484 in various ways: mypy, pytype, PyCharm,
and probably a checker at Semmle.com, where Mark works. Each one has
some specialty and each one is a work in progress, but a PEP shouldn't
start out by declaring the approach used by any existing checker
unlawful.

As an example, mypy doesn't yet support Optional by default: it
recognizes the syntax but it doesn't distinguish between e.g. int and
Optional[int]. (It will do the right thing when you pass the
`--strict-optional` flag, but there are still some issues with that
before we can make it the default behavior.)
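
For example (the functions here are hypothetical, just to show the
effect of the flag):

from typing import Optional

_users = {"guido": "Guido van Rossum"}

def find_user(name: str) -> Optional[str]:
    return _users.get(name)    # may return None

def greet(name: str) -> str:
    return "Hello " + name

greet(find_user("guido"))   # mypy's default mode accepts this call today;
                            # with --strict-optional it reports that
                            # Optional[str] is not compatible with str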

As another example: mypy understands isinstance() checks so that e.g.
the following works:

def foo(x: Union[int, str]) -> str:
    if isinstance(x, str):
        return x
    return str(x)

I don't think you can find anything in PEP 484 that says this should
work; but without it mypy would be much less useful. (The example here
is silly, but such code appears in real life frequently.)

One final thought: this is not the first time that Python has used
syntax that looks like another language but gives it a different
meaning. In fact, apart from `if`, almost everything in Python works
differently than it works in C++ or Java. So I don't worry much about
that.

--
--Guido van Rossum (python.org/~guido)