I asked a handful of folks at the Education Summit the next day about it:
* for the basic notion of allowing expression level name binding using
the "NAME := EXPR" notation, the reactions ranged from mildly negative
(I read it as only a "-0" rather than a "-1") to outright positive.
* as for the reactions to my description of the currently proposed
parent local scoping behaviour in comprehensions, I'd use the word
"horrified", and don't feel I'm overstating the response :)
While I try to account for the fact that I implemented the current
comprehension semantics for the 3.x series, and am hence biased
towards considering them the now obvious interpretation, it's also the
case that generator expressions have worked like nested functions
since they were introduced in Python 2.4 (more than 13 years ago now),
and comprehensions have worked the same way as generator expressions
since Python 3.0 (which has its 10th birthday coming up in December
this year).
This means that I take any claims that the legacy Python 2.x
interpretation of comprehension behaviour is intuitively obvious with
an enormous grain of salt - for the better part of a decade now, every
tool at a Python 3 user's disposal (the fact that the iteration
variable is hidden from the current scope, reading the language
reference [1], printing out locals(), using the dis module, stepping
through code in a debugger, writing their own tracing function, and
even observing the quirky interaction with class scopes) will have
nudged them towards the "it's a hidden nested function" interpretation
of expected comprehension behaviour.
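The "hidden nested function" interpretation can be made concrete. The following is an illustrative sketch (the helper names are invented, and this is not the exact code CPython generates), showing both the hand-written equivalent and the fact that the iteration variable stays hidden:

```python
# Roughly how Python 3 executes [x * 2 for x in data]: an implicit
# nested function with its own local scope (helper names invented).
def expanded(data):
    def _listcomp(iterable):
        result = []
        for x in iterable:        # x is local to the hidden function
            result.append(x * 2)
        return result
    return _listcomp(iter(data))

def comprehension(data):
    return [x * 2 for x in data]

assert expanded([1, 2, 3]) == comprehension([1, 2, 3]) == [2, 4, 6]

def no_leak():
    _ = [i for i in range(3)]
    return "i" in locals()        # the iteration variable is hidden

assert no_leak() is False
```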
Acquiring the old mental model for the way comprehensions work pretty
much requires a developer to have started with Python 2.x themselves
(perhaps even before comprehensions and lexical closures were part of
the language), or else have been taught the Python 2 comprehension
model by someone else - there's nothing in Python 3's behaviour to
encourage that point of view, and plenty of
functional-language-inspired documentation to instead encourage folks
to view comprehensions as tightly encapsulated declarative container
construction syntax.
I'm currently working on a concept proposal at
https://github.com/ncoghlan/peps/pull/2 that's much closer to PEP 572
than any of my previous `given` based suggestions: for already
declared locals, it devolves to being the same as PEP 572 (except that
expressions are allowed as top level statements), but for any names
that haven't been previously introduced, it prohibits assigning to a
name that doesn't already have a defined scope, and instead relies on
a new `given` clause on various constructs that allows new target
declarations to be introduced into the current scope (such that "if
x := f():" implies "x" is already defined as a target somewhere else in
the current scope, while "if x := f() given x:" potentially introduces
"x" as a new local target the same way a regular assignment statement
does).
One of the nicer features of the draft proposal is that if all you
want to do is export the iteration variable from a comprehension, you
don't need to use an assignment expression at all: you can just append
"... given global x" or "... given nonlocal x" and export the
iteration variable directly to the desired outer scope, the same way
you can in the fully spelled out nested function equivalent.
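That "fully spelled out nested function equivalent" works in current Python; here is a sketch (function names invented) of exporting the iteration variable by declaring the target `nonlocal` in the hand-written expansion:

```python
# Sketch: exporting a comprehension's iteration variable by writing the
# equivalent nested function by hand and declaring the target nonlocal.
def evens_and_last(numbers):
    x = None
    def _listcomp(iterable):
        nonlocal x                    # export the loop variable outward
        result = []
        for x in iterable:
            if x % 2 == 0:
                result.append(x)
        return result
    evens = _listcomp(iter(numbers))
    return evens, x                   # x survives in the outer scope

assert evens_and_last([1, 2, 3, 4, 5]) == ([2, 4], 5)
```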
Cheers,
Nick.
[1] From https://docs.python.org/3.0/reference/expressions.html#displays-for-lists-sets-and-dictionaries:
'Note that the comprehension is executed in a separate scope, so names
assigned to in the target list don’t “leak” in the enclosing scope.'
--
Nick Coghlan | ncog...@gmail.com | Brisbane, Australia
_______________________________________________
Python-Dev mailing list
Pytho...@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/dev-python%2Bgarchive-30976%40googlegroups.com
Thank you. Personally, I'd like to see feedback from
educators/teachers after they take the time to read the PEP and take
some time to think about its consequences.
My main concern is we're introducing a second different way of doing
something which is really fundamental.
On Sat, Jun 23, 2018 at 3:02 AM, Michael Selik <mi...@selik.org> wrote:
> On Fri, Jun 22, 2018 at 8:09 AM Antoine Pitrou <soli...@pitrou.net> wrote:
>>
>> Thank you. Personally, I'd like to see feedback from
>> educators/teachers after they take the time to read the PEP and take
>> some time to think about its consequences.
>
>
> I've started testing the proposed syntax when I teach. I don't have a large
> sample yet, but most students either dislike it or don't appreciate the
> benefits. They state a clear preference for shorter, simpler lines at the
> consequence of more lines of code.
This is partly because students, lacking the experience to instantly
recognize larger constructs, prefer a more concrete approach to
coding. "Good code" is code where the concrete behaviour is more
easily understood. As a programmer gains experience, s/he learns to
grok more complex expressions, and is then better able to make use of
the more expressive constructs such as list comprehensions.
I forgot to add that I don't anticipate changing my lesson plans if this proposal is accepted. There's already not enough time to teach everything I'd like. Including a new assignment operator would distract from the learning objectives.
> > > I've started testing the proposed syntax when I teach. I don't have a
> > > large
> > > sample yet, but most students either dislike it or don't appreciate the
> > > benefits. They state a clear preference for shorter, simpler lines at the
> > > consequence of more lines of code.
Of course they do -- they're less fluent at reading code. They don't
have the experience to judge good code from bad.
The question we should be asking is, do we only add features to Python
if they are easy for beginners? It's not that I especially want to add
features which *aren't* easy for beginners, but Python isn't Scratch and
"easy for beginners" should only be a peripheral concern.
> > This is partly because students, lacking the experience to instantly
> > recognize larger constructs, prefer a more concrete approach to
> > coding. "Good code" is code where the concrete behaviour is more
> > easily understood. As a programmer gains experience, s/he learns to
> > grok more complex expressions, and is then better able to make use of
> > the more expressive constructs such as list comprehensions.
> >
>
> I don't think that's the only dynamic going on here. List comprehensions
> are more expressive, but also more declarative and in Python they have nice
> parallels with SQL and speech patterns in natural language. The concept of
> a comprehension is separate from its particular expression in Python. For
> example, Mozilla's array comprehensions in Javascript are/were ugly [0].
Mozilla's array comprehensions are almost identical to Python's, aside
from a couple of trivial differences:
evens = [for (i of numbers) if (i % 2 === 0) i];
compared to:
evens = [i for i in numbers if (i % 2 == 0)]
- the inexplicable (to me) decision to say "for x of array" instead of
"for x in array";
- moving the expression to the end, instead of the beginning.
The second one is (arguably, though not by me) an improvement, since it
preserves a perfect left-to-right execution order within the
comprehension.
> Students who are completely new to programming can see the similarity of
> list comprehensions to spoken language.
o_O
I've been using comprehensions for something like a decade, and I can't
:-)
The closest analogy to comprehensions I know of is set builder notation
in mathematics, which is hardly a surprise. That's where Haskell got the
inspiration from, and their syntax is essentially an ASCIIfied version
of set builder notation:
Haskell: [(i,j) | i <- [1,2], j <- [1..4]]
Maths: {(i,j) : i ∈ {1, 2}, j ∈ {1...4}}
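For comparison, the same set builder example rendered in Python:

```python
# Python's spelling of the Haskell/set-builder example above.
pairs = [(i, j) for i in [1, 2] for j in range(1, 5)]
assert pairs == [(1, 1), (1, 2), (1, 3), (1, 4),
                 (2, 1), (2, 2), (2, 3), (2, 4)]
```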
I teach secondary school children maths, and if there's a plain English
natural language equivalent to set builder notation, neither I nor any
of my students, nor any of the text books I've read, have noticed it.
--
Steve
> But once it becomes a more common idiom, students will see it in the wild
> pretty early in their path to learning python. So we'll need to start
> introducing it earlier than later.
Students see many features early in their path. I've had people still
struggling with writing functions ask about metaclasses. People
will see async code everywhere. We don't have to teach *everything* at
once.
So if you absolutely need to teach it to a beginner, it
shouldn't be difficult once they understand the difference between an
expression and a statement.
Python's design principles are expressed in the Zen. They rather focus
on being no more complex than absolutely necessary, without prioritizing
either beginners or old-timers ("simple is better than complex",
"complex is better than complicated").
--
Regards,
Ivan
On 2018-06-22 19:46, Steven D'Aprano wrote:
> - the inexplicable (to me) decision to say "for x of array" instead of
> "for x in array";
I believe JavaScript has for…in, but as is usual for the language it was
broken, and they needed a few more tries to get it right. for…of is the
latest version and works as expected.
-Mike
Remember, the driving use-case which started this (ever-so-long)
discussion was the ability to push data into a comprehension and then
update it on each iteration, something like this:
x = initial_value()
results = [x := transform(x, i) for i in sequence]
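For reference, here is what that proposed one-liner has to be spelled as in current Python (`transform` and the data below are made-up placeholders):

```python
# Plain-loop equivalent of: results = [x := transform(x, i) for i in sequence]
# Runnable today; transform and the inputs are invented placeholders.
def transform(x, i):
    return x + i

x = 10                      # initial_value()
sequence = [1, 2, 3]
results = []
for i in sequence:
    x = transform(x, i)     # x is updated in the enclosing scope
    results.append(x)

assert results == [11, 13, 16]
assert x == 16              # the final value remains visible afterwards
```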
again, not a huge deal, just a little bit more complexity
[Guido]
> [...] IIRC (b) originated with Tim.
I'm not sure who came up with the idea first, but as I remember it, the
first mention of this came in a separate thread on Python-Ideas:
https://mail.python.org/pipermail/python-ideas/2018-April/049631.html
so possibly I'm to blame :-)
> But his essay on the topic, included as Appendix A (
> https://www.python.org/dev/peps/pep-0572/#appendix-a-tim-peters-s-findings)
> does not even mention comprehensions. However, he did post his motivation
> for (b) on python-ideas, IIRC a bit before PyCon; and the main text of the
> PEP gives a strong motivation (
> https://www.python.org/dev/peps/pep-0572/#scope-of-the-target).
> Nevertheless, maybe we should compromise and drop (b)?
I will have more to say about the whole "comprehensions are their own
scope" issue later. But I'd like to see Nick's proposed PEP, or at least
a draft of it, before making any final decisions.
If it came down to it, I'd be happy with the ability to declare an
assignment target nonlocal in the comprehension if that's what it takes.
What do you think of this syntax?
[global|nonlocal] simple_target := expression
Inside a comprehension, without a declaration, the target would be
sublocal (comprehension scope); that should make Nick happier :-)
[Guido]
> However, [Tim] did post his motivation for (b) on python-ideas, IIRC a bit
> before PyCon; and the main text of the PEP gives a strong motivation
> (https://www.python.org/dev/peps/pep-0572/#scope-of-the-target). Nevertheless,
> maybe we should compromise and drop (b)?
Two things to say about that. First, the original example I gave would be approximately as well addressed by allowing intended scopes to be declared in magically synthesized functions; like (say)

p = None  # to establish the intended scope of `p`
while any(<nonlocal p>  # split across lines just for readability
          n % p == 0 for p in small_primes):
    n //= p
It didn't really require an inline assignment, just a way to override the unwanted (in this case) "all `for` targets are local to the invisible function" rigid consequence of the implementation du jour.
Second, if it's dropped, then the PEP needs more words to define what happens in cases like the following, because different textual parts of a comprehension execute in different scopes, and that can become visible when bindings can be embedded:

def f():
    y = -1
    ys = [y for _ in range(y := 5)]
    print(y, ys)

Here `range(y := 5)` is executed in f's scope. Presumably the `y` in `y for` also refers to f's scope, despite that `y` textually _appears_ to be assigned to in the body of the listcomp, and so would - for that reason - be expected to be local to the synthesized function, and so raise `UnboundLocalError` when referenced. It's incoherent without detailed knowledge of the implementation.
def g():
    y = -1
    ys = [y for y in range(y := 5)]
    print(y, ys)

And here the `y` in `y for y` is local to the synthesized function, and presumably has nothing to do with the `y` in the `range()` call. That's incoherent in its own way.

Under the current PEP, all instances of `y` in `f` refer to the f-local `y`, and the listcomp in `g` is a compile-time error.
On Mon, Jun 25, 2018 at 4:06 AM, Steven D'Aprano <st...@pearwood.info> wrote:
>
> Remember, the driving use-case which started this (ever-so-long)
> discussion was the ability to push data into a comprehension and then
> update it on each iteration, something like this:
>
> x = initial_value()
> results = [x := transform(x, i) for i in sequence]
Which means there is another option.
5. Have the assignment be local to the comprehension, but the initial
value of ANY variable is looked up from the surrounding scopes.
That is: you will NEVER get UnboundLocalError from a
comprehension/genexp; instead, you will look up the name as if it were
in the surrounding scope, either getting a value or bombing with
regular old NameError.
Or possibly variations on this such as "the immediately surrounding
scope only", rather than full name lookups. It'd have an awkward
boundary somewhere, whichever way you do it.
This isn't able to send information *out* of a comprehension, but it
is able to send information *in*.
> Actually, I'm closer to -1 on (a) as well. I don't like := as a
> way of getting assignment in an expression. The only thing I would
> give a non-negative rating is some form of "where" or "given".
+1 to this; ‘:=’ doesn't convey the meaning well. Python's syntax
typically eschews cryptic punctuation in favour of existing words that
convey an appropriate meaning, and I agree with Greg that this would be
a better way to get this effect.
--
\ “Self-respect: The secure feeling that no one, as yet, is |
`\ suspicious.” —Henry L. Mencken |
_o__) |
Ben Finney
This thread started with a request for educator feedback, which I took to mean observations of student reactions. I've only had the chance to test the proposal on ~20 students so far, but I'd like the chance to gather more data for your consideration before the PEP is accepted or rejected.
On Sun, Jun 24, 2018 at 11:09 AM Steven D'Aprano <st...@pearwood.info> wrote:
> Remember, the driving use-case which started this (ever-so-long)
> discussion was the ability to push data into a comprehension and then
> update it on each iteration, something like this:
>
> x = initial_value()
> results = [x := transform(x, i) for i in sequence]

If that is the driving use-case, then the proposal should be rejected. The ``itertools.accumulate`` function has been available for a little while now and it handles this exact case. The accumulate function may even be more readable, as it explains the purpose explicitly, not merely the algorithm. And heck, it's a one-liner.

results = accumulate(sequence, transform)
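Making the accumulate suggestion concrete (with one wrinkle worth noting: accumulate yields the first element of the iterable unchanged before folding in the rest with the function, so the correspondence to the `:=` version is close but not exact):

```python
from itertools import accumulate

def transform(x, i):            # stand-in for whatever update is needed
    return x + i

sequence = [10, 1, 2, 3]
results = list(accumulate(sequence, transform))
# The first element is yielded as-is, then transform folds in the rest.
assert results == [10, 11, 13, 16]
```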
The benefits for ``any`` and ``all`` seem useful. Itertools has "first_true" in the recipes section.
While it feels intuitively useful, I can't recall ever writing something similar myself. For some reason, I (almost?) always want to find all (counter-)examples and aggregate them in some way -- min or max, perhaps -- rather than just get the first.
Even so, if it turns out those uses are quite prevalent, wouldn't a new itertool be better than a new operator? It's good to solve the general problem, but so far the in-comprehension usage seems to have only a handful of cases.
On Fri, Jun 22, 2018 at 9:14 PM Chris Barker via Python-Dev <pytho...@python.org> wrote:
> again, not a huge deal, just a little bit more complexity

I worry that Python may experience something of a "death by a thousand cuts" along the lines of the "Remember the Vasa" warning. Python's greatest strength is its appeal to beginners. Little bits of added complexity have a non-linear effect. One day, we may wake up and Python won't be recommended as a beginner's language.
On Fri, Jun 22, 2018 at 7:48 PM Steven D'Aprano <st...@pearwood.info> wrote:
> On Fri, Jun 22, 2018 at 10:59:43AM -0700, Michael Selik wrote:
> Of course they do -- they're less fluent at reading code. They don't
> have the experience to judge good code from bad.

On the other hand, an "expert" may be so steeped in a particular subculture that [they] no longer can distinguish esoteric from intuitive. Don't be so fast to reject the wisdom of the inexperienced.

> The question we should be asking is, do we only add features to Python
> if they are easy for beginners? It's not that I especially want to add
> features which *aren't* easy for beginners, but Python isn't Scratch and
> "easy for beginners" should only be a peripheral concern.

On the contrary, I believe that "easy for beginners" should be a major concern. Ease of use has been and is a, or even the, main reason for Python's success. When some other language becomes a better teaching language, it will eventually take over in business and science as well. Right now, Python is Scratch for adults. That's a great thing. Given the growth of the field, there are far more beginner programmers working today than there ever have been experts.
On Sun, Jun 24, 2018 at 2:41 PM Michael Selik <mi...@selik.org> wrote:
> This thread started with a request for educator feedback, which I took to mean observations of student reactions. I've only had the chance to test the proposal on ~20 students so far, but I'd like the chance to gather more data for your consideration before the PEP is accepted or rejected.

Sure. Since the target for the PEP is Python 3.8 I am in no particular hurry. It would be important to know how you present it to your students.
> On Sun, Jun 24, 2018 at 11:09 AM Steven D'Aprano <st...@pearwood.info> wrote:
> > Remember, the driving use-case which started this (ever-so-long)
> > discussion was the ability to push data into a comprehension and then
> > update it on each iteration, something like this:
> >
> > x = initial_value()
> > results = [x := transform(x, i) for i in sequence]
>
> If that is the driving use-case, then the proposal should be rejected. The ``itertools.accumulate`` function has been available for a little while now and it handles this exact case. The accumulate function may even be more readable, as it explains the purpose explicitly, not merely the algorithm. And heck, it's a one-liner.
>
> results = accumulate(sequence, transform)

I think that's a misunderstanding. At the very least the typical use case is *not* using an existing transform function which is readily passed to accumulate -- instead, it's typically written as a simple expression (e.g. `total := total + v` in the PEP) which would require a lambda.
Plus, I don't know what kind of students you are teaching, but for me, whenever the solution requires a higher-order function (like accumulate), this implies a significant speed bump -- both when writing and when reading code. (Honestly, whenever I read code that uses itertools, I end up making a trip to StackOverflow :-).
> On Fri, Jun 22, 2018 at 7:48 PM Steven D'Aprano <st...@pearwood.info> wrote:
> > On Fri, Jun 22, 2018 at 10:59:43AM -0700, Michael Selik wrote:
> > Of course they do -- they're less fluent at reading code. They don't
> > have the experience to judge good code from bad.
>
> On the other hand, an "expert" may be so steeped in a particular subculture that [they] no longer can distinguish esoteric from intuitive. Don't be so fast to reject the wisdom of the inexperienced.

Nor should we cater to them excessively though. While the user is indeed king, it's also well known that most users when they are asking for a feature don't know what they want (same for kings, actually, that's why they have advisors :-).

> > The question we should be asking is, do we only add features to Python
> > if they are easy for beginners? It's not that I especially want to add
> > features which *aren't* easy for beginners, but Python isn't Scratch and
> > "easy for beginners" should only be a peripheral concern.
>
> On the contrary, I believe that "easy for beginners" should be a major concern. Ease of use has been and is a, or even the, main reason for Python's success. When some other language becomes a better teaching language, it will eventually take over in business and science as well. Right now, Python is Scratch for adults. That's a great thing. Given the growth of the field, there are far more beginner programmers working today than there ever have been experts.

I'm sorry, but this offends me, and I don't believe it's true at all. Python is *not* a beginners language, and you are mixing ease of use and ease of learning. Python turns beginners into experts at an unprecedented rate, and that's the big difference with Scratch.
[Tim]
> First, the original example I gave would be approximately as well addressed by allowing intended scopes to be declared in magically synthesized functions; like (say)
>
> p = None  # to establish the intended scope of `p`
> while any(<nonlocal p>  # split across lines just for readability
>           n % p == 0 for p in small_primes):
>     n //= p
>
> It didn't really require an inline assignment, just a way to override the unwanted (in this case) "all `for` targets are local to the invisible function" rigid consequence of the implementation du jour.
Hm, that's more special syntax.
The nice bit about (b) as currently specified is that it adds no syntax -- it adds a scope rule, but (as IIRC Steven has convincingly argued) few people care about those. Python's scope rules, when fully specified, are intricate to the point of being arcane (e.g. for class scopes) but all that has a purpose -- to make them so DWIM ("Do what I Mean") that in practice you almost never have to worry about them, *especially* when reading non-obfuscated code (and also when writing, except for a few well-known patterns).
> def f():
>     y = -1
>     ys = [y for _ in range(y := 5)]
>     print(y, ys)
>
> Here `range(y := 5)` is executed in f's scope. Presumably the `y` in `y for` also refers to f's scope, despite that `y` textually _appears_ to be assigned to in the body of the listcomp, and so would - for that reason - be expected to be local to the synthesized function, and so raise `UnboundLocalError` when referenced. It's incoherent without detailed knowledge of the implementation.

That code should have the same meaning regardless of whether we accept (b) or not -- there is only one `y`, in f's scope. I don't mind if we have to add more words to the PEP's scope rules to make this explicit, though I doubt it -- the existing weirdness (in the comprehension spec) about the "outermost iterable" being evaluated in the surrounding scope specifies this. I wouldn't call it incoherent -- I think what I said about scope rules above applies here, it just does what you expect.
A "neutral" argument about (b) is that despite the "horrified" reactions that Nick saw, in practice it's going to confuse very few people (again, due to my point about Python's scope rules). I'd wager that the people who might be most horrified about it would be people who feel strongly that the change to the comprehension scope rules in Python 3 is a big improvement, and who are familiar with the difference in implementation of comprehensions (though not generator expressions) in Python 2 vs. 3.
On 25/06/2018 at 01:30, Greg Ewing wrote:
>
> Actually, I'm closer to -1 on (a) as well. I don't like := as a
> way of getting assignment in an expression. The only thing I would
> give a non-negative rating is some form of "where" or "given".
This resonates with me for a yet different reason: expressing the
feature with a new operator makes it feel very important and
fundamental, so that beginners would feel compelled to learn it early,
and old-timers tend to have a strong gut reaction to it. Using merely a
keyword makes it less prominent.
Baptiste
Right, the proposed blunt solution to "Should I use 'NAME = EXPR' or
'NAME := EXPR'?" bothers me a bit, but it's the implementation
implications of parent local scoping that I fear will create a
semantic tar pit we can't get out of later.
Unfortunately, I think the key rationale for (b) is that if you
*don't* do something along those lines, then there's a different
strange scoping discrepancy that arises between the non-comprehension
forms of container displays and the comprehension forms:
(NAME := EXPR,) # Binds a local
tuple(NAME := EXPR for __ in range(1)) # Doesn't bind a local
[...]
Those scoping inconsistencies aren't *new*, but provoking them
currently involves either class scopes, or messing about with
locals().
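The class-scope version of that existing discrepancy is observable today: the outermost iterable is evaluated in the class scope, but the comprehension body runs in a hidden function that skips class scope entirely. A small demonstration (it assumes no module-level name `x` exists):

```python
# Demonstrating the existing class-scope discrepancy: the comprehension
# body cannot see class-level names (assumes no global named x).
class C:
    x = 2
    ys_iterable_ok = list(range(x))      # x visible: evaluated in class scope
    try:
        ys = [x + i for i in range(3)]   # body runs in a hidden function,
    except NameError:                    # which skips the class scope
        ys = None

assert C.ys_iterable_ok == [0, 1]
assert C.ys is None
```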
The one virtue that choosing this particular set of discrepancies has
is that the explanation for why they happen is the same as the
explanation for how the iteration variable gets hidden from the
containing scope: because "(EXPR for ....)" et al create an implicitly
nested scope, and that nested scope behaves the same way as an
explicitly nested scope as far as name binding and name resolution is
concerned.
Parent local scoping tries to mitigate the surface inconsistency by
changing how write semantics are defined for implicitly nested scopes,
but that comes at the cost of making those semantics inconsistent with
explicitly nested scopes and with the read semantics of implicitly
nested scopes.
The early iterations of PEP 572 tried to duck this whole realm of
potential semantic inconsistencies by introducing sublocal scoping
instead, such that the scoping for assignment expression targets would
be unusual, but they'd be consistently unusual regardless of where
they appeared, and their quirks would clearly be the result of how
assignment expressions were defined, rather than only showing up in
how they interacted with other scoping design decisions made years
ago.
On Mon, Jun 25, 2018 at 8:37 PM, Terry Reedy <tjr...@udel.edu> wrote:
> On 6/24/2018 7:25 PM, Guido van Rossum wrote:
> > I'd wager that the people who might be most horrified about it
> [the (b) scoping rule change]
> > would be people who feel strongly that the change to the
> > comprehension scope rules in Python 3 is a big improvement,
>
> I might not be one of those 'most horrified' by (b), but I increasingly don't like it, and I was at best -0 on the comprehension scope change. To me, iteration variable assignment in the current scope is a non-problem. So to me the change was mostly useless churn. Little benefit, little harm. And not worth fighting when others saw a benefit.
>
> However, having made the change to nested scopes, I think we should stick with them. Or repeal them. (I believe there is another way to isolate iteration names -- see below). To me, (b) amounts to half repealing the nested scope change, making comprehensions half-fowl, half-fish chimeras.
> [...]
> --
> Terry Jan Reedy
I'd like to ask: how many readers of this email have ever deliberately taken advantage of the limited Python 3 scope in comprehensions and generator expressions to use what would otherwise be a conflicting local variable name?
I did:
for l in (l.rstrip() for l in f):
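That one-liner works precisely because the genexp's `l` lives in its own implicit scope, so the outer loop target and the inner iteration variable never collide. A complete runnable version (the file contents here are invented for illustration):

```python
import io

f = io.StringIO("alpha  \nbeta\t\ngamma\n")  # stands in for an open file

lines = []
for l in (l.rstrip() for l in f):   # inner l is local to the genexp
    lines.append(l)

assert lines == ["alpha", "beta", "gamma"]
```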
I appreciate that the scope limitation can sidestep accidental naming errors, which is a good thing.
Unfortunately, unless we anticipate Python 4 (or whatever) also making for loops have an implicit scope, I am left wondering whether it's not too large a price to pay. After all, special cases aren't special enough to break the rules, and unless the language is headed towards implicit scope for all uses of "for" one could argue that the scope limitation is a special case too far. It certainly threatens to be yet another confusion for learners, and while that isn't the only consideration, it should be given due weight.
--
Regards,
Ivan
On 6/24/2018 7:25 PM, Guido van Rossum wrote:
> I'd wager that the people who might be most horrified about it
[the (b) scoping rule change]
> would be people who feel strongly that the change to the
> comprehension scope rules in Python 3 is a big improvement,
I might not be one of those 'most horrified' by (b), but I increasingly
don't like it, and I was at best -0 on the comprehension scope change.
To me, iteration variable assignment in the current scope is a
non-problem. So to me the change was mostly useless churn. Little
benefit, little harm. And not worth fighting when others saw a benefit.
However, having made the change to nested scopes, I think we should
stick with them. Or repeal them. (I believe there is another way to
isolate iteration names -- see below). To me, (b) amounts to half
repealing the nested scope change, making comprehensions half-fowl,
half-fish chimeras.
> and who are familiar with the difference in implementation
> of comprehensions (though not generator expressions) in Python 2 vs. 3.
That I pretty much am, I think. In Python 2, comprehensions (the fish)
were, at least in effect, expanded in-line to a normal for loop.
Generator expressions (the fowls) were different. They were, and still
are, expanded into a temporary generator function whose return value is
dropped back into the original namespace. Python 3 turned
comprehensions (with two new varieties thereof) into fowls also,
temporary functions whose return value is dropped back into the original
namespace. The result is that a list comprehension is equivalent to
list(generator_expression), even though, for efficiency, it is not
implemented that way. (To me, this unification is more of a benefit than
the name hiding.)
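The unification Terry describes can be checked directly; a small sketch (not from the original mail):

```python
items = [1, 2, 3]

comp = [n * n for n in items]
gen = list(n * n for n in items)

# In Python 3 a list comprehension is semantically equivalent to calling
# list() on the corresponding generator expression...
assert comp == gen == [1, 4, 9]

# ...and in both forms the iteration variable stays hidden inside the
# temporary function, rather than leaking out:
assert "n" not in globals()
```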
(b) proposes to add extra hidden code in and around the temporary
function to partly undo the isolation.
List comprehensions would no
longer be equivalent to list(generator_expression), unless
generator_expressions got the same treatment, in which case they would
no longer be equivalent to calling the obvious generator function.
Breaking either equivalence might break someone's code.
---
How loop variables might be isolated without a nested scope: After a
comprehension is parsed, so that names become strings, rename the loop
variables to something otherwise illegal. For instance, i could become
'<i>', just as lambda becomes '<lambda>' as the name of the resulting
function. Expand the comprehension as in Python 2, except for deleting
the loop names along with the temporary result name.
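One way to read this suggestion is as an in-line, Python-2-style expansion with the loop name mangled and the temporaries deleted afterwards. A hypothetical sketch, with `_mangled_i` standing in for the otherwise-illegal name '<i>':

```python
iterable = range(4)

# Hypothetical expansion of:  result = [i*i for i in iterable]
_result = []
for _mangled_i in iterable:   # stand-in for the illegal name '<i>'
    _result.append(_mangled_i * _mangled_i)
result = _result
del _result, _mangled_i       # delete the loop name and the temporary result name

assert result == [0, 1, 4, 9]
assert "_mangled_i" not in globals()  # the loop name does not survive
```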
Assignment expressions within a comprehension would become assignment
expressions within the for loop expansion and would automatically add or
replace values in the namespace containing the comprehension. In other
words, I am suggesting that if we want assignment expressions in
comprehensions to act as they would in Python 2, then we should consider
reverting to an altered version of the Python 2 expansion.
---
In any case, I think (b) should be a separate PEP linked to a PEP for
(a). The decision for (a) could be reject (making (b) moot), accept
with (b), or accept unconditionally (but still consider (b)).
Thank you for clearly presenting how you see 'comprehension', 'generator
expression' and by implication 'equivalent code'. The latter can either
be a definition or an explanation. The difference is subtle but real,
and, I believe, the essence of the disagreement over iteration
variables. If the code equivalent to a comprehension is its definition,
like a macro expansion, then survival of the iteration variable is to be
expected. If the equivalent code is an explanation of the *result* of
evaluating a *self-contained expression*, then leakage is easily seen as a
wart, just as leakage of temporaries from any other expression would be.
My interpretation of what you say below is that you always wanted, for
instance, [i*i for i in iterable] == [j*j for j in iterable] to be true,
and saw the leakage making this not quite true as a wart. In other
words, within comprehensions (including generator expressions)
iteration names should be regarded as dummy placeholders and not part
of the value.
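The dummy-placeholder property is easy to state as code (a trivial check, assuming Python 3 semantics):

```python
iterable = range(4)

# If iteration names are mere dummy placeholders, renaming them cannot
# change the value of the expression:
squares_i = [i * i for i in iterable]
squares_j = [j * j for j in iterable]
assert squares_i == squares_j == [0, 1, 4, 9]
```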
If this is correct, the list comprehension syntax could have been
[\0 * \0 for \0 in iterable]
with \1, \2, ... used as needed. (I am using the regex back-reference
notation in a way similar to the use of str.format forward reference
notation.)
I will stop here for now, as it is 1:30 am for me.
Terry
> --Guido van Rossum (python.org/~guido <http://python.org/~guido>)
"A shorthand to list()/dict()/set()" is actually how I thought of comprehensions when I studied them. And I was actually using list() in my code for some time before I learned of their existence.
-- Regards, Ivan
I don't know what and where "started" it (AFAIK the idea has been around
for years) but for me, the primary use case for an assignment expression
is to be able to "catch" a value into a variable in places where I can't
put an assignment statement in, like the infamous `if re.match() is not
None`.
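The motivating pattern can be sketched with the proposed `:=` syntax (as later standardized in Python 3.8); the regex and variable names here are illustrative:

```python
import re

line = "timeout=30"

# Without an assignment expression, the match object needs its own statement:
m = re.match(r"timeout=(\d+)", line)
if m is not None:
    before = int(m.group(1))

# With one, the binding happens inside the condition itself:
if (m := re.match(r"timeout=(\d+)", line)) is not None:
    after = int(m.group(1))

assert before == after == 30
```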
> which was about binding a temporary
> variable in a comprehension, for use *within* the comprehension.
Then I can't understand all the current fuss about scoping.
AFAICS, it's already like I described in
https://mail.python.org/pipermail/python-dev/2018-June/154067.html :
the outermost iterable is evaluated in the local scope while others in
the internal one:
In [13]: [(l,i) for l in list(locals())[:5] for i in locals()]
Out[13]:
[('__name__', 'l'),
('__name__', '.0'),
('__builtin__', 'l'),
('__builtin__', '.0'),
('__builtin__', 'i'),
('__builtins__', 'l'),
('__builtins__', '.0'),
('__builtins__', 'i'),
('_ih', 'l'),
('_ih', '.0'),
('_ih', 'i'),
('_oh', 'l'),
('_oh', '.0'),
('_oh', 'i')]
(note that `i` is bound only after the first evaluation of the internal
`locals()`, btw, as is to be expected)
If the "temporary variables" are for use inside the comprehension only,
the assignment expression needs to bind in the current scope like the
regular assignment statement, no changes are needed!
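The point about the outermost iterable can also be demonstrated without locals() tricks; a minimal sketch (the `broken` helper is illustrative):

```python
def broken():
    return 1 / 0  # evaluating this "iterable" is observable as an error

# Inner iterables are evaluated lazily, inside the hidden function scope,
# so creating this genexp raises nothing:
g = (x + y for x in [1, 2] for y in broken())

# The outermost iterable, however, is evaluated eagerly in the current
# scope, at the point where the genexp is created:
try:
    _ = (x for x in broken())
    eager = False
except ZeroDivisionError:
    eager = True

assert eager
```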
>> It depends on the evaluation order (and whether something is
>> evaluated at all),
>
> Which to my mind is yet another reason not to like ":=".
>
--
Regards,
Ivan
> ... actually made those semantics available as an explicit
> "parentlocal NAME" declaration ...:
>
> def _list_comp(_outermost_iter):
> parentlocal item
> _result = []
> for x in _outermost_iter:
> item = x
> _result.append(x)
> return _result
>
> _expr_result = _list_comp(items)
I'm not sure that's possible. If I understand correctly,
part of the definition of "parent local" is that "parent"
refers to the nearest enclosing *non-comprehension* scope,
to give the expected result for nested comprehensions.
If that's so, then it's impossible to fully decouple its
definition from comprehensions.
However, PEP 572 in its current form takes the position "parent local
scoping is sufficiently useful to make it a required pre-requisite for
adding assignment expressions, but not useful enough to expose as a
new scope declaration primitive",
Not on my phone when I’m riding a bus, I can’t. I’m trying to more or less follow the discussion, but the “guess what this will do” aspect of the discussion makes it hard.
Eric
From my reading, PEP 572 takes the position that "parent local
scoping" is what people expect from assignment expressions *in
comprehensions* and it's useful enough that there is no reason not to
make that the behaviour. The behaviour isn't generally useful enough
to be worth exposing as a primitive (it's not even useful enough for
the PEP to give it an explicit name!) so it's just a special case for
assignment expressions in comprehensions/generators.
All expressions inside the comprehension other than the initial iterable
have access to the loop variables generated by the previous parts. So
they are necessarily evaluated in the internal scope for that to be
possible.
Since this too is an essential part of the semantics that one has to know
to use the construct sensibly, I kinda assumed you could make that
connection...
E.g.:
[(x*y) for x in range(5) if x%2 for y in range(x,5) if not (x+y)%2]
(labelling the parts: A = `(x*y)`, B = `for x in range(5)`, C = `if x%2`,
D = `for y in range(x,5)`, E = `if not (x+y)%2`)
C and D have access to the current x; E and A to both x and y.
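This scoping matches the straightforward nested-loop expansion; a quick check (not from the original mail):

```python
result = [(x * y) for x in range(5) if x % 2
          for y in range(x, 5) if not (x + y) % 2]

# Expanded form: each later clause sees the loop variables bound by the
# clauses before it (the "if x%2" and "for y" clauses see x; the final
# filter and the result expression see both x and y).
expected = []
for x in range(5):
    if x % 2:
        for y in range(x, 5):
            if not (x + y) % 2:
                expected.append(x * y)

assert result == expected == [1, 3, 9]
```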
>
> The overlap between the two is the trap, if you try to assign to the
> same variable in the loop header and then update it in the loop body.
>
> Not to mention the inconsistency that some assignments are accessible
> from the surrounding code:
>
> [expr for a in (x := func(), ...) ]
> print(x) # works
>
> while the most useful ones, those in the body, will be locked up in an
> implicit sublocal scope where they are unreachable from outside of the
> comprehension:
>
> [x := something ... for a in sequence ]
> print(x) # fails
>
>
--
Regards,
Ivan