
Tuple parameter unpacking in 3.x


Martin Geisler
Oct 2, 2008, 5:45:00 PM
Hi,

I just tried running my code using "python2.6 -3" and got a bunch of

SyntaxWarning: tuple parameter unpacking has been removed in 3.x

warnings. I've read PEP-3113:

http://www.python.org/dev/peps/pep-3113/

but I'm still baffled as to why you would remove such a wonderful
feature?!

I use tuple unpacking as very nice syntactic sugar for a sort of pattern
matching, like in Haskell and ML. And in lambda expressions it looks so
elegant and works so naturally ("pythonic"):

ci.addCallback(lambda (ai, bi): ai * bi)

or

map(lambda (i, s): (field(i + 1), s), enumerate(si))

Rewriting these to

ci.addCallback(lambda abi: abi[0] * abi[1])

and

map(lambda i_s: (field(i_s[0] + 1), i_s[1]), enumerate(si))

makes the code much uglier! And slightly longer.

PEP-3113 says that one can just unpack the argument on the first line in
the function and that not many people know about them. The first
argument is bogus for lambda expressions -- the PEP includes a rewriting
like the above, but neglects to comment on how bad the rewritten code
looks. The second argument is a matter of who you ask -- I use Twisted and
have lots of callbacks like the ones above, and for me this language
feature is a big plus since it makes the code more readable.

--
Martin Geisler

VIFF (Virtual Ideal Functionality Framework) brings easy and efficient
SMPC (Secure Multi-Party Computation) to Python. See: http://viff.dk/.

bearoph...@lycos.com
Oct 2, 2008, 7:13:08 PM
Martin Geisler:

> ci.addCallback(lambda (ai, bi): ai * bi)
> or
> map(lambda (i, s): (field(i + 1), s), enumerate(si))
> Rewriting these to
> ci.addCallback(lambda abi: abi[0] * abi[1])
> and
> map(lambda i_s: (field(i_s[0] + 1), i_s[1]), enumerate(si))
> makes the code much uglier! And slightly longer.

I agree a lot. I can show similar examples of my code with sort/sorted
that contain a lambda that de-structures sequences of 2 or 3 items, to
define a sorting key.

As I've stated in the past, I'd like to see more support for pattern
matching in Python, and not less. People coming from Mathematica,
Scala, OCaml, etc., know it can be useful, and Scala shows that
Python too may find some uses for it.

I think they have removed (part of, not fully) this Python feature
mostly to simplify the C implementation of CPython.

So far I think this removal, and not using {:} as the empty dict literal,
are the only two mistakes made during the design of Python 3. If you
look at the really large number of design decisions taken during the
creation of Python 3 itself, I think this is an exceptionally good
result anyway.

Bye,
bearophile

Nick Craig-Wood
Oct 4, 2008, 6:30:04 AM
Martin Geisler <m...@daimi.au.dk> wrote:

> I just tried running my code using "python2.6 -3" and got a bunch of
>
> SyntaxWarning: tuple parameter unpacking has been removed in 3.x
>
> warnings. I've read PEP-3113:
>
> http://www.python.org/dev/peps/pep-3113/
>
> but I'm still baffled as to why you guys could remove such a wonderful
> feature?!

I don't think many people will miss tuple unpacking in def statements.

I think the warning is probably wrong anyway - you just need to remove
a few parens...

> ci.addCallback(lambda (ai, bi): ai * bi)

> map(lambda (i, s): (field(i + 1), s), enumerate(si))

On Python 3.0rc1 (r30rc1:66499, Oct 4 2008, 11:04:33):

>>> f = lambda (ai, bi): ai * bi
  File "<stdin>", line 1
    f = lambda (ai, bi): ai * bi
               ^
SyntaxError: invalid syntax

But

>>> f = lambda ai, bi: ai * bi
>>> f(2,3)
6

Likewise

>>> lambda (i, s): (field(i + 1), s)
  File "<stdin>", line 1
    lambda (i, s): (field(i + 1), s)
           ^
SyntaxError: invalid syntax
>>> lambda i, s: (field(i + 1), s)
<function <lambda> at 0xb7bf75ec>
>>>

So just remove the parentheses and you'll be fine.

I have to say I prefer named functions, but I haven't done much
functional programming:

def f(ai, bi):
    return ai * bi
ci.addCallback(f)

def f(i, s):
    return field(i + 1), s
map(f, enumerate(si))

PEP-3113 needs updating as it is certainly confusing here! 2to3 is
doing the wrong thing also by the look of it.

--
Nick Craig-Wood <ni...@craig-wood.com> -- http://www.craig-wood.com/nick

Peter Otten
Oct 4, 2008, 7:14:40 AM
Nick Craig-Wood wrote:

No, you change the function signature in the process.

f = lambda (a, b): a*b

is equivalent to

def f((a, b)):  # double parens
    return a*b

and called as f(arg) where arg is an iterable with two items.

In 3.0 it has to be rewritten as

def f(ab):
    a, b = ab
    return a*b

i.e. it needs a statement and an expression and is therefore no longer
suitable for a lambda.
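Spelled out as a quick check (a minimal sketch of the rewritten form): the 3.0-compatible f takes a single iterable argument, exactly as described above.

```python
def f(ab):
    a, b = ab      # unpack on the first line of the body
    return a * b

print(f((2, 3)))   # 6 -- called with one two-item argument, not two arguments
```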

Peter

Martin Geisler
Oct 4, 2008, 11:07:14 AM
Peter Otten <__pet...@web.de> writes:

> Nick Craig-Wood wrote:
>
>> So just remove the parentheses and you'll be fine.
>
> No, you change the function signature in the process.
>
> f = lambda (a, b): a*b
>
> is equivalent to
>
> def f((a, b)):  # double parens
>     return a*b
>
> and called as f(arg) where arg is an iterable with two items.
>
> In 3.0 it has to be rewritten as
>
> def f(ab):
>     a, b = ab
>     return a*b
>
> i. e. it needs a statement and an expression and is therefore no
> longer suitable for a lambda.

Exactly! Maybe it is the callback-heavy programming style encouraged by
Twisted that is at fault here, but I quite like it and am sorry to see
it made more difficult.

A somewhat related question: do I pay a performance penalty when I let a
function define an inner function like this:

def foo():
    def bar():
        ...
    bar()

compared to just defining it once outside:

def bar():
    ...

def foo():
    ...
    bar()

I'm thinking that each execution of the first foo could spend a little
time defining a new bar each time, or is that not how things work?

I realize that defining bar as an inner function has the advantage of
being able to see variables in the namespace of foo.

Terry Reedy
Oct 4, 2008, 11:23:33 AM
Martin Geisler wrote:

> A somewhat related question: do I pay a performance penalty when I let a
> function define an inner function like this:
>
> def foo():
>     def bar():
>         ...
>     bar()

Some. The *code* for the body of bar is compiled as part of compiling
the body of foo, but each call of foo creates a new *function* object.

> compared to just defining it once outside:
>
> def bar():
>     ...
>
> def foo():
>     ...
>     bar()
>
> I'm thinking that each execution of the first foo could spend a little
> time defining a new bar each time, or is that not how things work?
>
> I realize that defining bar as an inner function has the advantage of
> being able to see variables in the namespace of foo.

The alternative is to pass in the value(s) needed.
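Terry's point is easy to verify directly (a small CPython sketch): two calls to foo yield distinct function objects, but both wrap the same code object that was compiled once.

```python
def foo():
    def bar():
        return 42
    return bar

b1, b2 = foo(), foo()
assert b1 is not b2                # a fresh *function* object per call of foo
assert b1.__code__ is b2.__code__  # but the *code* was compiled only once
```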

tjr

Message has been deleted

Martin Geisler
Oct 4, 2008, 4:57:23 PM
Dennis Lee Bieber <wlf...@ix.netcom.com> writes:

> On Sat, 04 Oct 2008 13:14:40 +0200, Peter Otten <__pet...@web.de>
> declaimed the following in comp.lang.python:


>
>> In 3.0 it has to be rewritten as
>>
>> def f(ab):
>>     a, b = ab
>>     return a*b
>>
>> i. e. it needs a statement and an expression and is therefore no
>> longer suitable for a lambda.
>>
>

> Given that most lambda's are rather short, is it really that
> much of a pain to just use (for the above example) ab[0]*ab[1] without
> unpacking?

Well -- it is because the code used to be short and sweet that it feels
ugly to me to change

lambda (a, b): a * b

into

lambda ab: ab[0] * ab[1]

The first looks perfect -- it matches the math behind the code that I am
writing. The second does not look so nice.

From reading the PEP-3113 I got the impression that the author thought
that this feature was unused and didn't matter. With this I wish to say
that it matters to me.

Steven D'Aprano
Oct 4, 2008, 8:07:50 PM
On Sat, 04 Oct 2008 22:57:23 +0200, Martin Geisler wrote:

> Dennis Lee Bieber <wlf...@ix.netcom.com> writes:
>
>> On Sat, 04 Oct 2008 13:14:40 +0200, Peter Otten <__pet...@web.de>
>> declaimed the following in comp.lang.python:
>>
>>> In 3.0 it has to be rewritten as
>>>
>>> def f(ab):
>>>     a, b = ab
>>>     return a*b
>>>
>>> i. e. it needs a statement and an expression and is therefore no
>>> longer suitable for a lambda.
>>>
>>>
>> Given that most lambda's are rather short, is it really that
>> much of a pain to just use (for the above example) ab[0]*ab[1] without
>> unpacking?
>
> Well -- it is because the code used to be short and sweet that it feels
> ugly to me to change
>
> lambda (a, b): a * b
>
> into
>
> lambda ab: ab[0] * ab[1]
>
> The first looks perfect -- it matches the math behind the code that I am
> writing. The second does not look so nice.

Here's another alternative. Compare:

>>> x = (2, 3)
>>> (lambda (a,b): a*b)(x)
6

with this:

>>> (lambda a,b: a*b)(*x)
6

> From reading the PEP-3113 I got the impression that the author thought
> that this feature was unused and didn't matter. With this I wish to say
> that it matters to me.

Alas, I think it's too late. I feel your pain.

The final release of Python 3.0 hasn't been made yet. If you really care
strongly about this, you could write to the python-dev mailing list and
ask if it is worth trying to change their mind about tuple unpacking.
They'll almost certainly say no, but there's a chance it might be
reverted in 3.1.

A tiny chance.


--
Steven

Steven D'Aprano
Oct 4, 2008, 8:16:50 PM
On Sat, 04 Oct 2008 17:07:14 +0200, Martin Geisler wrote:

> A somewhat related question: do I pay a performance penalty when I let a
> function define an inner function like this:
>
> def foo():
>     def bar():
>         ...
>     bar()
>
> compared to just defining it once outside:
>
> def bar():
>     ...
>
> def foo():
>     ...
>     bar()
>
> I'm thinking that each execution of the first foo could spend a little
> time defining a new bar each time, or is that not how things work?
>
> I realize that defining bar as an inner function has the advantage of
> being able to see variables in the namespace of foo.

That is the main advantage, followed by reducing namespace pollution, but
yes there is a very small performance penalty.


>>> def outer(x):
... return x+1
...
>>> def func(x):
... return outer(x+1)
...
>>>
>>> def func2(x):
... def inner(x):
... return x+1
... return inner(x+1)
...
>>> assert func(37) == func2(37)
>>> from timeit import Timer
>>> t1 = Timer('func(23)', 'from __main__ import func')
>>> t2 = Timer('func2(23)', 'from __main__ import func2')
>>> t1.repeat()
[1.5711719989776611, 0.82663798332214355, 0.82708191871643066]
>>> t2.repeat()
[1.8273210525512695, 1.1913230419158936, 1.1786220073699951]

--
Steven

Martin Geisler
Oct 5, 2008, 4:38:43 AM
Steven D'Aprano <st...@REMOVE-THIS-cybersource.com.au> writes:

> On Sat, 04 Oct 2008 22:57:23 +0200, Martin Geisler wrote:
>
> Here's another alternative. Compare:
>
>>>> x = (2, 3)
>>>> (lambda (a,b): a*b)(x)
> 6
>
> with this:
>
>>>> (lambda a,b: a*b)(*x)
> 6

Letting the callbacks take several arguments would definitely be the
nicest syntax, but I'm unfortunately restricted by the Twisted
framework: there all functions in a callback chain return one result
which is passed to the next callback as the first argument. That is why
I do a fair amount of tuple unpacking in my code.

>> From reading the PEP-3113 I got the impression that the author
>> thought that this feature was unused and didn't matter. With this I
>> wish to say that it matters to me.
>
> Alas, I think it's too late. I feel your pain.

Thanks! And I know it's too late, I should have found out about this
earlier :-(

> The final release of Python 3.0 hasn't been made yet. If you really
> care strongly about this, you could write to the python-dev mailing
> list and ask if it is worth trying to change their mind about tuple
> unpacking. They'll almost certainly say no, but there's a chance it
> might be reverted in 3.1.

I can't see this being changed back and forth like that, so I don't
think I'll bother. But thanks for the support.

Martin Geisler
Oct 5, 2008, 4:42:22 AM
Steven D'Aprano <st...@REMOVE-THIS-cybersource.com.au> writes:

Very interesting, thanks for measuring this!

Duncan Booth
Oct 5, 2008, 5:43:24 AM
Martin Geisler <m...@daimi.au.dk> wrote:

> Letting the callbacks take several arguments would definitely be the
> nicest syntax, but I'm unfortunately restricted by the Twisted
> framework: there all functions in a callback chain return one result
> which is passed to the next callback as the first argument. That is why
> I do a fair amount of tuple unpacking in my code.

If you only have to contend with that one pattern then you could write a
decorator to unpack a single argument. Maybe not ideal, but then you have
the option to write:

@unpacked
def product(a, b):
    return a*b

ci.addCallback(product)

or

ci.addCallback(unpacked(lambda a,b: a*b))

for some suitable definition of unpacked.
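A minimal sketch of such a definition (assuming only Duncan's stated pattern, where the callback receives one tuple; functools.wraps just preserves the wrapped function's name for introspection):

```python
import functools

def unpacked(func):
    """Adapt func(a, b, ...) so it can be called with a single iterable."""
    @functools.wraps(func)
    def wrapper(args):
        return func(*args)   # splat the one argument into positional args
    return wrapper

product = unpacked(lambda a, b: a * b)
print(product((2, 3)))  # 6
```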

Steven D'Aprano
Oct 5, 2008, 7:21:00 AM
PEP 3113 offers the following recommendation for refactoring tuple
arguments:

def fxn((a, (b, c))):
    pass

will be translated into:

def fxn(a_b_c):
    (a, (b, c)) = a_b_c
    pass

and similar renaming for lambdas.
http://www.python.org/dev/peps/pep-3113/


I'd like to suggest that this naming convention clashes with a very
common naming convention, lower_case_with_underscores. That's easy enough
to see if you replace the arguments a, b, c above to something more
realistic:

def function(vocab_list, (result, flag), max_value)

becomes:

def function(vocab_list, result_flag, max_value)

Function annotations may help here, but not everyone is going to use them
in the same way, or even in a way that is useful, and the 2to3 tool
doesn't add annotations.

It's probably impossible to avoid all naming convention clashes, but I'd
like to suggest an alternative which distinguishes between a renamed
tuple and an argument name with two words:

def function(vocab_list, (result, flag), max_value):
    pass

becomes:

def function(vocab_list, t__result_flag, max_value):
    result, flag = t__result_flag
    pass

The 't__' prefix clearly marks the tuple argument as different from the
others. The use of a double underscore is unusual in naming conventions,
and thus less likely to clash with other conventions. Python users are
already trained to distinguish single and double underscores. And while
it's three characters longer than the current 2to3 behaviour, the length
compares favorably with the original tuple form:

t__result_flag
(result, flag)

What do people think? Is it worth taking this to the python-dev list?


--
Steven

Aaron "Castironpi" Brady
Oct 5, 2008, 9:26:14 AM

There's the possibility that the most important words should go first in
this case:

result_flag__t

But, I'll admit that other people could have learned different orders of
scanning words than I, especially depending on their spoken language
backgrounds. A poll of the newsgroup isn't exactly academically
impartial sampling, but there aren't any better ways to make decisions,
are there? (I think it would be easy to make either one a habit.)

Here's the other option in the same context:

def function(vocab_list, result_flag__t, max_value):
    result, flag = result_flag__t
    pass

To be thorough, there's also a trailing double underscore option.

def function(vocab_list, result_flag__, max_value):
    result, flag = result_flag__
    pass

Which I don't recognize from any other usages, but I defer. If there
aren't any, conditionally, I think this is my favorite.

Peter Otten
Oct 5, 2008, 11:14:01 AM
Steven D'Aprano wrote:

Let's see what the conversion tool does:

$ cat tmp.py
g = lambda (a, b): a*b + a_b
$ 2to3 tmp.py
RefactoringTool: Skipping implicit fixer: buffer
RefactoringTool: Skipping implicit fixer: idioms
RefactoringTool: Skipping implicit fixer: ws_comma
--- tmp.py (original)
+++ tmp.py (refactored)
@@ -1,1 +1,1 @@
-g = lambda (a, b): a*b + a_b
+g = lambda a_b1: a_b1[0]*a_b1[1] + a_b
RefactoringTool: Files that need to be modified:
RefactoringTool: tmp.py

So the current strategy is to add a numerical suffix if a name clash occurs.
The fixer clearly isn't in its final state: for functions (as opposed to
lambdas) it uses xxx_todo_changeme.

> What do people think? Is it worth taking this to the python-dev list?

I suppose that actual clashes will be rare. If there is no clash a_b is the
best name and I prefer trying it before anything else.
I don't particularly care about what the fallback should be except that I
think it should stand out a bit more than the current numerical suffix.
xxx1_a_b, xxx2_a_b,... maybe?

Peter


Terry Reedy
Oct 5, 2008, 12:13:18 PM
Martin Geisler wrote:
> Steven D'Aprano <st...@REMOVE-THIS-cybersource.com.au> writes:

>>> From reading the PEP-3113 I got the impression that the author
>>> thought that this feature was unused and didn't matter.

And that there were good alternatives, and that there were technical
reasons why maintaining the feature in the face of other argument
options would be a nuisance.

>>> With this I wish to say that it matters to me.
>> Alas, I think it's too late. I feel your pain.
>
> Thanks! And I know it's too late, I should have found out about this
> earlier :-(

For future reference, the time to have said something would have been
during the 3 month alpha phase, which is for testing feature and api
changes. I suspect, from reading the pydev discussion, that the answer
still would have been to either use a def statement and add the unpack
line or to subscript the tuple arg to stick with lambda expressions.
But who knows?

tjr

Harald Luessen
Oct 6, 2008, 1:10:26 PM
On Sun, 05 Oct 2008 "Aaron \"Castironpi\" Brady" wrote:

>There's the possibility that the most important words should go first in
>this case:
>
>result_flag__t
>
>But, I'll admit that other people could have learned different orders of
>scanning words than I, especially depending on their spoken language
>backgrounds. A poll of the newsgroup isn't exactly academically
>impartial sampling, but there aren't any better ways to make decisions,
>are there? (I think it would be easy to make either one a habit.)
>
>Here's the other option in the same context:
>
>def function(vocab_list, result_flag__t, max_value):
>    result, flag = result_flag__t
>    pass
>
>To be thorough, there's also a trailing double underscore option.
>
>def function(vocab_list, result_flag__, max_value):
>    result, flag = result_flag__
>    pass
>
>Which I don't recognize from any other usages, but I defer. If there
>aren't any, conditionally, I think this is my favorite.

t__result_flag and result_flag__t have the advantage that you can
search for t__ or __t as start or end of a name if you want to
find and change all these places in the source. You can compare
it with the decision to use reinterpret_cast<long>(...) as a cast
operator in C++. It is ugly but much easier to find than (long)...
A search for __ alone would have too many hits in Python.

Harald

Brett C.
Oct 7, 2008, 3:59:20 PM
On Oct 5, 9:13 am, Terry Reedy <tjre...@udel.edu> wrote:
> Martin Geisler wrote:
> > Steven D'Aprano <st...@REMOVE-THIS-cybersource.com.au> writes:
> >>> From reading the PEP-3113 I got the impression that the author
> >>> thought that this feature was unused and didn't matter.
>
> And that there were good alternatives, and that there were technical
> reasons why maintaining the feature in the face of other arguments
> options would be a nuisance.
>

As the author of PEP 3113, I should probably say something (kudos to
Python-URL bringing this thread to my attention).

There are two things to realize about the tuple unpacking that acted
as motivation. One is that supporting them in the compiler is a pain.
Granted, that is a weak argument that only core developers like
myself care about.
Second, tuple unpacking in parameters breaks introspection horribly.
All one has to do is look at the hoops 'inspect' has to jump through
in order to figure out what the heck is going on to see how messy this
is. And I realize some people will say, "but if 'inspect' can handle
it, then who cares about the complexity", but that doesn't work if
'inspect' is viewed purely as a simple wrapper so you don't have to
remember some attribute names and not having to actually perform some
major hoops. I personally prefer being able to introspect from the
interpreter prompt without having to reverse-engineer how the heck
code objects deal with tuple unpacking.
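For contrast, a small sketch (using the modern inspect.signature API, which postdates this thread) shows how plain a function signature is once tuple parameters are gone:

```python
import inspect

f = lambda ab: ab[0] * ab[1]

# The signature is one ordinary parameter -- no special-casing needed.
print(list(inspect.signature(f).parameters))  # ['ab']
```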

>  >>> With this I wish to say that it matters to me.
>

Well, every feature matters to someone. Question is whether it matters
to enough people to warrant having a feature. I brought this up at
PyCon 2006 through a partially botched lightning talk and on python-dev
through the PEP, so this was not some snap decision on my part
that I rammed through python-dev; there was some arguing for keeping
it from some python-dev members, but Guido agreed with me in the end.

If you still hate me you can find me at PyCon 2009 and tar & feather
me *after* my talk. =)

> >> Alas, I think it's too late. I feel your pain.
>
> > Thanks! And I know it's too late, I should have found out about this
> > earlier :-(
>
> For future reference, the time to have said something would have been
> during the 3 month alpha phase, which is for testing feature and api
> changes.  I suspect, from reading the pydev discussion, that the answer
> still would have been to either use a def statement and add the unpack
> line or to subscript the tuple arg to stick with lambda expressions.
> But who knows?

I have not read this whole thread thoroughly, but it sounds like using
iterator unpacking at the call site (e.g., ``fxn(*args)`` when calling
your lambda) is out of the question because it is from a callback?

As Terry said, the alpha is one way you can give feedback if you don't
want to follow python-dev or python-3000 but still have your opinion
be heard. The other way is to subscribe to the PEP news feed (found
off of http://www.python.org/dev/peps/) to keep on top of PEPs as
practically all language changes have to result in a PEP at some
point. And of course the last option is to follow python-dev. =)

-Brett

bearoph...@lycos.com
Oct 7, 2008, 4:17:35 PM
Brett C.:

> There are two things to realize about the tuple unpacking that acted
> as motivation. One is supporting them in the compiler is a pain.
> ...

> Second, tuple unpacking in parameters breaks introspection horribly.

Are there ways to re-implement from scratch similar native
pattern-matching features in CPython, like the ones of Clojure:
http://items.sjbach.com/16/some-notes-about-clojure
Or better the ones of Scala (or even OCaml, but those are static), etc.,
that avoid that pain and those introspection troubles?

Bye,
bearophile

Steven D'Aprano
Oct 11, 2008, 3:23:41 AM


Possibly you have misunderstood me. I'm not concerned with a clash
between names, as in the following:

lambda a_b, (a, b):
maps to -> lambda a_b, a_b:

as I too expect they will be rare, and best handled by whatever mechanism
the fixer uses to fix any other naming clash.

I am talking about a clash between *conventions*, where there could be
many argument names of the form a_b which are not intended to be two-item
tuples.

In Python 2.x, when you see the function signature

def spam(x, (a, b))

it is clear and obvious that you have to pass a two-item tuple as the
second argument. But after rewriting it to spam(x, a_b) there is no such
help. There is no convention in Python that says "when you see a function
argument of the form a_b, you need to pass two items" (nor should there
be).

But given the deafening silence on this question, clearly other people
don't care much about misleading argument names.

*wink*

--
Steven

MRAB
Oct 11, 2008, 2:17:08 PM
On Oct 11, 8:23 am, Steven D'Aprano wrote:
Could it be a double underscore instead, e.g. a__b,
first_item__second_item?

Aaron "Castironpi" Brady
Oct 11, 2008, 2:34:00 PM
On Oct 11, 2:23 am, Steven D'Aprano
<st...@REMOVE-THIS-cybersource.com.au> wrote:
snip

> I am talking about a clash between *conventions*, where there could be
> many argument names of the form a_b which are not intended to be two item
> tuples.
>
> In Python 2.x, when you see the function signature
>
> def spam(x, (a, b))
>
> it is clear and obvious that you have to pass a two-item tuple as the
> second argument. But after rewriting it to spam(x, a_b) there is no such
> help. There is no convention in Python that says "when you see a function
> argument of the form a_b, you need to pass two items" (nor should there
> be).
>
> But given the deafening silence on this question, clearly other people
> don't care much about misleading argument names.

No, we just document them. (ducks) And ambiguous is different from
misleading anyway. If the docs say pass a 2-tuple as the 2nd
parameter...?
