[Python-ideas] Default arguments in Python - the return


Pascal Chambon

May 8, 2009, 4:31:57 PM5/8/09
to python-ideas

Hello,


I'm surely not original in any way there, but I'd like to put back on the table the matter of "default argument values".
Or, more precisely, the "one shot" handling of default values, which means that the same mutable objects, given once as default arguments, come back again and again at each function call.
They thus become a kind of "static variable", polluted by previous calls, whereas many, many Python users still believe they get a fresh new value at each function call.
I think I understand how default arguments are currently implemented (and so, "why" - technically - it does behave this way), but I'm still unsure of "why" - semantically - this must be so.

I've browsed lots of Google entries on that subject, but as far as I'm concerned, I've found nothing in favor of the current semantics.
I've rather found dozens, hundreds of posts of people complaining that they got bitten by this gotcha, many of them finishing with a "Never put mutable values in default arguments, unless you're very very sure of what you're doing!".
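For readers following along, the gotcha being complained about can be reproduced in a few lines (the function and variable names here are mine, purely illustrative):

```python
def append_item(item, bucket=[]):  # the default [] is evaluated only once, at "def" time
    bucket.append(item)
    return bucket

a = append_item(1)
b = append_item(2)
# a and b are the very same list object, now containing [1, 2]:
# the second call "polluted" the list the first call returned
```

Callers who pass their own list are unaffected; only the shared default object accumulates state between calls.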

And no one seemed to enjoy the possibilities of getting "potentially static variables" this way. Static variables are imo a rather bad idea, since they create "stateful functions", which make debugging and maintenance more difficult; but when such static variables are, furthermore, potentially non-static (i.e. when the corresponding function argument is supplied), I guess they become totally useless and dangerous - a perfect way to get hard-to-debug behaviours.

On the other hand, when people write "def func(mylist=[]):", they basically DO want a fresh new list at each call, be it given by the caller or the default argument system.
So it's really a pity to need tricks like
> def f(a, L=None):
>     if L is None:
>         L = []
to get what we want (and what if None were also a possible value? What other value should we put as a placeholder for "I'd like None or a fresh new list, but I can't say it directly"?).

So I'd like to know: are there other "purely intellectual" arguments for/against the current semantics of default arguments? (I might have missed some discussion on this subject; feel free to point me to it.)

Currently, this default argument handling looks like a huge gotcha for newcomers, and, I feel, like an embarrassing wart to most pythonistas. Couldn't it be worth finding a new way of doing it?
Maybe there are strong arguments against a change at that level; for example, performance issues (I'm not good in those matters). But I need to be sure.

So here are my rough ideas on what we might do - if, after having the suggestions from expert people, it looks like it's worth writing a PEP, I'll be willing to participate in it.
Basically, I'd change the python system so that, when a default argument expression is encountered, instead of being executed, it's wrapped in some kind of zero-argument lambda expression, which gets pushed in the "func_defaults" attribute of the function.
And then, each time a default argument is required in a function call, this lambda expression gets evaluated and gives the expected value.
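The core of the proposal can already be approximated in today's Python with a decorator; the sketch below is mine, not part of the proposal, and for simplicity it only handles defaults passed as keyword arguments (the decorator name and the `factories` mechanism are illustrative assumptions):

```python
from functools import wraps

def call_time_defaults(**factories):
    """Fill in missing keyword arguments by calling zero-argument
    callables on EACH call, mimicking re-evaluated defaults."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for name, factory in factories.items():
                kwargs.setdefault(name, factory())  # fresh value per call
            return func(*args, **kwargs)
        return wrapper
    return decorator

@call_time_defaults(L=list)  # "L=list" plays the role of a re-evaluated "L=[]"
def f(a, L):
    L.append(a)
    return L
```

Each call without an explicit `L` gets a fresh list, while an explicitly passed `L` is left untouched - which is exactly the semantics the proposal asks for, at the cost of an extra call per defaulted argument.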

I guess this will mean some overhead during function call, so this might become another issue.
It's also a non-backwards-compatible change, so I assume we'd have to use a "from __future__ import XXX" until Python4000.
But I think the change is worth the try, because it's a trap that awaits all Python beginners.

So, if this matter hasn't already been marked somewhere as a no-go, I eagerly await the feedback of users and core developers on the subject. :)

By the way, I'm becoming slightly allergic to C-like languages (too much hassle for too little gain, compared to high-level dynamic languages), but if that proposition goes ahead, and no one wants to handle the implementation details, I'll put my hands in the engine ^^

Regards,
Pascal


Chris Rebert

May 8, 2009, 5:12:28 PM5/8/09
to Pascal Chambon, python-ideas
On Fri, May 8, 2009 at 1:31 PM, Pascal Chambon
<chambon...@wanadoo.fr> wrote:
> Hello,
>
> I'm surely not original in any way there, but I'd like to put back on the
> table the matter of "default argument values".
> Or, more precisely, the "one shot" handling of default values, which makes
> that the same mutable objects, given once as default arguments, come back
> again and again at each function call.
> They thus become some kinds of "static variables", which get polluted by the
> previous calls, whereas many-many-many python users still believe that they
> get a fresh new value at each function call.
> I think I understand how default arguments are currently implemented (and
> so, "why" - technically - it does behave this way), but I'm still unsure of
> "why" - semantically - this must be so.
<snip>

> So I'd like to know : are there other "purely intellectual" arguments
> for/against the current semantic of default arguments (I might have missed
> some discussion on this subject, feel free to point them ?

Point-point: http://mail.python.org/pipermail/python-ideas/2007-January/000121.html
And see also the links below.

> Currently, this default argument handling looks, like a huge gotcha for
> newcomers, and, I feel, like an embarrassing wart to most pythonistas.
> Couldn't it be worth finding a new way of doing it ?
> Maybe there are strong arguments against a change at that level ; for
> example, performance issues (I'm not good in those matters). But I need to
> ensure.
>
> So here are my rough ideas on what we might do - if after having the
> suggestions from expert people, it looks like it's worth writing a PEP,
> I'll be willing to participate in it.
> Basically, I'd change the python system so that, when a default argument
> expression is encountered, instead of being executed, it's wrapped in some
> kind of zero-argument lambda expression, which gets pushed in the
> "func_defaults" attribute of the function.
> And then, each time a default argument is required in a function call, this
> lambda expression gets evaluated and gives the expected value.
>
> I guess this will mean some overhead during function call, so this might
> become another issue.
> It's also a non retrocompatible change, so I assume we'd have to use a "from
> __future__ import XXX" until Python4000.
> But I think the change is worth the try, because it's a trap which waits for
> all the python beginners.
>
> So, if this matters hasn't already been marked somewhere as a no-go, I
> eagerly await the feedback of users and core developers on the subject. :)

It's basically been rejected. See GvR Pronouncement:
http://mail.python.org/pipermail/python-3000/2007-February/005715.html
regarding the pre-PEP "Default Argument Expressions":
http://mail.python.org/pipermail/python-3000/2007-February/005704.html

Unless your exact idea somehow differs significantly from my pre-PEP
(sounds like it doesn't IMHO), it's not gonna happen. It's basically
too magical.

Cheers,
Chris
--
http://blog.rebertia.com
_______________________________________________
Python-ideas mailing list
Python...@python.org
http://mail.python.org/mailman/listinfo/python-ideas

George Sakkis

May 8, 2009, 6:06:43 PM5/8/09
to python-ideas
On Fri, May 8, 2009 at 5:12 PM, Chris Rebert <pyi...@rebertia.com> wrote:

> On Fri, May 8, 2009 at 1:31 PM, Pascal Chambon

>> So, if this matters hasn't already been marked somewhere as a no-go, I
>> eagerly await the feedback of users and core developers on the subject. :)
>
> It's basically been rejected. See GvR Pronouncement:
> http://mail.python.org/pipermail/python-3000/2007-February/005715.html
> regarding the pre-PEP "Default Argument Expressions":
> http://mail.python.org/pipermail/python-3000/2007-February/005704.html
>
> Unless your exact idea somehow differs significantly from my pre-PEP
> (sounds like it doesn't IMHO), it's not gonna happen. It's basically
> too magical.

FWIW I don't find the dual semantics, with explicit syntax for the new
semantics ("def foo(bar=new baz)") mentioned in the PEP too magical.
If even C, a relatively small language, affords two calling semantics,
why would it be too confusing for Python? Perhaps that PEP might have
had better luck if it didn't propose replacing the current semantics
with the new.

George

Terry Reedy

May 8, 2009, 7:34:42 PM5/8/09
to python...@python.org
Pascal Chambon wrote:

> I'm surely not original in any way there, but I'd like to put back on
> the table the matter of "default argument values".

There have been two proposals:
1. Evaluate the expression once, store the result, and copy on each
function call.
- Expensive.
- Nearly always not needed.
- Not always possible.
2. Store the expression and evaluate on each function call (your
re-proposal).
- Expensive.
- The result may be different for each function call, and might raise an
exception.
- This is the job of the suite !!!!!!!!!!!!!!!!
- Which is to say, run-time code belongs in the function body, not the
header.

> And no one seemed to enjoy the possibilities of getting "potentially
> static variables" this way.

You did not search hard enough.

> Static variables are imo a rather bad idea,

So you want to take them away from everyone else. I think *that* is a
rather bad idea ;-). No one is forcing you to use them.

> On the other hand, when people write "def func(mylist=[]):", they
> basically DO want a fresh new list at each call,

Maybe, maybe not.

> be it given by the caller or the default argument system.
> So it's really a pity to need tricks like

>> def f(a, L=None):
>>     if L is None:
>>         L = []

Or don't supply a default arg if that is not what you really want.
Putting call time code in the function body is not a trick.

> to get what we want (and if None was also a possible value ?

__none = object()
def f(par=__none):
    if par is __none: ...

as had been posted each time this question has been asked.

> I guess this will mean some overhead during function call,

I absolutely guarantee that this will. Functions calls are expensive.
Adding a function call for each default arg (and many functions have
more than one) multiplies the calling overhead.

> so this might become another issue.

Is and always has been.

Terry Jan Reedy

spir

May 9, 2009, 6:42:56 AM5/9/09
to python...@python.org
On Fri, 08 May 2009 22:31:57 +0200,
Pascal Chambon <chambon...@wanadoo.fr> wrote:

> And no one seemed to enjoy the possibilities of getting "potentially
> static variables" this way. Static variables are imo a rather bad idea,
> since they create "stateful functions", that make debugging and
> maintenance more difficult ; but when such static variable are,
> furthermore, potentially non-static (i.e when the corresponding function
> argument is supplied), I guess they become totally useless and dangerous
> - a perfect way to get hard-to-debug behaviours.

If we want static vars, there are better places than default args for this. (See also the thread about memoizing.) E.g. on the object when it's a method, or even on the function itself.

def squares(n):
    square = n * n; print square
    if square not in squares.static_list:
        squares.static_list.append(n)
squares.static_list = []

squares(1); squares(2); squares(1); squares(3)
print squares.static_list
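The first alternative mentioned above - keeping the state on an object rather than in a default argument - might be sketched like this (class and attribute names are mine; here the distinct *squares* are recorded, as the name suggests):

```python
class Squares:
    """Stateful callable: remembers which squares it has seen."""
    def __init__(self):
        self.seen = []

    def __call__(self, n):
        square = n * n
        if square not in self.seen:
            self.seen.append(square)

squares = Squares()
squares(1); squares(2); squares(1); squares(3)
# squares.seen is now [1, 4, 9] -- the state lives on the object, in plain sight
```

The state is still there, but it is attached to an explicit object rather than hidden in the function's defaults, so it can be inspected, reset, or duplicated at will.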

Denis
------
la vita e estrany

Pascal Chambon

May 9, 2009, 6:56:20 AM5/9/09
to Terry Reedy, python...@python.org
Thanks everyone for the feedback and the links (I was obviously too
confident in Google's first result pages, and missed such things >_<)

Terry Reedy wrote:


>> And no one seemed to enjoy the possibilities of getting "potentially
>> static variables" this way.
>
> You did not search hard enough.
>

Well, for sure some people here and there used that semantic to have,
for example, a "default cache" handling the requests for which a
specific cache isn't provided.
But that behavior can as easily be obtained in a much more explicit
way, which furthermore lets you access your default cache easily from
inside the function code, even when a specific cache is provided:

class a:
    cache = [1, 2, 3]
    def func(self, x, y=cache):
        print "Current cache state:", y
        y.append(x)
        print "The static, default cache is", a.cache

So I don't see the default argument trick as a "neat feature", but rather
as a way of making simple things obscure.

>> Static variables are imo a rather bad idea,
>
> So you want to take them away from everyone else. I think *that* is a
> rather bad idea ;-). No one is forcing you to use them.
>

I don't want to annihilate all traces of static variables :p ;
I just find them ugly, because they create stateful functions whose
state is hidden in them (like some do with free variables, too), and
that's imo not a "robust code best practice".

But what kills me with current default arguments is that those aren't
even real static variables : they're "potentially static variables", and
as far as I've seen, you have no easy way to check whether, for
instance, the argument value that you've gotten is the default, static
one, or a new one provided by the caller (of course, you can store the
default value somewhere else for reference, but it's lamely redundant).
If people want static variables in python, for example to avoid OO
programming and still have stateful functions, we can add an explicit
"static" keyword or its equivalent. But using the ambiguous value given
via a default-valued argument is not pretty, imo.
Unless we have a way to access, from inside a code block, the function
object to which this code block belongs.

Does it exist ? Do we have any way, from inside a call block, to browse
the default arguments that this code block might receive ?


>
>> I guess this will mean some overhead during function call,
>
> I absolutely guarantee that this will. Functions calls are expensive.
> Adding a function call for each default arg (and many functions have
> more than one) multiplies the calling overhead.
>
> > so this might become another issue.
>
> Is and always has been.
>

Well, if, like it was proposed in previous threads, the expression is
only reevaluated in particular circumstances (i.e, if the user asks it
with a special syntax), it won't take more time than the usual "if myarg
is None : myarg = []" ;
but I agree that alternate syntaxes have led to infinite and complex
discussions, and that the simpler solution I provided is likely to be
too CPU intensive, more than I expected...


>
>> /to get what we want (and if None was also a possible value ?
>
> __none = object()
> def(par = __none):
> if par == __none: ...
>
> as had been posted each time this question has been asked.

Well, it seems I didn't phrase my rhetorical question properly ^^.
I wholly agree that you can always use another object as a placeholder,
but I don't quite like the idea of creating new instances just to
signify "that's not a valid value that you can use, create one brand new"

On the other hand, would anyone support my alternative wish of having a
builtin "NotGiven", similar to "NotImplemented", and dedicated to this
somewhat usual task of "placeholder"?
There would be two major pros to this, imo:
    - giving programmers a handy object for all unwanted "mutable
default argument" situations, without having to think "is None a value I
might want to get?"
    - *Important*: by appearing at the beginning of the docs near True
and False, this keyword would be much more visible to beginners than the
deep pages on "default argument handling"; thus, they'd have much more
chance of coming across warnings about this gotcha than they currently
have (and seeing "NotGiven" in tutorials would force them to wonder why
it's there; it's imo much more explicit than seeing "None" values instead)

So, since reevaluation of arguments actually *is* a no-go, and
forbidding mutable arguments is obviously a no-go too, would you people
support this integrating of "NotGiven" (or any other name) in the
builtins ? It'd sound to me like a good practice.

Regards,
Pascal

Pascal Chambon

May 9, 2009, 7:16:46 AM5/9/09
to spir, python...@python.org
spir wrote:
Well, I've just realized I'd sent a semi-dumb question in my previous answer :p

I'd never quite realized it was possible to store stuff inside the function object, by retrieving it from inside the code object.

And it works pretty well...
In classes you can access your function via self, in closures they get caught in cells... it's only in global scope that there are problems : here, if you rename squares (newsquares = squares ; squares = None), you'll have an error by calling newsquares, because it searches "squares" in the global scope, without the help of "self" or closures.
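The rebinding problem described above is easy to trigger; this is a minimal sketch (the names `counter`/`newcounter` are mine, analogous to the `squares`/`newsquares` of the original example):

```python
def counter():
    counter.calls += 1  # "counter" is resolved in the module globals at CALL time
    return counter.calls
counter.calls = 0

newcounter = counter  # a second name for the same function object
counter = None        # rebind the original global name

try:
    newcounter()      # the body still looks up the global name "counter"...
    failed = False
except AttributeError:
    failed = True     # ...which is now None, so "counter.calls" blows up
```

This is exactly why an explicit "the function I'm in" reference would be more robust than going through the global name.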

Still, an explicit way of targeting "the function I'm in" would be sweet imo, but retrieving it that way is not far from being as handy.

Thanks for the tip that opened my eyes,
regards,
Pascal

Steven D'Aprano

May 9, 2009, 7:32:35 AM5/9/09
to python...@python.org
On Sat, 9 May 2009 08:56:20 pm Pascal Chambon wrote:

> But what kills me with current default arguments is that those aren't
> even real static variables : they're "potentially static variables",
> and as far as I've seen, you have no easy way to check whether, for
> instance, the argument value that you've gotten is the default,
> static one, or a new one provided by the caller (of course, you can
> store the default value somewhere else for reference, but it's lamely
> redundant).

I'm not really sure why you would want to do that. The whole point of
default values is to avoid needing to care whether or not the caller
has provided an argument or not.

[...]


> Does it exist ? Do we have any way, from inside a call block, to
> browse the default arguments that this code block might receive ?

>>> def spam(n=42):
... return "spam "*n
...
>>> spam.func_defaults
(42,)


dir(func_object) is your friend :)


[...]


> but I agree that alternate syntaxes have led to infinite and complex
> discussions, and that the simpler solution I provided is likely to be
> too CPU intensive, more than I expected...

I would support... no, that's too strong. I wouldn't oppose the
suggestion that Python grow syntax for "evaluate this default argument
every time the function is called (unless the argument is given by the
caller)". The tricky part is coming up with good syntax and a practical
mechanism.

[...]


> On the other hand, would anyone support my alternative wish, of
> having a builtin "NotGiven", similar to "NotImplemented", and
> dedicated to this somewhat usual task of "placeholder" ?

There already is such a beast: None is designed to be used as a
placeholder for Not Given, Nothing, No Result, etc.

If None is not suitable, NotImplemented is also a perfectly good
built-in singleton object which can be used as a sentinel. It's already
used as a sentinel for a number of built-in functions and operators.
There's no reason you can't use it as well.


> There would be two major pros for this, imo :
> - giving programmers a handy object for all unwanted "mutable
> default argument" situations, without having to think "is None a
> value I might want to get ?"

But then they would need to think "Is NotGiven a value I might want to
get, so I can pass it on to another function unchanged?", and you would
then need to create another special value ReallyNotGiven. And so on.


> - *Important* : by appearing in the beginning of the doc near
> True and False, this keyword would be much more visible to beginners
> than the deep pages on "default argument handling" ; thus, they'd
> have much more chances to cross warnings on this Gotcha, than they
> currently have (and seeing "NotGiven" in tutorials would force them
> to wonder why it's so, it's imo much more explicit than seeing "None"
> values instead)

Heh heh heh, he thinks beginners read manuals :-)


> So, since reevaluation of arguments actually *is* a no-go, and
> forbidding mutable arguments is obviously a no-go too, would you
> people support this integrating of "NotGiven" (or any other name) in
> the builtins ? It'd sound to me like a good practice.

-1 on an extra builtin. There's already two obvious ones, and if for
some reason you need to accept None and NotImplemented as valid data,
then you can create an unlimited number of sentinels with object(). The
best advantage of using object() is that because the sentinel is unique
to your module, you can guarantee that nobody can accidentally pass it,
or expect to use it as valid data.
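The `object()` sentinel Steven recommends might be sketched like this (the `_MISSING` name and the toy `settings` dict are mine, purely illustrative):

```python
_MISSING = object()  # module-private sentinel: no caller can ever pass it by accident

def get_config(key, default=_MISSING):
    settings = {"retries": 3}  # stand-in for real configuration data
    if key in settings:
        return settings[key]
    if default is _MISSING:
        raise KeyError(key)
    return default  # None, NotImplemented, etc. remain usable as genuine defaults
```

Because `_MISSING` is compared with `is` and never exported, every value in the language - including None - stays available as legitimate caller-supplied data.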

--
Steven D'Aprano

Steven D'Aprano

May 9, 2009, 7:36:12 AM5/9/09
to python...@python.org
On Sat, 9 May 2009 09:16:46 pm Pascal Chambon wrote:
> Still, an explicit way of targeting "the function I'm in" would be
> sweet imo, but retrieving it that way is not far from being as handy.

There's a rather long discussion on the comp.lang.python newsgroup at
the moment about that exact question. Look for the recent thread
titled "Self function".

If you can't get Usenet and don't like Google Groups, the c.l.py
newsgroup is also available as a python mailing list, and a gmane
mailing list.


--
Steven D'Aprano

Steven D'Aprano

May 9, 2009, 7:39:53 AM5/9/09
to python...@python.org
On Sat, 9 May 2009 09:36:12 pm Steven D'Aprano wrote:
> There's a rather long discussion on the comp.lang.python newsgroup at
> the moment about that exact question

Er, to be precise, by "at the moment" I actually mean "over the last few
days". The thread seems to have more-or-less finished now.

Of course, no Usenet thread is ever *completely* finished. Please feel
free to resurrect it if you have any good ideas, questions or insight
into the issue.

spir

May 9, 2009, 8:34:08 AM5/9/09
to python...@python.org
On Sat, 09 May 2009 12:56:20 +0200,

Pascal Chambon <chambon...@wanadoo.fr> wrote:

> If people want static variables in python, for example to avoid OO

> programming and still have stateful functions, we can add an explicit
> "static" keyword or its equivalent.

This is far from being pythonic anyway, I guess. Ditto for storing data on the func itself (as shown in another post). It provides a way of linking together data and behaviour; similar techniques are used e.g. in Lisp, so that many Lisp people find OO pretty useless.
But Python has OO built in, even as the mainstream paradigm. Data related to behaviour should be set on an object.

> But using the ambiguous value given
> via a default-valued argument is not pretty, imo.
> Unless we have a way to access, from inside a code block, the function
> object in which this code block belongs.
>
> Does it exist ? Do we have any way, from inside a call block, to browse
> the default arguments that this code block might receive ?

This is a feature of much more reflexive/meta languages like Io (or again Lisp), that were indeed designed from scratch with this capacity in mind and intended as a major programming feature.
In Io you can even access the 'raw' message _before_ evaluation, so that you get the expression of argument, not only the resulting value.

Denis
------
la vita e estrany

Georg Brandl

May 9, 2009, 11:38:12 AM5/9/09
to python...@python.org
Pascal Chambon wrote:

> Still, an explicit way of targeting "the function I'm in" would be
> sweet imo, but retrieving it that way is not far from being as handy.

You could use e.g.

from functools import wraps

def selffunc(func):
    @wraps(func)
    def newfunc(*args, **kwds):
        return func(func, *args, **kwds)
    return newfunc

@selffunc
def foo(func, a):
    func.cache = a

to avoid the "ugly" lookup of the function in the global namespace.

Georg

--
Thus spake the Lord: Thou shalt indent with four spaces. No more, no less.
Four shall be the number of spaces thou shalt indent, and the number of thy
indenting shall be four. Eight shalt thou not indent, nor either indent thou
two, excepting that thou then proceed to four. Tabs are right out.

Tennessee Leeuwenburg

May 9, 2009, 8:19:01 PM5/9/09
to Pascal Chambon, python-ideas
Hi Pascal,

Taking the example of 

def foo(bar = []):
  bar.append(4)
  print(bar)

I'm totally with you in thinking that what is 'natural' is to expect to get a new, empty list every time. However this isn't what happens. As far as I'm concerned, that should more or less be the end of the discussion in terms of what should ideally happen.

The responses to the change in behaviour which I see as more natural are, to summarise, as follows:
  -- For all sorts of technical reasons, it's too hard
  -- It changes the semantics of the function definition being evaluated at compile time
  -- It's not what people are used to

With regards to the second point, it's not like the value of arguments is set at compile time, so I don't really see that this stands up. I don't think it's intuitive, it's just that people become accustomed to it. There is indeed, *some sense* in understanding that the evaluation occurs at compile-time, but there is also a lot of sense (and in my opinion, more sense) in understanding the evaluation as happening dynamically when the function is called. 

With regards to the first point, I'm not sure that this is as significant as all of that, although of course I defer to the language authors here. However, it seems as though it could be no more costly than the lines of code which most frequently follow to initialise these variables. 

On the final point, that's only true for some people. A whole lot of people stumble over it and get it wrong. It's one of the most un-Pythonic things which I have to remember about Python when programming -- a real gotcha. I don't see it as changing one way of doing things for another equally valid way of doing things, but changing something that's confusing and unexpected for something which is far more natural and, to me, Pythonic.

For me, Python 3k appears to be a natural place to do this. Python 3 still appears to be regarded as a work-in-progress by most people, and I don't think that it's 'too late' to change for Python 3k. Perhaps, given the timing, the people involved, the complexity of change etc, then for pragmatic reasons this may have to be delayed, but I don't think that's a good thing. I'd much rather see it done, personally. I think that many people would feel the same way.

Regards,
-Tennessee





--
--------------------------------------------------
Tennessee Leeuwenburg
http://myownhat.blogspot.com/
"Don't believe everything you think"

Greg Ewing

May 9, 2009, 9:23:42 PM5/9/09
to python...@python.org
Steven D'Aprano wrote:

> Of course, no Usenet thread is ever *completely* finished.

Unless Nazis have been mentioned, of course. (Oops,
looks like I just ended this thread -- sorry about
that!)

--
Greg

Steven D'Aprano

May 9, 2009, 9:23:36 PM5/9/09
to python...@python.org
On Sun, 10 May 2009 10:19:01 am Tennessee Leeuwenburg wrote:
> Hi Pascal,
> Taking the example of
>
> def foo(bar = []):
>     bar.append(4)
>     print(bar)
>
> I'm totally with you in thinking that what is 'natural' is to expect
> to get a new, empty, list every time.

That's not natural to me. I would be really, really surprised by the
behaviour you claim is "natural":

>>> DEFAULT = 3
>>> def func(a=DEFAULT):
... return a+1
...
>>> func()
4
>>> DEFAULT = 7
>>> func()
8

For deterministic functions, the same argument list should return the
same result each time. By having default arguments be evaluated every
time they are required, any function with a default argument becomes
non-deterministic. Late evaluation of defaults is, essentially,
equivalent to making the default value a global variable. Global
variables are rightly Considered Harmful: they should be used with
care, if at all.


> However this isn't what
> happens. As far as I'm concerned, that should more or less be the end
> of the discussion in terms of what should ideally happen.

As far as I'm concerned, what Python does now is the ideal behaviour.
Default arguments are part of the function *definition*, not part of
the body of the function. The definition of the function happens
*once* -- the function isn't recreated each time you call it, so
default values shouldn't be recreated either.


> The responses to the change in behaviour which I see as more natural
> are, to summarise, as follows:
> -- For all sorts of technical reasons, it's too hard
> -- It changes the semantics of the function definition being
> evaluated at compile time
> -- It's not what people are used to

And it's not what many people want.

You only see the people who complain about this feature. For the
multitude of people who expect it or like it, they have no reason to
say anything (except in response to complaints). When was the last time
you saw somebody write to the list to say "Gosh, I really love that
Python uses + for addition"? Features that *just work* never or rarely
get mentioned.


> With regards to the second point, it's not like the value of
> arguments is set at compile time, so I don't really see that this
> stands up.

I don't see what relevance that has. If the arguments are provided at
runtime, then the default value doesn't get used.


> I don't think it's intuitive,

Why do you think that intuitiveness is more valuable than performance
and consistency?

Besides, intuitiveness is a fickle thing. Given this pair of functions:

import time

def expensive_calculation():
    time.sleep(60)
    return 1

def useful_function(x=expensive_calculation()):
    return x + 1

I think people would be VERY surprised that calling useful_function()
with no arguments would take a minute *every time*, and would complain
that this slowness was "unintuitive".


> it's just that people become
> accustomed to it. There is indeed, *some sense* in understanding that
> the evaluation occurs at compile-time, but there is also a lot of
> sense (and in my opinion, more sense) in understanding the evaluation
> as happening dynamically when the function is called.

No. The body of the function is executed each time the function is
called. The definition of the function is executed *once*, at compile
time. Default arguments are part of the definition, not the body, so
they too should only be executed once. If you want them executed every
time, put them in the body:

def useful_function(x=SENTINEL):
    if x is SENTINEL:
        x = expensive_calculation()
    return x + 1

> With regards to the first point, I'm not sure that this is as
> significant as all of that, although of course I defer to the
> language authors here. However, it seems as though it could be no
> more costly than the lines of code which most frequently follow to
> initialise these variables.
>
> On the final point, that's only true for some people. For a whole lot
> of people, they stumble over it and get it wrong. It's one of the
> most un-Pythonic things which I have to remember about Python when
> programming -- a real gotcha.

I accept that it is a Gotcha. The trouble is, the alternative behaviour
you propose is *also* a Gotcha, but it's a worse Gotcha, because it
leads to degraded performance, surprising introduction of global
variables where no global variables were expected, and a breakdown of
the neat distinction between creating a function and executing a
function.

But as for it being un-Pythonic, I'm afraid that if you really think
that, your understanding of Pythonic is weak. From the Zen:

The Zen of Python, by Tim Peters

Special cases aren't special enough to break the rules.
Although practicality beats purity.
If the implementation is hard to explain, it's a bad idea.

(1) Assignments outside of the body of a function happen once, at
compile time. Default values are outside the body of the function. You
want a special case for default values so that they too happen at
runtime. That's not special enough to warrant breaking the rules.

(2) The potential performance degradation of re-evaluating default
arguments at runtime is great. For practical reasons, it's best to
evaluate them once only.

(3) In order to get the behaviour you want, the Python compiler would
need a more complicated implementation which would be hard to explain.


> I don't see it as changing one way of
> doing things for another equally valid way of doing things, but
> changing something that's confusing and unexpected for something
> which is far more natural and, to me, Pythonic.

I'm sorry, while re-evaluation of default arguments is sometimes useful,
it's more often NOT useful. Most default arguments are simple objects
like small ints or None. What benefit do you gain from re-evaluating
them every single time? Zero benefit. (Not much cost either, for simple
cases, but no benefit.)

But for more complex cases, there is great benefit to evaluating default
arguments once only, and an easy work-around for those rare cases that
you do want re-evaluation.


> For me, Python 3k appears to be a natural place to do this. Python 3
> still appears to be regarded as a work-in-progress by most people,
> and I don't think that it's 'too late' to change for Python 3k.

Fortunately you're not Guido, and fortunately this isn't going to
happen. I recommend you either accept that this behaviour is here to
stay, or if you're *particularly* enamoured of late evaluation
behaviour of defaults, that you work on some sort of syntax to make it
optional.

--
Steven D'Aprano

Carl Johnson

unread,
May 9, 2009, 10:06:06 PM5/9/09
to python...@python.org
I think this is a case where there are pros and cons on both sides.
There are a lot of pros to the current behavior (performance,
flexibility, etc.), but it comes with the con of confusing newbies and
making people go through the same song and dance to set a "sentinel
value" when they want the other behavior and they can't ensure that
None won't be passed. The newbie problem can't be fixed from now until
Python 4000, since it would break a lot of existing uses of default
values, but we could cut down on the annoyance of setting and checking a
sentinel value by introducing a new keyword, e.g.

def f(l=fresh []):
   ...

instead of

__blank = object()
def f(l=__blank):
   if l is __blank:
       l = []
   ...

The pros of a new keyword are saving 3 lines and being more clear
upfront about what's going on with the default value. The con is that
adding a new keyword bloats the language. We could try reusing an
existing keyword, but none of the current ones seem to fit:

and                 elif                import              return
as                  else                in                  try
assert              except              is                  while
break               finally             lambda              with
class               for                 not                 yield
continue            from                or
def                 global              pass
del                 if                  raise

(I copied this from Python 3.0's help, but there seems to be a
documentation error: nonlocal, None, True, and False are also keywords
in Python 3+.)

The best one on the current list it seems to me would be "else" as in

def f(l else []):
   ...

But I dunno… It's just not quite right, you know?

So, I'm -0 on changing the current behavior, but I'm open to it if
someone can find a way to do it that isn't just an ad hoc solution to
this one narrow problem but has a wider general use.

Tennessee Leeuwenburg

May 9, 2009, 10:06:50 PM5/9/09
to Steven D'Aprano, python...@python.org

> For me, Python 3k appears to be a natural place to do this. Python 3
> still appears to be regarded as a work-in-progress by most people,
> and I don't think that it's 'too late' to change for Python 3k.

Fortunately you're not Guido, and fortunately this isn't going to
happen. I recommend you either accept that this behaviour is here to
stay, or if you're *particularly* enamoured of late evaluation
behaviour of defaults, that you work on some sort of syntax to make it
optional.


Thank you for the rest of the email, which was (by and large) well-considered and (mostly) stuck to the points of the matter. I will get to those points in due time, once I have fully understood them and can add to the argument in a considered way.

However, this last section really got under my skin. It seems completely inappropriate to devolve any well-intentioned email discussion into an appalling self-serving ad-hominem attack. Your assertion of your ethical viewpoint (use of "Fortunately" without a backing argument) and attempt to bully me out of my position ("recommend you accept this behaviour is here to stay") are not appreciated. You have *your* view of what is fortunate, right and appropriate. I took every care NOT to assert my own viewpoint as universally true; you have not taken the same care.

Guido is just a person, as you are just a person, as I am just a person. Can we not please just stick to a simple, civilised discussion of the point, without trying to win cheap debating points or using the "Zen" of Python to denigrate people who have either genuinely failed to grasp some aspect of a concept, or whose intuition is simply different? Without people whose intuition is different, no advancement is possible. Without debate about what constitutes the "Zen" of Python, the "Zen" of Python must always be static, unchanging, unchallenged, and therefore cannot grow. I do not think that is what anyone meant when they were penning the "Zen" of Python.

This list is not best-served by grandstanding. It may not even be best-served by the now effectively personal debate which you have drawn me into through your personalisation of the issue (I quote: "your understanding is weak"). Terms such as weak and strong are inherently laden with ethical and social overtones -- incomplete, misplaced, or any number of other qualifiers could have kept the debate to the factual level.

Regards,
-Tennessee

Scott David Daniels

May 10, 2009, 1:28:13 AM5/10/09
to python...@python.org

Any argument for changing to a more "dynamic" default scheme had better
have a definition of the behavior of the following code, and produce a
good rationale for that behavior:

x = 5
def function_producer(y):
    def inner(arg=x+y):
        return arg + 2
    return inner

x = 6
f1 = function_producer(x)
x = 4.1
y = 7
f2 = function_producer(3)
print x, y, f1(), f2()
del y
x = 45
print x, f1(), f2()
del x
print f1(), f2()

--Scott David Daniels
Scott....@Acm.Org

Terry Reedy

May 10, 2009, 1:47:12 AM5/10/09
to python...@python.org
Tennessee Leeuwenburg wrote:
>
> > For me, Python 3k appears to be a natural place to do this. Python 3
> > still appears to be regarded as a work-in-progress by most people,
> > and I don't think that it's 'too late' to change for Python 3k.

Sorry, it *is* too late. The developers have been very careful about
breaking 3.0 code in 3.1 only with strong justification. 3.1 is in
feature freeze as of a few days ago.

> Fortunately you're not Guido, and fortunately this isn't going to
> happen. I recommend you either accept that this behaviour is here to
> stay, or if you're *particularly* enamoured of late evaluation
> behaviour of defaults, that you work on some sort of syntax to make it
> optional.
> Thank you for the rest of the email, which was (by and large)
> well-considered and (mostly) stuck to the points of the matter. I will
> get to them in proper time when I have been able to add to the argument
> in a considered way after fully understanding your points.
>
> However, this last section really got under my skin. It seems completely
> inappropriate to devolve any well-intentioned email discussion into an
> appalling self-service ad-hominem attack.

I do not see any attack whatsoever, just advice which you took wrongly.

> ...se of Fortunately without a backing argument),

'Fortunately' as is clear from the context, was in respect to your
expressed casual attitude toward breaking code. Some people have a
negative reaction to that. In any case, it is a separate issue from
'default arguments'.

> attempt to bully me out of my position (recomment you accept this
behaviour
> is here to stay) are not appreciated.

He recommended that you not beat your head against a brick wall because
of a misconception about what is currently socially possible. He then
suggested something that *might* be possible. If that advice offends
you, so be it.

Terry Jan Reedy

George Sakkis

May 10, 2009, 2:32:07 AM5/10/09
to python-ideas
On Sun, May 10, 2009 at 1:28 AM, Scott David Daniels
<Scott....@acm.org> wrote:
>
> Any argument for changing to a more "dynamic" default scheme had better
> have a definition of the behavior of the following code, and produce a
> good rationale for that behavior:
>
>    x = 5
>    def function_producer(y):
>        def inner(arg=x+y):
>            return arg + 2
>        return inner

I don't think the proposed scheme was ever accused of not being
well-defined. Here's the current equivalent dynamic version:

x = 5
def function_producer(y):
    missing = object()
    def inner(arg=missing):
        if arg is missing:
            arg = x+y
        return arg + 2
    return inner

-1 for changing the current semantics (too much potential breakage),
+0.x for a new keyword that adds dynamic semantics (and removes the
need for the sentinel kludge).

George

Larry Hastings

May 10, 2009, 6:14:32 AM5/10/09
to python-ideas
George Sakkis wrote:
> +0.x for a new keyword that adds dynamic semantics (and removes the
> need for the sentinel kludge).


We don't need new syntax for it.  Here's a proof-of-concept hack showing that you can do it with a function decorator.
import copy

def clone_arguments(f):
  default_args = list(f.func_defaults)
  if len(default_args) < f.func_code.co_argcount:
    delta = f.func_code.co_argcount - len(default_args)
    default_args = ([None] * delta) + default_args
  def fn(*args):
    if len(args) < len(default_args):
      args = args + tuple(copy.deepcopy(default_args[len(args):]))
    return f(*args)

  return fn

@clone_arguments
def thing_taking_array(a, b = []):
  b.append(a)
  return b

print thing_taking_array('123')
print thing_taking_array('abc')
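
A Python 3 rendering of the same idea (func_defaults and func_code became __defaults__ and __code__ in Python 3; this sketch still handles positional arguments only):

```python
import copy
import functools

def clone_arguments(f):
    # Pad the defaults so they line up with the full positional argument list.
    defaults = list(f.__defaults__ or ())
    defaults = [None] * (f.__code__.co_argcount - len(defaults)) + defaults

    @functools.wraps(f)
    def wrapper(*args):
        if len(args) < len(defaults):
            # Deep-copy the defaults for any argument the caller didn't supply.
            args = args + tuple(copy.deepcopy(defaults[len(args):]))
        return f(*args)
    return wrapper

@clone_arguments
def thing_taking_list(a, b=[]):
    b.append(a)
    return b

print(thing_taking_list('123'))  # ['123']
print(thing_taking_list('abc'))  # ['abc'] -- a fresh copy each call
```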

-1 on changing Python one iota for this,


/larry/

Carl Johnson

May 10, 2009, 6:34:32 AM5/10/09
to python...@python.org
Larry Hastings wrote:
> George Sakkis wrote:
>
> +0.x for a new keyword that adds dynamic semantics (and removes the
> need for the sentinel kludge).
>
> We don't need new syntax for it.  Here's a proof-of-concept hack that you
> can do it with a function decorator.

Your decorator only works for mutables where you just want a deep
copy. It doesn't work for cases where you want a whole expression to
be re-evaluated from scratch. (Maybe for the side effects or
something.) That said, it couldn't be that hard to work out a similar
decorator using lambda thunks instead. The internals of the decorator
would be something like:

for n, arg in enumerate(args):
    if arg is defaults[n]:  # If you didn't get passed anything
        args[n] = defaults[n]()  # Unthunk the lambda

The usage might be:

@dynamicdefaults
def f(arg=lambda: dosomething()):

It cuts 3 lines of boilerplate down to one line, but makes all your
function calls a little slower.
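
Fleshed out, such a thunk-based decorator might look like this (a hypothetical sketch in Python 3 syntax; dynamicdefaults is an invented name, and only positional arguments are handled):

```python
import functools

def dynamicdefaults(f):
    # Every default must be a zero-argument callable ("thunk"),
    # invoked afresh on each call to produce that call's default value.
    thunks = f.__defaults__ or ()
    first_default = f.__code__.co_argcount - len(thunks)

    @functools.wraps(f)
    def wrapper(*args):
        if len(args) < f.__code__.co_argcount:
            # Evaluate only the thunks for arguments the caller didn't supply.
            extra = tuple(t() for t in thunks[max(0, len(args) - first_default):])
            args = args + extra
        return f(*args)
    return wrapper

@dynamicdefaults
def f(arg=lambda: []):
    arg.append(1)
    return arg

print(f())  # [1] -- and a *fresh* list on every call
```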

-- Carl

spir

May 10, 2009, 7:19:29 AM5/10/09
to python...@python.org
On Sat, 9 May 2009 16:06:06 -1000,
Carl Johnson <cmjohnson....@gmail.com> wrote:

> I think this is a case where there are pros and cons on both sides.
> There are a lot of pros to the current behavior (performance,
> flexibility, etc.), but it comes with the con of confusing newbies and
> making people go through the same song and dance to set a  "sentinel
> value" when the want the other behavior and they can't ensure that
> None won't be passed. The newbie problem can't be fixed from now until
> Python 4000, since it would break a lot of existing uses of default
> values, but we could cut down on the annoyance of setting and check a
> sentinel value by introducing a new keyword, eg.
>
> def f(l=fresh []):
>    ...
>
> instead of
>
> __blank = object()
> def f(l=__blank):
>    if l is __blank:
>        l = []
>    ...

[...]

Maybe the correctness of the current behaviour can be checked by a little thought experiment.

=======
Just imagine Python didn't have default arguments yet, and they were the object of a PEP. An implementation similar to the current one is proposed. Then people realise that, in the case where the given value happens to be mutable _and_ updated in the function body, it persists from one call to the next... What do you think should/would be decided?

-1- Great, we get static variables for free. It is a worthwhile feature we have wanted for a while. In numerous use cases they will allow easier and much more straightforward code. Let's go for it.

-2- Well, static variables may be considered useful, in which case there should be a new PEP for them. Conceptually, they are a totally different feature; we should certainly not conflate the two, should we?
=======

I bet on n°2, for the reasoning of people stating it's a major gotcha would be hard to ignore. But I may be wrong. Still, default arguments actually *are* called "default arguments", which means they should be considered as such, while they do not behave as such in all cases.
Now, we must consider the concrete present situation, in which their real behaviour is used as a common workaround. I do not really understand why default args are used as static vars while at least one other possibility exists in Python which is semantically much more consistent:

### instead of "def callSave(number, record=[])"
### just set record on the func:
def callSave(value):
    callSave.record.append(value)
    return callSave.record
callSave.record = []
print callSave(1) ; print callSave(2) ; print callSave(3)
==>
[1]
[1, 2]
[1, 2, 3]

Also, func attributes are an alternative for another common (mis)use of default arguments, namely the case of a function factory:

def paramPower(exponent):
    ### instead of "def power(number, exponent=exponent)"
    ### just set exponent on the func:
    def power(number):
        return number**power.exponent
    power.exponent = exponent
    return power
power3 = paramPower(3) ; power5 = paramPower(5)
print power3(2) ; print power5(2)
==>
8
32

In both cases, the notion of a func attribute rather well matches the variable value's meaning. As a consequence, I find this solution much nicer for a func factory as well as for a static variable.

Denis
------
la vita e estrany

Tennessee Leeuwenburg

May 10, 2009, 8:12:25 AM5/10/09
to Terry Reedy, python...@python.org

However, this last section really got under my skin. It seems completely inappropriate to devolve any well-intentioned email discussion into an appalling self-service ad-hominem attack.

I do not see any attack whatsoever, just advice which you took wrongly.

...se of Fortunately without a backing argument),

'Fortunately' as is clear from the context, was in respect to your expressed casual attitude toward breaking code.  Some people have a negative reaction to that.  In any case, it is a separate issue from 'default arguments'.


> attempt to bully me out of my position (recommend you accept this behaviour
> is here to stay) are not appreciated.

He recommended that you not beat your head against a brick wall because of a misconception about what is currently socially possible.  He then suggested something that *might* be possible.  If that advice offends you, so be it.

It's not the content of the advice (don't push stuff uphill) which got to me at all, it was the tone and manner in which it was conveyed. Much of the email was well-balanced, which I fully acknowledged. Maybe you're just more inclined to overlook a few bits of innuendo, and probably most of the time so am I. However, it's actually not okay, and the implied personal criticism was very clearly present. It wasn't severe, and perhaps my reaction was quite forceful, but it's just not okay to be putting people down.

I don't have a casual attitude towards breaking code, just an open mind towards discussions on their merits. I don't really appreciate the negative tones, and I'm sure that if anyone else were in the firing line, they wouldn't appreciate it either, even if to some extent it's all a bit of a storm in a teacup. Unless someone who is happy to cop a bit of flak stands up and says that's not on, then maintaining a "thick skin" -- i.e. putting up with people putting you down, be it through a clear and direct put-down, or through a more subtle implication -- becomes the norm. It becomes acceptable, perhaps indeed even well-regarded, to take a certain viewpoint and then suggest that anyone who doesn't share it is doing something wrong.

Well nuts to that. Emails are, as everyone should know, an unclear communication channel. I've found myself on the wrong side of this kind of debate before, and I've heard plenty of stories of people who were put down, pushed out or made to feel stupid -- and for what? There are just so many stories, many of which I have heard first-hand, of people who have felt alienated on online lists where prowess and insight are so highly regarded that they become means by which others are put down. It's that larger problem which people need not to put up with.

However, I'm just about to go offline for 12 hours or so, and I know the US will be waking up to their emails shortly, so I just wanted to take this opportunity before the sun rotates again to say to the list and the original author that I'd really like to avoid a continued shouting contest or making anyone upset. I've obviously ruffled some feathers already, and I guess this email may ruffle some more, but really I just want to make clear that:
  (a) It's not okay to put myself or anyone else down, claiming some personal superiority
  (b) That attitude is all this email is about. It doesn't need to be any bigger than that.

Regards,
-Tennessee

Georg Brandl

May 10, 2009, 10:58:30 AM5/10/09
to python...@python.org
spir wrote:

> Also, func attributes are an alternative for another common (mis)use of default arguments, namely the case of a function factory:
>
> def paramPower(exponent):
> ### instead of "def power(number, exponent=exponent)"
> ### just set exponent on the func:
> def power(number):
> return number**power.exponent
> power.exponent = exponent
> return power
> power3 = paramPower(3) ; power5 = paramPower(5)
> print power3(2) ; print power5(2)
> ==>
> 8
> 32

You don't need a function attribute here; just "exponent" will work fine.

The problem is where you define multiple functions, e.g. in a "for" loop,
and function attributes don't help there:

adders = []

for i in range(10):
    def adder(x):
        return x + i
    adders.append(adder)

This will fail to do what it seems to do; you need to have some kind of
binding for "i" in a scope where it stays constant. You could use a "make_adder"
factory function, similar to your paramPower, but that is more kludgy than
necessary, because it can easily be solved by a default argument.
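
The default-argument idiom referred to here binds the current value of i at definition time (Python 3 print syntax):

```python
adders = []
for i in range(10):
    def adder(x, i=i):  # the default captures the *current* value of i
        return x + i
    adders.append(adder)

# Each closure remembers its own i, instead of all sharing the final value 9.
print(adders[0](100), adders[9](100))  # 100 109
```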

Georg

--
Thus spake the Lord: Thou shalt indent with four spaces. No more, no less.
Four shall be the number of spaces thou shalt indent, and the number of thy
indenting shall be four. Eight shalt thou not indent, nor either indent thou
two, excepting that thou then proceed to four. Tabs are right out.

_______________________________________________

Georg Brandl

May 10, 2009, 10:59:11 AM5/10/09
to python...@python.org
Tennessee Leeuwenburg wrote:

> I don't have a casual attitude towards breaking code, just an open mind
> towards discussions on their merits. I don't really appreciate the
> negative tones, and I'm sure that if anyone else is in the firing line,
> they wouldn't appreciate either, even if it to some extent it's all a
> bit of a storm in a teacup. Unless someone who is happy to cop a bit of
> flak stands up and says that's not on, then maintaining a "thick skin"
> -- i.e. putting up with people putting you down, be it through a clear
> and direct put-down, or through a more subtle implication -- becomes the
> norm. It becomes acceptable, perhaps indeed even well-regarded, to take
> a certain viewpoint then suggest that anyone who doesn't share it is
> doing something wrong.

The problem is that people whose proposal immediately meets negative
reactions usually feel put down no matter what exactly you say to them.
If there was a polite way of saying "This will not change, please don't
waste more of our time with this discussion." that still gets the point
across, I would be very grateful.

Georg

--
Thus spake the Lord: Thou shalt indent with four spaces. No more, no less.
Four shall be the number of spaces thou shalt indent, and the number of thy
indenting shall be four. Eight shalt thou not indent, nor either indent thou
two, excepting that thou then proceed to four. Tabs are right out.


Arnaud Delobelle

May 10, 2009, 12:10:33 PM5/10/09
to Larry Hastings, python-ideas

On 10 May 2009, at 11:14, Larry Hastings wrote:

> George Sakkis wrote:
>>
>> +0.x for a new keyword that adds dynamic semantics (and removes the
>> need for the sentinel kludge).
>
>
> We don't need new syntax for it. Here's a proof-of-concept hack
> that you can do it with a function decorator.
> import copy

Not that we don't need syntax for default arguments either :) Here is
a decorator that does it:

def default(**defaults):
    defaults = defaults.items()
    def decorator(f):
        def decorated(*args, **kwargs):
            for name, val in defaults:
                kwargs.setdefault(name, val)
            return f(*args, **kwargs)
        return decorated
    return decorator


Here it is in action:


>>> z=1
>>> @default(z=z)
... def foo(a, z):
...     print a + z
...
>>> z=None
>>> foo(3)
4
>>>
>>> @default(history=[])
... def bar(x, history):
...     history.append(x)
...     return list(history)
...
>>> map(bar, 'spam')
[['s'], ['s', 'p'], ['s', 'p', 'a'], ['s', 'p', 'a', 'm']]


Let's get rid of default arguments altogether, and we will have solved
the problem!


Furthermore, by removing default arguments from the language, we can
let people choose what semantics they want for default arguments.
I.e., if they want them to be reevaluated each time, they could write
the default decorator as follows (it is exactly the same as the one
above except for a pair of parentheses that have been added on one line).

def dynamic_default(**defaults):
    defaults = defaults.items()
    def decorator(f):
        def decorated(*args, **kwargs):
            for name, val in defaults:
                kwargs.setdefault(name, val())
                #                          ^^
            return f(*args, **kwargs)
        return decorated
    return decorator


Example:


>>> @dynamic_default(l=list)
... def baz(a, l):
...     l.append(a)
...     return l
...
>>> baz(2)
[2]
>>> baz(3)
[3]


;)

--
Arnaud

George Sakkis

May 10, 2009, 1:00:15 PM5/10/09
to python-ideas
On Sun, May 10, 2009 at 12:10 PM, Arnaud Delobelle
<arn...@googlemail.com> wrote:

> Furthemore, by removing default arguments from the language, we can let
> people choose what semantics they want for default arguments.  I.e, if they
> want them to be reevaluated each time, they could write the default
> decorator as follows (it is exactly the same as the one above except for a
> pair of parentheses that have been added on one line.

Cute, but that's still a subset of what the dynamic semantics would
provide; the evaluated thunks would have access to the previously
defined arguments:

def foo(a, b, d=(a*a+b+b)**0.5, s=1/d):
    return (a,b,d,s)

would be equivalent to

missing = object()
def foo(a, b, d=missing, s=missing):
    if d is missing:
        d = (a*a+b+b)**0.5
    if s is missing:
        s = 1/d
    return (a,b,d,s)

George

Scott David Daniels

May 10, 2009, 2:30:49 PM5/10/09
to python...@python.org
George Sakkis wrote:
> On Sun, May 10, 2009 at 1:28 AM, Scott David Daniels
> <Scott....@acm.org> wrote:
>> Any argument for changing to a more "dynamic" default scheme had better
>> have a definition of the behavior of the following code, and produce a
>> good rationale for that behavior:
>>
>> x = 5
>> def function_producer(y):
>>     def inner(arg=x+y):
>>         return arg + 2
>>     return inner
>
> I don't think the proposed scheme was ever accused of not being
> well-defined. Here's the current equivalent dynamic version:
>
> x = 5
> def function_producer(y):
>     missing = object()
>     def inner(arg=missing):
>         if arg is missing:
>             arg = x+y
>         return arg + 2
>     return inner

So sorry.

def function_producer(y):
    def inner(arg=x+y):
        return arg + 2

    y *= 10
    return inner

I was trying to point out that it becomes much trickier building
functions with dynamic parts, and fluffed the example.

--Scott David Daniels
Scott....@Acm.Org

Larry Hastings

May 10, 2009, 3:27:15 PM5/10/09
to Arnaud Delobelle, python-ideas

Arnaud Delobelle wrote:
> Not that we don't need syntax for default arguments either :) Here is
> a decorator that does it: [...] Let's get rid of default arguments
> altogether, and we will have solved the problem! Furthemore, by
> removing default arguments from the language, we can let people choose
> what semantics they want for default arguments. [...] ;)

Comedy or not, I don't support getting rid of default arguments. Nor do
I support changing the semantics of default arguments so they represent
code that is run on each function invocation.

As I demonstrated, people can already choose what semantics they want
for default arguments, by choosing whether or not to decorate their
functions with clone_arguments or the like. We don't need to remove
default arguments from the language for that to happen.


/larry/

George Sakkis

May 10, 2009, 4:06:53 PM5/10/09
to python-ideas
On Sun, May 10, 2009 at 1:00 PM, George Sakkis <george...@gmail.com> wrote:

> On Sun, May 10, 2009 at 12:10 PM, Arnaud Delobelle
> <arn...@googlemail.com> wrote:
>
>> Furthemore, by removing default arguments from the language, we can let
>> people choose what semantics they want for default arguments.  I.e, if they
>> want them to be reevaluated each time, they could write the default
>> decorator as follows (it is exactly the same as the one above except for a
>> pair of parentheses that have been added on one line.
>
> Cute, but that's still a subset of what the dynamic semantics would
> provide; the evaluated thunks would have access to the previously
> defined arguments:
>
> def foo(a, b, d=(a*a+b+b)**0.5, s=1/d):
>    return (a,b,d,s)
>
> would be equivalent to
>
> missing = object()
> def foo(a, b, d=missing, s=missing):
>    if d is missing:
>        d = (a*a+b+b)**0.5
>    if s is missing:
>        s = 1/d
>    return (a,b,d,s)


Just for kicks, here's a decorator that supports dependent dynamically
computed defaults; it uses eval() to create the lambdas on the fly:

@evaldefaults('s','d')
def foo(a, b, d='(a*a+b*b)**0.5', t=0.1, s='(1+t)/(d+t)'):
    return (a,b,d,t,s)

print foo(3,4)

#=======

import inspect
import functools
# from http://code.activestate.com/recipes/551779/
from getcallargs import getcallargs

def evaldefaults(*eval_params):
    eval_params = frozenset(eval_params)
    def decorator(f):
        params, _, _, defaults = inspect.getargspec(f)
        param2default = dict(zip(params[-len(defaults):], defaults)) if defaults else {}
        param2lambda = {}
        for p in eval_params:
            argsrepr = ','.join(params[:params.index(p)])
            param2lambda[p] = eval('lambda %s: %s' % (argsrepr, param2default[p]),
                                   f.func_globals)
        @functools.wraps(f)
        def wrapped(*args, **kwargs):
            allkwargs, missing = getcallargs(f, *args, **kwargs)
            missing_eval_params = eval_params.intersection(missing)
            f_locals = {}
            for i, param in enumerate(params):
                value = allkwargs[param]
                if param in missing_eval_params:
                    allkwargs[param] = value = param2lambda[param](**f_locals)
                f_locals[param] = value
            return f(**allkwargs)
        return wrapped
    return decorator
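
In modern Python 3 the same dependent-default behaviour can be sketched without eval(), using inspect.signature. This is a hypothetical helper (latedefaults is an invented name, not a stdlib facility), assuming the thunks are listed in dependency order:

```python
import functools
import inspect

def latedefaults(**thunks):
    # Each keyword maps a parameter name to a callable that computes its
    # default, drawing on whichever earlier arguments the callable names.
    def decorator(f):
        sig = inspect.signature(f)
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            bound = sig.bind_partial(*args, **kwargs)
            for name, thunk in thunks.items():  # kwargs keep insertion order
                if name not in bound.arguments:
                    wanted = inspect.signature(thunk).parameters
                    bound.arguments[name] = thunk(
                        **{k: v for k, v in bound.arguments.items() if k in wanted})
            return f(*bound.args, **bound.kwargs)
        return wrapper
    return decorator

@latedefaults(d=lambda a, b: (a*a + b*b) ** 0.5, s=lambda d: 1 / d)
def foo(a, b, d=None, s=None):
    return (a, b, d, s)

print(foo(3, 4))        # (3, 4, 5.0, 0.2)
print(foo(3, 4, d=10))  # (3, 4, 10, 0.1)
```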

Tennessee Leeuwenburg

May 10, 2009, 8:45:31 PM5/10/09
to Steven D'Aprano, python...@python.org
On Sun, May 10, 2009 at 11:23 AM, Steven D'Aprano <st...@pearwood.info> wrote:
On Sun, 10 May 2009 10:19:01 am Tennessee Leeuwenburg wrote:
> Hi Pascal,
> Taking the example of
>
> def foo(bar = []):
>   bar.append(4)
>   print(bar)
>
> I'm totally with you in thinking that what is 'natural' is to expect
> to get a new, empty, list every time.

That's not natural to me. I would be really, really surprised by the
behaviour you claim is "natural":

>>> DEFAULT = 3
>>> def func(a=DEFAULT):
...     return a+1
...
>>> func()
4
>>> DEFAULT = 7
>>> func()
8

Good example! If I may translate that back into the example using a list to make sure I've got it right...

default = []

def func(a=default):
   a.append(5)

func()
func()

default will now be [5,5]
 
For deterministic functions, the same argument list should return the
same result each time. By having default arguments be evaluated every
time they are required, any function with a default argument becomes
non-deterministic. Late evaluation of defaults is, essentially,
equivalent to making the default value a global variable. Global
variables are rightly Considered Harmful: they should be used with
care, if at all.


If I can just expand on that point somewhat...

In the example I gave originally, I had in mind someone designing a function which could be called either with some pre-initialised value, or otherwise use a default value of []. I imagined a surprised designer finding that the default value of [] was a reference to one specific list, rather than a new empty list each time.

e.g.

def foo(bar = []):
    bar.append(5)
    return bar

The same argument list (i.e. no arguments) would result in a different result being returned every time. On the first call, bar would be [5], then [5,5], then [5,5,5]; yet the arguments passed (i.e. none, use default) would not have changed.

You have come up with another example. I think it is designed to illustrate that a default argument doesn't need to specify a default value for something, but could be a default reference (such as a relatively-global variable). In that case, it is modifying something above its scope. To me, that is what you would expect under both "ways of doing things". I wonder if I am missing your point...

I'm totally with you on the Global Variables Are Bad principle, however. I don't design them in myself, and where I have worked with them, usually they have just caused confusion.

> However this isn't want
> happens. As far as I'm concerned, that should more or less be the end
> of the discussion in terms of what should ideally happen.

As far as I'm concerned, what Python does now is the idea behaviour.
Default arguments are part of the function *definition*, not part of
the body of the function. The definition of the function happens
*once* -- the function isn't recreated each time you call it, so
default values shouldn't be recreated either.

I agree that's how you see things, and possibly how many people see things, but I don't accept that it is a more natural way of seeing things. However, what *I* think is more natural is just one person's viewpoint... I totally see the philosophical distinction you are trying to draw, and it certainly does help to clarify why things are the way they are. However, I just don't know that it's the best way they could be.

> The responses to the change in behaviour which I see as more natural
> are, to summarise, as follows:
>   -- For all sorts of technical reasons, it's too hard
>   -- It changes the semantics of the function definition being
> evaluated at compile time
>   -- It's not what people are used to

And it's not what many people want.

You only see the people who complain about this feature. For the
multitude of people who expect it or like it, they have no reason to
say anything (except in response to complaints). When was the last time
you saw somebody write to the list to say "Gosh, I really love that
Python uses + for addition"? Features that *just work* never or rarely
get mentioned.

:) ... well, that's basically true. Of course there are some particular aspects of Python which are frequently mentioned as being wonderful, but I see your point.  However, I'm not sure we really know one way or another about what people want then -- either way.
 
> With regards to the second point, it's not like the value of
> arguments is set at compile time, so I don't really see that this
> stands up.

I don't see what relevance that has. If the arguments are provided at
runtime, then the default value doesn't get used.

I think this is the fundamental difference -- it speaks volumes to me :) ... I think you just have a different internal analogy for programming than I do. That's fine. To me, I don't see that a line of code should not be dynamically evaluated just because it's part of the definition. I just don't see why default values shouldn't be (or be able to be) dynamically evaluated. Personally I think that doing it all the time is more natural, but I certainly don't see why allowing the syntax would be bad. I'd basically do that 100% of the time. I'm not sure I've ever used a default value other than None in a way which I wouldn't want dynamically evaluated.
 
> I don't think it's intuitive,

Why do you think that intuitiveness is more valuable than performance
and consistency?

Because I like Python more than C? I'm pretty sure everyone here would agree that in principle, elegance of design and intuitive syntax are good. Agreeing on what that means might involve some robust discussion, but I think everyone would like the same thing. Well, consistency is pretty hard to do without... :)
 
Besides, intuitiveness is a fickle thing. Given this pair of functions:

def expensive_calculation():
    time.sleep(60)
    return 1

def useful_function(x=expensive_calculation()):
    return x + 1

I think people would be VERY surprised that calling useful_function()
with no arguments would take a minute *every time*, and would complain
that this slowness was "unintuitive".

That seems at first like a good point. It is a good point, but I don't happen to side with you on this issue, although I do see that many people might. The code that I write is not essentially performance-bound. It's a lot more design-bound (by which I mean it's very complicated, and anything I can do to simplify it is well-worth a bit of a performance hit).

However, when the design options are available (setting aside what default behaviour should be), it's almost always possible to design things how you'd like them.

e.g.

def speed_critical_function(x=None):
    if x is None:
        time.sleep(60)

    return 1

def handy_simple_function(foo=5, x=[]):  # or maybe, with new syntax: (foo=5, x = new [])
    for i in range(5):
        x.append(i)

    return x

Then, thinking about it a little more (and bringing back a discussion of default behaviour), I don't really see why the implementation of the dynamic function definition would be any slower than using None to indicate it wasn't passed in, followed by explicit default-value setting.
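For what it's worth, the "dynamic default" behaviour can be approximated in today's Python with a decorator. The helper below is purely hypothetical (the name `dynamic_defaults` and the convention are invented here), and it naively treats every callable default as a factory to call afresh, which is only a sketch of the idea:

```python
import inspect

def dynamic_defaults(func):
    """Hypothetical decorator: call any callable default afresh per call.

    A sketch only -- it treats *every* callable default as a factory,
    which would be wrong for a function you genuinely want as a default.
    """
    sig = inspect.signature(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, param in sig.parameters.items():
            if name not in bound.arguments and callable(param.default):
                bound.arguments[name] = param.default()  # fresh object per call
        bound.apply_defaults()
        return func(*bound.args, **bound.kwargs)
    return wrapper

@dynamic_defaults
def handy_simple_function(foo=5, x=list):  # `list` here means "a fresh [] each call"
    x.append(foo)
    return x

print(handy_simple_function())  # [5]
print(handy_simple_function())  # [5] again -- no accumulation between calls
```

The per-call overhead is one signature bind plus a loop over parameters, which supports the point that this need be no slower than the explicit None-check boilerplate.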

 
> it's just that people become
> accustomed to it. There is indeed, *some sense* in understanding that
> the evaluation occurs at compile-time, but there is also a lot of
> sense (and in my opinion, more sense) in understanding the evaluation
> as happening dynamically when the function is called.

No. The body of the function is executed each time the function is
called. The definition of the function is executed *once*, at compile
time. Default arguments are part of the definition, not the body, so
they too should only be executed once. If you want them executed every
time, put them in the body:

SENTINEL = object()

def useful_function(x=SENTINEL):
    if x is SENTINEL:
        x = expensive_calculation()
    return x + 1

I agree that's how things *are* done, but I just don't see why it should be that way, beyond it being what people are used to. It seems like there is no reason why it would be difficult to implement CrazyPython which does things as I suggest. Given that, it also doesn't seem like there is some inherent reason to prefer the design style of RealActualPython over CrazyPython. Except, of course that RealActualPython exists and I can use it right now (thanks developers!), versus CrazyPython which is just an idea.



Your logic is impeccable :) ... yet, if I may continue to push my wheelbarrow uphill for a moment longer, I would argue that is an implementation detail, not a piece of design philosophy.
 
(2) The potential performance degradation of re-evaluating default
arguments at runtime is great. For practical reasons, it's best to
evaluate them once only.

Maybe that's true. I guess I have two things to say on that point. The first is that I'm still not sure that's really true in a problematic way. Anyone wanting efficiency could continue to use sentinel values of None (which obviously don't need to be dynamically evaluated) while other cases would surely be no slower than the initialisation code would be anyway. Is the cost issue really that big a problem?

The other is that while pragmatics is, of course, a critical issue, it's also true that it's well-worth implementing more elegant language features if possible. It's always a balance. The fastest languages are always less 'natural', while the more elegant and higher-level languages are somewhat slower. Where a genuine design improvement is found, I think it's worth genuinely considering including that improvement, even if it is not completely pragmatic.
 
(3) In order to get the behaviour you want, the Python compiler would
need a more complicated implementation which would be hard to explain.

Yes, that's almost certainly true.
 
> I don't see it as changing one way of
> doing things for another equally valid way of doing things, but
> changing something that's confusing and unexpected for something
> which is far more natural and, to me, Pythonic.

I'm sorry, while re-evaluation of default arguments is sometimes useful,
it's more often NOT useful. Most default arguments are simple objects
like small ints or None. What benefit do you gain from re-evaluating
them every single time? Zero benefit. (Not much cost either, for simple
cases, but no benefit.)

But for more complex cases, there is great benefit to evaluating default
arguments once only, and an easy work-around for those rare cases that
you do want re-evaluation.

Small ints and None are global pointers (presumably!) so there is no need to re-evaluate them every time. The list example is particularly relevant (ditto empty dictionary) since I think that would be one of the most common cases for re-evaluation. Presumably a reasonably efficient implementation could be worked out such that dynamic evaluation of the default arguments (and indeed the entire function definition) need only occur where a dynamic default value were included.

I agree that the workaround is not that big a deal once you're fully accustomed to How Things Work, but it just seems 'nicer' to allow dynamic defaults. That's all I really wanted to say in the first instance, I didn't think that position would really get anyone's back up.
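As a side note on the efficiency point: the once-evaluated defaults are simply stored on the function object, which is why the current scheme costs nothing per call. This is standard CPython behaviour, shown only for illustration:

```python
def f(a, b=10, c=None):
    return a

# The default values were evaluated once, at def time, and now live
# on the function object itself -- no work is repeated per call.
print(f.__defaults__)  # (10, None)
```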
 
Regards,
-Tennessee

Tennessee Leeuwenburg

unread,
May 10, 2009, 9:09:50 PM5/10/09
to Georg Brandl, python...@python.org


On Mon, May 11, 2009 at 12:59 AM, Georg Brandl <g.br...@gmx.net> wrote:
Tennessee Leeuwenburg schrieb:

> I don't have a casual attitude towards breaking code, just an open mind
> towards discussions on their merits. I don't really appreciate the
> negative tones, and I'm sure that if anyone else is in the firing line,
> they wouldn't appreciate either, even if it to some extent it's all a
> bit of a storm in a teacup. Unless someone who is happy to cop a bit of
> flak stands up and says that's not on, then maintaining a "thick skin"
> -- i.e. putting up with people putting you down, be it through a clear
> and direct put-down, or through a more subtle implication -- becomes the
> norm. It becomes acceptable, perhaps indeed even well-regarded, to take
> a certain viewpoint then suggest that anyone who doesn't share it is
> doing something wrong.

The problem is that people whose proposal immediately meets negative
reactions usually feel put down no matter what exactly you say to them.
If there was a polite way of saying "This will not change, please don't
waste more of our time with this discussion." that still gets the point
across, I would be very grateful.

I understand that. I think there is a good way to do it. First of all, I would recognise that this is the python-ideas list, not the python-dev list, and that this is *exactly* the place to discuss ideas on their merits, and potentially put aside pragmatics to engage in a discussion of design philosophy.

For bad ideas, I would suggest:
   "Thanks for your contribution. However, this has been discussed quite a lot before, and the groundswell of opinion is most likely that this is not going to be a good addition to Python. However, if you'd like to discuss the idea further, please consider posting it to comp.lang.py."

For good/okay ideas that just won't get up, I would suggest:
  "Thanks for your contribution. I see your point, but I don't think it's likely to get enough traction amongst the developer community for someone else to implement it. However, if you'd like more feedback on your ideas so that you can develop a PEP or patch, please feel free to have a go. However, please don't be disappointed if it doesn't get a lot of support unless you are happy to provide some more justification for your position.".

I don't really think anyone on a mailing list really needs to waste any more time than they want to -- just ignore the thread.

I would definitely avoid things like:
  "You clearly have no idea what you are talking about"
  "If you only knew what I knew, you'd know differently"

It's probably not possible to avoid people with an idea feeling deflated if their ideas are not popular, but on an ideas list such as this, I think that having conversations should be encouraged. Certainly that's what got under my skin. If I was chatting in person, or with friends, or in a meeting, the appropriate thing to do would be to say "Hey, that's a bit rough!" and then probably the attitude would be wound back, or the person would respond with "Oh, that's not what I meant, I just meant this..." and the misunderstanding would be quickly resolved.

Unfortunately, email just *sucks* for telling the difference between someone with a chip on their shoulder, or someone who is just being helpful and made a bad choice of words.

Cheers,
-T

Terry Reedy

unread,
May 11, 2009, 12:05:26 AM5/11/09
to python...@python.org
George Sakkis wrote:
> On Sun, May 10, 2009 at 1:28 AM, Scott David Daniels
> <Scott....@acm.org> wrote:
>> Any argument for changing to a more "dynamic" default scheme had better
>> have a definition of the behavior of the following code, and produce a
>> good rationale for that behavior:
>>
>> x = 5
>> def function_producer(y):
>>     def inner(arg=x+y):
>>         return arg + 2
>>     return inner

In this version, x is resolved each time function_producer is called, which could be different each time. The x = 5 line is possibly irrelevant.

> I don't think the proposed scheme was ever accused of not being
> well-defined. Here's the current equivalent dynamic version:
>
> x = 5
> def function_producer(y):
>     missing = object()
>     def inner(arg=missing):
>         if arg is missing:
>             arg = x+y
>         return arg + 2
>     return inner


In this version, x is resolved each time each of the returned inners is
called, which could be different each time and which could be different
from any of the times function_producer was called. Not equivalent.
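Terry's distinction can be made concrete. The sketch below (adapted from the quoted code) shows the two versions freezing x at different times:

```python
x = 5

def producer_static(y):
    # arg's default is computed NOW, while producer_static is running
    def inner(arg=x + y):
        return arg + 2
    return inner

def producer_dynamic(y):
    missing = object()
    def inner(arg=missing):
        if arg is missing:
            arg = x + y  # x is looked up only when inner itself runs
        return arg + 2
    return inner

f = producer_static(1)   # default frozen as 5 + 1 = 6
g = producer_dynamic(1)
x = 100                  # rebind the global afterwards
print(f())  # 8   (6 + 2, unaffected by the rebinding)
print(g())  # 103 (100 + 1 + 2, x re-read at call time)
```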

Given how often this issue is resuscitated, I will consider raising
my personal vote from -1 to -0.

My friendly advice to anyone trying to promote a change is to avoid
categorical (and false or debatable) statements like "Newbies get
confused" (Some do, some do not) or "The intuitive meaning is" (to you
maybe, not to me). Also avoid metaphysical issues, especially those
that divide people.*

Do focus on practical issues, properly qualified statements, and how a
proposal would improve Python.

Terry Jan Reedy

* The initial argument for changing the meaning of int/int amounted to
this: "integers are *merely* a subset of reals, therefore....". Those
with a different philosophy dissented and the actual proposal was
initially lost in the resulting fire.

Stephen J. Turnbull

unread,
May 11, 2009, 4:57:49 AM5/11/09
to Tennessee Leeuwenburg, Georg Brandl, python...@python.org
> On Mon, May 11, 2009 at 12:59 AM, Georg Brandl <g.br...@gmx.net> wrote:

> > The problem is that people whose proposal immediately meets
> > negative reactions usually feel put down no matter what exactly
> > you say to them. If there was a polite way of saying "This will
> > not change, please don't waste more of our time with this
> > discussion." that still gets the point across, I would be very
> > grateful.

I thought that the way to do that is to say "This proposal is
un-Pythonic".[1]

Now, a claim that something is "un-Pythonic" is basically the end of
discussion, since in the end it's a claim that the BDFL would reject
it, not something based entirely on "objective criteria". Even the
BDFL sometimes doesn't know what's Pythonic until he sees it! So a
bare claim of "un-Pythonic" is borrowing the BDFL's authority, which
should be done very cautiously.

IMO, the "problem" arose here because Tennessee went outside of
discussing the idea on its merits. Rather, he wrote "the current
situation seems un-Pythonic," but didn't point to violations of any of
the usual criteria. OTOH, Steven did discuss the idea on its Pythonic
merits, showing that it arguably violates at least three of the tenets
in the Zen of Python.

In turn, Steven could have left well-enough alone and avoided
explicitly pointing out that, therefore, Tennessee doesn't seem to
understand what "Pythonic" is. FWIW, early in my participation in
Python-Dev I was told, forcefully, that I wasn't qualified to judge
Pythonicity. That statement was (and is) true, and being told off was
a very helpful experience for me. How and when to say it is another
matter, but avoiding it entirely doesn't serve the individual or the
community IMO.

Note that this whole argument does not apply to terms like "natural,"
"intuitive," "readable," etc., only to the pivotal term "Pythonic".
The others are matters of individual opinion. AIUI, "Pythonic" is a
matter of *the BDFL's* opinion (if it comes down to that, and
sometimes it does).


Footnotes:
[1] If it's a fundamental conceptual problem. "This proposal is
incompatible" can be used if it's a matter of violating the language
definition according to the language reference and any relevant PEPs.

Stephen J. Turnbull

unread,
May 11, 2009, 5:29:40 AM5/11/09
to python...@python.org
Terry Reedy writes:

> Given how often this issue is resuscitated, I will consider raising
> my personal vote from -1 to -0.

> Do focus on practical issues, properly qualified statements, and how a
> proposal would improve Python.

One thing I would like to see addressed is use-cases where the
*calling* syntax *should* use default arguments.[1] In the case of
the original example, the empty list, I think that requiring the
argument, and simply writing "foo([])" instead of "foo()" is better,
on two counts: EIBTI, and TOOWTDI. And it's certainly not an
expensive adjustment.

In a more complicated case, it seems to me that defining (and naming)
a separate function would often be preferable to defining a
complicated default, or explicitly calling a function in the actual
argument. Ie, rather than

def consume_integers(ints=fibonacci_generator()):
    for i in ints:
        # suite and termination condition

why not

def consume_integers(ints):
    for i in ints:
        # suite and termination condition

def consume_fibonacci():
    consume_integers(fibonacci_generator())

or

def consume_integers_internal(ints):
    for i in ints:
        # suite and termination condition

def consume_integers():
    consume_integers_internal(fibonacci_generator())

depending on how frequent or intuitive the "default" Fibonacci
sequence is? IMO, for both above use-cases EIBTI applies as an
argument that those are preferable to a complex, dynamically-
evaluated default, and for the second TOOWTDI also applies.

Footnotes:
[1] In the particular cases being advocated as support for dynamic
evaluation of default arguments, not in general. It is clear to me
that having defaults for rarely used option arguments is a good thing,
and I think that is a sufficient justification for Python to support
default arguments.

Aahz

unread,
May 11, 2009, 8:29:54 AM5/11/09
to python...@python.org
On Mon, May 11, 2009, Tennessee Leeuwenburg wrote:
> On Sun, May 10, 2009 at 11:23 AM, Steven D'Aprano <st...@pearwood.info>wrote:
>>
>> (1) Assignments outside of the body of a function happen once, at
>> compile time. Default values are outside the body of the function. You
>> want a special case for default values so that they too happen at
>> runtime. That's not special enough to warrant breaking the rules.
>
> Your logic is impeccable :) ... yet, if I may continue to push my
> wheelbarrow uphill for a moment longer, I would argue that is an
> implementation detail, not a piece of design philosophy.

I'm not going to look it up, but in the past, Guido has essentially
claimed that this behavior is by design (not so much by itself but in
conjunction with other deliberate decisions). I remind you of this:

"Programming language design is not a rational science. Most reasoning
about it is at best rationalization of gut feelings, and at worst plain
wrong." --GvR, python-ideas, 2009-3-1
--
Aahz (aa...@pythoncraft.com) <*> http://www.pythoncraft.com/

"It is easier to optimize correct code than to correct optimized code."
--Bill Harlan

Pascal Chambon

unread,
May 11, 2009, 3:49:14 PM5/11/09
to python...@python.org

Well well well,

lots of interesting points have flowed there, I won't have any chance of reacting to each one ^^


>And no one seemed to enjoy the possibilities of getting "potentially static variables" this way.
>You did not search hard enough.

Would anyone mind pointing me to people that have made a sweet use of mutable default arguments ?
At the moment I've only run into "memoization" thingies, which looked rather awkward to me : either the "memo" has to be used from elsewhere in the program, and having to access it by browsing the "func_defaults" looks not at all "KISS", or it's really some private data from the function, and having it exposed in its interface is rather error-prone.
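For reference, the "memoization thingy" usually looks something like the sketch below; the awkwardness Pascal describes is that the memo is only reachable by rummaging through the function's attributes (a minimal illustration, not anyone's production code):

```python
def fib(n, _cache={0: 0, 1: 1}):
    # The single shared default dict survives across calls: a memo.
    if n not in _cache:
        _cache[n] = fib(n - 1) + fib(n - 2)
    return _cache[n]

print(fib(30))  # 832040

# The memo is only reachable through the function's attributes, which
# is the "not at all KISS" part (func_defaults in Python 2,
# __defaults__ in Python 3):
print(fib.__defaults__[0][30])  # 832040
```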


> [...]
> > Does it exist ? Do we have any way, from inside a call block, to
> > browse the default arguments that this code block might receive ?
> [...]
>
> dir(func_object) is your friend :)

Whoops, I meant "from inside a *code* block" :p
But that's the "self function" thread you noted, I missed that one... thanks for pointing it ^^


> > On the other hand, would anyone support my alternative wish, of
> > having a builtin "NotGiven", similar to "NotImplemented", and
> > dedicated to this somehow usual task of "placeholder" ?
>
> There already is such a beast: None is designed to be used as a
> placeholder for Not Given, Nothing, No Result, etc.
>
> If None is not suitable, NotImplemented is also a perfectly good
> built-in singleton object which can be used as a sentinel. It's already
> used as a sentinel for a number of built-in functions and operators.
> There's no reason you can't use it as well.

Well, to me there was a ternary semantics for arguments :
* None -> do without this argument / don't use this feature
* "NotGiven" -> create the parameter yourself
* "Some value" -> use this value as a parameter
But on reflection, it's quite rare that the three meanings have to be used in the same place, so I guess it's OK as it is...

Even so, I'd not be against new, more explicit builtins. "None" has too many meanings to be "self documenting", and I feel "NotImplemented" doesn't really fit where we mean "NotGiven".
That's the same thing for exceptions : I've seen people forced to make ugly workarounds because they got "ValueError" where they would have loved to get "EmptyIterableError" or other precise exceptions.
But maybe I'm worrying on details there :p

Heh heh heh, he thinks beginners read manuals :-)

  
 ^^ I'm maybe the only one, but I've always found the quickest way to learn a language/library was to read the doc.
Wanna learn Python ? Read the language reference, then the library reference. Wanna know what's the object model of PHP5 versus PHP4 ? Read the 50-page chapter on that matter. Wanna play with Qt ? Read the class libraries first. :p
Good docs get read like novels, and it's fun to cross most of the gotchas and implementation limitations without having to be bitten by them first. People might have the feeling that they gain time by jumping straight to practice; I've rather the feeling that they lose a hell of a lot of it that way.



Back to the "mutable default arguments" thingy :
I think I perfectly see the points in favor of having them as attributes of the function method, as it's the case currently.
It does make sense in many ways, even if less sense than "class attributes", for sure (the object model of python is rock solid to me, whereas default arguments are on a thin border that few languages had the courage to explore - most of them forbidding non-literal constants).

But imo, it'd be worth having a simple and "consensual way" of obtaining dynamic evaluation of default arguments, without the "if arg is None:" pattern.
The decorators that people have submitted have sweet uses, but they either deepcopy arguments, use dynamic evaluation of code strings, or force you to lambda-wrap all arguments ; so they're not equivalent to what newbies would most of the time expect - a reevaluation of the python code they entered after '='.
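The "lambda-wrapping" being discussed can be sketched without any new syntax. `Deferred` and `resolve` below are hypothetical names invented for this sketch, and the explicit `resolve()` call is exactly the boilerplate a dedicated syntax would hide:

```python
class Deferred:
    """Hypothetical marker: wraps an expression so it is re-run per call."""
    def __init__(self, thunk):
        self.thunk = thunk

def resolve(value):
    # Call the wrapped thunk for a fresh object; pass other values through.
    return value.thunk() if isinstance(value, Deferred) else value

def func(a, c=Deferred(lambda: [])):
    c = resolve(c)  # the line a hypothetical `c = *[]` syntax would make implicit
    c.append(a)
    return c

print(func(1))  # [1]
print(func(2))  # [2] -- a fresh list each time, not [1, 2]
```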

So the best, imo, would really be a keyword or some other form that reproduces with an easy syntax the "lambda-wrapping" we had.

If adding keywords is too violent, what would you people think of some notation similar to what we already have in the "function arguments world", i.e. stars ?

def func(a, c = *[]):
    pass

Having 1, 2 or 3 stars in the "default argument" expression, wouldn't it be OK ? I guess they have no meaning there at the moment, so we could give them one : "keep that code as a lambda function and evaluate it at each function call".
Couldn't we ?
The risk would be confusion with the other "*" and "**", but in this case we might put 3 stars (yeah, that's much but...).

Any comment on this ?

Regards,
Pascal

PS : Has anyone read Dale Carnegie's books here ? That guy is a genius of social interactions, and I guess that if 1/3 of posters/mailers had read him, there would never be any flame anymore on forums and mailing lists.
Contrary to what his titles might suggest, he doesn't promote hypocrisy or cowardice ; he simply points out lots of attitudes (often unconscious) that ruin discussions without in any way helping the matter go forward.
I'm myself used to letting scornful or aggressive sentences pass by ; but others don't like them, and I fully understand why.
So couldn't we smooth the edges, in order to keep the discussion as it's supposed to be - a harmless sharing of pros and cons arguments, which endangers no one's life - instead of having it randomly turned into a confrontation of egos, pushing each other down as in an attempt not to drown ?
http://en.wikipedia.org/wiki/How_to_Win_Friends_and_Influence_People



Chris Rebert

unread,
May 11, 2009, 4:32:21 PM5/11/09
to Pascal Chambon, python...@python.org
On Mon, May 11, 2009 at 12:49 PM, Pascal Chambon
<chambon...@wanadoo.fr> wrote:
<snip>

> So the best, imo, would really be a keyword or some other form that
> reproduces with an easy syntax the "lambda-wrapping" we had.
>
> If adding keywords is too violent, what would you people think of some
> notation similar to what we already have in the "function arguments world ",
> i.e stars ?
>
> def func(a, c = *[]):
>     pass
>
> Having 1, 2 or 3 stars in the "default argument" expression, wouldn't it be
> OK ? I guess they have no meaning there at the moment, so we could give them
> one : "keep that code as a lamda functio nand evaluate it at each function
> call".
> Couldn't we ?
> The risk would be confusion with the other "*" and "**", but in this case we
> might put 3 stars (yeah, that's much but...).
>
> Any comment on this ?

Seems unnecessarily confusing and sufficiently unrelated to the
current use of stars in Python. -1 on this syntax.
I'd look for a different punctuation/keyword.

Cheers,
Chris
--
http://blog.rebertia.com

Stephen J. Turnbull

unread,
May 11, 2009, 11:36:09 PM5/11/09
to Pascal Chambon, python...@python.org
Pascal Chambon writes:

> So couldn't we smooth the edges, in order to keep the discussion as it's
> supposed to be - a harmless sharing of pros and cons arguments, which
> endangers no one's life -

In discussions about Python development, misuse of the term "Pythonic"
to support one's personal preference is not harmless. It leads to
confusion of newbies, and ambiguity in a term that is already rather
precise, and becoming more so with every PEP (though it is hard to
express in a few words as a definition). The result is that the BDFL
may use that term at his pleasure, but the rest of us risk being
brought up short by somebody who knows better.

> instead of having it randomly turned into a confrontation of egos,

This was not a random event. It was triggered by, *and responded only
to*, the misuse of the word "Pythonic".

> pushing each other down as in an attempt not to drown ?

Hey, I haven't seen one claim that "this feature would look good in
Perl" yet. The gloves are still on.<wink>

Tennessee Leeuwenburg

unread,
May 12, 2009, 12:22:31 AM5/12/09
to Stephen J. Turnbull, python...@python.org
On Tue, May 12, 2009 at 1:36 PM, Stephen J. Turnbull <ste...@xemacs.org> wrote:
Pascal Chambon writes:

 > So couldn't we smooth the edges, in order to keep the discussion as it's
 > supposed to be - a harmless sharing of pros and cons arguments, which
 > endangers no one's life -

In discussions about Python development, misuse of the term "Pythonic"
to support one's personal preference is not harmless.  It leads to
confusion of newbies, and ambiguity in a term that is already rather
precise, and becoming more so with every PEP (though it is hard to
express in a few words as a definition).  The result is that the BDFL
may use that term at his pleasure, but the rest of us risk being
brought up short by somebody who knows better.

 > instead of having it randomly turned into a  confrontation of egos,

This was not a random event.  It was triggered by, *and responded only
to*, the misuse of the word "Pythonic".


I guess it's never occurred to me, and I wouldn't have thought it would be immediately clear to everyone, that Pythonic simply means "Whatever BDFL thinks". I've always thought it meant "elegant and in keeping with the design philosophy of Python", and up for discussion and interpretation by everyone. I never thought that it would be used as a means of *preventing* discussion about what was or was not 'Pythonic'. *Obviously*, BDFL's opinions on the language are authoritative, but that doesn't make them beyond discussion.

This is the Python Ideas list, not the dev-list, and I was discussing my own interpretation, not trying to force anyone on anything. To recall a quote I heard once, "You are entitled to your own opinion, but not your own facts". I would have thought that expressing ones' opinion about what is or is not Pythonic is a wonderful thing to encourage. It's like encouraging people to discuss what elegant code looks like, or the merits of a piece of writing.

I thought it was very clear that I was talking about my interpretation of what was Pythonic, and clear that I was in no way talking about trying to claim authority. I feel a bit like I've been targetted by the thought police, truth be told, although that overstates matters. I didn't think I was in any way saying "My way is absolutely more Pythonic, you should all think like me", but much more along the lines of, "Hey, I think my solution captures something elegant and Pythonic, surely that's worth talking about even if there are some practical considerations involved".  I just thought I'd be clear in saying "seems to me to be more Pythonic" rather than "is more Pythonic".

Where are people going to talk freely about their interpretation of what is and isn't Pythonic, if not the ideas list? I'm also subscribed to the python-dev list, and I've never attempted to force an opinion there. Isn't *this* list the right place to have conversations about these concepts? I don't think people should be pulled short for talking about Pythonicity, just for trying to impose their world-view. That's what rubs wrongly -- being told you're not even supposed to *talk* about something, or not be entitled to an opinion on something. I would have thought that getting involved in discussing the Zen of Python is something that should be a part of everyone's learning and growth, rather than something which is delivered like a dogma. That's not to say there isn't a right answer on many issues, but it has to be acceptable to discuss the issues and to hold personal opinions.

Cheers,
-T

Aahz

unread,
May 12, 2009, 12:42:56 AM5/12/09
to python...@python.org
On Tue, May 12, 2009, Tennessee Leeuwenburg wrote:
>
> I thought was very clear that I was talking about my interpretation
> of what was Pythonic, and clear that I was in no way talking about
> trying to claim authority. I feel a bit like I've been targetted by
> the thought police, truth be told, although that overstates matters. I
> didn't think I was in any way saying "My way is absolutely more
> Pythonic, you should all think like me", but much more along the lines
> of, "Hey, I think my solution captures something elegant and Pythonic,
> surely that's worth talking about even if there are some practical
> considerations involved". I just thought I'd be clear in saying
> "seems to me to be more Pythonic" rather than "is more Pythonic".

That may have been your intent, but it sure isn't what I read in your
original post. I suggest you re-read it looking for what might get
interpreted as obstreperous banging on the table:

http://mail.python.org/pipermail/python-ideas/2009-May/004601.html

If you still don't see it, I'll discuss it with you (briefly!) off-list;
that kind of tone discussion is really off-topic for this list.

"It is easier to optimize correct code than to correct optimized code."
--Bill Harlan

Tennessee Leeuwenburg

unread,
May 12, 2009, 1:06:52 AM5/12/09
to Aahz, python...@python.org


On Tue, May 12, 2009 at 2:42 PM, Aahz <aa...@pythoncraft.com> wrote:
On Tue, May 12, 2009, Tennessee Leeuwenburg wrote:
>
> I thought was very clear that I was talking about my interpretation
> of what was Pythonic, and clear that I was in no way talking about
> trying to claim authority. I feel a bit like I've been targetted by
> the thought police, truth be told, although that overstates matters. I
> didn't think I was in any way saying "My way is absolutely more
> Pythonic, you should all think like me", but much more along the lines
> of, "Hey, I think my solution captures something elegant and Pythonic,
> surely that's worth talking about even if there are some practical
> considerations involved".  I just thought I'd be clear in saying
> "seems to me to be more Pythonic" rather than "is more Pythonic".

That may have been your intent, but it sure isn't what I read in your
original post.  I suggest you re-read it looking for what might get
interpreted as obstreperous banging on the table:

http://mail.python.org/pipermail/python-ideas/2009-May/004601.html

If you still don't see it, I'll discuss it with you (briefly!) off-list;
that kind of tone discussion is really off-topic for this list.

Agreed. Anyone else who wants to chime in, feel free to email me off-list. Regardless of the rights and wrongs, I'll of course be extra-careful in future to be crystal clear about my meaning.

As far as I can tell, my original email is littered with terms like 'seems to me', 'in my opinion', 'personally' etc which I would think would convey to anyone that I'm talking about a personal opinion and not trying to discredit any one or any thing. Not that it's about counting, but I count no fewer than seven occasions where I point out that I am advancing a personal opinion rather than making a universal statement. I'm not sure what else I should have done.

Cheers,
-T

Stephen J. Turnbull

unread,
May 12, 2009, 5:42:15 AM5/12/09
to Tennessee Leeuwenburg, python...@python.org
Tennessee Leeuwenburg writes:

> Where are people going to talk freely about their interpretation of
> what is and isn't Pythonic, if not the ideas list?

comp.lang.python seems more appropriate for free discussion.

Pascal Chambon

unread,
May 12, 2009, 3:56:28 PM5/12/09
to Stephen J. Turnbull, python...@python.org
Well, since adding new keywords or operators is a very sensitive matter, and the existing ones are rather exhausted, proposing a new syntax won't be easy...

One last idea I might have : what about something like

    def myfunc(a, b, c = yield []):
        pass


I'm no expert in English, but I'd say the following "equivalents" of yield (according to WordWeb) are in a rather good semantic area :
*Be the cause or source of
*Give or supply
*Cause to happen or be responsible for
*Bring in

Of course the behaviour of this yield is not so close to the one we know, but there is no interpretation conflict for the parser, and we might quickly get used to it :
* yield in default argument => reevaluate the expression each time
* yield in function body => return value and prepare to receive one

How do you people feel about this ?
Regards,
Pascal

PS : Some months ago I heard someone compare the new high-level languages to new religions - with the appearance of notions like "py-evangelists" and the like - whereas it had (in his opinion) never been so for older languages :p
That's imo somewhat true (and rather a good sign for those languages) ; I feel phrases like "pythonic" or Perl's "TIMTOWTDI" have gained some kind of sacred aura ^^

Stephen J. Turnbull

unread,
May 13, 2009, 3:34:02 AM5/13/09
to Pascal Chambon, python...@python.org
Pascal Chambon writes:

> One last idea I might have : what about something like
>

> def myfunc(a, b, c = yield []):
>     pass

As syntax, I'd be -0.5. But my real problem is that the whole concept
seems to violate several of the "Zen" tenets. Besides those that
Steven D'Aprano mentioned, I would add two more. First is "explicit
is better than implicit" at the function call. True, for the leading
cases of "[]" and "{}", "def foo(bar=[])" is an easy mistake for
novice Python programmers to make with current semantics. And I agree
that it is very natural to initialize an unspecified list or
dictionary with the empty object. But those also have a convenient
literal syntax that makes it clear that the object is constructed at
function call time: "myfunc([])" and "myfunc({})". Since that syntax
is always available even with the proposed new semantics, I feel this
proposal also violates "there's one -- and preferably only one --
obvious way to do it".

I understand the convenience argument, but that is frequently rejected
on Python-Dev with "your editor can do that for you; if it doesn't,
that's not a problem with Python, it's a problem with your editor." I
also see the elegance and coherence of Tennessee's proposal to
*always* dynamically evaluate, but I don't like it. Given that always
evaluating dynamically is likely to have performance impact that is as
surprising as the behavior of "def foo(bar=[])", I find it easy to
reject that proposal on the grounds of "although practicality beats
purity".

I may be missing something, but it seems to me that the proponents of
this change have yet to propose any concrete argument that it is more
Pythonic than the current behavior of evaluating default expressions
at compile time, and expressing dynamic evaluation by explicitly
invoking the expression conditionally in the function's suite, or as
an argument.

Regarding the syntax, the recommended format for argument defaults is

def myfunc(a, b, c=None):
    pass

Using a keyword (instead of an operator) to denote dynamically
evaluated defaults gives:

def myfunc(a, b, c=yield []):
    pass

which looks like a typo to me. I feel there should be a comma after
"yield", or an index in the brackets. (Obviously, there can't be a
comma because "[]" can't be a formal parameter, and yield is a keyword
so it can't be the name of a sequence or mapping. So the syntax
probably 'works' in terms of the parser. I'm describing my esthetic
feeling, not a technical objection.)

As for "yield" itself, I would likely be confused as to when the
evaluation takes place. "Yield" strikes me as an imperative, "give me
the value *now*", ie, at compile time.

Jacob Holm

unread,
May 13, 2009, 5:00:41 AM5/13/09
to Pascal Chambon, python...@python.org
Pascal Chambon wrote:
> One last idea I might have : what about something like
>
> def myfunc(a, b, c = yield []):
>     pass
>

> [...], but there is no interpretation conflict for the parser, and we

> might quickly get used to it


I am surprised that there is no conflict, but it looks like you are
technically right. The parentheses around the yield expression are
required in the following (valid) code:


>>> def gen():
...     def func(arg=(yield 'starting')):
...         return arg
...     yield func
...
>>> g = gen()
>>> g.next()
'starting'
>>> f = g.send(42)
>>> f()
42


I would hate to see the meaning of the above change depending on whether
the parentheses around the yield expression were there or not, so -1 on
using "yield" for this.

I'm +0 on the general idea of adding a keyword for delayed evaluation of
default argument expressions.

- Jacob

spir

unread,
May 13, 2009, 6:25:23 AM5/13/09
to python...@python.org
Le Wed, 13 May 2009 16:34:02 +0900,
"Stephen J. Turnbull" <ste...@xemacs.org> s'exprima ainsi:

> But those also have a convenient
> literal syntax that makes it clear that the object is constructed at
> function call time: "myfunc([])" and "myfunc({})".

I totally agree with you here. And as you say, "explicit, etc..."

Then the argument must not be given a default at all in the function def. That is a semantic change, and a pity when the argument has an "obvious & natural" default value.
While we're at removing default args and requiring them to be explicit instead, why have default args at all in Python?
Especially, why should immutable defaults be OK, and mutable ones be wrong and replaced with an explicit value at call time? This distinction makes no sense conceptually: the user just wants a default; it is rather a Python internal detail that raises the issue.

I would rather require the contrary, namely that static vars should not be mixed up with default args. This is not only confusing in code, but also a design flaw. Here's how I see the various possibilities (with some amount of exaggeration ;-):

def f(arg, lst=[]):
    # !!! lst is no default arg, !!!
    # !!! it's a static var instead !!!
    # !!! that will be updated in code !!!
    <do things>

def f(arg):
    <do things>
f.lst = []  # init static var

or

def f(arg):
    # init static var
    try:
        f.lst
    except AttributeError:
        f.lst = []
    <do things>
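The function-attribute approach sketched above can be written out as a runnable example (the function name and the `lst` attribute are illustrative, not a prescribed API):

```python
def f(arg):
    # Emulate a "static" variable as an attribute on the function object;
    # it is initialized on the first call only.
    if not hasattr(f, "lst"):
        f.lst = []
    f.lst.append(arg)
    return f.lst
```

Unlike the mutable-default gotcha, this makes the persistent state explicit and inspectable as f.lst.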

Denis
------
la vita e estrany

spir

unread,
May 13, 2009, 6:38:09 AM5/13/09
to python...@python.org
Le Wed, 13 May 2009 16:34:02 +0900,
"Stephen J. Turnbull" <ste...@xemacs.org> s'exprima ainsi:

> I


> also see the elegance and coherence of Tennessee's proposal to
> *always* dynamically evaluate, but I don't like it. Given that always
> evaluating dynamically is likely to have performance impact that is as
> surprising as the behavior of "def foo(bar=[])", I find it easy to
> reject that proposal on the grounds of "although practicality beats
> purity".

I do not understand why defaults should be evaluated dynamically (at runtime, I mean) in the proposal.

This can happen, I guess, only for defaults that depend on the non-local scope:

def f(arg, x=a):
    <body>

If we want this to change at runtime, then we'd better be very clear and explicit:

def f(arg, x=UNDEF):
    # case x is not provided: use the current value of a
    if x is UNDEF: x = a
    <body>

(Also, if x is not intended as a user-defined argument, then there is no need for a default at all. We simply have a non-referentially-transparent func.)

Denis
------
la vita e estrany

Stephen J. Turnbull

unread,
May 13, 2009, 8:01:43 AM5/13/09
to spir, python...@python.org
spir writes:

> Especially, why should immutable defaults be OK, and mutable ones
> be wrong and replaced with explicit value at call time?

"Although practicality beats purity." Consider a wrapper for
something like Berkeley DB which has a plethora of options for
creating external objects, but which most users are not going to care
about. It makes sense for those options to be optional. Otherwise
people will be writing

def create_db(path):
    """Create a DB with all default options at `path`."""
    create_db_with_a_plethora_of_options(path,a,b,c,d,e,f,g,h,i,j,k,l,m)

def create_db_with_nondefault_indexing(path,indexer):
    """Create a DB with `indexer`, otherwise default options, at `path`."""
    create_db_with_a_plethora_of_options(path,indexer,b,c,d,e,f,g,h,i,j,k,l,m)

etc, etc, ad nauseum. No, thank you!

> I would rather require the contrary, namely that static vars should
> not be messed up with default args. This is not only confusion in
> code, but also design flaw. here's how I see it various
> possibilities (with some amount of exageration ;-):
>
> def f(arg, lst=[]):
> # !!! lst is no default arg, !!!
> # !!! it's a static var instead !!!
> # !!! that will be updated in code !!!
> <do things>
>
> def f(arg):
> <do things>
> f.lst = [] # init static var

Don't generators do the right thing here?

def f(arg):
    lst = []
    while True:
        <do things>
        yield None
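Generators do indeed give each instance its own fresh state. A runnable sketch of the idea (the name `accumulate` and the send-based protocol are this illustration's choices):

```python
def accumulate():
    lst = []  # fresh per generator instance, like a per-instance "static" var
    while True:
        item = yield lst
        lst.append(item)

g = accumulate()
g.send(None)               # prime the generator up to the first yield
assert g.send(1) == [1]
assert g.send(2) == [1, 2]

h = accumulate()           # a second instance gets its own fresh list
h.send(None)
assert h.send(9) == [9]
```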

MRAB

unread,
May 13, 2009, 10:28:48 AM5/13/09
to python...@python.org
There's the suggestion that Carl Johnson gave:

def myfunc(a, b, c else []):
    pass

or there's:

def myfunc(a, b, c def []):
    pass

where 'def' stands for 'default' (or "defaults to").

Jeremy Banks

unread,
May 13, 2009, 10:52:57 AM5/13/09
to MRAB, python...@python.org
To someone who's a novice to this, could someone explain to me why it
has to be an existing keyword at all? Since no identifiers are valid
in that context anyway, why couldn't it be a new keyword that can
still be used as an identifier in valid contexts? For example (not
that I advocate this choice of keyword at all):

def foo(bar reinitialize_default []): # <-- it's a keyword here
    reinitialize_default = "It's an identifier here!"

That would be a syntax error now and if it were defined as a keyword
only in that context it wouldn't introduce backwards compatibility
problems and wouldn't force us to reuse an existing keyword in a
context that may be a bit of a stretch.

Is there a reason that this wouldn't be a viable approach?

George Sakkis

unread,
May 13, 2009, 11:29:44 AM5/13/09
to Jeremy Banks, python-ideas
On Wed, May 13, 2009 at 10:52 AM, Jeremy Banks <jer...@jeremybanks.ca> wrote:

> To someone who's a novice to this, could someone explain to me why it
> has to be an existing keyword at all? Since no identifiers are valid
> in that context anyway, why couldn't it be a new keyword that can
> still be used as an identifier in valid contexts? For example (not
> that I advocate this choice of keyword at all):
>
>    def foo(bar reinitialize_default []): # <-- it's a keyword here
>        reinitialize_default = "It's an identifier here!"
>
> That would be a syntax error now and if it were defined as a keyword
> only in that context it wouldn't introduce backwards compatibility
> problems and wouldn't force us to reuse an existing keyword in a
> context that may be a bit of a stretch.
>
> Is there a reason that this wouldn't be a viable approach?

Traditionally, keywords are recognized at the lexer level, which then
passes tokens to the parser. Lexers are pretty simple (typically
constants and regular expressions) and don't take the context into
account. In principle what you're saying could work, but given the
significant reworking of the lexer/parser it would require, it's quite
unlikely to happen, for better or for worse.

George

spir

unread,
May 13, 2009, 11:54:37 AM5/13/09
to python...@python.org
Le Wed, 13 May 2009 11:52:57 -0300,
Jeremy Banks <jer...@jeremybanks.ca> s'exprima ainsi:

> To someone who's a novice to this, could someone explain to me why it
> has to be an existing keyword at all? Since no identifiers are valid
> in that context anyway, why couldn't it be a new keyword that can
> still be used as an identifier in valid contexts? For example (not
> that I advocate this choice of keyword at all):
>
> def foo(bar reinitialize_default []): # <-- it's a keyword here
> reinitialize_default = "It's an identifier here!"
>
> That would be a syntax error now and if it were defined as a keyword
> only in that context it wouldn't introduce backwards compatibility
> problems and wouldn't force us to reuse an existing keyword in a
> context that may be a bit of a stretch.
>
> Is there a reason that this wouldn't be a viable approach?

My opinion on this is you're basically right. Even 'print' (for py<3.0) could be an identifier you could use in an assignment (or in any value expression), I guess, for parse patterns are different:
print_statement : "print" expression
assignment : name '=' expression
So you can safely have "print" as name, or inside an expression. Even "print print" should work !

But traditionally grammars are not built as a single & total definition of the whole language (like is often done using e.g. PEG, see http://en.wikipedia.org/wiki/Parsing_Expression_Grammar) but as a 2-layer definition: one for tokens (lexicon & morphology) and one for higher-level patterns (syntax & structure).
The token layer is performed by a lexer that does not take context into account when recognizing tokens, so it cannot distinguish several syntactically & semantically different occurrences of "print" like the above.
As a consequence, in most languages,
key word = reserved word

There may be other reasons I'm not aware of.

Denis
------
la vita e estrany

Scott David Daniels

unread,
May 13, 2009, 3:30:04 PM5/13/09
to python...@python.org
spir wrote:
> My opinion on this is you're basically right. Even 'print' (for py<3.0) could be an identifier you could use in an assignment (or in any value expression), I guess, for parse patterns are different:
> print_statement : "print" expression
> assignment : name '=' expression
> So you can safely have "print" as name, or inside an expression. Even "print print" should work !

But you would not want
print print
and
print(print)
to have two different meanings.
In Python, extra parens are fair around expressions,
and print(print) is clearly a function call.

--Scott David Daniels
Scott....@Acm.Org

CTO

unread,
May 13, 2009, 3:18:37 PM5/13/09
to python...@python.org
On May 12, 3:56 pm, Pascal Chambon <chambon.pas...@wanadoo.fr> wrote:
> Well, since adding new keywords or operators is very sensitive, and the
> existing ones are rather exhausted, it won't be handy to propose a new
> syntax...
>
> One last idea I might have : what about something like
>
>     def myfunc(a, b, c = yield []):
>         pass

>
> I'm not expert in english, but I'd say the following "equivalents" of
> yield (dixit WordWeb) are in a rather good semantic area :
> *Be the cause or source of
> *Give or supply
> *Cause to happen or be responsible for
> *Bring in
>
> Of course the behaviour of this yield is not so close from the one we
> know, but there is no interpretation conflict for the parser, and we
> might quickly get used to it :
> * yield in default argument => reevaluate the expression each time
> * yield in function body => return value and prepare to receive one
>
> How do you people feel about this ?
> Regards,
> Pascal

I'm not a fan. If you thought not reevaluating function expressions
was confusing for newbies, wait until you see what making up a new
kind of yield will do for them.

Why not just push for some decorators that do this to be included in
stdlib? I see the utility, but not the point of adding extra syntax.

>>> @Runtime
... def f(x=a**2+2*b+c):
...     return x
...
>>> a = 1
>>> b = 2
>>> c = 3
>>> f()
8

This seems much more intuitive and useful to me than adding new
meanings to yield.

Geremy Condra

Arnaud Delobelle

unread,
May 13, 2009, 3:44:14 PM5/13/09
to CTO, python...@python.org

On 13 May 2009, at 20:18, CTO wrote:
> Why not just push for some decorators that do this to be included in
> stdlib? I see the utility, but not the point of adding extra syntax.
>
>>>> @Runtime
> ... def f(x=a**2+2*b+c):
> ... return x
> ...
>>>> a = 1
>>>> b = 2
>>>> c = 3
>>>> f()
> 8
>
> This seems much more intuitive and useful to me than adding new
> meanings to yield.

This is not possible.

def f(x=a**2+2*b+c):
    return x

is compiled to something very much like:

_tmp = a**2+2*b+c
def f(x=_tmp):
    return x

So it is impossible to find out what expression yields the default
value of x by just looking at f. You have to use lambda or use George
Sakkis' idea of using strings for defaults and evaluating them at call-
time (but I'm not sure this will work reliably with nested functions).
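Arnaud's point is easy to verify: the default is evaluated once at def time and stored on the function object, and the source expression is gone (a small demonstration, using the Python 3 attribute name __defaults__; in the Python of this thread it was func_defaults):

```python
def f(x=[1, 2]):
    return x

# The evaluated default object is stored on the function itself;
# only the resulting object survives, not the expression that built it.
assert f.__defaults__ == ([1, 2],)

# Every call that omits x gets back the very same object.
assert f() is f.__defaults__[0]
assert f() is f()
```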

--
Arnaud

Pascal Chambon

unread,
May 13, 2009, 5:09:27 PM5/13/09
to python-ideas
Greg Falcon a écrit :
> Notice that
>
> def foo():
> def myfunc(a, b, c = (yield [])):
> pass
>
> already has a meaning in Python. Under your proposal,
>
>
Aowh... I didn't even think this was legal, but indeed this works... so
it's obviously dead for "yield".

> Anyhow, since you've decided you want add a new kind of default
> argument spelled differently than the standard one, I have to ask what
> this buys you beyond
>
> def foo():
>     def myfunc(a, b, c = None):
>         if c is None:  # or a sentinel
>             c = []
>
> Sure, it takes two more lines of code to write this (three if you want
> to use an object() instance as your sentinel to allow None being
> passed in), but does not require a new bit of language syntax for
> every Python programmer to learn...
>
> Greg F
>
>
Well, not only do we have to write more code, but we also lose some
self-documentation I guess (I'd rather have the default appear in the
signature than in the function body - pydoc and similar tools won't notice it),
and I find the principle of a "sentinel" slightly disappointing, i.e. "I
would have wanted to do something there but I can't, so I'll do it
farther on; take that in the meantime".
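The self-documentation loss is easy to see with inspect: help() and the signature report the sentinel, not the intended default (a small illustration; the function is hypothetical):

```python
import inspect

def f(a, L=None):
    """Append a to L (a fresh list by default)."""
    if L is None:
        L = []
    L.append(a)
    return L

# pydoc/help() and inspect report the placeholder, not the real default.
assert str(inspect.signature(f)) == "(a, L=None)"
assert f(1) == [1]
assert f(2) == [2]   # a fresh list on every call
```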

> There's the suggestion that Carl Johnson gave:
>
> def myfunc(a, b, c else []):
> pass
>
> or there's:
>
> def myfunc(a, b, c def []):
> pass
>
> where 'def' stands for 'default' (or "defaults to").

Those phrases could do it; I'm just worried about the fact that,
semantically, they make (to me) no difference from "c = []".
None of those ways looks more "dynamic" than the other, so it might be
hard to explain why "=" means compile time, and "def" means runtime.

I must admit I'm totally out of ideas then.

Regards,
Pascal

Terry Reedy

unread,
May 13, 2009, 5:39:37 PM5/13/09
to python...@python.org
Jeremy Banks wrote:
> To someone who's a novice to this, could someone explain to me why it
> has to be an existing keyword at all? Since no identifiers are valid
> in that context anyway, why couldn't it be a new keyword that can
> still be used as an identifier in valid contexts? For example (not
> that I advocate this choice of keyword at all):
>
> def foo(bar reinitialize_default []): # <-- it's a keyword here
> reinitialize_default = "It's an identifier here!"
>
> That would be a syntax error now and if it were defined as a keyword
> only in that context it wouldn't introduce backwards compatibility
> problems and wouldn't force us to reuse an existing keyword in a
> context that may be a bit of a stretch.
>
> Is there a reason that this wouldn't be a viable approach?

At one time, 'as' was only a keyword in the context of import.
So it is 'viable'. But it was a bit confusing for programmers and messy
implementation-wise and I think the developers were glad to promote 'as'
to a full keyword and would be reluctant to go down that road again.

Terry Reedy

unread,
May 13, 2009, 5:58:46 PM5/13/09
to python...@python.org
MRAB wrote:

> There's the suggestion that Carl Johnson gave:
>
> def myfunc(a, b, c else []):
> pass
>
> or there's:
>
> def myfunc(a, b, c def []):
> pass
>
> where 'def' stands for 'default' (or "defaults to").

I had the idea of def f(c=:[]): where ':' is intended to invoke the idea
of lambda, since the purpose is to turn the expression into a function
that is automatically called (which is why lambda alone is not enough).
So I would prefer c = def [] where def reads 'auto function defined
by...'.

or c = lambda::[] where the extra ':' indicates that the function
is auto-called

or c = lambda():[], (now illegal), where () is intended to show that the
default arg is the result of calling the function defined by the
expression. lambda:[]() (now legal) would mean to (uselessly) call the
function immediately.

Thinking about it, I think those who want a syntax to indicate that the
expression should be compiled into a function and called at runtime
should build on the existing syntax (lambda...) for indicating that an
expression should be compiled into a function, rather than inventing a
replacement for that.

Terry Jan Reedy

Bruce Leban

unread,
May 13, 2009, 6:04:05 PM5/13/09
to python-ideas
Here's what I'd like:

def myfunc(a, b, c = *lambda: expression):
  stuff

The use of the lambda keyword here makes the scope of any variables in the expression clear. The use of the prefix * makes the syntax invalid today, suggests dereferencing and doesn't hide the overhead. This is equivalent to:

__unset = object()
__default = lambda: expression
def myfunc(a, b, c = __unset):
  if c == __unset:
    c = __default()
  stuff

--- Bruce

Steven D'Aprano

unread,
May 13, 2009, 6:01:32 PM5/13/09
to python...@python.org
On Thu, 14 May 2009 05:18:37 am CTO wrote:

> If you thought not reevaluating function expressions
> was confusing for newbies, wait until you see what making up a new
> kind of yield will do for them.
>
> Why not just push for some decorators that do this to be included in
> stdlib? I see the utility, but not the point of adding extra syntax.

Even if a decorator solution can be made to work, it seems to me that
the difficulty with a decorator solution is that it is
all-or-nothing -- you can decorate the entire parameter list, or none
of the parameters, but not some of the parameters. You can bet that
people will say they want delayed evaluation of some default arguments
and compile-time evaluation of others, in the same function definition.

There are work-arounds, of course, but there are perfectly adequate
work-arounds for the lack of delayed evaluation defaults now, and it
hasn't stopped the complaints.

I'm going to suggest that any syntax should be applied to the formal
parameter name, not the default value. This feels right to me -- we're
saying that it's the formal parameter that is "special" for using
delayed semantics, not that the default object assigned to it is
special. Hence it should be the formal parameter that is tagged, not
the default value.

By analogy with the use of the unary-* operator, I suggest we use a new
unary-operator to indicate the new semantics. Inside the parameter
list, &x means to delay evaluation of the default argument to x to
runtime:

def parrot(a, b, x=[], &y=[], *args, **kwargs):

As a bonus, this will allow for a whole new series of bike-shedding
arguments about which specific operator should be used. *grin*

Tagging a parameter with unary-& but failing to specify a default value
should be a syntax error:

def parrot(&x, &y=[]):

Likewise for unary-& outside of a parameter list.

Bike-shedding away... *wink*

--
Steven D'Aprano

MRAB

unread,
May 13, 2009, 6:20:49 PM5/13/09
to python...@python.org
Well, going back to 'def', it could mean 'deferred until call-time':

def parrot(a, b, x=[], y=def [], *args, **kwargs):

Georg Brandl

unread,
May 13, 2009, 6:20:46 PM5/13/09
to python...@python.org
Bruce Leban schrieb:

> Here's what I'd like:
>
> def myfunc(a, b, c = *lambda: expression):
> stuff
>
> The use of the lambda keyword here makes the scope of any variables in
> the expression clear. The use of the prefix * makes the syntax invalid
> today, suggests dereferencing and doesn't hide the overhead.

Why not

@calldefaults
def myfunc(a, b, c = lambda: expression):
    pass

which should be possible without introducing new syntax.
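Such a decorator is indeed feasible without new syntax. A minimal sketch, assuming the convention that any callable default is treated as a thunk to be called at each invocation (the name calldefaults and that convention are this illustration's, not a stdlib API):

```python
import functools

def calldefaults(func):
    # Replace any omitted argument whose default is callable with the
    # result of calling that default at call time.
    defaults = func.__defaults__ or ()
    names = func.__code__.co_varnames[:func.__code__.co_argcount]
    first = len(names) - len(defaults)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for i, default in enumerate(defaults):
            pos = first + i
            name = names[pos]
            if pos >= len(args) and name not in kwargs and callable(default):
                kwargs[name] = default()   # evaluate the thunk now
        return func(*args, **kwargs)
    return wrapper

@calldefaults
def append_to(x, lst=lambda: []):
    lst.append(x)
    return lst

assert append_to(1) == [1]
assert append_to(2) == [2]          # fresh list each call
assert append_to(3, [9]) == [9, 3]  # explicitly passed arguments are untouched
```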

Georg

Bruce Leban

unread,
May 13, 2009, 6:33:48 PM5/13/09
to Georg Brandl, python...@python.org
Sure you can do it with a decorator. Although you might want one default to have this behavior and one not to. And putting the * right next to the lambda makes this a bit more explicit to my eye.

--- Bruce

Terry Reedy

unread,
May 13, 2009, 7:02:08 PM5/13/09
to python...@python.org
Steven D'Aprano wrote:
> On Thu, 14 May 2009 05:18:37 am CTO wrote:
>
>> If you thought not reevaluating function expressions
>> was confusing for newbies, wait until you see what making up a new
>> kind of yield will do for them.
>>
>> Why not just push for some decorators that do this to be included in
>> stdlib? I see the utility, but not the point of adding extra syntax.
>
> Even if a decorator solution can be made to work, it seems to me that
> the difficulty with a decorator solution is that it is
> all-or-nothing -- you can decorate the entire parameter list, or none
> of the parameters, but not some of the parameters. You can bet that
> people will say they want delayed evaluation of some default arguments
> and compile-time evaluation of others, in the same function definition.

Not all or nothing, and selection is easy. A decorator could only call
callable objects, and could/should be limited to calling function
objects, or even function objects named '<lambda>'. And if one wanted
the resulting value to be such a function, escape the default lambda
expression with another lambda.

x = [1, 2]
@call_lambdas
def f(a=len(x), lst=lambda: [], func=lambda: lambda x: 2*x):
    # a is int 2, lst is a fresh list, func is a one-parameter function
    ...

Terry Jan Reedy

CTO

unread,
May 13, 2009, 7:03:34 PM5/13/09
to python...@python.org

Thanks for the input, but I've already written the code to do this.
It is available at <URL:http://code.activestate.com/recipes/576751/>.
For those with hyperlink allergies, the snippet posted above reevaluates
the function whenever it is called, and can be used like so:

>>> from runtime import runtime
>>> @runtime
... def example1(x, y=[]):
...     y.append(x)
...     return y
...
>>> example1(1)
[1]
>>> example1(2)
[2]

or, as posted above,

>>> a, b, c = 0, 1, 2
>>> @runtime
... def example2(x=a**2+2*b+c):
...     return x
...
>>> example2()
4
>>> a = 5
>>> example2()
29

The code given is slow and ugly, but it does appear, at least to me,
to do what is being asked here.

Geremy Condra

CTO

unread,
May 13, 2009, 7:08:02 PM5/13/09
to python...@python.org
A caveat on my previously posted code: as mentioned in
another thread earlier today, it will not work on
functions entered into the interpreter.

Terry Reedy

unread,
May 13, 2009, 7:08:50 PM5/13/09
to python...@python.org
Bruce Leban wrote:
> Here's what I'd like:
>
> def myfunc(a, b, c = *lambda: expression):
> stuff
>
> The use of the lambda keyword here makes the scope of any variables in
> the expression clear. The use of the prefix * makes the syntax invalid
> today, suggests dereferencing and doesn't hide the overhead. This is
> equivalent to:

There is a proposal, which I thought was accepted in principle, to make
'* seq' valid generally, not just in arg-lists, to mean 'unpack the
sequence'. * (lambda:1,2)() would then be valid, and without the call,
would be a runtime error, not a syntax error.

Other than that ;0(, it would be an interesting idea.

> __unset = object()
> __default = lambda: expression
> def myfunc(a, b, c = __unset):
> if c == __unset:
> c = __default()
> stuff

tjr

George Sakkis

unread,
May 13, 2009, 8:10:42 PM5/13/09
to python...@python.org
On Wed, May 13, 2009 at 7:08 PM, Terry Reedy <tjr...@udel.edu> wrote:

> Bruce Leban wrote:
>>
>> Here's what I'd like:
>>
>> def myfunc(a, b, c = *lambda: expression):
>>  stuff
>>
>> The use of the lambda keyword here makes the scope of any variables in the
>> expression clear. The use of the prefix * makes the syntax invalid today,
>> suggests dereferencing and doesn't hide the overhead. This is equivalent to:
>
> There is a proposal, which I thought was accepted in principle, to make '*
> seq' valid generally, not just in arg-lists. to mean 'unpack the sequence'.
> * (lambda:1,2)() would then be valid, and without the call, would be a
> runtime, not syntax error.
>
> Other than that ;0(, it would be an interesting idea.

Then how about putting the * before the parameter ?

def myfunc(a, b, *c = lambda: expression):

It's currently a syntax error, although the fact that "*arg" and
"*arg=default" would mean something completely different is
problematic. Still the same idea can be applied for some other
operator (either valid already or not).

Regardless of the actual operator, I came up with the following
additional subproposals.

Subproposal (1): Get rid of the explicit lambda for dynamic arguments. That is,

def myfunc(a, b, *x=[]):

would be equivalent to what previous proposals would write as

def myfunc(a, b, *x=lambda: []):

Subproposal (2): If subproposal (1) is accepted, we could get for free
(in terms of syntax at least) dynamic args depending on previous ones.
That is,

def myfunc(a, b, *m=(a+b)/2):

would mean

def myfunc(a, b, *m = lambda a,b: (a+b)/2):

with the lambda being passed the values of a and b at runtime.

Thoughts ?

George

Bruce Leban

unread,
May 13, 2009, 10:41:17 PM5/13/09
to George Sakkis, python...@python.org
On Wed, May 13, 2009 at 5:10 PM, George Sakkis <george...@gmail.com> wrote:
On Wed, May 13, 2009 at 7:08 PM, Terry Reedy <tjr...@udel.edu> wrote:

> Bruce Leban wrote:
>> def myfunc(a, b, c = *lambda: expression):
>>  stuff

> There is a proposal, which I thought was accepted in principle, to make '*
> seq' valid generally, not just in arg-lists. to mean 'unpack the sequence'.
> * (lambda:1,2)() would then be valid, and without the call, would be a
> runtime, not syntax error.

Then how about putting the * before the parameter ?

   def myfunc(a, b, *c = lambda: expression):

It's currently a syntax error, although the fact that "*arg" and
"*arg=default" would mean something completely different is
problematic. Still the same idea can be applied for some other
operator (either valid already or not).
 
I think similar syntax should do similar things. If *arg means one thing and &arg means something else, that's confusing. There are lots of non confusing alternatives:

   def foo(a, b := lambda: bar):
   def foo(a, b = & lambda: bar):
   def foo(a, @dynamic b = lambda: bar):   # adding decorators on parameters
   and more

Subproposal (1): Get rid of the explicit lambda for dynamic arguments. That is,

   def myfunc(a, b, *x=[]):

would be equivalent to what previous proposals would write as

   def myfunc(a, b, *x=lambda: []):

Explicit is better than implicit. There's a thunk getting created here, right? Don't you want that to be obvious? I do.

Subproposal (2): If subproposal (1) is accepted, we could get for free
(in terms of syntax at least) dynamic args depending on previous ones.
That is,

   def myfunc(a, b, *m=(a+b)/2):

would mean

   def myfunc(a, b, *m = lambda a,b: (a+b)/2):

with the lambda being passed the values of a and b at runtime.

It's not free and that adds quite a bit of complexity. Note that default parameters are evaluated in the context of where the function is defined, NOT in the middle of setting the other function parameters. (That's true for this new case too.)

Sure Lisp has let and let* but the proposal here is NOT to provide arbitrary computational ability in the parameter list but to provide a way to have defaults that are not static. We shouldn't over-engineer things just because we can.

--- Bruce

George Sakkis

unread,
May 14, 2009, 12:51:33 AM5/14/09
to python-ideas
On Wed, May 13, 2009 at 10:41 PM, Bruce Leban <br...@leapyear.org> wrote:
>
> On Wed, May 13, 2009 at 5:10 PM, George Sakkis <george...@gmail.com>
> wrote:
>>
>> On Wed, May 13, 2009 at 7:08 PM, Terry Reedy <tjr...@udel.edu> wrote:
>>
>> > Bruce Leban wrote:
>> >> def myfunc(a, b, c = *lambda: expression):
>> >>  stuff
>>
>> > There is a proposal, which I thought was accepted in principle, to make
>> > '*seq' valid generally, not just in arg-lists, to mean 'unpack the
>> > sequence'.
>> > * (lambda:1,2)() would then be valid, and without the call, would be a
>> > runtime, not syntax error.
>>
>> Then how about putting the * before the parameter?
>>
>>    def myfunc(a, b, *c = lambda: expression):
>>
>> It's currently a syntax error, although the fact that "*arg" and
>> "*arg=default" would mean something completely different is
>> problematic. Still the same idea can be applied for some other
>> operator (either valid already or not).
>
>
> I think similar syntax should do similar things. If *arg means one thing and
> &arg means something else, that's confusing.

Confusing?? It's certainly no more confusing than using *args for one
thing and **args for something else.

>> Subproposal (1): Get rid of the explicit lambda for dynamic arguments.
>> That is,
>>
>>    def myfunc(a, b, *x=[]):
>>
>> would be equivalent to what previous proposals would write as
>>
>>    def myfunc(a, b, *x=lambda: []):
>
> Explicit is better than implicit. There's a thunk getting created here,
> right? Don't you want that to be obvious? I do.

Explicitness is in the eye of the beholder, and also an acquired
taste. @decorator is less explicit than f = decorator(f) and yet it's
generally considered a successful addition these days, despite the
strong opposition it met initially.

>> Subproposal (2): If subproposal (1) is accepted, we could get for free
>> (in terms of syntax at least) dynamic args depending on previous ones.
>> That is,
>>
>>    def myfunc(a, b, *m=(a+b)/2):
>>
>> would mean
>>
>>    def myfunc(a, b, *m = lambda a,b: (a+b)/2):
>>
>> with the lambda being passed the values of a and b at runtime.
>
> It's not free and that adds quite a bit of complexity. Note that default
> parameters are evaluated in the context of where the function is defined,
> NOT in the middle of setting the other function parameters. (That's true for
> this new case too.)

Sure, but I'm not sure what your point is here. The compiler can
generate bytecode to the effect of:

def myfunc(a, b, *m = (a+b)/2):
    if m is not passed:
        m = default_m(a, b)
    # actual body follows

Ideally an optimizer would further check whether it's safe to inline
the expression (which should be the typical case) to avoid the
function call overhead.
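The expansion George sketches can be hand-written today with a module-level sentinel; `_missing` is an illustrative stand-in for the "not passed" marker, and the inlined `(a + b) / 2` replaces the compiler-generated `default_m`:

```python
# Hand-written equivalent of the bytecode expansion sketched above.
# A compiler-level feature would hide all of this machinery.
_missing = object()

def myfunc(a, b, m=_missing):
    if m is _missing:          # i.e. "if m is not passed"
        m = (a + b) / 2        # the default_m(a, b) computation, inlined
    return m                   # actual body follows

print(myfunc(2, 4))      # 3.0
print(myfunc(2, 4, 10))  # 10
```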

> Sure Lisp has let and let* but the proposal here is NOT to provide arbitrary
> computational ability in the parameter list but to provide a way to have
> defaults that are not static. We shouldn't over-engineer things just because
> we can.

Agreed, the primary target is to fix the common gotcha of mutable
defaults, and I'd rather see this handled than nothing at all. A
secondary goal that can be achieved here though is to reduce the
overuse of None (or other sentinels for that matter). There are
several articles to the effect of "Null considered harmful"; also SQL
as well as some statically typed languages provide both nullable and
non-nullable types and assume the latter by default. The only useful
operation to a sentinel is identity check, so you know that every `x =
sentinel` assignment should be followed by one or more `if x is
sentinel` checks. Fewer nulls means potentially less conditional logic
mental overhead. I'm not claiming we should get rid of None of course;
there are legitimate reasons for different behavior under different
conditions. Here however we're talking about a very specific pattern:
def f(a, b=sentinel):
    if b is sentinel: b = <expression>

It may not seem such a big deal, but then again I don't think the case
for it is any weaker than the case for the (eventually) accepted
ternary operator over an if/else statement.
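For reference, the pattern under discussion looks like this in full, next to the gotcha it replaces; the function and sentinel names are illustrative:

```python
# The gotcha: a mutable default is evaluated once, at def time,
# so every defaulted call shares the same list object.
def bad_append(item, target=[]):
    target.append(item)
    return target

# The sentinel pattern: identity check, then rebind to a fresh object.
_sentinel = object()

def good_append(item, target=_sentinel):
    if target is _sentinel:
        target = []            # fresh list on every defaulted call
    target.append(item)
    return target

print(bad_append(1), bad_append(2))    # [1, 2] [1, 2] -- shared list!
print(good_append(1), good_append(2))  # [1] [2]
```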

spir

unread,
May 14, 2009, 2:50:42 AM5/14/09
to python...@python.org
On Wed, 13 May 2009 12:30:04 -0700,
Scott David Daniels <Scott....@Acm.Org> wrote:

> spir wrote:
> > My opinion on this is you're basically right. Even 'print' (for py<3.0)
> > could be an identifier you could use in an assignment (or in any value
> > expression), I guess, for the parse patterns are different:
> >     print_statement : "print" expression
> >     assignment : name '=' expression
> > So you can safely have "print" as name, or inside an expression.
> > Even "print print" should work!
>
> But you would not want
>     print print
> and
>     print(print)
> to have two different meanings.
> In Python, extra parens are fair around expressions,
> and print(print) is clearly a function call.
>

You're right ;-)

Denis
------
la vita e estrany

Arnaud Delobelle

unread,
May 14, 2009, 4:08:13 AM5/14/09
to CTO, python...@python.org
2009/5/14 CTO <deba...@gmail.com>:

> Thanks for the input, but I've already written the code to do this. It
> is available at <URL:http://code.activestate.com/recipes/576751/>.

I should have said "it's impossible short of looking at the source
code or doing some very sophisticated introspection of the bytecode of
the module the function is defined in".

Even so, your recipe doesn't quite work in several cases, aside from
when the source code is not accessible. Two examples:


def nesting():
    default = 3
    @runtime
    def example3(x=default):
        return x
    example3()

nesting()


@runtime
def name(x=a):
    return x

name()

* The first one fails because default is not a global variable, thus
not accessible from within the runtime decorator. I don't know if
this can be fixed. Note that for the function to exec() at all, you
need to e.g. modify remove_decorators so that it also removes initial
whitespace, something like:

def remove_decorators(source):
    """Removes the decorators from the given function"""
    lines = source.splitlines()
    lines = [line for line in lines if not line.startswith('@')]
    indent = 0
    while lines[0][indent] == ' ':
        indent += 1
    new_source = '\n'.join(line[indent:] for line in lines)
    return new_source


* The second one fails because of a clash of names. I guess that can
be fixed by specifying the locals and globals explicitly in
the calls to exec and eval.
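The fix sketched in that last point can look like this; the source string is a toy stand-in for the recipe's reconstructed function source:

```python
# Passing an explicit namespace to exec() keeps the recreated function
# from clashing with (or silently depending on) module-level names.
source = "def name(x=41):\n    return x + 1\n"

namespace = {}            # fresh globals for the exec'd definition
exec(source, namespace)
name = namespace['name']  # retrieve the function without touching globals()

print(name())   # 42
print(name(0))  # 1
```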

--
Arnaud

CTO

unread,
May 14, 2009, 5:32:14 AM5/14/09
to python...@python.org
> I should have said "it's impossible short of looking at the source
> code or doing some very sophisticated introspection of the bytecode of
> the module the function is defined in".

Any which way you slice this it will require that literal code *not*
be interpreted until execution time. There are other ways to do that-
storing it in strings, as George Sakkis does, modifying the language
itself, as is the proposal here, or reading and parsing the original
source. But you're right- more info is needed than what the bytecode
contains.

> Even so, your recipe doesn't quite work in several cases, aside from
> when the source code is not accessible.

Obviously, you are quite correct. Scoping in particular is difficult
both to understand and to properly handle- had me chasing my tail
for about twenty minutes earlier, actually- and I'm sure this is a
security nightmare, but it does (generally) what is being asked for
here. And it does so without recourse to changing the syntax.

Here's another possible mechanism:

from inspect import getfullargspec
from functools import wraps

def runtime(f):
    """Evaluates a function's annotations at runtime."""
    annotations = getfullargspec(f)[-1]
    @wraps(f)
    def wrapped(*args, **kwargs):
        defaults = {k: eval(v) for k, v in annotations.items()}
        defaults.update(kwargs)
        return f(*args, **defaults)
    return wrapped

@runtime
def example1(x, y:'[]'):
    y.append(x)
    return y

@runtime
def example2(x:'a**2+2*b+c'):
    return x

Pretty simple, although it messes with the call syntax pretty
badly, effectively treating a non-keyword argument as a
keyword-only argument. There's probably a way around that
but I doubt I'm going to see it tonight.

The point is, I don't really see the point in adding a new
syntax. There are *lots* of incomplete solutions floating
around to this issue, and it will probably take a lot less
work to make one of those into a complete solution than it
will to add a new syntax, if that makes any sense at all.

Also, do you mind posting any problems you find in that
to the activestate message board so there is a record
there?

Geremy Condra

spir

unread,
May 14, 2009, 7:44:07 AM5/14/09
to python...@python.org
On Wed, 13 May 2009 20:10:42 -0400,
George Sakkis <george...@gmail.com> wrote:

> Subproposal (2): If subproposal (1) is accepted, we could get for free
> (in terms of syntax at least) dynamic args depending on previous ones.
> That is,
>
> def myfunc(a, b, *m=(a+b)/2):
>
> would mean
>
> def myfunc(a, b, *m = lambda a,b: (a+b)/2):
>
> with the lambda being passed the values of a and b at runtime.

While I understand the intent, this seems complicated to me. I find it clearer to express m in the func body, using a sentinel if m is a real default arg (meaning it could possibly be passed by the user).

UNDEF = object()
def myfunc(a, b, m=UNDEF):
    if m is UNDEF:
        m = (a+b)/2

Generally speaking, I find it OK to need sentinels to clarify rare and non-obvious cases such as runtime-changing default values:

def somefunc(arg, m=UNDEF):
    if m is UNDEF:
        m = runtimeDefaultVal()

While I do not find it OK to need a sentinel to avoid the common gotcha of a default value being "back-updated" when the corresponding local var is changed in the func body:

def otherfunc(arg, l=UNDEF):
    if l is UNDEF:
        l = []
    <possibly update l>

Denis
------
la vita e estrany

Gerald Britton

unread,
May 14, 2009, 10:29:04 AM5/14/09
to Scott David Daniels, python...@python.org
print(print) is not a function call in 2.x:

>>> import types
>>> def f(): pass
...
>>> isinstance(f, types.FunctionType)
True
>>> isinstance(print, types.FunctionType)
  File "<stdin>", line 1
    isinstance(print, types.FunctionType)
               ^
SyntaxError: invalid syntax
>>> p = "hi there"
>>> print p
hi there
>>> print(p)
hi there


(print_) is interpreted as an expression, which is then passed to the
print statement

--
Gerald Britton

Gerald Britton

unread,
May 14, 2009, 10:30:58 AM5/14/09
to Scott David Daniels, python...@python.org
Typo:

> (print_) is interpreted as an expression, which is then passed to the
> print statement

should be:

(p) is interpreted as an expression, which is then passed to the print statement

Steven D'Aprano

unread,
May 14, 2009, 5:25:28 PM5/14/09
to python...@python.org
On Thu, 14 May 2009 09:44:07 pm spir wrote:

> Generally speaking, I find ok the need of sentinels for clarifying
> rare and non-obvious cases such as runtime-changing default values:
>
> def somefunc(arg, m=UNDEF):
>     if m is UNDEF:
>         m = runtimeDefaultVal()
>
> While I do not find ok the need of a sentinel to avoid the common
> gotcha of a default value beeing "back-updated" when the
> corresponding local var is changed in the func body:
>
> def otherfunc(arg, l=UNDEF):
>     if l is UNDEF:
>         l = []
>     <possibly update l>

But those two idioms are the same thing!

In the first case, if m is not provided by the caller, your function has
to produce a fresh object at runtime. It does this by calling
runtimeDefaultVal() which returns some unspecified object.

In the second case, if l is not provided by the caller, your function
has to produce a fresh object at runtime. It does this by calling [].
This is merely a special case of the first case, where
runtimeDefaultVal() simply returns [] every time.
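Steven's point can be made concrete: the second idiom is just the first with `list` as the factory. `runtimeDefaultVal` is spir's name from the thread; binding it to `list` here is purely illustrative.

```python
UNDEF = object()
runtimeDefaultVal = list   # any zero-argument callable works as the factory

def somefunc(arg, m=UNDEF):
    if m is UNDEF:
        m = runtimeDefaultVal()   # the general case
    return m

def otherfunc(arg, l=UNDEF):
    if l is UNDEF:
        l = []                    # special case: the factory is literally `list`
    return l

print(somefunc(0) == otherfunc(0) == [])  # True
```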

--
Steven D'Aprano

Steven D'Aprano

unread,
May 14, 2009, 6:20:45 PM5/14/09
to python...@python.org
On Thu, 14 May 2009 12:41:17 pm Bruce Leban wrote:

> >    def myfunc(a, b, *x=[]):
> >
> > would be equivalent to what previous proposals would write as
> >
> >    def myfunc(a, b, *x=lambda: []):
>
> Explicit is better than implicit. There's a thunk getting created
> here, right? Don't you want that to be obvious? I do.

No, I don't want it to be obvious. I don't care about thunks, I care
that x gets bound at runtime. I don't care what the implementation is:
whether it is a thunk, eval(), voodoo or something else, just so long
as it works.

As for your argument that it is better to be explicit, when you want to
add two numbers and compare them with a third, do you write:

(1 .__add__(1)).__eq__(2)

instead of

1+1 == 2? "Explicit is better than implicit", right?

No. 1+1 == 2 *is* explicit, because that's the Python syntax for addition.
All those double-underscore method calls are implementation details
that do not belong in "standard" Python code. If Python develops new
syntax for late-binding of default arguments, that too will be
explicit, and any reference to thunks (or any other mechanism) will be
an implementation detail. The syntax shouldn't depend on the
implementation.

lambda is already disliked by many people, including Guido. I don't
think any suggestion that we make lambda more confusing by giving it
two very different meanings ("create a thunk" inside function parameter
lists, and "create a function" everywhere else) will be very popular on
python-dev.

--
Steven D'Aprano

Steven D'Aprano

unread,
May 14, 2009, 6:27:07 PM5/14/09
to python...@python.org
On Thu, 14 May 2009 09:03:34 am CTO wrote:

> Thanks for the input, but I've already written the code to do this. It
> is available at <URL:http://code.activestate.com/recipes/576751/>.

[...]


> The code given is slow and ugly, but it does appear-
> at least to me- to do what is being asked here.

Your code seems to work only if the source to the function is available.
That will mean it can't be used by people who want to distribute .pyc
files only.


--
Steven D'Aprano

CTO

unread,
May 14, 2009, 7:28:02 PM5/14/09
to python...@python.org
On May 14, 6:27 pm, Steven D'Aprano <st...@pearwood.info> wrote:
> On Thu, 14 May 2009 09:03:34 am CTO wrote:
>
> > Thanks for the input, but I've already written the code to do this. It
> > is available at <URL:http://code.activestate.com/recipes/576751/>.
>
> [...]
>
> > The code given is slow and ugly, but it does appear-
> > at least to me- to do what is being asked here.
>
> Your code seems to work only if the source to the function is available.
> That will mean it can't be used by people who want to distribute .pyc
> files only.
>
> --
> Steven D'Aprano

I think the list is eating my replies, but suffice to say that there's
a new version of the recipe at <URL: http://code.activestate.com/recipes/576754/>
that doesn't have that limitation and looks pretty close to the syntax
proposed above.

Example:
>>> @runtime
... def myfunc(x, y, z: lambda:[]):
...     z.extend((x,y))
...     return z
...
>>> myfunc(1, 2)
[1, 2]
>>> myfunc(3, 4)
[3, 4]
>>> myfunc(1, 2, z=[3, 4])
[3, 4, 1, 2]

Geremy Condra

Steven D'Aprano

unread,
May 14, 2009, 8:15:33 PM5/14/09
to python...@python.org
On Thu, 14 May 2009 09:02:08 am Terry Reedy wrote:
> Steven D'Aprano wrote:
> > On Thu, 14 May 2009 05:18:37 am CTO wrote:
> >> If you thought not reevaluating function expressions
> >> was confusing for newbies, wait until you see what making up a new
> >> kind of yield will do for them.
> >>
> >> Why not just push for some decorators that do this to be included
> >> in stdlib? I see the utility, but not the point of adding extra
> >> syntax.
> >
> > Even if a decorator solution can be made to work, it seems to me
> > that the difficulty with a decorator solution is that it is
> > all-or-nothing -- you can decorate the entire parameter list, or
> > none of the parameters, but not some of the parameters. You can bet
> > that people will say they want delayed evaluation of some default
> > arguments and compile-time evaluation of others, in the same
> > function definition.
>
> Not all or nothing, and selection is easy. A decorator could only
> call callable objects, and could/should be limited to calling
> function objects or even function objects named '<lambda>'.

Some people don't like writing:

def f(x=SENTINEL):
    if x is SENTINEL: x = []

and wish to have syntax so they can write something approaching:

def f(x=[]):
    ...

but have a fresh [] bound to x. You're supporting the syntax:

@call_lambdas # Geremy Condra uses the name 'runtime'
def f(x=lambda:[]):
    ...

(For the record, I've suggested creating a unary-& operator so that we
can write "def f(&x=[])" to get late-binding of x.)

If I were to use the proposed late-binding feature, I would want it to
be easy to use and obvious. I don't mind having to learn special
syntax -- I'm not asking for it to be intuitive or guessable. But
having to define the default value as a function (with or without
lambda!) *and* call a decorator doesn't seem either easy or obvious. It
feels like a kludge designed to get around a limitation of the
language. (If you don't like the negative connotations of 'kludge',
read it as 'hack' instead.) In other words, it looks like your
suggestion is "let's find another idiom for late-binding default
arguments" rather than "let's give Python built-in support for optional
late-binding of default arguments".

If the first one is your intention, then I'll just walk away from this
discussion. I already have a perfectly obvious and explicit idiom for
late-binding of default arguments. I don't need a second one,
especially one which I find exceedingly inelegant and ugly. If you want
to use that in your own code, go right ahead, but I hope it never makes
it into any code I ever need to read. -1 from me on any solution which
requires both a decorator and special treatment of defaults in the
parameter list.

In my opinion, only a solution with built-in support from the compiler
is worth supporting. Anything else is a heavyweight, complicated
solution for a problem that already has a lightweight, simple solution:
use a sentinel. We already have a concise, fast, straightforward idiom
which is easily learned and easily written, and while it's not
intuitively obvious to newbies, neither is the suggested
decorator+lambda solution. We don't need a complicated, verbose,
hard-to-explain, hard-to-implement solution as well.

--
Steven D'Aprano

Steven D'Aprano

unread,
May 14, 2009, 8:19:26 PM5/14/09
to python...@python.org
On Fri, 15 May 2009 09:28:02 am CTO wrote:
> On May 14, 6:27 pm, Steven D'Aprano <st...@pearwood.info> wrote:
> > On Thu, 14 May 2009 09:03:34 am CTO wrote:
> > > Thanks for the input, but I've already written the code to do
> > > this. It
> > > is available at
> > > <URL:http://code.activestate.com/recipes/576751/>.
> >
> > [...]
> >
> > > The code given is slow and ugly, but it does appear-
> > > at least to me- to do what is being asked here.
> >
> > Your code seems to work only if the source to the function is
> > available. That will mean it can't be used by people who want to
> > distribute .pyc files only.
> >
> > --
> > Steven D'Aprano
>
> I think the list is eating my replies, but suffice to say that
> there's a new version of the recipe at <URL:
> http://code.activestate.com/recipes/576754/> that doesn't have that
> limitation and looks pretty close to the syntax proposed above.


And instead has another limitation, namely that it only works if you
pass the non-default argument by keyword.

f(123, y=456) # works
f(123, 456) # fails if y has been given a default value.
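That limitation is fixable in principle. Here is a sketch (not the recipe itself) using `inspect.signature` to bind positional arguments by name before filling in defaults; the lambda-as-default convention is borrowed from the recipe, and `runtime` is just a name:

```python
import inspect
from functools import wraps

def runtime(f):
    """Call lambda defaults at call time; accept positional args too."""
    sig = inspect.signature(f)

    def is_thunk(v):
        return callable(v) and getattr(v, '__name__', '') == '<lambda>'

    @wraps(f)
    def wrapped(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)   # positional or keyword, either way
        for name, param in sig.parameters.items():
            if name not in bound.arguments and is_thunk(param.default):
                bound.arguments[name] = param.default()  # evaluate the thunk
        return f(*bound.args, **bound.kwargs)
    return wrapped

@runtime
def f(x, y=lambda: []):
    y.append(x)
    return y

print(f(123))         # [123]
print(f(123, [456]))  # [456, 123] -- positional now works
```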

--
Steven D'Aprano

CTO

unread,
May 14, 2009, 10:37:19 PM5/14/09
to python...@python.org
> Some people don't like writing:
>
> def f(x=SENTINEL):
>     if x is SENTINEL: x = []
>
> and wish to have syntax so they can write something approaching:
>
> def f(x=[]):
>     ...
>

And I understand that. However, I don't think it's important
enough to make it worth changing the language, adding to
Python's already significant function call overhead, or
making the job of parsing function signatures more difficult.
If there is a mechanism to do this inside of Python- and
there are several- it is my personal opinion that those
should be used in preference to modifying the language. As
I am neither the smartest nor most competent programmer
here, feel free to disregard my opinion- but the code I
have produced matches one of the proposed syntaxes very
closely, even if it is not the one you prefer.

> but have a fresh [] bound to x. You're supporting the syntax:
>
> @call_lambdas  # Geremy Condra uses the name 'runtime'
> def f(x=lambda:[]):
>     ...
>

For the record, I'm not supporting a syntax. I'm simply
stating that this can be done in Python as it currently
stands, and that I am most emphatically not in favor
of making function signatures any more complex than
they already are.

> (For the record, I've suggested creating a unary-& operator so that we
> can write "def f(&x=[])" to get late-binding of x.)
>

It's simple, short, and concise. If I were to get behind
a proposal to change the language to support this feature,
I would probably either get behind this one or perhaps
a more general system for adding a metaclass equivalent
to functions. However, as things stand I remain unconvinced
that any of these things are necessary, or even
particularly desirable, given the aforementioned
complexity of function signatures.

> If I were to use the proposed late-binding feature, I would want it to
> be easy to use and obvious. I don't mind having to learn special
> syntax -- I'm not asking for it to be intuitive or guessable. But
> having to define the default value as a function (with or without
> lambda!) *and* call a decorator doesn't seem either easy or obvious. It
> feels like a kludge designed to get around a limitation of the
> language. (If you don't like the negative connotations of 'kludge',
> read it as 'hack' instead.) In other words, it looks like your
> suggestion is "let's find another idiom for late-binding default
> arguments" rather than "let's give Python built-in support for optional
> late-binding of default arguments".

My suggestion is neither to find another idiom nor to build in
late-binding support. Some people- yourself included- want a
new syntax. I demonstrated that close approximations of some
of the mentioned syntaxes were possible in the language already,
and while I appreciate that your preferred syntax is not on that
list, I remain unconvinced that its purported benefits outweigh
what I perceive to be its drawbacks.

> If the first one is your intention, then I'll just walk away from this
> discussion. I already have a perfectly obvious and explicit idiom for
> late-binding of default arguments. I don't need a second one,
> especially one which I find exceedingly inelegant and ugly. If you want
> to use that in your own code, go right ahead, but I hope it never makes
> it into any code I ever need to read. -1 from me on any solution which
> requires both a decorator and special treatment of defaults in the
> parameter list.

If you are satisfied with the existing idiom, then use it. If you're not,
my code is out there. If you don't like that, then write your own.

> In my opinion, only a solution with built-in support from the compiler
> is worth supporting.

I'm afraid I'm unconvinced on that point.

> Anything else is a heavyweight, complicated
> solution for a problem that already has a lightweight, simple solution:
> use a sentinel. We already have a concise, fast, straightforward idiom
> which is easily learned and easily written, and while it's not
> intuitively obvious to newbies, neither is the suggested
> decorator+lambda solution. We don't need a complicated, verbose,
> hard-to-explain, hard-to-implement solution as well.
>
> --
> Steven D'Aprano

I think I've already addressed this point, but once more for
the record, I'm just not convinced that any of this- my code
or your proposed changes- are needed. Until then you can have
my -1.

Geremy Condra

CTO

unread,
May 14, 2009, 10:41:19 PM5/14/09
to python...@python.org

Correct. However, I remain confident that someone with ever so slightly
more skill than myself can correct that problem- since you already
seem to have taken a look at it, maybe that's something you could
do? Thanks in advance,

Geremy Condra

Bruce Leban

unread,
May 14, 2009, 11:32:39 PM5/14/09
to Steven D'Aprano, python...@python.org
On Thu, May 14, 2009 at 3:20 PM, Steven D'Aprano <st...@pearwood.info> wrote:
On Thu, 14 May 2009 12:41:17 pm Bruce Leban wrote:

> Explicit is better than implicit. There's a thunk getting created
> here, right? Don't you want that to be obvious? I do.

No, I don't want it to be obvious. I don't care about thunks, I care
that x gets bound at runtime. I don't care what the implementation is:
whether it is a thunk, eval(), voodoo or something else, just so long
as it works.

As for your argument that it is better to be explicit, when you want to
add two numbers and compare them with a third, do you write:

(1 .__add__(1)).__eq__(2)

instead of

1+1 == 2?

Absolutely not. This is a false analogy. The analogy would be having an implicit multiplication operator and writing (a b c) instead of (a * b * c).
 

<snip> The syntax shouldn't depend on the

implementation.

lambda is already disliked by many people, including Guido. I don't
think any suggestion that we make lambda more confusing by giving it
two very different meanings ("create a thunk" inside function parameter
lists, and "create a function" everywhere else) will be very popular on
python-dev.

I'm *not* suggesting a new meaning for lambda! This is the same meaning that it has right now. The new meaning is the decorator attached to the default assignment that says evaluate that lambda.

I'll use an @dynamic decorator-like syntax to illustrate. These would be valid:

def foo(a, b = @dynamic lambda: []):
def foo(a, b = @dynamic lambda: list()):
def foo(a, b = @dynamic list):
def foo(a, b = @dynamic random.random):

and this would not:

def foo(a, b = @dynamic [])
def foo(a, b = @dynamic 5)

because @dynamic says that the thing that follows is called to generate a dynamic default parameter value and you can't call [] or 5.

My point about creating a thunk is *not* an implementation detail. The point here is that if you use one of the forms above with a lambda, it's the lambda creating a thunk/closure/voodoo thing at this point in the program, *not* the @dynamic decorator. The scope of that lambda is exactly what it looks like it is with or without the @dynamic decorator. Likewise, in the random.random, example, it's the value of random.random at the time the function is defined, not some later value that might be assigned to that name.

If you use some other syntax that doesn't look like a lambda, I have to learn the scoping rules for that syntax. I already know the rules for lambda.
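Bruce's @dynamic can be approximated today with an ordinary marker object instead of new syntax; `dynamic` and `resolve_dynamic` are invented names for this sketch, and the marker rejects non-callables to mirror Bruce's rule that `@dynamic []` or `@dynamic 5` would be errors:

```python
import inspect
from functools import wraps

class dynamic:
    """Marks a default as 'call this at call time'."""
    def __init__(self, factory):
        if not callable(factory):   # @dynamic [] / @dynamic 5 are errors
            raise TypeError("dynamic default must be callable")
        self.factory = factory

def resolve_dynamic(f):
    sig = inspect.signature(f)
    @wraps(f)
    def wrapped(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, p in sig.parameters.items():
            if name not in bound.arguments and isinstance(p.default, dynamic):
                bound.arguments[name] = p.default.factory()
        return f(*bound.args, **bound.kwargs)
    return wrapped

@resolve_dynamic
def foo(a, b=dynamic(list)):
    b.append(a)
    return b

print(foo(1))  # [1]
print(foo(2))  # [2], not [1, 2]
```

As in Bruce's description, the factory is whatever object the name refers to when the def is executed, so the usual scoping rules apply unchanged.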

--- Bruce

Steven D'Aprano

unread,
May 14, 2009, 11:38:43 PM5/14/09
to python...@python.org
On Fri, 15 May 2009 12:37:19 pm CTO wrote:
> > Some people don't like writing:
> >
> > def f(x=SENTINEL):
> >     if x is SENTINEL: x = []
> >
> > and wish to have syntax so they can write something approaching:
> >
> > def f(x=[]):
> >     ...
>
> And I understand that. However, I don't think it's important
> enough to make it worth changing the language, adding to
> Python's already significant function call overhead, or
> making the job of parsing function signatures more difficult.
> If there is a mechanism to do this inside of Python- and
> there are several- it is my personal opinion that those
> should be used in preference to modifying the language. As
> I am neither the smartest nor most competent programmer
> here, feel free to disregard my opinion- but the code I
> have produced matches one of the proposed syntaxes very
> closely, even if it is not the one you prefer.

Your code also "add[s] to Python's already significant function call
overhead" as well as "making the job of parsing function signatures
more difficult".

I don't mean to dump on your code. What you are trying to do is
obviously very difficult from pure Python code, and the solutions you
have come up with are neat kludges. But a kludge is still a kludge, no
matter how neat it is :)

[...]


> > (For the record, I've suggested creating a unary-& operator so that
> > we can write "def f(&x=[])" to get late-binding of x.)
>
> It's simple, short, and concise. If I were to get behind
> a proposal to change the language to support this feature,
> I would probably either get behind this one or perhaps
> a more general system for adding a metaclass equivalent
> to functions. However, as things stand I remain unconvinced
> that any of these things are necessary, or even
> particularly desirable, given the aforementioned
> complexity of function signatures.

I think we two at least agree. I don't think there's anything wrong with
the current sentinel idiom. It's not entirely intuitive to newbies, or
those who don't fully understand Python's object-binding model, but I
don't consider that a flaw. So I don't see the compile-time binding of
default args to be a problem that needs solving.

But other people do, and they are loud and consistent in their
complaints. Given that the squeaky wheel (sometimes) gets the grease,
I'd just like to see a nice solution to a (non-)problem rather than an
ugly solution.

So I'm +0 on my proposal -- I don't think it solves a problem that needs
solving, but other people do. I'm -1 on decorator+lambda solutions,
because not only do they not solve a problem that needs solving, but
they don't solve it in a particularly ugly and inefficient way *wink*


> My suggestion is neither to find another idiom or to build in
> late-binding support. Some people- yourself included-

I think you've misunderstood my position. I'm one of the people
defending the current semantics of default arg binding. But since
others want optional late binding, I'm just trying to find a syntax
that doesn't bite :)


> want a
> new syntax. I demonstrated that close approximations of some
> of the mentioned syntaxes were possible in the language already,
> and while I appreciate that your preferred syntax is not on that
> list, I remain unconvinced that its purported benefits outweigh
> what I perceive to be its drawbacks.

Just out of curiosity, what do you see as the drawbacks? The ones that
come to my mind are:

* people who want late binding to be standard will be disappointed
(but that will be true of any solution)

* requires changes to Python's parser, to allow unary-&
(but that will probably be very simple)

* requires changes to Python's compiler, to allow for some sort of
late-binding semantics (thunks?)
(but that will probably be very hard)

* requires people to learn one more feature
(so newbies will still be confused that def f(x=[]) doesn't behave as
they expect).

--
Steven D'Aprano

Tennessee Leeuwenburg

unread,
May 15, 2009, 12:16:04 AM5/15/09
to Steven D'Aprano, python...@python.org
A thought from another direction...

Any chance we could have the interpreter raise a warning for the case

def foo(a = []):
  #stuff

?

The empty list and empty dict args would, I imagine, be the two most common mistakes. Showing a warning might, at least, solve the problem of people tripping over the syntax.

Cheers,
-T

Curt Hagenlocher

unread,
May 15, 2009, 12:31:34 AM5/15/09
to Tennessee Leeuwenburg, python...@python.org
I think this takes the discussion in a more practical direction. Imagine that there were a special method name __immutable__ to be implemented appropriately by all builtin types. Any object passed as a default argument would be checked to see that its type implements __immutable__ and that __immutable__() is True. Failure would mean a warning or even an error in subsequent versions.

User-defined types could implement __immutable__ as they saw fit, in the traditional Pythonic consenting-adults-ly way.

CTO

unread,
May 15, 2009, 12:48:53 AM5/15/09
to python...@python.org
[super-snip]

> Just out of curiosity, what do you see as the drawbacks?
[snip]

1) It adds to the complexity (and therefore overhead) of
calling functions- not just the functions which use it,
but even functions which operate as normal. Python
already has a hefty penalty for calling functions, and
I really don't want it to get any heavier. My 'solutions',
as incomplete as they are, at least don't slow down
anything else.
2) It adds to the complexity of introspecting functions.
Take a good look at inspect.getfullargspec: it's a
nightmare, and either it gets worse under this (bad)
or it doesn't include information that is available
to the compiler (even worse).

In addition to those minuses, it doesn't actually add
to the capabilities of the language. If this were a
proposal to add early action to Python (the equivalent
of metaclasses or, to some extent, macro replacement)
I would be much more likely to support it, despite the
heavier syntax.

So, the existing idiom works pretty well, there
doesn't seem to be a very good substitute, it slows
the whole language down to implement, and it doesn't
add any power if you do.

Like I say, I'm unconvinced.

Geremy Condra

Stephen J. Turnbull

unread,
May 15, 2009, 1:08:18 AM5/15/09
to Steven D'Aprano, python...@python.org
Steven D'Aprano writes:

> (For the record, I've suggested creating a unary-& operator so that
> we can write "def f(&x=[])" to get late-binding of x.)

Could you summarize that discussion briefly?

Chris Rebert

unread,
May 15, 2009, 2:13:23 AM5/15/09
to Tennessee Leeuwenburg, python...@python.org
On Thu, May 14, 2009 at 9:16 PM, Tennessee Leeuwenburg
<tleeuw...@gmail.com> wrote:

+1 on throwing a ValueError for non-hash()-able (and thus probably
mutable) default argument values. It's by no means perfect since
objects are hash()-able by default using their ID, but it would at
least help in the frequent "well-behaved mutable container object"
cases.
The barrier to this idea would be the code breakage involved; IMHO,
code exploiting mutable defaults as static variables is in poor style
anyway, but backward compatibility is a significant concern of the
BDFL and Python devs; though I would hope the breakage might be seen
as justifiable in this case.
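The proposed hash()-based check can be sketched as a decorator without any interpreter change; the name `reject_unhashable_defaults` is made up for illustration.

```python
def reject_unhashable_defaults(func):
    """Raise ValueError at definition time if any default argument
    is not hash()-able (and thus probably mutable)."""
    for default in (func.__defaults__ or ()):
        try:
            hash(default)
        except TypeError:
            raise ValueError("unhashable default %r in %s()"
                             % (default, func.__name__))
    return func

@reject_unhashable_defaults
def ok(x=(), y=None):       # immutable defaults pass
    return x

try:
    @reject_unhashable_defaults
    def bad(x=[]):          # mutable default is rejected
        return x
except ValueError as e:
    print(e)                # unhashable default [] in bad()
```

As noted above, the check is imperfect: instances of user-defined classes hash by identity by default, so only the common mutable containers are caught.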

Cheers,
Chris
--
http://blog.rebertia.com

Chris Rebert

unread,
May 15, 2009, 2:20:45 AM5/15/09
to Curt Hagenlocher, python...@python.org
> On Thu, May 14, 2009 at 9:16 PM, Tennessee Leeuwenburg
> <tleeuw...@gmail.com> wrote:
>>
>> A thought from another direction...
>>
>> Any chance we could have the interpreter raise a warning for the case
>>
>> def foo(a = []):
>>   #stuff
>>
>> ?
>>
>> The empty list and empty dict args would, I imagine, be the two most
>> common mistakes. Showing a warning might, at least, solve the problem of
>> people tripping over the syntax.

On Thu, May 14, 2009 at 9:31 PM, Curt Hagenlocher <cu...@hagenlocher.org> wrote:
> I think this takes the discussion in a more practical direction. Imagine
> that there were a special method name __immutable__ to be implemented
> appropriately by all builtin types. Any object passed as a default argument
> would be checked to see that its type implements __immutable__ and that
> __immutable__() is True. Failure would mean a warning or even an error in
> subsequent versions.
>
> User-defined types could implement __immutable__ as they saw fit, in the
> traditional Pythonic consenting-adults-ly way.

(A) Python's new Abstract Base Classes would probably be a better way
of doing such checking rather than introducing a new special method

(B) What about having an __immutable__() that returned an immutable
version of the object if possible? Then all default arguments could be
converted to immutables at definition-time, with errors if a default
cannot be made immutable? It would eliminate the performance concerns
since the overhead would only be incurred once (when the function gets
defined), rather than with each function call.
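The definition-time conversion could also be approximated with a decorator, by rewriting func.__defaults__ once. This sketch only handles the common container types (and assumes dict keys are sortable); a real __immutable__()-based version would have to cover arbitrary objects.

```python
def freeze_defaults(func):
    """Replace each default argument, once, with an immutable counterpart."""
    frozen = []
    for d in (func.__defaults__ or ()):
        if isinstance(d, list):
            frozen.append(tuple(d))
        elif isinstance(d, set):
            frozen.append(frozenset(d))
        elif isinstance(d, dict):
            frozen.append(tuple(sorted(d.items())))
        else:
            frozen.append(d)
    func.__defaults__ = tuple(frozen)   # __defaults__ is writable
    return func

@freeze_defaults
def f(x=[]):
    return x
```

After decoration, f() returns an empty tuple rather than a shared list, so there is no state to pollute between calls; callers who pass their own list are unaffected.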

Carl Johnson

unread,
May 15, 2009, 3:30:18 AM5/15/09
to python...@python.org
Bruce Leban wrote:

> I'll use an @dynamic decorator-like syntax to illustrate. These would be
> valid:
>
> def foo(a, b = @dynamic lambda: []):
> def foo(a, b = @dynamic lambda: list()):
> def foo(a, b = @dynamic list):
> def foo(a, b = @dynamic random.random):
>
> and this would not:
>
> def foo(a, b = @dynamic [])
> def foo(a, b = @dynamic 5)
>
> because @dynamic says that the thing that follows is called to generate a
> dynamic default parameter value and you can't call [] or 5.

Hmm, very interesting, but in your example what is "dynamic" doing?
Are you proposing it as a keyword to signal "here comes a dynamic
default"? Do we really need it? Why not something like this:

def five_appender(x=@list):
    x.append(5)
    return x

>>> five_appender()
[5]
>>> five_appender()
[5]

The idea is that @ is a magic sigil meaning, "call this if no argument
is passed in." So, as per your prior example, @[] or @5 would result
in a runtime error, since they're not callable. If for some reason you
want a fresh 5 (I can't think of why, since it's immutable, but
whatever), you would need to use a lambda:

def n_appender(n=@lambda: 5, x=@list):
    x.append(n)
    return x

Do y'all think this is enough in line with how @ is already used to
make sense? Or is it too different from the existing use of @?

-- Carl Johnson

CTO

unread,
May 15, 2009, 4:13:36 AM5/15/09
to python...@python.org

On May 15, 3:30 am, Carl Johnson <cmjohnson.mailingl...@gmail.com>
wrote:

If we're making magical objects, why not just make a magical object
that gives you the ability to defer the execution of a block of code
until an operation is performed on it? That way at least it makes
sense if you've learned the rest of the language.
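A toy version of such a deferred object is easy to sketch: an object that holds a thunk and only runs it the first time something is actually done with it. This `Lazy` class is purely illustrative; a real proxy would also have to forward operators, repr, and so on.

```python
class Lazy:
    """Defer a computation until the result is first used.

    Attribute access forces the thunk, then forwards to the value."""
    def __init__(self, thunk):
        self._thunk = thunk
        self._forced = False
        self._value = None

    def _force(self):
        if not self._forced:
            self._value = self._thunk()
            self._forced = True
        return self._value

    def __getattr__(self, name):
        # Only reached for names not found on Lazy itself,
        # i.e. attributes of the wrapped value.
        return getattr(self._force(), name)

lst = Lazy(list)
lst.append(1)   # forces the thunk (creating a fresh list), then appends
```

Used as a default argument, each function definition would still share one Lazy instance, so this alone does not solve the fresh-value problem; it only illustrates the deferred-execution mechanism.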

Geremy Condra

Steven D'Aprano

unread,
May 15, 2009, 4:14:50 AM5/15/09
to python...@python.org
On Fri, 15 May 2009 02:48:53 pm CTO wrote:
> [super-snip]
>
> > Just out of curiosity, what do you see as the drawbacks?
>
> [snip]
>
> 1) It adds to the complexity (and therefore overhead) of
> calling functions- not just the functions which use it,
> but even functions which operate as normal.

Without an implementation, how can you possibly predict the cost of it?


> Python
> already has a hefty penalty for calling functions,

I think you're badly mistaken. Python has a hefty cost for looking up
names, but the overhead to *call* a function once you have looked up
the name is minimal.

>>> from timeit import Timer
>>> def f():
... pass
...
>>> min(Timer('f', 'from __main__ import f').repeat())
0.32181000709533691
>>> min(Timer('f()', 'from __main__ import f').repeat())
0.35797882080078125

No significant difference between looking up f and looking up f and
calling it.

Even if you give the function a complex signature, it's still relatively
lightweight:

>>> def g(a=1, b=2, c=3, d=4, e=5, f=6, g=7, h=8, *args, **kwargs):
... pass
...
>>> min(Timer('g()', 'from __main__ import g').repeat())
0.55176901817321777


> and
> I really don't want it to get any heavier. My 'solutions',
> as incomplete as they are, at least don't slow down
> anything else.

Oh the irony. Decorators are very heavyweight. Here's a decorator that
essentially does nothing at all, and it triples the cost of calling the
function:

>>> from functools import wraps
>>> def decorator(f):
... @wraps(f)
... def inner(*args, **kwargs):
... return f(*args, **kwargs)
... return inner
...
>>> @decorator
... def h():
... pass
...
>>> min(Timer('h()', 'from __main__ import h').repeat())
1.1645870208740234

I think, before making claims as to what's costly and what isn't, you
should actually do some timing measurements.


> 2) It adds to the complexity of introspecting functions.
> Take a good look at inspect.getfullargspec- its a
> nightmare, and either it gets worse under this (bad)
> or it doesn't include information that is available
> to the compiler (even worse).

Well obviously this is going to make getfullargspec more complicated.
But tell me, what do you think your solution using decorators does to
getfullargspec?


> In addition to those minuses, it doesn't actually add
> to the capabilities of the language.

It's an incremental improvement. Currently, late-binding of defaults
requires boilerplate code. This will eliminate that boilerplate code.


> If this were a
> proposal to add early action to Python (the equivalent
> of metaclasses or, to some extent, macro replacement)
> I would be much more likely to support it, despite the
> heavier syntax.
>
> So, the existing idiom works pretty well,

100% agreed!

> there doesn't seem to be a very good substitute,

Not without support in the compiler.

> it slows the whole language down to implement,

You can't know that.

> and it doesn't add any power if you do.

It reduces boilerplate, which is a good thing. Probably the *only* good
thing, but still a good thing.

--
Steven D'Aprano

CTO

unread,
May 15, 2009, 4:24:36 AM5/15/09
to python...@python.org


Steven D'Aprano

unread,
May 15, 2009, 4:44:54 AM5/15/09
to Stephen J. Turnbull, python...@python.org
On Fri, 15 May 2009 03:08:18 pm Stephen J. Turnbull wrote:

> Could you summarize that discussion briefly?

Many newbies, and some more experienced programmers, are confused by the
behaviour of functions when parameters are given default mutable
arguments:

>>> def f(x=[]):
... x.append(1)
... return x
...
>>> f()
[1]
>>> f()
[1, 1]

Some people are surprised by this behaviour, and would prefer that the
default value for x be freshly created each time it is needed. This is
one of the most common, and most acrimonious, topics of discussion on
comp.lang.python. The standard idiom for the expected behaviour is to
insert boilerplate code that checks for a sentinel:

def f(x=None):
    if x is None: x = []
    x.append(1)
    return x
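When None is itself a legitimate argument value, the same idiom works with a module-private sentinel object instead. The names below (`connect`, `timeout`) are invented for illustration.

```python
_MISSING = object()  # private marker no caller can accidentally pass

def connect(timeout=_MISSING):
    # None means "block forever" here, so it cannot double as the
    # "not supplied" marker; the sentinel can.
    if timeout is _MISSING:
        timeout = 30.0
    return timeout
```

Since `object()` creates a unique instance compared only by identity, `timeout is _MISSING` can never be true for any value a caller passes in.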

The chances of having the standard behaviour changed are slim, at best,
for various reasons including backward compatibility and runtime
efficiency. Also, I believe Guido has ruled that the standard behaviour
will not be changed.

However, some have suggested that if the standard compile-time creation
of defaults won't be changed, perhaps it could be made optional, with
special syntax, or perhaps a decorator, controlling the behaviour. See
these two proof-of-concept decorators, by Geremy Condra, for example:

http://code.activestate.com/recipes/576751/
http://code.activestate.com/recipes/576754/

I'm not convinced by decorator-based solutions, so I'll pass over them.
I assume that any first-class solution will require cooperation from
the compiler, and thus move the boilerplate out of the function body
into the byte code. (Or whatever implementation is used -- others have
suggested using thunks.)

Assuming such compiler support is possible, it only remains to decide on
syntax for it. Most suggested syntax I've seen has marked the default
value itself, e.g.: def f(x = new []). Some have suggested overloading
lambda, perhaps with some variation like def f(x = *lambda:[]).

I suggest that the markup should go on the formal parameter name, not
the default value: we're marking the formal parameter as "special" for
using delayed semantics, not that the default object (usually [] or {})
will be special.

Some years ago, Python overloaded the binary operators * and ** for use
as special markers in parameter lists. I suggest we could do the same,
by overloading the & operator in a similar fashion: inside the
parameter list, &x would mean to delay evaluation of the default
argument:

def f(x=[], &y=[])

x would use the current compile-time semantics, y would get the new
runtime semantics.

I don't have any particular reason for choosing & over any other binary
operator. I think ^ would also be a good choice.

Tagging a parameter with unary-& but failing to specify a default value
should be a syntax error. Likewise for unary-& outside of a parameter
list. (At least until such time as somebody suggests a good use for
such a thing.)
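Without compiler support, the proposed semantics can be approximated by combining the sentinel idiom with a decorator that takes a factory per late-bound parameter. This sketch uses the modern inspect.signature machinery (which did not exist when this thread was written); the names `late_defaults` and `_MISSING` are illustrative.

```python
import functools
import inspect

_MISSING = object()  # sentinel standing in for "no argument supplied"

def late_defaults(**factories):
    """For each named parameter, call factory() to build a fresh default
    whenever the caller omits that argument."""
    def decorate(func):
        sig = inspect.signature(func)
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            for name, factory in factories.items():
                if bound.arguments[name] is _MISSING:
                    bound.arguments[name] = factory()
            return func(*bound.args, **bound.kwargs)
        return wrapper
    return decorate

@late_defaults(y=list)
def f(x=[], y=_MISSING):   # x keeps compile-time semantics, y is late-bound
    y.append(1)
    return y
```

Here f() returns a fresh [1] on every call, while x retains the standard shared-default behaviour, mirroring the proposed def f(x=[], &y=[]). The wrapper's binding work is exactly the per-call overhead that a compiler-level implementation would seek to avoid.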


--
Steven D'Aprano

CTO

unread,
May 15, 2009, 4:45:16 AM5/15/09
to python...@python.org

On May 15, 4:14 am, Steven D'Aprano <st...@pearwood.info> wrote:

[snip]


>
> Without an implementation, how can you possibly predict the cost of it?
>

[snip]

You're right. Please provide code.

Geremy Condra
