
Copy constructors


David Smith
Aug 9, 2001, 2:55:54 PM

It appears that the preferred way to make a copy constructor is to
define the __copy__ method. In the examples I have seen, __copy__ then
creates a new object in the usual way, which invokes __init__. In a
class I have at hand, __init__ does some real work, which I want to
bypass -- I want to clone the results of that work. I don't want to
redefine __init__'s parameter list (to permit passing only one parameter
of the same class), because it would screw up IDLE's automatic parameter
list prompting.

Is there a way for __copy__ to create a bare object of the same class,
which it can proceed to populate?

David Smith

Alex
Aug 9, 2001, 4:13:36 PM

Have a look at the __getstate__ and __setstate__ methods, described in
http://python.org/doc/lib/module-pickle.html
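
For instance, a minimal sketch of how those hooks can be used for this
(Fleep is a made-up class like the one posted below; with no __copy__ or
__getinitargs__ defined, copy.copy and pickle should both rebuild the
instance through __getstate__/__setstate__ without re-running the
expensive __init__):

import copy

class Fleep:
    def __init__(self, x, y, z):
        print 'lots', x, 'of', y, 'work', z   # the expensive set-up we want to skip
        self.x, self.y, self.z = x, y, z
    def __getstate__(self):
        return self.__dict__.copy()           # what a copy (or pickle) carries over
    def __setstate__(self, state):
        self.__dict__.update(state)           # repopulate a bare instance from that state

f = Fleep(1, 2, 3)
g = copy.copy(f)    # no 'lots ... work' line printed: __init__ is bypassed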

Alex.

Alex Martelli
Aug 9, 2001, 4:14:06 PM

"David Smith" <d...@labs.agilent.com> wrote in message
news:3B72DCBA...@labs.agilent.com...
...

> class I have at hand, __init__ does some real work, which I want to
> bypass -- I want to clone the results of that work. I don't want to
...

> Is there a way for __copy__ to create a bare object of the same class,
> which it can proceed to populate?

Piece of cake:

class Fleep:
    def __init__(self, x, y, z):
        print 'lots', x, 'of', y, 'work', z
    def __copy__(self):
        class Temp: pass
        newbie = Temp()
        newbie.__class__ = self.__class__
        print "very little work"


Alex

Steve Holden
Aug 9, 2001, 8:03:37 PM

"Alex Martelli" <ale...@yahoo.com> wrote in message
news:9kuqs...@enews4.newsguy.com...

Surely I'm missing something here, but if (as I surmise) __copy__() is a
factory, shouldn't it let someone have newbie before it gets garbage
collected?

very-little-work-should-maybe-instead-return-newbie-ly y'rs - steve
--
http://www.holdenweb.com/

Joal Heagney
Aug 10, 2001, 6:01:57 AM

And adding onto that an automatic copy of the instance's __dict__ -->

>>> import copy
>>> class Fleep:
...     def __init__(self, x, y, z):
...         print 'lots', x, 'of', y, 'work', z
...         self.x = x
...         self.y = y
...         self.z = z
...     def __copy__(self):
...         class Temp: pass
...         newbie = Temp()
...         newbie.__class__ = self.__class__
...         newbie.__dict__ = copy.deepcopy(self.__dict__)
...         print "very little work"
...         return newbie

I LOVE this language. Alex, your empty class trick just made it into
my private python scrap-book.

Alex Martelli
Aug 10, 2001, 7:15:16 AM

"Joal Heagney" <s71...@student.gu.edu.au> wrote in message
news:3B73B115...@student.gu.edu.au...
...

> I LOVE this language. Alex, your empty class trick just made it into
> my private python scrap-book.

Glad you liked it! That convinced me to post it as a recipe to the
cookbook, with slightly more complete discussion -- please see
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/66507
Comments, ratings and feedback are welcome, as usual!


Alex

Andrew Dalke
Aug 10, 2001, 3:35:00 AM

Joal Heagney wrote:
>And adding onto that an automatic copy of the instance's __dict__ -->

>...     def __copy__(self):
>...         class Temp: pass
>...         newbie = Temp()
>...         newbie.__class__ = self.__class__
>...         newbie.__dict__ = copy.deepcopy(self.__dict__)
>...         print "very little work"
>...         return newbie

Or

from pickle import loads, dumps   # or cPickle

def __copy__(self):
    return loads(dumps(self))

(but this assumes the overhead of creating the intermediate pickle string
is negligible.)

Andrew
da...@dalkescientific.com

David Smith
Aug 10, 2001, 10:40:56 AM

That works. Thanks muchly.
David

Aahz Maruch
Aug 10, 2001, 10:59:29 AM

In article <3B72DCBA...@labs.agilent.com>,

David Smith <d...@labs.agilent.com> wrote:
>
>It appears that the preferred way to make a copy constructor is to
>define the __copy__ method. In the examples I have seen, __copy__ then
>creates a new object in the usual way, which invokes __init__. In a
>class I have at hand, __init__ does some real work, which I want to
>bypass -- I want to clone the results of that work. I don't want to
>redefine __init__'s parameter list (to permit passing only one parameter
>of the same class), because it would screw up IDLE's automatic parameter
>list prompting.

I don't understand that last sentence. How does permitting only one
argument screw up IDLE?
--
--- Aahz <*> (Copyright 2001 by aa...@pobox.com)

Hugs and backrubs -- I break Rule 6 http://www.rahul.net/aahz/
Androgynous poly kinky vanilla queer het Pythonista

"i-write-best-when-the-audience-is-me-ly y'rs - tim"

Guido van Rossum
Aug 10, 2001, 11:32:04 PM

"Alex Martelli" <ale...@yahoo.com> writes:

While I probably introduced this myself (pickle uses it), I have one
reservation. Assignment to self.__class__ is unique to Python -- it's
not an idiom one can easily translate to other languages. It's also a
relatively new Python feature (I don't even know if Jython supports
it).

But maybe more importantly, I don't know how to support this esoteric
feature after the type/class unification is complete. Under the new
system, not all instances are born the same: instances may have slots
for instance variables rather than a __dict__ -- using slots makes for
more space-efficient instances. (Having a __dict__ is still the
default, and an instance can have both slots and a __dict__.)

Fortunately, the type/class unification has a different idiom
available to avoid __init__: __new__. There are two parts to object
construction, __new__ and __init__. __new__ is a class method, and
constructs a minimal object. After __new__ has returned an
instance, __init__ is called to further initialize the instance. At
least that's what the normal constructor (calling the class) does.
You can call __new__ directly to construct an instance bypassing
__init__.
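
For concreteness, a minimal sketch of that idiom (assuming a Python 2.2
new-style class; the Fleep name is just reused from earlier in the thread):

import copy

class Fleep(object):
    def __init__(self, x, y, z):
        print 'lots', x, 'of', y, 'work', z      # expensive set-up
        self.x, self.y, self.z = x, y, z
    def __copy__(self):
        # bare instance of the same class; __init__ is never run
        newbie = self.__class__.__new__(self.__class__)
        newbie.__dict__.update(self.__dict__)
        return newbie

f = Fleep(1, 2, 3)
g = copy.copy(f)    # only the cheap __copy__ runs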

--Guido van Rossum (home page: http://www.python.org/~guido/)

Alex Martelli
Aug 11, 2001, 5:37:36 AM

"Guido van Rossum" <gu...@python.org> wrote in message
news:cp66bv4...@cj20424-a.reston1.va.home.com...
...

> While I probably introduced this myself (pickle uses it), I have one
> reservation. Assignment to self.__class__ is unique to Python -- it's
> not an idiom one can easily translate to other languages. It's also a
> relatively new Python feature (I don't even know if Jython supports

Yep, no problems:

D:\jython>jython
Jython 2.0 on java1.3.0_02 (JIT: null)
Type "copyright", "credits" or "license" for more information.
>>> class X: pass
...
>>> x=X()
>>> x
<__main__.X instance at 4848023>
>>> class Y: pass
...
>>> x.__class__=Y
>>> x
<__main__.Y instance at 4848023>
>>>

I believe Ruby and Perl both have the powerful (if rarely used)
"change class dynamically" feature. It would be a pity if Python,
which introduced it, were now to lose it. A similar feature was a
*hair-breadth away* from making it into early C++, according
to Stroustrup's "Design and evolution" book -- he seriously
considered the possibility of an object referring to its own
'base-object' via a pointer (as is done in virtual inheritance)
with the ability to change that pointer on the fly (leading to a
recomputation of virtual tables) -- he eventually gave it up,
with some regrets, for performance considerations (possibility
of 100%-performance being always a high priority in C++'s
evolution -- specifically, zero mandatory overhead being imposed
on code _not_ using some advanced feature).

I call the feature 'rarely used', but others may use it less
rarely. Moshe Zadka, for example, has advocated that a
class-change is the right way to 'customize' a specific
instance, rather than adding a method to the instance itself
on the fly (I'm not quite sure why, and wouldn't want to
misrepresent his position, but I believe it has to do with a
wish to see behavior as associated with _classes_, not with
_instances_).


> But maybe more importantly, I don't know how to support this esoteric
> feature after the type/class unification is complete. Under the new
> system, not all instances are born the same: instances may have slots
> for instance variables rather than a __dict__ -- using slots makes for
> more space-efficient instances. (Having a __dict__ is still the
> default, and an instance can have both slots and a __dict__.)

So presumably not just ANY class-object may be assigned as
the __class__ attribute of a given instance: there will be a type
error if the class and the instance don't agree on slots/dictionary
use. Can't this be checked at runtime when some
instance.__class__ = anewclass
is attempted? If this check slows down this specific esoteric
feature this shouldn't be a problem. I'm not sure if the exact
needed constraint is _identity_ of slots in the original and new
class objects (I realize the slots are in the *instance* object,
but the _code_ to set or access them is in the class object,
right?) -- offhand it would seem so (if the slot assignment is
identical surely there should be no problems -- if the instance
has extra slots the new class would not know how to access
them, if the instance lacks some slots the class thinks are
there it should be even worse). It seems to me it would be
better to keep the possibility of assigning __class__ even just
for old/new classes with identical slots/dictionary use, with
a TypeError (or some new specific subclass of it) in case of
any difference, rather than losing such assignment altogether.

Of course, such a change would still invalidate the specific
use of class assignment that I was making here, since the
'empty instance' I started with was of a class with no slots.
But, as you say, fortunately this specific idiom is being
replaced by a more direct new one:

> Fortunately, the type/class unification has a different idiom
> available to avoid __init__: __new__. There are two parts to object
> construction, __new__ and __init__. __new__ is a class method, and
> constructs a minimal object. After __new__ has returned an
> instance, __init__ is called to further initialize the instance. At
> least that's what the normal constructor (calling the class) does.
> You can call __new__ directly to construct an instance bypassing
> __init__.

An excellent alternative indeed!


Alex

Guido van Rossum
Aug 11, 2001, 9:06:57 AM

"Alex Martelli" <ale...@yahoo.com> writes:

> "Guido van Rossum" <gu...@python.org> wrote in message
> news:cp66bv4...@cj20424-a.reston1.va.home.com...
> ...
> > While I probably introduced this myself (pickle uses it), I have one
> > reservation. Assignment to self.__class__ is unique to Python -- it's
> > not an idiom one can easily translate to other languages. It's also a
> > relatively new Python feature (I don't even know if Jython supports
>

> I believe Ruby and Perl both have the powerful (if rarely used)
> "change class dynamically" feature. It would be a pity if Python,
> which introduced it, were now to lose it. A similar feature was a
> *hair-breadth away* from making it into early C++, according
> to Stroustrup's "Design and evolution" book -- he seriously
> considered the possibility of an object referring to its own
> 'base-object' via a pointer (as is done in virtual inheritance)
> with the ability to change that pointer on the fly (leading to a
> recomputation of virtual tables) -- he eventually gave it up,
> with some regrets, for performance considerations (possibility
> of 100%-performance being always a high priority in C++'s
> evolution -- specifically, zero mandatory overhead being imposed
> on code _not_ using some advanced feature).
>
> I call the feature 'rarely used', but others may use it less
> rarely. Moshe Zadka, for example, has advocated that a
> class-change is the right way to 'customize' a specific
> instance, rather than adding a method to the instance itself
> on the fly (I'm not quite sure why, and wouldn't want to
> misrepresent his position, but I believe it has to do with a
> wish to see behavior as associated with _classes_, not with
> _instances_).

Let's just say that use of this feature is at your own risk. It was
an experiment. I *could* restore it partially (see below) but I'd
rather not, given that better alternatives are available.

> > But maybe more importantly, I don't know how to support this esoteric
> > feature after the type/class unification is complete. Under the new
> > system, not all instances are born the same: instances may have slots
> > for instance variables rather than a __dict__ -- using slots makes for
> > more space-efficient instances. (Having a __dict__ is still the
> > default, and an instance can have both slots and a __dict__.)
>
> So presumably not just ANY class-object may be assigned as
> the __class__ attribute of a given instance: there will be a type
> error if the class and the instance don't agree on slots/dictionary
> use. Can't this be checked at runtime when some
> instance.__class__ = anewclass
> is attempted?

I'm sure I'd be able to come up with some kind of check that works.
It would probably be very similar to the check I already use to
determine whether two base classes are compatible -- the check that
stops you from doing "class C(list, dictionary)". But I repeat: I'd
rather not.

> If this check slows down this specific esoteric
> feature this shouldn't be a problem.

That depends on how much of a slowdown it is. :) The start of this
thread was about avoiding work done by __init__.

Guido van Rossum
Aug 11, 2001, 9:11:28 AM
to Roman Suzi, pytho...@python.org

[me]

> >But maybe more importantly, I don't know how to support this esoteric
> >feature after the type/class unification is complete. Under the new
> >system, not all instances are born the same: instances may have slots
> >for instance variables rather than a __dict__ -- using slots makes for
> >more space-efficient instances. (Having a __dict__ is still the
> >default, and an instance can have both slots and a __dict__.)

[Roman Suzi]

> Couldn't slots just be mapped to __dict__ in case somebody accesses it?
> Will __dict__ reflect available slots?

I wasn't clear. It's not about whether or not something called
__dict__ is visible. It's about whether the object lay-out of the old
and new class are compatible.

> I know that Python docs never guaranteed __dict__ to be the same forever,
> but it is used in classes to do attribute assignments in __setattr__ and
> similar circumstances...

Incidentally, the new system doesn't require manipulating __dict__ for
this purpose. First of all, you can use "getset attributes" to trap
specific attributes (see http://www.python.org/2.2/descrintro.html).
Second, a typical __getattr__ override can now use object.__getattr__
to access the default implementation. This is important given that
you don't always have a __dict__. :)

> Then slots are statically-named attributes (if I understood correctly).
> Maybe it will be honest to give some kind of __attrlist__ attribute which
> will be similar to __dict__ but devoted to those static slots?

Static slots are described by class attributes. Read PEP 252.
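
A minimal sketch of the slot mechanism under discussion (Point is a
made-up example and needs the new-style classes):

class Point(object):
    __slots__ = ('x', 'y')      # instances get exactly these two slots and no __dict__
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)
# p.z = 3 would raise AttributeError: there is no __dict__ to put 'z' in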

Glyph Lefkowitz
Aug 11, 2001, 4:27:19 PM
to pytho...@python.org

On Sat, 11 Aug 2001, Guido van Rossum wrote:

> "Alex Martelli" <ale...@yahoo.com> writes:
>
> > I call the feature 'rarely used', but others may use it less
> > rarely.

For example, me.

Am I correct in understanding from this thread that there is an intent to
remove the ability to assign an instance's __class__ attribute?

> Let's just say that use of this feature is at your own risk. It was
> an experiment. I *could* restore it partially (see below) but I'd
> rather not, given that better alternatives are available.

Are there any features that we can use any longer that are not 'at our own
risk'? Division will be changed, __class__ assignment is going away,
type() means something different, even the rules of scope...

Also, as far as I know, better alternatives do not exist; for example,
"promise" objects from a database which are latently initialized when they
are accessed. One module that I use *all the time*,
twisted.python.rebuild.rebuild, is based entirely upon this "trick". One
of the eye-popping cool features of Python is the ability to change code
and have existing instances update to use the new methods automatically.

Smalltalk's Object>>become: is highly useful for similar reasons; is
there a new way to emulate this without __class__ assignment?

> I'm sure I'd be able to come up with some kind of check that works. It
> would probably be very similar to the check I already use to determine
> whether two base classes are compatible -- the check that stops you
> from doing "class C(list, dictionary)". But I repeat: I'd rather not.

Is there a discussion somewhere of why you'd rather not? This is
*essential* functionality for me, especially since it sounds like it won't
be able to be completely replicated through some other mechanism. (If you
implement "reference.become(other)", for example, I won't mind nearly so
much <0.5 wink>)

> That depends on how much of a slowdown it is. :)

I don't know about others who use this feature, but the way I use it I
could afford to wait 0.5 seconds for each __class__ assignment and not be
too upset about it. Losing this ability entirely, however, would remove a
significant feature from Twisted when used with a later python version.
I implore you not to remove it.

maybe-we-ought-to-stop-pretending-that-this-new-language-with-
different-syntax,-idioms,-and-semantics-for-everything-
is-really-still-python-ly-y'rs,

@ t w i s t e d m a t r i x . c o m
http://twistedmatrix.com/users/glyph

Paul Prescod
Aug 11, 2001, 5:34:50 PM
to Joal Heagney, pytho...@python.org

Joal Heagney wrote:
>
>...

>
> I LOVE this language. Alex, your empty class trick just made it into
> my private python scrap-book.

Why not also add it to a public scrapbook? (see the sig)
--
Take a recipe. Leave a recipe.
Python Cookbook! http://www.ActiveState.com/pythoncookbook

Alex Martelli
Aug 11, 2001, 5:58:10 PM

"Glyph Lefkowitz" <gl...@twistedmatrix.com> wrote in message
news:mailman.997561650...@python.org...
...

> Am I correct in understanding from this thread that there is an intent to
> remove the ability to assign an instance's __class__ attribute?

That's how I'm reading Guido's messages in this thread, too.


> > Let's just say that use of this feature is at your own risk. It was
> > an experiment. I *could* restore it partially (see below) but I'd
> > rather not, given that better alternatives are available.

...
> Also, as far as I know, better alternatives do not exist; for example,
> "promise" objects from a database which are latently initialized when they
> are accessed. One module that I use *all the time*;

I don't know of any better way to handle the Promise design pattern,
either. In C++, I'm having to kludge around it all the time via
letter/envelope idioms -- the possibility of changing classes on
the fly makes it much simpler and more direct. Hadn't thought
of that earlier...

> twisted.python.rebuild.rebuild, is based entirely upon this "trick". One
> of the eye-popping cool features of Python is the ability to change code
> and have existing instances update to use the new methods automatically.

That one seems to be related to _another_ capability (of which I've
seen no indication that it's also going away) -- keeping the same
class object, but changing that object (rebinding method attributes
thereof). Or maybe I don't understand exactly what you mean?


> Smalltalk's Object>>become: is highly useful for similar reasons; is
> there a new way to emulate this without __class__ assignment?

It seems to me the functionality of become is homomorphic to that
of class-assignment, unless I'm truly missing something major.


Alex

Glyph Lefkowitz
Aug 11, 2001, 7:28:47 PM
to pytho...@python.org

On Sat, 11 Aug 2001, Alex Martelli wrote:

> "Glyph Lefkowitz" <gl...@twistedmatrix.com> wrote in message
> news:mailman.997561650...@python.org...
> ...
> > Am I correct in understanding from this thread that there is an intent to
> > remove the ability to assign an instance's __class__ attribute?
>
> That's how I'm reading Guido's messages in this thread, too.

OK. Then I don't regret expressing alarm and causing rioting in the
streets <0.9 wink>.

> I don't know of any better way to handle the Promise design pattern,
> either. In C++, I'm having to kludge around it all the time via
> letter/envelope idioms -- the possibility of changing classes on the
> fly makes it much simpler and more direct. Hadn't thought of that
> earlier...

I think the ZODB uses something like this, but it's implemented using
ExtensionClasses. I'd hate to see it become a requirement to muck around
in native code.

> > twisted.python.rebuild.rebuild, is based entirely upon this "trick". One
> > of the eye-popping cool features of Python is the ability to change code
> > and have existing instances update to use the new methods automatically.
>
> That one seems to be related to _another_ capability (of which I've
> seen no indication that it's also going away) -- keeping the same
> class object, but changing that object (rebinding method attributes
> thereof). Or maybe I don't understand exactly what you mean?

Hmm, I suppose that could be done as well; I'd have to do some other
shuffling around of class references in order to keep isinstance() working
like it should, but you're right: it's not impossible.

> It seems to me the functionality of become is homomorphic to that
> of class-assignment, unless I'm truly missing something major.

Class-assignment + dict-assignment is nearly homomorphic. Of course, if we
could do *real* reference replacement, then you could have a Promise for a
python list :-).

Guido van Rossum
Aug 11, 2001, 9:35:16 PM

Glyph Lefkowitz <gl...@twistedmatrix.com> writes:

> Am I correct in understanding from this thread that there is an intent to
> remove the ability to assign an instance's __class__ attribute?

Yes, I'd like to remove this. See my previous post in this thread for
more of an explanation of the problem. But I haven't decided yet!
This thread will help me figure out how big of a deal it will be.

Also note that nothing will change (yet) for classic classes -- in
2.2, classic classes will use a different metaclass from "new-style"
classes, and the classic metaclass will provide the same semantics
for classes and instances as before. The class statement creates a
classic class by default -- unless you explicitly subclass from a
built-in type or a new-style class.

In a sense, in 2.2 the new-style classes will still be experimental,
and it's quite likely that based upon feedback from users they will
change (for the better) in later versions.

> > Let's just say that use of this feature is at your own risk. It was
> > an experiment. I *could* restore it partially (see below) but I'd
> > rather not, given that better alternatives are available.
>
> Are there any features that we can use any longer that are not 'at our own
> risk'? Division will be changed, __class__ assignment is going away,
> type() means something different, even the rules of scope...

But you're getting so much in return! Subclassing built-in types,
get/set methods, class and static methods, uniform introspection...

> Also, as far as I know, better alternatives do not exist; for example,
> "promise" objects from a database which are latently initialized when they
> are accessed.

If you know the type it's going to be eventually, you can use
C.__new__() to create an uninitialized C instance.

> One module that I use *all the time*;
> twisted.python.rebuild.rebuild, is based entirely upon this "trick".

I guess the name ("twisted") says it all. :-)

> One
> of the eye-popping cool features of Python is the ability to change code
> and have existing instances update to use the new methods automatically.

You will still be able to modify *classes* dynamically -- although you
have to declare this option by putting __dynamic__ = 1 in your class
statement.
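
That is, something along these lines keeps working (a toy sketch with a
made-up Greeter class; shown here with a classic class, and under the new
system the class would need __dynamic__ = 1):

class Greeter:
    def greet(self):
        return "hello"

g = Greeter()

def greet(self):            # rebind the method on the *class*...
    return "hi there"
Greeter.greet = greet

print g.greet()             # ...and the existing instance picks it up: "hi there"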

> Smalltalk's Object>>become: is highly useful for similar reasons; is
> there a new way to emulate this without __class__ assignment?

Probably not -- although I don't know what that does.

> > I'm sure I'd be able to come up with some kind of check that works. It
> > would probably be very similar to the check I already use to determine
> > whether two base classes are compatible -- the check that stops you
> > from doing "class C(list, dictionary)". But I repeat: I'd rather not.
>
> Is there a discussion somewhere of why you'd rather not? This is
> *essential* functionality for me, especially since it sounds like it won't
> be able to be completely replicated through some other mechanism.

I'd rather not because it's a complicated check to write, and it may
be difficult to explain the restrictions. Here's an example of the
kind of restriction that is unavoidable.

I don't know how familiar you are with Python's C-level internals. If
you are, you'll appreciate the problem if I took a list object and
changed its type pointer to the dictionary type -- the instance
lay-out of a dictionary is different, and all the methods would be
using the list data as if it were dictionary data. Recipe for
disaster. Likewise, changing a featureless object into a list or dict
would at the very least require growing the size of the instance; this
would require a realloc(), which may move the object in memory. But
if there are other references to the object, these would all have to
be updated. Python's run-time architecture just doesn't support that.

> (If you implement "reference.become(other)", for example, I won't
> mind nearly so much <0.5 wink>)

I'm not sure what that means, but if you could live with weak
references, we could easily add a way to change the referent of a weak
reference object.

> > That depends on how much of a slowdown it is. :)
>
> I don't know about others who use this feature, but the way I use it I
> could afford to wait 0.5 seconds for each __class__ assignment and not be
> too upset about it. Losing this ability entirely, however, would remove a
> significant feature from Twisted when used with a later python version.
> I implore you not to remove it.

Understood. Nevertheless, all evidence suggests that Twisted is not
typical Python code. :-)

I guess I have a bit of a hidden agenda: Python is more dynamic than
the language I *wanted* to design. Some of the dynamicism was simply
an implementation trick. Some of the dynamicism is getting in the way
of optimizing code, because the optimizer can never prove that certain
variables won't be changed. So I'm trying to look for ways that pin
down things a bit more. I'm making assumptions about how "typical"
Python code uses the dynamic features, and I'm slowly trying to
introduce restrictions in the language that make the optimizer's life
easier without affecting "typical" code.

For example, we're looking into optimizing access to builtins. For
this, we need to assume that the __builtin__ module is immutable; in
addition, if a module doesn't have a global 'len', for example, we
have to assume that such a global won't be inserted into the module
dynamically. I'm only aware of a very small number of applications
that violate this constraint; I'd rather provide a separate explicit
mechanism to override built-in functions so that the optimizer can be
aware of a potential change and avoid it.
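
The sort of code that violates that assumption looks roughly like this
(some_module and fake_len are hypothetical):

import some_module            # imagine its code calls len() internally

def fake_len(seq):
    print 'len intercepted'
    return 0

some_module.len = fake_len    # inject a global 'len'; lookups inside some_module
                              # now find it before the one in __builtin__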

Guido van Rossum
Aug 11, 2001, 9:39:58 PM

Glyph Lefkowitz <gl...@twistedmatrix.com> writes:

> > I don't know of any better way to handle the Promise design pattern,
> > either. In C++, I'm having to kludge around it all the time via
> > letter/envelope idioms -- the possibility of changing classes on the
> > fly makes it much simpler and more direct. Hadn't thought of that
> > earlier...
>
> I think the ZODB uses something like this, but it's implemented using
> ExtensionClasses. I'd hate to see it become a requirement to muck around
> in native code.

Note: an explicit goal of the type/class unification is that it should
become a *better* mechanism than ExtensionClasses, and eventually Zope
will be rewritten to use the new mechanisms. We expect that this
solution will actually be cleaner than the existing Zope
implementation.

So if you're relying on something that Zope needs to do, rest assured,
we'll find a way to do it under the new system too!

> > That one seems to be related to _another_ capability (of which I've
> > seen no indication that it's also going away) -- keeping the same
> > class object. but changing that object (rebinding method attributes
> > thereof). Or maybe I don't understand exactly what you mean?

As I just wrote, there will be two kinds of classes: immutable
(static) classes and dynamic classes. The latter can be modified just
like classic ones. But since this makes them less efficient, you have
to be explicit about this (when using the new class mechanism, which
is not the default in 2.2).

> Hmm, I suppose that could be done as well; I'd have to do some other
> shuffling around of class references in order to keep isinstance() working
> like it should, but you're right: it's not impossible.
>
> > It seems to me the functionality of become is homomorphic to that
> > of class-assignment, unless I'm truly missing something major.
>
> Class-assignment + dict-assignment is nearly homomorphic. Of course, if we
> could do *real* reference replacement, then you could have a Promise for a
> python list :-).

What do you mean by "real reference replacement"?

Roeland Rengelink
Aug 12, 2001, 3:30:11 AM

Guido van Rossum wrote:
>
> Glyph Lefkowitz <gl...@twistedmatrix.com> writes:
>
> > Am I correct in understanding from this thread that there is an intent to
> > remove the ability to assign an instance's __class__ attribute?
>
> Yes, I'd like to remove this. See my previous post in this thread for
> more of an explanation of the problem. But I haven't decided yet!
> This thread will help me figure out how big of a deal it will be.
>

One idiom where I use __class__ assignment is the following

class State:
    def __init__(self):
        self.state = 1
    def do_something(self):
        if self.state:
            ...do something...
        else:
            ...do something else...
    def change_state(self):
        if self.state:
            self.state = 0
        else:
            self.state = 1

refactor:

class State1:
    def do_something(self):
        ...do something...
    def change_state(self):
        self.__class__ = State2

class State2:
    def do_something(self):
        ...do something else...
    def change_state(self):
        self.__class__ = State1


Now, I can't imagine wanting to use this with something like:

class State1(list):
    ...
class State2(dict):
    ...

But I can imagine wanting to use this with:

class State1(object):
    ...
class State2(object):
    ...

I could imagine one of several restrictions on assignment to __class__,
all based on comparing the object that is currently assigned to __class__
(old) with the one that is going to replace it (new).

1. Make it illegal to assign to class if either old or new defines a
   __slots__ attribute
2. Make it illegal to assign to class if old.__slots__ != new.__slots__
3. Make it illegal to assign to class if old.__bases__ != new.__bases__


[snip]

>
> If you know the type it's going to be eventually, you can use
> C.__new__() to create an uninitialized C instance.
>

Speaking of __new__: would it be an idea to give __new__() the
responsibility for calling __init__ on the new instance? Now we have:

class Singleton(object):
    _instances = {}
    def __new__(object_type, *args, **kwargs):
        return Singleton._instances.setdefault(
            object_type, object.__new__(object_type))

class A(Singleton):
    def __init__(self):
        print 'A.__init__', self

a, b = A(), A()

giving:

A.__init__ <A object at 0x810f6b0>
A.__init__ <A object at 0x810f6b0>

While I would have loved to be able to do (and it was a mild surprise
to me that I couldn't):

class Singleton(object):
    _instances = {}
    def __new__(object_type, *args, **kwargs):
        if object_type in Singleton._instances:
            return Singleton._instances[object_type]
        new_instance = object.__new__(object_type)
        Singleton._instances[object_type] = new_instance  # cache it, as above
        try:
            new_instance.__init__(*args, **kwargs)
        except AttributeError:
            pass
        return new_instance

With object.__new__ defined something like:

class object:
    def __new__(cls, *args, **kwargs):
        if cls.__new__ == object.__new__:
            # if I have responsibility I'll call __init__
            new_instance = create_new_instance(cls)
            new_instance.__init__(*args, **kwargs)  # but catch AttrErr
        else:
            # If you have your own new, you take responsibility
            new_instance = create_new_instance(cls)
        return new_instance

and the instantiation process only calling cls.__new__()

BTW, I managed to build a Singleton class, using metaclasses, that gave
me the right behaviour. This process has become slightly less painful
in 2.2, but only slightly ;)

The Singleton pattern is a rather trivial example of course. I think one
of the things I'm looking for here is the ability to fold functionality
that I would traditionally put in factory functions into a base class.
__new__ seems to be ideally suited for that, but I would need to have
control over calling __init__ too. Having said that, being able to play
these tricks with metaclasses is fun too. In a perverse sort of way...

Anyway,

Thanks for another set of great improvements to Python

Groeten,

Roeland
--
r.b.ri...@chello.nl

"Half of what I say is nonsense. Unfortunately I don't know which half"

Glyph Lefkowitz
Aug 12, 2001, 2:22:52 AM
to pytho...@python.org

On Sun, 12 Aug 2001, Guido van Rossum wrote:

> Glyph Lefkowitz <gl...@twistedmatrix.com> writes:
>
> > Am I correct in understanding from this thread that there is an intent to
> > remove the ability to assign an instance's __class__ attribute?
>
> Yes, I'd like to remove this. See my previous post in this thread for
> more of an explanation of the problem. But I haven't decided yet!
> This thread will help me figure out how big of a deal it will be.

It's a pretty big deal for me, but it sounds like we should be able to
reach some agreement: the exact semantics aren't too important to me,
only not losing the features it provides. (see below for a
suggestion on how to do the check...)

> Also note that nothing will change (yet) for classic classes -- in
> 2.2, classic classes will use a different metaclass from "new-style"
> classes, and the classic metaclass will provide the same semantics for
> classes and instances as before. The class statement creates a classic
> class by default -- unless you explicitly subclass from a built-in
> type or a new-style class.

Can you point me (and other readers just coming to this discussion) to a
few URLs illuminating the key differences between 'new-style' and
'old-style' classes? I'm going to do some reading up on the various PEPs,
but if there are any posts on python-dev I can refer to...

> In a sense, in 2.2 the new-style classes will still be experimental,
> and it's quite likely that based upon feedback from users they will
> change (for the better) in later versions.

Good to know. I have to say that although I am among the biggest
detractors of change, I can appreciate the difficulty of what you're
doing; python is the first language I know of that has ever gone through
significant *refactoring* (not whole-scale rewriting or just adding
things) at both the implementation and design level. I wish you good
luck, and I hope that the voices from the "loyal opposition" are more of a
help than a hindrance.

[ (snip) so much is changing ]

> But you're getting so much in return!

> Subclassing built-in types,

Operator overloading got me 90% of the way there, and that was really the
only 90% I care about. Aside from isinstance() working on instances that
are 'like' integers now, what have I gained?

> get/set methods,

I already *have* get/set methods, in 1.5.2; see
twisted.python.reflect.Accessor :-)

> class and static methods,

Those could be easily faked before, for the OO zealots; but I actually
*like* the idea of using functions for things like that. I prefer to
organize my code at the module level, and I find the additional option of
these method types just clutter.

> uniform introspection...

Aye, now there's the rub. If we have uniform introspection, there's a
certain expectation that features like this become _easier_ to use, not
harder. Introspection is a powerful feature, all the more powerful if
it's uniform and systematic.

[ (snip) promise pattern is easy with class assignment ]

> If you know the type it's going to be eventually, you can use
> C.__new__() to create an uninitialized C instance.

The point is that sometimes you don't...

Of course, you *could* create a class of which all your 'promised' objects
are instances, and do specialization by having a reference in each
instance to its 'real' class, but doesn't that seem a little silly given
that we don't have to do it now? :-)

> I guess the name ("twisted") says it all. :-)

Just wait until you get into the subproject naming scheme :)

> > One of the eye-popping cool features of Python is the ability to change code
> > and have existing instances update to use the new methods automatically.
>
> You will still be able to modify *classes* dynamically -- although you
> have to declare this option by putting __dynamic__ = 1 in your class
> statement.

Hmm. This seems like _less_ uniform introspection to me. I do have to
note that if this is a requirement, then a part of the Twisted coding
standard will be to have all classes have __dynamic__ = 1; one rarely
knows what code is going to have a bug _before_ the server is started :)

Also, as an aside: one of my favorite things about Python is the ability
to fix bugs in a library you're using without having to modify the source
to that library (if libmodule.version == '0.6.0': fix_libmodule_bug()).
This greatly eases deployment. Fixing bugs in a running server is also
pretty important if you don't have the option to take the server down...

Could we have the default be the other way 'round? (Has there already
been a discussion of that?)

[ Object>>become: ]


> Probably not -- although I don't know what that does.

Since smalltalk objects are effectively handles, you can replace all
references to one object with references to another. I really don't
expect python to ever be able to do this. (Although currently it can fake
it _really_ convincingly -- everything but 'is', pretty much --
instance-to-instance, by replacing __dict__ and __class__)
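
Roughly the fake 'become' described above, for classic instances (a
hypothetical helper, not a library function):

def become(obj, other):
    # make obj impersonate other -- everything but 'is' and id() now agrees
    obj.__class__ = other.__class__
    obj.__dict__ = other.__dict__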

[ (snip) Can't you write a check to validate __class__ assignment? ]

> I'd rather not because it's a complicated check to write, and it may
> be difficult to explain the restrictions. Here's an example of the
> kind of restriction that is unavoidable.
>
> I don't know how familiar you are with Python's C-level internals.
> If you are, you'll appreciate the problem if I took a list object and
> changed its type pointer to the dictionary type -- the instance
> lay-out of a dictionary is different, and all the methods would be
> using the list data as if it were dictionary data. Recipe for
> disaster. Likewise, changing a featureless object into a list or dict
> would at the very least require growing the size of the instance; this
> would require a realloc(), which may move the object in memory. But
> if there are other references to the object, these would all have to
> be updated. Python's run-time architecture just doesn't support that.

Yes. At least, I *think* I'm familiar with that.

The obvious solution, if I understand python's internals correctly, is to
make a type method slot, "tp_change_class". Most types would just raise
an exception; instances would continue to work the way they have been.
Aside from the fact that it's icky syntactically, is there anything that
this would break or make unpleasant?

And if the syntax is not dealable, it would be a perfectly reasonable
transition to have

def change_class(obj, newclass):
    obj.__class__ = newclass

be the current implementation of a function that would be implemented as a
builtin in the future...

> Nevertheless, all evidence suggests that Twisted is not typical Python
> code. :-)

It may not be typical, but I feel that the introspection features that
Twisted makes use of should not be considered hacks or kludges; they are
_very_ powerful tools. I realize that some of them are the result of
implementation tricks but that does not make them any less useful,
powerful, or profound. (More on this below.)

> Python is more dynamic than the language I *wanted* to design.

Thank you for screwing up, guido! :-D

Seriously, the dynamic nature of Python is what makes it cool. I can
understand removing meaningless dynamic features in order to make it
faster (write access to locals() as a dictionary, for example) but I
suspect that many more dynamic features than you like are really really
useful.

If you want to design a better language in this regard, it may be
worthwhile to consider writing a language specifically designed for
producing more static python-like code, that uses the Python runtime to
present an external interface; (each module 'compiles' to a python-C API
file) especially if there are efficiency concerns. I'd *really* like a
way to push my code to a "lower level" without resorting to C; such a
language ("Cleese" maybe?) having _some_ dynamic characteristics of python
but lacking others would be fine.

This makes the 'parrot' proposal sound even better; if the more worrisome
developments of python could take place in a radical fork (Py3K), but
remain compatible on a module-binding level with existing Python...

I could imagine an effort to make Python less dynamic could well end up
like the ill-fated project to make C more dynamic (C++).

> Some of the dynamicism was simply an implementation trick. Some of the
> dynamicism is getting in the way of optimizing code, because the
> optimizer can never prove that certain variables won't be changed.

I want my Python code to execute faster, sure. But before you start
eliminating features for the sake of speed, ask yourself -- is anyone who
is really concerned with efficiency writing code in *python*? The speed
freaks have long since moved over to Ada or C++ or some other similarly
torturous language to hate themselves at the speed of light while we're
having fun slowly. :-)

> So I'm trying to look for ways that pin down things a bit more. I'm
> making assumptions about how "typical" Python code uses the dynamic
> features, and I'm slowly trying to introduce restrictions in the
> language that make the optimizer's life easier without affecting
> "typical" code.

Allow me to be skeptical of the fact that there is such a thing as
"typical" code :). Most 'typical' python code could probably be written
in Java without much difference except a little more typing. It's when you
get to the boundary conditions -- adding attributes dynamically,
reflection, reloading, deploying, porting -- that's where Python starts to
shine.

Twisted is code which will probably only be written once, at least by me.
It _should_ only be written once. However, just because I hide the call
to socket.setblocking(0); socket.recv() behind a wrapper, and only call it
once (one line out of tens of thousands -- *highly* atypical!) does not
mean I'd be happy if nonblocking I/O went away. Similarly, if you remove
the ability to dynamically reconstruct a module (and patch up existing
instances of classes in that module in at most O(k*n) time, even if
there's a big k), there is only maybe 30 or 40 lines of my code which will
be affected; but it will change my entire development style. It would
make me significantly less happy with python.

If only 1 function in my entire application has to be written in C, but I
can take advantage of the super-dynamic nature of Python elsewhere -- that
does not mean that it is a proportionally less important feature of python
that it can call C code.

And finally, at the risk of beating a dead horse to its second demise, as
I once heard a wise man say, "It is bad to use eval; it is worse not to
have it."

From my perspective, usually the least typical code is the most important.

> For example, we're looking into optimizing access to builtins. For
> this, we need to assume that the __builtin__ module is immutable; in
> addition, if a module doesn't have a global 'len', for example, we
> have to assume that such a global won't be inserted into the module
> dynamically. I'm only aware of a very small number of applications
> that violate this constraint; I'd rather provide a separate explicit
> mechanism to override built-in functions so that the optimizer can be
> aware of a potential change and avoid it.

You won't get an argument from me there, at least; __builtins__ always
struck me as weird anyway ^_^.

but-making-overridingly-global-variables-is-not-quite-the-same-as-
making-your-code-dynamic-and-introspective-ly-y'rs,

Glyph Lefkowitz
Aug 12, 2001, 2:28:08 AM
to pytho...@python.org

On Sun, 12 Aug 2001, Guido van Rossum wrote:

> Glyph Lefkowitz <gl...@twistedmatrix.com> writes:

> > I think the ZODB uses something like this, but it's implemented using
> > ExtensionClasses. I'd hate to see it become a requirement to muck around
> > in native code.

[...]

> So if you're relying on something that Zope needs to do, rest assured,
> we'll find a way to do it under the new system too!

Yes. My one hope for keeping all of Twisted's features intact through
all these changes is the fact that it has a few things in common with
Zope... I plan to milk that for all it's worth ;-)

> As I just wrote, there will be two kinds of classes: immutable
> (static) classes and dynamic classes. The latter can be modified just
> like classic ones. But since this makes them less efficient, you have
> to be explicit about this (when using the new class mechanism, which
> is not the default in 2.2).

As I mentioned in my other email, I think the less efficient behavior should be
the default. _Usually_ you need dynamism, although it would certainly be
a good thing to have a more efficient way in some cases!

> What do you mean by "real reference replacement"?

I think I meant "wishful thinking about python being more like smalltalk" :-)

@ t w i s t e d m a t r i x . c o m

[[[[ http://twistedmatrix.com ]]]]


Alex Martelli
Aug 12, 2001, 5:06:52 AM

"Guido van Rossum" <gu...@python.org> wrote in message
news:cpwv4a9...@cj20424-a.reston1.va.home.com...
...

> > > That one seems to be related to _another_ capability (of which I've
> > > seen no indication that it's also going away) -- keeping the same
> > > class object, but changing that object (rebinding method attributes
> > > thereof). Or maybe I don't understand exactly what you mean?
>
> As I just wrote, there will be two kinds of classes: immutable
> (static) classes and dynamic classes. The latter can be modified just
> like classic ones. But since this makes them less efficient, you have
> to be explicit about this (when using the new class mechanism, which
> is not the default in 2.2).

May I humbly suggest that this seems to be the same kind of design
error another great language designer, B. Stroustrup, made when he
decided that virtual methods (since they are less efficient) must be
explicitly specified as such, i.e. that the default for a C++ method
would be "non-overridable" (non-virtual). Actually, I think Stroustrup
had more justification -- he was coming from a preliminary language
that had no virtual (no override) at all, and being able to generate very
fast code from a simple compiler was his number two design priority
(right after backwards compatibility with C, priority number one). I
still believe that, despite this historical justification, in C++ as it
stands today, having the optimization as the DEFAULT and having to be
explicit to TURN OFF the optimization is the wrong way 'round. It
requires far too much foresight of the library designer to guesstimate
which methods may need to be overridden by library users, and the
need to explicitly say 'virtual' each and every time not-so-subtly
pushes AGAINST letting methods be overridable.

"Premature optimization is the root of all evil" (Kernighan, I believe).
Having generality as the default, and optimization explicitly requested,
seems to be a way to avoid *way-premature* optimization. I do
realize that this, like all other generic considerations, needs to be
tempered with common sense, good taste, and design flair -- and
that you're past master at this and have repeatedly exhibited each
and every one of these admirable qualities. I can think of cases in
Python where optimization is the default and things work fine, e.g.
local variables (although the optimization is _implicitly_ removed
by the presence of an exec statement, I guess you could classify
that as 'explicit' -- from the POV of the compiler it is, although from
that of a programmer, it 'just works'). But it seems to me that the
ability to change a class on the fly (and change an instance's class
to something different although "compatible") is very fundamental,
way more than the dirty tricks that would be enabled if local vars
weren't optimized, and quite comparable to a method being virtual.

COM Automation (MIDL) defaults any interface to being 'extensible'
(not a MIDL keyword, but you do have to explicitly specify the
'nonextensible' keyword for interfaces you want to be non-
dynamic). This is typically surprising to programmers using it
with statically type-checked languages -- but Python's not going
down THAT route, is it? With dynamic languages, extensibility of
an interface as the default is SO natural... the description of the
interface becomes a description of that subset of the interface that
is known at compile-time, so that optimization is possible for that
subset, but by default it remains possible to change it at runtime
(although methods outside of the compile-time subset will not be
anywhere as fast to call, since a general search-for-method becomes
necessary, that's OK). If and when one needs absolute optimization
for an interface, THEN one explicitly optimizes by adding the
nonextensible attribute, so compilers can just insert the fast
method-lookup and give compile-time errors for methods that
are not listed in the now-known-to-be-immutable MIDL interface
description. This allows natural-order development -- start with
a prototype of maximally dynamic nature, THEN, when the prototype
is working but too slow, get into the hairy issues of optimization,
including adding 'nonextensible' at suitable interfaces used in
hot-spots as shown by profiling (among many other optimization
techniques, of course). Don't you want Python to support "get it
working, THEN make it fast" just as well in the future as it has in
the past...?


Alex

Alex Martelli
Aug 12, 2001, 5:43:00 AM

"Guido van Rossum" <gu...@python.org> wrote in message
news:cpzo969...@cj20424-a.reston1.va.home.com...
...

> > Are there any features that we can use any longer that are not 'at our
> > own risk'? Division will be changed, __class__ assignment is going away,
> > type() means something different, even the rules of scope...
>
> But you're getting so much in return! Subclassing built-in types,
> get/set methods, class and static methods, uniform introspection...

Absolutely!!! The prospects are WONDERFUL. We're just quibbling
about the details: my deepest apologies if that makes us seem
ungrateful for the huge bounties you keep delivering.


> > Also, as far as I know, better alternatives do not exist; for example,
> > "promise" objects from a database which are latently initialized when
they
> > are accessed.
>
> If you know the type it's going to be eventually, you can use
> C.__new__() to create an uninitialized C instance.

I don't see how that would help -- maybe I'm being thick. How would
the uninitialized C instance automatically initialize itself at need?

Let me give a real-life example (it was not done in Python, but it would
have been so much simpler if it had). We have a "UI-engine" component
that our applications use for user-interfaces. A typical application we
write has a few hundred GUI dialogs, each described in a file (or more
often in a DB entry). One of the tasks of the UI engine is making the
dialogs transparent to the application-engine (AE). The AE holds
references to dialog-objects and calls methods on them when it needs
to, either to make a dialog actually appear and let the user tweak
things (non-modally -- it's typical that the user wants to leave the
dialog half-finished, go back to checking other stuff with the app,
then come back and finish the dialog), or to change some details of
a dialog dynamically.

Originally we read all of the dialog-descriptions at start-up, but that
meant the start-up was far too slow. Then we added an 'initialized'
status bit to the dialog -- but EVERY method of the dialog had to call
an 'initialize myself in case I'm not yet initialized' method at the
start... well not every method, we soon found out, because methods
that 'change details' (but don't show the dialog) should rather queue
up the change requests if the dialog is not yet initialized, and the
'actually initialize' method must apply the pending changes -- but
if a 'change details' method is called when the dialog is already
initialized, it must immediately apply the change rather than queue
the change-request. And there are methods which access some
dialog property (without showing the dialog) -- those need not
trigger initialization either, IF the property is set in a pending
(queued) change-request for the not-yet-initialized dialog...

The dialog class becomes far too large and unwieldy with this
darned status-bit. It doesn't take much to realize that this is
badly factored; the way this WANTS to be is:
-- Dialog is an abstract class (pure interface and a few
template-methods and utility-methods),
-- UninitializedDialog is a concrete subclass of Dialog whose
methods do certain things and which holds a certain
kind of state (queue of change-requests, filename or
other info needed to find initialization information),
-- TrueDialog is another concrete subclass of Dialog whose
methods do very different things and hold a totally
different kind of state
The only hitch is -- the SAME object needs to be born as
an UninitializedDialog and then BECOME a TrueDialog when
it's about to be shown (or when the transition can't be
avoided for other reasons). The AE may be holding a lot
of references to 'yonder dialog', and is entitled to consider
a dialog as a unitary object -- it's the UI server's business
to handle these optimizations transparently.

The obviously right approach would be to have the Dialog
object *change class*, of course. But we were working in
C++. So, we ended up with a typical letter/envelope
idiom, LOTS of boilerplate so that the envelope can delegate
a zillion methods to the letter, and so on.

THIS is what I think Glyph has in mind when he talks of
Promise objects. Now -- this falls in the case you describe:
I *do* know the object is eventually going to be a
TrueDialog (if the promise needs to be kept -- out of
many hundreds of dialogs, a typical run of the application
may well only need to actually-show a dozen or two).
But how would TrueDialog.__new__() help me obviate
the lack of a way to change someDialog.__class__? It
would give me an empty TrueDialog, but how would I
use it to solve the above-sketched problem...?


> I'd rather not because it's a complicated check to write, and it may
> be difficult to explain the restrictions. Here's an example of the
> kind of restriction that is unavoidable.

Doesn't this fall into an "identity of slots" case? If the __class__
can be changed only when the slots of the old and new class
are identical, isn't this decently easy to check and explain?
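
In other words, something like this hypothetical check at assignment time
(a sketch only; it ignores inherited slots):

def checked_class_change(obj, newclass):
    old = obj.__class__
    if getattr(old, '__slots__', None) != getattr(newclass, '__slots__', None):
        raise TypeError("old and new class have incompatible instance layouts")
    obj.__class__ = newclass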


> > (If you implement "reference.become(other)", for example, I won't
> > mind nearly so much <0.5 wink>)
>
> I'm not sure what that means, but if you could live with weak
> references, we could easily add a way to change the referent of a weak
> reference object.

Now THAT might help in my case -- the UI server would hand out
to the AE *weak* references to the Dialogs, and change the
referent when needed. The only issue would seem to be the
very fact that the reference is weak -- what is going to hold the
dialog object[s] alive just as long as the AE has some reference[s]
to that object and then garbage-collect them? Maybe, as well as
changing the weak reference's referent, we need a way to ask
for a weak reference that isn't really weak -- one that DOES keep
the referent alive -- but IS 'reseatable'. If such not-really-weak
references (plus maybe some indeed-weak ones) were all the
extant references to the object, then 'becomes' could be
implemented for that case, and I believe this might be OK
for the cases I have in mind.
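
In pure Python such a not-really-weak, reseatable reference is nothing
but a tiny indirector -- a strawman sketch (names invented):

class StrongRef:
    "Keeps its referent alive, but can be re-pointed at another object."
    def __init__(self, referent):
        self._referent = referent        # an ordinary (strong) reference
    def __call__(self):
        return self._referent            # same calling convention as weakrefs
    def reseat(self, new_referent):
        self._referent = new_referent

class Placeholder: pass
old, new = Placeholder(), Placeholder()
ref = StrongRef(old)
assert ref() is old
ref.reseat(new)                          # every holder of 'ref' now sees 'new'
assert ref() is new

Of course the interesting part would be having this supported at the
C level, so the indirection costs (nearly) nothing.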


> For example, we're looking into optimizing access to builtins. For
> this, we need to assume that the __builtin__ module is immutable; in
> addition, if a module doesn't have a global 'len', for example, we
> have to assume that such a global won't be inserted into the module
> dynamically. I'm only aware of a very small number of applications
> that violate this constraint; I'd rather provide a separate explicit
> mechanism to override built-in functions so that the optimizer can be
> aware of a potential change and avoid it.

Absolutely no objection to THIS: if a mechanism is provided, then
having it be more explicit to afford greater performance in the
normal case seems quite OK to me. What I'm afraid of (regarding
the change of __class__) is the risk of being left without ANY
simple and workable mechanism for tasks that, while perhaps
rare, are very important. (Regarding the __dynamic__=1, as
discussed on another post, I have a different issue, namely, what
should the default be; it seems to me that builtins being immutable
as a default, and only changed via explicit mechanisms, is a fine
match for the patterns of normal code, but *classes* being
immutable by default and needing explicit turning off of this
optimization isn't quite as smooth).


Alex

Guido van Rossum

unread,
Aug 12, 2001, 1:26:49 PM8/12/01
to
Roeland Rengelink <r.b.ri...@chello.nl> writes:

> One idiom where I use __class__ assignment is the following
>

> class State1:
>     def do_something(self):
>         ...do something...
>     def change_state(self):
>         self.__class__ = State2
> class State2:
>     def do_something(self):
>         ...do something else...
>     def change_state(self):
>         self.__class__ = State1

But you can easily do this differently -- e.g. you could do a method
assignment, or you could represent the state by an object.
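
E.g., a rough sketch of the state-object variant (names invented):

class State1:
    def do_something(self, owner):
        print 'doing something'
    def change_state(self, owner):
        owner.state = State2()

class State2:
    def do_something(self, owner):
        print 'doing something else'
    def change_state(self, owner):
        owner.state = State1()

class Thing:
    def __init__(self):
        self.state = State1()
    def do_something(self):
        self.state.do_something(self)     # delegate to the current state
    def change_state(self):
        self.state.change_state(self)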

> Speaking of __new__. Would it be an idea to give __new__() the
> responsibility for calling __init__ on the new instance.

I have thought about this. It would be less flexible, so I think not.
For example, with the current set-up you can use __new__ to create an
uninitialized instance, bypassing __init__. If __new__ called
__init__, you couldn't do that.

> BTW, I managed to build a Singleton class, using metaclasses, that gave
> me the right behaviour. This process has become slightly less painful
> in 2.2, but only slightly ;)

A metaclass is a good way to implement a singleton pattern. A
factory function is also a good way.

> The Singleton pattern is a rather trivial example of course.

(I think the singleton pattern has received way too much exposure for
such a trivial idea.)

> I think one of the things I'm looking for here is the ability to fold
> functionality, that I would traditionally put in factory functions,
> into a base class.

To some extent, you can do that -- but you'll have to forego __init__
if you want __new__ to return old instances (or write __init__ so that
it can be called more than once).
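
A sketch of that second option, assuming the 2.2 rule that __init__ is
called on whatever __new__ returns (so __init__ must tolerate being
called again on an already-initialized instance):

class Singleton(object):
    _instance = None
    def __new__(cls, *args, **kwds):
        if cls._instance is None:
            cls._instance = object.__new__(cls)
        return cls._instance              # may well be an *old* instance
    def __init__(self, value=None):
        # written so that being called more than once is harmless
        if value is not None:
            self.value = value

a = Singleton(42)
b = Singleton()
assert a is b and a.value == 42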

> __new__ seems to be ideally suited for that, but I would need to
> have control over calling __init__ too. Having said that, being able
> to play these tricks with metaclasses, is fun too. In a perverse
> sort of way...

--Guido van Rossum (home page: http://www.python.org/~guido/)

Guido van Rossum

unread,
Aug 12, 2001, 3:11:39 PM8/12/01
to
"Alex Martelli" <ale...@yahoo.com> writes:

> I don't see how that would help -- maybe I'm being thick. How would
> the uninitialized C instance automatically initialize itself at need?

For example, you could have a __getattr__ that initialized the thing
as soon as it is touched.
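
A sketch of what I mean, with a classic class (where __getattr__ is only
called when normal lookup fails, which is exactly what we want here; the
names and the 'expensive' work are invented):

class LazyThing:
    def __init__(self, filename):
        self._filename = filename        # cheap; the real work is deferred
    def __getattr__(self, name):
        if name == 'data':
            print 'expensive initialization from', self._filename
            self.data = range(10)        # stands in for the real work
            return self.data
        raise AttributeError, name

thing = LazyThing('stuff.txt')           # cheap to create
print thing.data                         # first touch triggers the real work
print thing.data                         # already there; __getattr__ not called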

> Let me give a real-life example (it was not done in Python, but it would
> have been so much simpler if it had).

I see your reputation for being verbose is not a fable. ;-)

This pattern can be programmed in lots of ways -- changing class is
just one way, and not necessarily the best.

Note that Python's dynamicism also allows other solutions to be coded
more efficiently than in C++ -- e.g. coding a proxy in Python is a
breeze using dynamic method lookup.
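
E.g., the whole proxy can be little more than this (a sketch):

class Proxy:
    def __init__(self, subject):
        self.__dict__['_subject'] = subject   # bypass our own __setattr__
    def __getattr__(self, name):
        # anything not found here is looked up on the real subject
        return getattr(self._subject, name)
    def __setattr__(self, name, value):
        setattr(self._subject, name, value)

class Real:
    def hello(self):
        return 'hello from the real object'

p = Proxy(Real())
print p.hello()                               # delegated transparently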

> > I'd rather not because it's a complicated check to write, and it may
> > be difficult to explain the restrictions. Here's an example of the
> > kind of restriction that is unavoidable.
>
> Doesn't this fall into an "identity of slots" case? If the __class__
> can be changed only when the slots of the old and new class
> are identical, isn't this decently easy to check and explain?

That's the check I meant. The problem is that the info about which
slots are defined is spread all over the place, and some of it
(e.g. slots accessed only in C) is not available for introspection.

> > I'm not sure what that means, but if you could live with weak
> > references, we could easily add a way to change the referent of a weak
> > reference object.
>
> Now THAT might help in my case -- the UI server would hand out
> to the AE *weak* references to the Dialogs, and change the
> referent when needed. The only issue would seem to be the
> very fact that the reference is weak -- what is going to hold the
> dialog object[s] alive just as long as the AE has some reference[s]
> to that object and then garbage-collect them? Maybe, as well as
> changing the weak reference's referent, we need a way to ask
> for a weak reference that isn't really weak -- one that DOES keep
> the referent alive -- but IS 'reseatable'. If such not-really-weak
> references (plus maybe some indeed-weak ones) were all the
> extant references to the object, then 'becomes' could be
> implemented for that case, and I believe this might be OK
> for the cases I have in mind.

I imagine you could probably do this the way persistency is typically
done, but I haven't thought about it that much.

> Absolutely no objection to THIS: if a mechanism is provided, then
> having it be more explicit to afford greater performance in the
> normal case seems quite OK to me. What I'm afraid of (regarding
> the change of __class__) is the risk of being left without ANY
> simple and workable mechanism for tasks that, while perhaps
> rare, are very important.

It was introduced in Python 1.5, and I never really missed it before
-- I just added it because I noticed it was easy to add. I think it's
not even documented, or if it is (I couldn't find it) it's marked
experimental. I still think there are other ways of accomplishing the
same effect, maybe using a proxy pattern.

> (Regarding the __dynamic__=1, as discussed on another post, I have a
> different issue, namely, what should the default be; it seems to me
> that builtins being immutable as a default, and only changed via
> explicit mechanisms, is a fine match for the patterns of normal
> code, but *classes* being immutable by default and needing explicit
> turning off of this optimization isn't quite as smooth).

A dynamic class will be slower than a static class, so I prefer having
to request this explicitly. But we can certainly quibble about that!
I expect that the new facilities will be seriously user-tested only
after 2.2 is released, and experience will show how often people are
changing class variables.

Guido van Rossum

unread,
Aug 12, 2001, 3:21:03 PM8/12/01
to
Glyph Lefkowitz <gl...@twistedmatrix.com> writes:

> As I mentioned in my other email, I think that less efficient should be
> the default. _Usually_ you need dynamism, although it would certainly be
> a good thing to have a more efficient way in some cases!

That may be usually for *your* classes :-). Most of *my* classes
don't need this. If the default is static, most users will not have
to change it (if my expectation is right that most classes don't need
to be dynamic), and so most classes will benefit from the speed-up of
static classes. Since classes that need to be dynamic are naturally
discovered during coding or testing (they give errors when trying to
change a class), selectively turning on the dynamics is easy -- and it
requires an explicit decision on the programmer's behalf.

On the other hand, if the default were dynamic (as it is for classic
classes), it will be very tedious to tune an application for speed --
because all classes are dynamic by default, use of dynamicism may have
crept into the design that requires expensive refactoring before a
class can be made static.

Maybe I'll make this a module-global default, like __metaclass__.
Also note that if a base class is dynamic, its subclasses inherit the
dynamic property by default. (You can still derive a static class
from a dynamic base by saying __dynamic__ = 0.)

> > What do you mean by "real reference replacement"?
>
> I think I meant "wishful thinking about python being more like
> smalltalk" :-)

But there already *is* a Smalltalk. There's no point in trying to
compete for another language's niche.

Guido van Rossum

unread,
Aug 12, 2001, 3:22:54 PM8/12/01
to
[Guido van Rossum]

> > As I just wrote, there will be two kinds of classes: immutable
> > (static) classes and dynamic classes. The latter can be modified just
> > like classic ones. But since this makes them less efficient, you have
> > to be explicit about this (when using the new class mechanism, which
> > is not the default in 2.2).

[Alex Martelli]


> May I humbly suggest that this seems to be the same kind of design
> error another great language designer, B. Stroustrup, made when he
> decided that virtual methods (since they are less efficient) must be
> explicitly specified as such, i.e. that the default for a C++ method

> would be "non-overridable" (non-virtual). [...]

In this case, see my response to Glyph in this thread.

Alex Martelli

unread,
Aug 12, 2001, 4:03:22 PM8/12/01
to
"Guido van Rossum" <gu...@python.org> wrote in message
news:cpelqhf...@cj20424-a.reston1.va.home.com...

> "Alex Martelli" <ale...@yahoo.com> writes:
>
> > I don't see how that would help -- maybe I'm being thick. How would
> > the uninitialized C instance automatically initialize itself at need?
>
> For example, you could have a __getattr__ that initialized the thing
> as soon as it is touched.

But then *none* of the methods could belong to the class?! Each and
every one to be inserted afresh in every instance's __dict__?! But then
what's the use of "making the object the right class from the start",
as that class C HAS to be empty to let __getattr__ do its job?!

Of course, if the initialization is done by __getattr__ anyway, it's
quite futile to use __new__ to "return an empty instance of the
class" -- the necessarily-empty __init__ does just as well, no? So,
again, I ask -- how is knowing from the start the class C to which
the instance will belong, when its Promise is fulfilled, and exploiting
the __new__ feature to make an empty C.__new__ -- how does
this *HELP AT ALL*? Your alleged "example" is NOT an example of
all of this glittering technology helping at all with this problem.

Which is not a problem in Python as it stands -- becomes a problem
if and when you implement the "enhancement" (?) of forbidding the
change-class-of-object idiom that Python now supports.


> > Let me give a real-life example (it was not done in Python, but it would
> > have been so much simpler if it had).
>
> I see your reputation for being verbose is not a fable. ;-)
>
> This pattern can be programmed in lots of ways -- changing class is
> just one way, and not necessarily the best.

It's patently obvious that the 'becomes'/class-change way is not the
only way to implement this, considering I described two others. It's
also pretty obvious to me that class-change is by far the most natural
implementation -- except when targeting a crippled language that
doesn't support class-change.

We need polymorphic behavior along a given interface (mostly
abstract, though it can supply some Template methods [in the
GoF terminology] and utilities). There are several implementations
of this interface, each requiring distinct code and state. A very
obvious case for placing each implementation in its own class.

Except we need this dual behavior, and switch from one to the
other, to be present in "the same object" from the point of view
of client-code. So, if a language forbids an object's class to
change, it can't "really" be the same object -- we're forced to
add a layer of indirectness, one way or another. I still think
letter-envelope is the best idiom for this in C++. Automatic
delegation does ease the pain where present, as you suggest:

> Note that Python's dynamicism also allows other solutions to be coded
> more efficiently than in C++ -- e.g. coding a proxy in Python is a
> breeze using dynamic method lookup.

Again a class with just a __getattr__ -- an "indirector". Pretty
powerful idiom, of course, as per the recipe I posted to:
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52295.
But it's ironic -- there, I used an indirector to work around the
inability of inheriting from a built-in object. Now, I look forward
to pensioning off that idiom... and I may have to resurrect it to work
around the inability of changing an object's __class__...!-)


> > > I'd rather not because it's a complicated check to write, and it may
> > > be difficult to explain the restrictions. Here's an example of the
> > > kind of restriction that is unavoidable.
> >
> > Doesn't this fall into an "identity of slots" case? If the __class__
> > can be changed only when the slots of the old and new class
> > are identical, isn't this decently easy to check and explain?
>
> That's the check I meant. The problem is that the info about which
> slots are defined is spread all over the place, and some of it
> (e.g. slots accessed only in C) is not available for introspection.

I guess I'll have to look at the source to understand why that's
so inevitable -- right now it escapes me. (Is it going to be part
of 2.2alpha2, or do I have to get a CVS tree?)


> > > I'm not sure what that means, but if you could live with weak
> > > references, we could easily add a way to change the referent of a weak
> > > reference object.
> >
> > Now THAT might help in my case -- the UI server would hand out

Thinking about it, it helps because it's an indirector -- a built-in
one. It would similarly help in other cases where indirectors help,
even when changing-class is anyway inapplicable.

> > references (plus maybe some indeed-weak ones) were all the
> > extant references to the object, then 'becomes' could be
> > implemented for that case, and I believe this might be OK
> > for the cases I have in mind.
>
> I imagine you could probably do this the way persistency is typically
> done, but I haven't thought about it that much.

I think it would suffice to add an option to the existing weak
references to let them incref/decref their referent (in addition
to letting them be reseated, which you say will be easy).

A kludge that's come to mind since, usable if weak refs were
reseatable but still weak, is to give the client a *pair* of
references -- one normal, one weak, initially both to the
fake-object. The client's tasked to only use the second item
of the pair to address the object, but warned that letting go of the
FIRST item is what may invalidate the object. When
the fake object needs to mutate to a true one, it reseats the
weak-reference and sets in itself a normal-reference to the
true object it's generating. Peculiar indeed, but...

> > Absolutely no objection to THIS: if a mechanism is provided, then
> > having it be more explicit to afford greater performance in the
> > normal case seems quite OK to me. What I'm afraid of (regarding
> > the change of __class__) is the risk of being left without ANY
> > simple and workable mechanism for tasks that, while perhaps
> > rare, are very important.
>
> It was introduced in Python 1.5, and I never really missed it before
> -- I just added it because I noticed it was easy to add. I think it's
> not even documented, or if it is (I couldn't find it) it's marked
> experimental. I still think there are other ways of accomplishing the
> same effect, maybe using a proxy pattern.

Of course, just as there are other ways of accomplishing raising-to-
power, besides having it as a built-in -- it IS just repeated
multiplication, after all. That's no argument for removing **
from the language, it seems to me. It IS no doubt rarely used
AND easily mis-used [code such as x**3+2.2*x**2+1.4*x+3.6
rather than 3.6+x*(1.4+x*(2.2+x)) is most likely slower] -- but
that's no real argument for taking ** away, as it may well be the
most natural/clear/readable way to express some problems, and
also (if and when used appropriately) faster than alternatives.


> A dynamic class will be slower than a static class, so I prefer having
> to request this explicitly. But we can certainly quibble about that!

As I said, exactly Stroustrup's reasoning in C++'s design -- a non
overridable method is faster than a virtual one, so he preferred to
make users request virtuality explicitly. Like anybody who's ever
taught C++ or helped beginners in that language fix mistakes, I've
had occasion to bemoan that design choice repeatedly -- even though,
with performance such an important priority for C++, and historical
compatibility reasons, it's quite understandable. It appears to me
making static classes the default is a similar choice, requiring lots of
up-front design (dynamic classes are used, for example, to change the
behavior of a running program -- to refactor it; unless dynamic classes
have been specified beforehand in all the important places, the program
needs to be stopped), facilitating premature optimization,
and without Stroustrup's particular excuses (surely performance is not
as key a criterion for Python as for C++, and in this case we'd be
going _against_ historical compatibility).

> I expect that the new facilities will be seriously user-tested only
> > after 2.2 is released, and experience will show how often people are
> changing class variables.

You think people will be changing class variables _less_ often than
now because of 2.2's new facilities? I don't see the relevance, but
I guess I must have missed something. The new facilities appear
to be very deep and important, but in areas quite different from
those in which class-object changes are useful, it seems to me.


Alex

Roman Suzi

unread,
Aug 12, 2001, 4:44:58 PM8/12/01
to pytho...@python.org
On Sun, 12 Aug 2001, Guido van Rossum wrote:

>> (Regarding the __dynamic__=1, as discussed on another post, I have a
>> different issue, namely, what should the default be; it seems to me
>> that builtins being immutable as a default, and only changed via
>> explicit mechanisms, is a fine match for the patterns of normal
>> code, but *classes* being immutable by default and needing explicit
>> turning off of this optimization isn't quite as smooth).
>
>A dynamic class will be slower than a static class, so I prefer having
>to request this explicitly. But we can certainly quibble about that!
>I expect that the new facilities will be seriously user-tested only
>after 2.2 is released, and experience will show how often people are
>changing class variables.

<< dynamic-on-demand >>

I think that only builtin (written in C) classes should be static by
default. The inability to change a class's variables would remove some of
Python's niceness: right now it is possible to _simply_ add/remove
attributes to/from classes and instances, and that adds a lot of freedom.

Maybe it is possible to inherit builtin methods from builtin classes
without such restrictive measures? (Dynamics would be turned on not by some
__dynamic__ = 1, but by the fact of inheriting from a builtin class.
This way one could still inherit, but it would cause some slowdown.)

If I understand correctly, C functions are speedy because they find
attributes in static places, while user-defined classes use costly lookups.

Maybe it was already suggested, but perhaps __dynamic__ could be changed
implicitly: when an access which needs dynamics occurs. Before that, the
class is speedy and static (but potentially dynamic - dynamic-on-demand ;-)

This way most classes will remain speedy (if they do not override but only
add), while those which choose to make deeper changes will pay with longer
lookups after the first such access switches the class's __dynamic__ on.

And it will all be implicit and nobody notices!

(I hope it is not total nonsense)


Sincerely yours, Roman Suzi
--
_/ Russia _/ Karelia _/ Petrozavodsk _/ r...@onego.ru _/
_/ Sunday, August 12, 2001 _/ Powered by Linux RedHat 6.2 _/
_/ "Is it possible to feel gruntled?" _/


Roman Suzi

unread,
Aug 12, 2001, 4:55:43 PM8/12/01
to Guido van Rossum, pytho...@python.org
On Sun, 12 Aug 2001, Guido van Rossum wrote:

>That may be usually for *your* classes :-). Most of *my* classes
>don't need this. If the default is static, most users will not have
>to change it (if my expectation is right that most classes don't need
>to be dynamic), and so most classes will benefit from the speed-up of
>static classes. Since classes that need to be dynamic are naturally
>discovered during coding or testing (they give errors when trying to
>change a class), selectively turning on the dynamics is easy -- and it
>requires an explicit decision on the programmer's behalf.

Well, it seems appropriate to repeat my thought about
dynamic-on-demand here.

The thought is simple: let all classes be static UNTIL an operation is
requested which needs them to be dynamic. Then the __dynamic__ attribute
becomes 1 and everybody is happy: those who want to sacrifice speed for
flexibility just let those operations happen. Those who use static classes
just have them.

Adding EXPLICIT __dynamic__ makes things hairy and no better
than adding "static" before "class".

class SleepyStatic:
    def __init__(self, x):
        if x:
            SleepyStatic.newattr = x   # now __dynamic__ == 1
        else:
            self.newattr = x           # __dynamic__ remains not defined or 0


- this allows Python to be flexible (when needed) and
more speedy (when this is wanted more).

Glyph Lefkowitz

unread,
Aug 12, 2001, 10:57:34 PM8/12/01
to pytho...@python.org

On Sun, 12 Aug 2001, Alex Martelli wrote:

> > A dynamic class will be slower than a static class, so I prefer having
> > to request this explicitly. But we can certainly quibble about that!
>
> As I said, exactly Stroustrup's reasoning in C++'s design -- a non
> overridable method is faster than a virtual one, so he preferred to
> make users request virtuality explicitly. Like anybody who's ever
> taught C++ or helped beginners in that language fix mistakes, I've
> had occasion to bemoan that design choice repeatedly -- even though,
> with performance such an important priority for C++, and historical
> compatibility reasons, it's quite understandable. It appears to me
> making static classes the default is a similar choice, requiring lots of
> up-front design (dynamic classes are used, for example, to change the
> behavior of a running program -- to refactor it; unless dynamic classes
> have been specified beforehand in all the important places, the program
> needs to be stopped), facilitating premature optimization,
> and without Stroustrup's particular excuses (surely performance is not
> as key a criterion for Python as for C++, and in this case we'd be
> going _against_ historical compatibility).

Let me underscore this once, and put it in the terms I think of it in:
with these changes, you will be introducing the worst design error of C++
into Python. Indeed, the _central_ problem with C++ is that it always
prefers efficiency over dynamism, even in its so-called "high level"
features.

If python were not as dynamic as it is, it would be _completely_
uninteresting to me; especially given that these changes are not going to
propel python into the running as even a mildly fast programming language
(do you believe that you can approach the performance of HotSpot or GCJ by
making classes static? The speed of the VM for Squeak with its JIT and
massively optimized GC?). Features like class assignment constitute not
only a convenience for me, but in many ways a near religious experience
about how important making one's code dynamic is.

(I tried to think of a nicer way to say this; and I apologise in advance
for failing to do so, but...) This exchange confirms my suspicion, that
I'd seen voiced once before on this mailing list, that Guido doesn't
really know what it is that makes Python cool. To me, it's obvious that
the _point_ of the whole thing is dynamicism; of course, as Travis
Hartwell said of Twisted, "... it seems like you guys program a different
Python than I do. You use its dynamic nature and all of that so much more
than I have ever thought of." Maybe it's only a matter of time before we
really _are_ using a different language. :-(

> > I expect that the new facilities will be seriously user-tested only
> > after 2.2 is released, and experience will show how often people are
> > changing class variables.
>
> You think people will be changing class variables _less_ often than
> now because of 2.2's new facilities? I don't see the relevance, but I
> guess I must have missed something. The new facilities appear to be
> very deep and important, but in areas quite different from those in
> which class-object changes are useful, it seems to me.

It is an act of supreme arrogance -- which all programmers commit, on
frequent occasions, myself included -- to assume that the users of your
code will not want to change it. Slightly less arrogant, but perhaps more
annoying, is the assumption that the users of your code will not want to
change it at run-time. Every time I have made this assumption, I have
been wrong. (Even when I was the only "client" of my code! ^_^)
Sometimes that wrongness is worth paying for, because of size or speed
considerations -- but that event is sufficiently rare that I'd much rather
have to explicitly say "__be_fast_not_nice__ = 1".

I've heard it said that the art of software engineering is the art of
deferring binding until later and later. Python defers it as long as
possible, with really excellent results. To make it do so any sooner
would be to make it less artful :-)

______ __ __ _____ _ _
| ____ | \_/ |_____] |_____|
|_____| |_____ | | | |
@ t w i s t e d m a t r i x . c o m

http://twistedmatrix.com/users/glyph


Bengt Richter

unread,
Aug 12, 2001, 11:20:10 PM8/12/01
to
On Sun, 12 Aug 2001 01:35:16 GMT, Guido van Rossum <gu...@python.org> wrote:

>Glyph Lefkowitz <gl...@twistedmatrix.com> writes:
>
>> Am I correct in understanding from this thread that there is an intent to
>> remove the ability to assign an instance's __class__ attribute?
>
>Yes, I'd like to remove this. See my previous post in this thread for
>more of an explanation of the problem. But I haven't decided yet!
>This thread will help me figure out how big of a deal it will be.
>

Assigning to an instance's __class__ attribute strikes me as a kind of in-place
coercion of the instance to a new type/class (hopefully compatible ;-).

Could it be handled as a standard coercion (which might fail safely on its own
terms) and assignment, optimized down to rebinding __class__ when some kind of
fast check revealed that that was safe?

I guess it comes down to what assumptions you want to be able to preserve (and/or
provide checks and alternate handling for) through read/compile/load/execute phases
and their permutations -- but whatever is allowed, the result must be blamable
on the programmer ;-)

Aahz Maruch

unread,
Aug 13, 2001, 12:24:37 AM8/13/01
to
In article <9l5h6...@enews4.newsguy.com>,

Alex Martelli <ale...@yahoo.com> wrote:
>
>"Premature optimization is the root of all evil" (Kernighan, I believe).

Knuth.

In return for this little correction, would you please explain in simple
words exactly what it is that you're talking about? I'm reasonably
adept at understanding plain classes, but you've been going way over my
head here. I suspect other readers of c.l.py would like to understand,
too.


--
--- Aahz <*> (Copyright 2001 by aa...@pobox.com)

Hugs and backrubs -- I break Rule 6 http://www.rahul.net/aahz/
Androgynous poly kinky vanilla queer het Pythonista

Internet $tartup$: Arbeit ueber alles

Rainer Deyke

unread,
Aug 13, 2001, 12:52:38 AM8/13/01
to
"Glyph Lefkowitz" <gl...@twistedmatrix.com> wrote in message
news:mailman.997671441...@python.org...

> Let me underscore this once, and put it in the terms I think of it in:
> with these changes, you will be introducing the worst design error of C++
> into Python. Indeed, the _central_ problem with C++ is that it always
> prefers efficiency over dynamism, even in its so-called "high level"
> features.

I strongly disagree here. Quite aside from the efficiency issue, C++'s lack
of dynamism is a feature, not a bug. (This does not change the fact that
C++ is an unholy mess.) Such a feature would not be appropriate for Python,
however.


--
Rainer Deyke (ro...@rainerdeyke.com)
Shareware computer games - http://rainerdeyke.com
"In ihren Reihen zu stehen heisst unter Feinden zu kaempfen" - Abigor


Christian Tanzer

unread,
Aug 13, 2001, 1:41:43 AM8/13/01
to Guido van Rossum, pytho...@python.org

Guido van Rossum <gu...@python.org> wrote:

> Roeland Rengelink <r.b.ri...@chello.nl> writes:
>
> > One idiom where I use __class__ assignment is the following
> >
> > class State1:
> >     def do_something(self):
> >         ...do something...
> >     def change_state(self):
> >         self.__class__ = State2
> > class State2:
> >     def do_something(self):
> >         ...do something else...
> >     def change_state(self):
> >         self.__class__ = State1
>
> But you can easily do this differently -- e.g. you could do a method
> assignment, or you could represent the state by an object.

For some definition of easy...

If the change affects several methods instead of one you could still
use method assignments but that's not easy compared to a simple
__class__ assignment.

A proxy object is a clean solution but introduces additional
complexity.

Disclaimer: I currently don't use __class__ assignment (except once in
very experimental code). Yet I know several design patterns where it
provides a convenient solution --

--
Christian Tanzer tan...@swing.co.at
Glasauergasse 32 Tel: +43 1 876 62 36
A-1130 Vienna, Austria Fax: +43 1 877 66 92


Roeland Rengelink

unread,
Aug 13, 2001, 2:44:51 AM8/13/01
to

Guido van Rossum wrote:
>
> Roeland Rengelink <r.b.ri...@chello.nl> writes:
>
> > One idiom where I use __class__ assignment is the following
> >
> > class State1:
> >     def do_something(self):
> >         ...do something...
> >     def change_state(self):
> >         self.__class__ = State2
> > class State2:
> >     def do_something(self):
> >         ...do something else...
> >     def change_state(self):
> >         self.__class__ = State1
>
> But you can easily do this differently -- e.g. you could do a method
> assignment, or you could represent the state by an object.
>

Sure, and if...else worked fine too ;)

I have only two arguments for using this solution over the others. First,
I think class assignment expresses the intent of the code rather elegantly.
Second -- and this is an even weaker argument -- it was the most efficient
solution in my particular application (speed-wise when compared with
if..else or method-dispatch to a state-object, memory-wise when compared
to method assignments to all instances).

I am, of course, not qualified to make the trade-off between these
benefits, and the cost of making class-assignment available.

> > Speaking of __new__. Would it be an idea to give __new__() the
> > responsibility for calling __init__ on the new instance.
>
> I have thought about this. It would be less flexible, so I think not.
> For example, with the current set-up you can use __new__ to create an
> uninitialized instance, bypassing __init__. If __new__ called
> __init__, you couldn't do that.
>

I didn't mean to let object.__new__ call the init, which indeed loses
flexibility. What I did mean is the following change to type_call. I have
no idea what side effects I may have missed here, but it does exactly what
I want.

staticforward PyObject *
slot_tp_new(PyTypeObject *type, PyObject *args, PyObject *kwds);

static PyObject *
type_call(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
    PyObject *obj;

    if (type->tp_new == NULL) {
        PyErr_Format(PyExc_TypeError,
                     "cannot create '%.100s' instances",
                     type->tp_name);
        return NULL;
    }

    obj = type->tp_new(type, args, NULL);
    if (obj != NULL) {
        type = obj->ob_type;
        /* if the user defines his own __new__,
           let him call __init__ explicitly */
        if (type->tp_new != slot_tp_new) {
            if (type->tp_init != NULL &&
                type->tp_init(obj, args, kwds) < 0) {
                Py_DECREF(obj);
                obj = NULL;
            }
        }
    }
    return obj;
}

Usage:

class A(object):
    def __new__(object_type, *args, **kwargs):
        new_inst = object.__new__(object_type)
        new_inst.__init__(*args, **kwargs)
        return new_inst
    def new(object_type):
        return object.__new__(object_type)
    new = classmethod(new)
    def __init__(self, *args, **kwargs):
        print 'Hi...'


>>> a = A()
Hi...
>>> b = A.new()
>>> a
<A object at 0x81807b8>
>>> b
<A object at 0x817dd60>
>>> type(a)
<type 'A'>
>>> type(b)
<type 'A'>


I.e. the programmer can define __new__, to determine what happens with
A(), and define A.new() to give bare instances. A gain in flexibility (I
think)

Cheers,

Alex Martelli

unread,
Aug 13, 2001, 2:56:22 AM8/13/01
to
"Aahz Maruch" <aa...@panix.com> wrote in message
news:9l7kq5$6fn$1...@panix2.panix.com...

> In article <9l5h6...@enews4.newsguy.com>,
> Alex Martelli <ale...@yahoo.com> wrote:
> >
> >"Premature optimization is the root of all evil" (Kernighan, I believe).
>
> Knuth.
>
> In return for this little correction, would you please explain in simple
> words exactly what it is that you're talking about? I'm reasonably
> adept at understanding plain classes, but you've been going way over my
> head here. I suspect other readers of c.l.py would like to understand,

I had posted a simple solution to a problem "how do I build an _empty_
object of some class bypassing the class's __init__ (which does a lot of
work) so I can then manually copy the relevant parts of the state":

def empty_copy(object):
    class Temp: pass
    result = Temp()
    result.__class__ = object.__class__
    return result

Guido explained that this would not work any more in 2.2 or a bit later,
because the assignment to result.__class__ would be eventually
disallowed. Now, this is not a big problem for this particular case: not
only will there be a new class special method called __new__ that makes
an empty instance, but even today there are workable alternatives, such
as "return new.instance(object.__class__)" (it has not been discussed
what happens to module new in 2.2 and later, but it's a rather "deep
internals" module, so nobody will be surprised if it changes). However,
there are other use-cases for "changing the __class__ of instance x" --
none used very frequently, but, some of us believe, rather a significant
Python feature (Guido isn't very convinced of the latter point).

In the ensuing discussion, another related issue emerged. Today, a
Python class is a mutable object; if and when you need to change it,
it's as easy as changing any other object -- and all instances of the
class are implicitly "updated" when the class object is. This is not
used very often, either, but the fact that any class object CAN be so
updated is also significant: it may be used to fix a running program
with new code without huge investment in infrastructure as would be
needed to perform similar tasks in a less-dynamic language.
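
Concretely, the kind of on-the-fly fix that is trivial today:

class Server:
    def handle(self, request):
        return 'old, buggy behavior for %r' % request

s = Server()                   # imagine many long-lived instances like this

def fixed_handle(self, request):
    return 'new, fixed behavior for %r' % request

Server.handle = fixed_handle   # patch the class in the running program...
print s.handle('req1')         # ...and every existing instance picks it up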

However, some classes will be unchangeable in 2.2 and/or later, and
Guido plans to make unchangeability the *default*, requiring users
to explicitly set class attribute __dynamic__ to 1 to make a class
changeable (there may be some help: the default __dynamic__ for
classes in a module may be a module attribute, and a class may
inherit it from ancestors). Having optimization as the default, with
dynamicity a special-case requiring explicit request, feels like more
of a C++ attitude than a Python one to several of us (although there
are Pythonic precedents, such as local variables of a function). Such
an optimization may be deemed "way-premature", as it occurs at
language-design time:-). If dynamism is used to fix potential bugs
in a running server, for example, it's hard to see how the user may
be required to predict in advance which classes he's writing have
bugs and thus may need to be updated on the fly. Having the
possibility of turning class changeability explicitly *off* sounds like
a great optimization -- but, some of us feel, it should be the
general case that's the default, optimization working when explicitly
requested, rather than vice versa. (I.e., NOT as in C++'s "virtual"
for methods and repeated inheritance, which many, though not all,
think of as a C++ misfeature).

Guido's counter-argument is that classes that need to be dynamic
will emerge in testing, while having dynamism as the default might
lead to it being used wantonly and making later optimization hard.
On this issue as well as unchangeability of __class__, he seems to
be very motivated towards performance/optimization possibilities,
which may explain the eerie C++ parallels (performance was always
paramount in C++'s design, while, so far, it doesn't seem to have
had all that strong an influence on Python).

He may be right, of course (he generally is) -- perhaps we've all
grown used to a Python that's in good part defined by dynamic
possibilities that "just happened" to fall out of implementation
techniques and were never a design-intent (if somebody knows
the design-intent, it should be him:-), and giving up on those
will have a general benefit in terms of performance increase. I
still believe that when I need performance I code or recode in
C++, or in C, and I'd like Python to stay simple and wonderful.
But many (particularly those who have not exploited the dynamic
possibilities, are not familiar with C++, etc) may agree with Guido.

It will surely be a very different language when this type/class
unification thing gets finished...!


Alex

Roman Suzi

unread,
Aug 13, 2001, 3:45:38 AM8/13/01
to pytho...@python.org
On Mon, 13 Aug 2001, Roman Suzi wrote:

> For example, by using __builtin__ trick:
>
> class MyNumber(integer):
>
> def myadd(self, x, y):
> return x*y
>
> __builtin__.add = myadd
>
>
> The essence of the idea is to gather builtin (C) methods into a special
> namespace: __builtin__, the same way it is done with builtin functions.
> This will allow Python programmers to remember fewer details and
> will give shadowing for free!!!
>
> The summary of the proposal is to leave things as they are, but to
> introduce a __builtin__ namespace for optimized methods.
>
> (Name __builtin__ could be different)

I forgot to mention that inheritance rules for __builtin__ could
be made explicit, if needed:


class MyNumber(no_builtins(type(1))):
    ...

or

class MyNumber(type(1)):
    ...

where no_builtins() is a function on classes, making changes
to, say, __builtin__.

Or even:

class MyNumber(inline(type(1))):
    ...

This will bring type(1).__builtin__ directly into
MyNumber's namespace, wrapping raw C methods if necessary.

I think, this is good idea.

And I also think class definitions should allow the use of sequences,
so that some filter function could be applied:

class C(filter(myrule, [A,B])):
    pass

Sincerely yours, Roman A.Suzi
--
- Petrozavodsk - Karelia - Russia - mailto:r...@onego.ru -


Roman Suzi

unread,
Aug 13, 2001, 3:27:42 AM8/13/01
to pytho...@python.org, Guido van Rossum

Hello,

I am against PEP 252 in its present form!

It makes OO in Python too complex without obvious necessity.

One of the cool features of Python I am glad it has is the ability to
change class attributes on the fly. (I already posted about it to Guido
some time ago (maybe a year ago), specifically asking NOT TO CHANGE this
Python ability.)

No API will make things as simple as they are shown below:

This is my test of whether the changes to Python are appropriate
for me:

--------------------------------------------------------

def do_fly(self):
    return "I am flying."
def do_swim(self):
    return "I am swimming."
def do_sing(self):
    return "I am singing."

class Everybody:
    def i_am(self):
        return "I am %s." % self.__class__.__name__
    def __getattr__(self, attname):
        """This is added only for the text below to be "nice",
        in reality it is not needed"""
        def cant_do(self=None, action=attname):
            return "I can't %s." % action
        return cant_do

class Fish(Everybody):
    swim = do_swim

class Mermaid(Everybody):
    sing = do_sing
    swim = do_swim

class Bird(Everybody):
    fly = do_fly
    sing = do_sing

class Man(Everybody):
    sing = do_sing


f = Fish(); r = Mermaid(); b = Bird(); m = Man()
print f.i_am(), f.swim(), f.sing(), f.fly()
print r.i_am(), r.swim(), r.sing(), r.fly()
print b.i_am(), b.swim(), b.sing(), b.fly()
print m.i_am(), m.swim(), m.sing(), m.fly()
print "Man learned to swim."
Man.swim = do_swim
print m.i_am(), m.swim(), m.sing(), m.fly()

-----------------------------------------------------------

I am Fish. I am swimming. I can't sing. I can't fly.
I am Mermaid. I am swimming. I am singing. I can't fly.
I am Bird. I can't swim. I am singing. I am flying.
I am Man. I can't swim. I am singing. I can't fly.
Man learned to swim.
I am Man. I am swimming. I am singing. I can't fly.

-----------------------------------------------------------

So I fully understand Glyph's concerns!

As I said already, PEP 252 makes things too complex. Probably there is a
more obvious way to make builtin classes inheritable and to make
type==class work.

For example, by using the __builtin__ trick:

class MyNumber(integer):

    def myadd(self, x, y):
        return x*y

    __builtin__.add = myadd


The essence of the idea is to gather builtin (C) methods into a special
namespace: __builtin__, the same way it is done with builtin functions.
This will allow Python programmers to remember fewer details and
will give shadowing for free!!!

The summary of the proposal is to leave things as they are, but to
introduce a __builtin__ namespace for optimized methods.

(Name __builtin__ could be different)

gzeljko

unread,
Aug 13, 2001, 5:39:44 AM8/13/01
to pytho...@python.org
From: Guido van Rossum <gu...@python.org>

> I don't know how familiar you are with Python's C-level internals. If
> you are, you'll appreciate the problem if I took a list object and
> changed its type pointer to the dictionary type -- the instance
> lay-out of a dictionary is different, and all the methods would be
> using the list data as if it were dictionary data. Recipe for
> disaster. Likewise, changing a featureless object into a list or dict
> would at the very least require growing the size of the instance; this
> would require a realloc(), which may move the object in memory. But
> if there are other references to the object, these would all have to
> be updated. Python's run-time architecture just doesn't support that.
>

Is pointer indirection here too expensive?

excuse-me-for-this-kind-of-post-ly-y'rs,
gzeljko

Glyph Lefkowitz

unread,
Aug 13, 2001, 5:58:22 AM8/13/01
to pytho...@python.org

On Mon, 13 Aug 2001, Rainer Deyke wrote:

> "Glyph Lefkowitz" <gl...@twistedmatrix.com> wrote in message
> news:mailman.997671441...@python.org...
> > Let me underscore this once, and put it in the terms I think of it in:
> > with these changes, you will be introducing the worst design error of C++
> > into Python. Indeed, the _central_ problem with C++ is that it always
> > prefers efficiency over dynamism, even in its so-called "high level"
> > features.
>
> I strongly disagree here. Quite aside from the efficiency issue, C++'s lack
> of dynamism is a feature, not a bug. (This does not change the fact that
> C++ is an unholy mess.) Such a feature would not be appropriate for Python,
> however.

Perhaps I should have put it differently -- the central design error of
C++ *as a high level language*. (The design failures as a low-level
language are quite discrete, of course.) Python is certainly not and will
never be suitable for low-level, let alone "systems" programming, so yes,
such a feature is inappropriate.

Guido van Rossum

unread,
Aug 13, 2001, 7:18:01 AM8/13/01
to Roman Suzi, pytho...@python.org
> I am against PEP 252 in its present form!

The problem you indicate below has to do with PEP 253, not PEP 252.

> It makes OO in Python too complex without obvious necessity.

Huh? I've always thought that the inability to subclass built-in
classes made OO more complex than necessary, and that's what these
PEPs (together with PEP 254, which won't be part of Python 2.2) are
removing.

> One of the cool features of Python I am glad it has is the ability
> to change class attributes on the fly (I've already posted it to
> Guido some time ago (maybe a year ago), specifically asking NOT TO
> CHANGE this Python ability.

You will still be able to do that, by putting __dynamic__ = 1 in your
base class (it is automatically inherited).

> No API will make things as simple as they are shown below:
>
> This is my test if the changes to the Python are appropriate
> for me:

[snip]

Here's a version that works with Python 2.2. I've commented lines I
changed or added. The "__dynamic__ = 1" line must be added because
new-style classes will be immutable by default (note: classic classes
are not affected in Python 2.2). The try/except statement with the
call to object.__getattr__() is because new-style classes allow you to
overload __getattr__ for all attribute accesses -- classic classes
only call __getattr__ when "normal" attribute access fails, which is
less flexible.

----------------------------------------------------------------------

def do_fly(self):
    return "I am flying."
def do_swim(self):
    return "I am swimming."
def do_sing(self):
    return "I am singing."

class Everybody(object):
    __dynamic__ = 1                                  # Added

    def i_am(self):
        return "I am %s." % self.__class__.__name__
    def __getattr__(self, attname):
        """This is added only for the text below to be "nice",
        in reality it is not needed"""
        try:                                         # Added
            return object.__getattr__(self, attname) # Added
        except AttributeError:                       # Added
            def cant_do(self=None, action=attname):
                return "I can't %s." % action
            return cant_do

The above code gives the identical output under Python 2.2. Try it!

> So I fully understand Glyph's concerns!
>
> As I said already, PEP 252 makes things too complex. Probably there is a
> more obvious way to make builtin classes inheritable and to make
> type==class work.
>
> For example, by using the __builtin__ trick:
>
> class MyNumber(integer):
>
>     def myadd(self, x, y):
>         return x*y
>
>     __builtin__.add = myadd

Sorry -- I have no idea what you are proposing here. What is
'integer'? What is the meaning of '__builtin__'? It can't be the
__builtin__ module. But then what is it?

> The essence of the idea is to gather builtin (C) methods into a special
> namespace: __builtin__, the same way it is done with builtin functions.
> This will allow Python programmers to remember fewer details and
> will give shadowing for free!!!

Three exclamation points do not an argument make.

> The summary of the proposal is to leave things as they are, but to
> introduce a __builtin__ namespace for optimized methods.
>
> (Name __builtin__ could be different)

Roman, forgive me for saying so, but I think you're way out of your
league here. Rather than trying to propose your own design, please
study my example above and see if you're still unhappy.

Guido van Rossum

unread,
Aug 13, 2001, 7:28:31 AM8/13/01
to Roman Suzi, pytho...@python.org
> The thought is simple: let all classes be static UNTIL an operation is
> requested which needs them to be dynamic. Then the __dynamic__ attribute
> becomes 1 and everybody is happy: those who want to sacrifice speed for
> flexibility just let those operations happen. Those who use static
> classes just have them.

I wish it were so easy. Consider this example:

class C(object):
    def meth(self): return "hello world"

class D(C):
    pass

x = D()

C.meth = lambda self: "goodbye world"

print x.meth()

Obviously you want this to print "goodbye world". In order to do
this, the implementation must know that if C becomes a dynamic class
(by the assignment to C.meth), it must also make D (and all other
classes derived from C) dynamic. This would require C to keep track
of all its derived classes. That's not easy: it would require the use
of weak references.

Your proposal would also make it impossible to implement instances of
static classes differently from instances of dynamic classes. For a
static class, compile-time analysis (well, at least
class-definition-time analysis) can determine the exact set of
instance variables used by the methods defined by the class, and this
can be used to allocate the instance variables more efficiently than
using a dictionary. But this can only be done when we know the class
will remain static over its lifetime.

Guido van Rossum

unread,
Aug 13, 2001, 7:37:46 AM8/13/01
to tan...@swing.co.at, pytho...@python.org
[Christian Tanzer]
> Would it be possible to write a meta-class allowing the
> dynamic change of __class__? And if so, how difficult would that be?

That would be possible if you're willing to write it in C. It would
be somewhat difficult if you wanted to make sure that abuse couldn't
lead to core dumps, because you'd have to implement a hairy check
(which I haven't figured out yet but which I believe is possible in
principle) that determines whether the old and the new __class__ have
compatible instance structure.

> I'd like to support the proposal of Glyph and Alex to make 0 the
> default for `__dynamic__`. IMHO, optimization should be restricted to
> those few modules where it is really necessary.

See my response earlier in this thread.

> Dynamicity is one of the really strong points of Python -- eye popping
> as Glyph just called it.

Eye popping can be a negative point too. I'd prefer a warning before
my eyes are popped.

> > Understood. Nevertheless, all evidence suggests that Twisted is not
> > typical Python code. :-)
>
> What is typical? I'd assume that a tiny percentage of Python code uses
> the really dynamic features.

Tiny indeed.

> OTOH, this tiny percentage might itself be used by a lot of other
> code. At least in my applications some modules do make use of some
> eye-popping features and they tend to be used all over the place.

Because __dynamic__ is inherited, if you inherit from an eye-popping
class, your class will automatically be eye-popping too.

> Please be careful in this crusade... The language you *wanted* to design
> might have been not quite as
> excellent/wonderful/<insert-your-favorite-exclamation-of-insane-
> greatness-here> as the one you came up instead :-)

Please trust me... The changes I have in mind might not be so
devastating for Python's beauty as you seem to be thinking. :-)

> I wouldn't mind if I had to ask for some dynamic features more
> explicitly than now but I'd really be hard hit if they went away
> entirely. I'd love to get better performance but not at the price of
> loosing all this dynamicism. I if wanted to use a non-dynamic language
> I'd know way too many candidates vying to make my life unhappy <0.1
> wink>

Some people like Python for its extreme dynamicism. But there are
other languages in that niche (like Lisp). Most people like Python
because it's so darn readable and maintainable. Unbridled dynamicism
goes against that. I am striving for a balance that allows most forms
of dynamicism, but requires a declaration in advance for the more
extreme kinds.

> Separate explicit mechanisms would be much better than just dropping
> dynamicism. In fact, despite the need for changing working code
> <sigh>, making an external change of a module explicit would probably
> be a good thing.

Exactly. I'm not planning to drop dynamicism -- I'm planning to make
it more explicit.

> Maybe module objects could even grow `get-set` magic in the process
> <duck>.

You probably don't realize it, but you can write modules with get/set
magic now, by stuffing an instance in sys.modules[__name__].
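
The trick, roughly (a sketch -- put something like this at the end of the
module you want to enchant; the print statements stand in for whatever
get/set magic you actually want):

import sys

class _ModuleWrapper:
    def __init__(self, realmodule):
        # keep a strong reference so the real module's dict isn't cleared
        self.__dict__['_mod'] = realmodule
    def __getattr__(self, name):
        print 'getting', name
        return getattr(self._mod, name)
    def __setattr__(self, name, value):
        print 'setting', name
        setattr(self._mod, name, value)

sys.modules[__name__] = _ModuleWrapper(sys.modules[__name__])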

Andrew Kuchling

unread,
Aug 13, 2001, 8:19:32 AM8/13/01
to
I have an example where being unable to assign __class__ is a problem.
ExtensionClass already has this limitation today; you can assign
extclass.__class__, but that just adds an __class__ key to the
instance dictionary and doesn't really affect the object's class.

Now, consider a collection of instances in a ZODB. You redesign your
objects a little, and want to change instances of class A into class B
or C. If you can assign to __class__, this is easily done, just a
matter of converting each instance as you hit it. I imagine this
could even be done in a rather sneaky __setstate__.

Without __class__ assignment, the job is much messier because you have
to create a new instance that will get a new OID. Any object in the
database may have a reference to a given instance of A and such
references need to be preserved, so now you have to walk all the
objects to find and fix all the references to A instances. Lack of
dynamicism and persistent databases don't mix very well.
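
(The sneaky __setstate__, sketched with plain pickle rather than the ZODB,
but the idea is the same; class names invented:)

import pickle

class B:
    pass

class A:
    def __setstate__(self, state):
        self.__dict__.update(state)
        self.__class__ = B        # old A instances come back as B instances

old = A(); old.x = 42
new = pickle.loads(pickle.dumps(old))
print new.__class__               # __main__.B
print new.x                       # 42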

--amk

Edward C. Jones

unread,
Aug 13, 2001, 8:30:05 AM8/13/01
to
Guido van Rossum wrote:
> ...
> I guess I have a bit of a hidden agenda: Python is more dynamic than
> the language I *wanted* to design. Some of the dynamicism was simply
> an implementation trick. Some of the dynamicism is getting in the way
> of optimizing code, because the optimizer can never prove that certain
> variables won't be changed. So I'm trying to look for ways that pin
> down things a bit more. I'm making assumptions about how "typical"
> Python code uses the dynamic features, and I'm slowly trying to
> introduce restrictions in the language that make the optimizer's life
> easier without affecting "typical" code.
> ...

I hope you are evolving Python into a language that can be compiled into
good, fast, native object code. Then maybe I can abandon C, C++, SWIG,
and the Python API. Life would be much easier.

Ed Jones

Guido van Rossum

unread,
Aug 13, 2001, 8:35:24 AM8/13/01
to
Roeland Rengelink <r.b.ri...@chello.nl> writes:

> I didn't mean to let object.__new__ call the init, which indeed
> loses flexibility. What I did mean is the following change to
> type_call. I have no idea what side effects I may have missed here,
> but it does exactly what I want.

[...]


>         /* if the user defines his own __new__,
>            let him call __init__ explicitly */

[...]

The problem with this is that *all* overrides of __new__ would get the
new behavior.

If you want an initializer that's called by your class's __new__, just
give it a different name than __init__.

Guido van Rossum

unread,
Aug 13, 2001, 9:03:31 AM8/13/01
to
"Alex Martelli" <ale...@yahoo.com> writes:

> > For example, you could have a __getattr__ that initialized the thing
> > as soon as it is touched.
>
> But then *none* of the methods could belong to the class?! Each and
> every one to be inserted afresh in every instance's __dict__?!

No, in the new system, __getattr__ is called for all attributes. See
my response to Roman.

> But then
> what's the use of "making the object the right class from the start",
> as that class C HAS to be empty to let __getattr__ do its job?!
>
> Of course, if the initialization is done by __getattr__ anyway, it's
> quite futile to use __new__ to "return an empty instance of the
> class" -- the necessarily-empty __init__ does just as well, no?

Sorry, I'm not following you at all here. I suggest that you try
again.

The specific example *I* was thinking of is in pickle.py, which wants
to create an instance of a class without calling __init__. It
currently uses this:

value = _EmptyClass()
value.__class__ = klass # The class it should be

and then it initializes the instance variables.

I claim that it's more elegant to write

value = klass.__new__()

instead.

> So, again, I ask -- how is knowing from the start the class C to
> which the instance will belong, when its Promise is fulfilled, and
> exploiting the __new__ feature to make an empty C.__new__ -- how
> does this *HELP AT ALL*? Your alleged "example" is NOT an example
> of all of this glittering technology helping at all with this
> problem.

Zope's persistency support manages this without assignment to
__class__. A persistent object has three states: ghost, valid, and
modified (I think the names are different). A ghost object has the
right class, but no instance variables; these are loaded on demand,
and then the state changes to valid. On the first change to an
instance variable, the state changes again to modified. When a
modified object is written to the store, it is changed back to valid;
when a valid object is no longer needed, its instance variables are
deleted and the state changed back to ghost (usually it will be
garbage collected next, but if there is still a reference to it, it
may be resurrected from the store later).
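
A toy sketch of that ghost/valid/modified idea (this is not Zope's actual
code; the store object and its load() method are invented stand-ins):

GHOST, VALID, MODIFIED = range(3)

class Persistent:
    def __init__(self, oid, store):
        d = self.__dict__                 # write directly, bypass __setattr__
        d['_oid'], d['_store'], d['_state'] = oid, store, GHOST

    def __getattr__(self, name):
        # only reached when normal lookup fails: load the state on demand
        d = self.__dict__
        if d['_state'] == GHOST:
            d.update(d['_store'].load(d['_oid']))
            d['_state'] = VALID
            return getattr(self, name)    # retry; may still raise AttributeError
        raise AttributeError(name)

    def __setattr__(self, name, value):
        self.__dict__[name] = value
        self.__dict__['_state'] = MODIFIED   # first change marks it dirty

Nothing in the sketch re-points __class__: the object has the right class from
the start, and only its __dict__ comes and goes.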

> Which is not a problem in Python as it stands -- becomes a problem
> if and when you implement the "enhancement" (?) of forbidding the
> change-class-of-object idiom that Python now supports.

I really doubt that Python 1.4 was a much worse language than 1.5, and
I doubt that Python's popularity has much to do with the ability to
assign to __class__.

Besides, I'm not so much forbidding it as resisting the work it takes
to implement safely. You can write a metaclass that allows __class__
assignment in C, and then its safety is your own responsibility.

[snip]


> But it's ironic -- there, I used an indirector to work around the
> inability of inheriting from a built-in object. Now, I look forward
> to pensioning off that idiom... and I may have to resurrect it to work
> around the inability of changing an object's __class__...!-)

I'd like to see which is more common -- the need to change __class__
or the need to inherit from a built-in type. I'm betting the latter,
by a large amount.

> I guess I'll have to look at the source to understand why that's
> so inevitable -- right now it escapes me. (Is it going to be part
> of 2.2alpha2, or do I have to get a CVS tree?)

The architectural restrictions that make it hard to change __class__
in some situations have been part of Python since it was first
released. Unless I misunderstand what you're asking for, 2.2a1 should
exhibit this just fine.

> Of course, just as there are other ways of accomplishing raising-to-
> power, besides having it as a built-in -- it IS just repeated
> multiplication, after all. That's no argument for removing **
> from the language, it seems to me. It IS no doubt rarely used
> AND easily mis-used [code such as x**3+2.2*x**2+1.4*x+3.6
> rather than 3.6+x*(1.4+x*(2.2+x)) is most likely slower] -- but
> that's no real argument for taking ** away, as it may well be the
> most natural/clear/readable way to express some problems, and
> also (if and when used appropriately) faster than alternatives.

This is the most bizarre argument for __class__ asignment that I have
seen so far.

> As I said, exactly Stroustrup's reasoning in C++'s design -- a non
> overridable method is faster than a virtual one, so he preferred to
> make users request virtuality explicitly. Like anybody who's ever
> taught C++ or helped beginners in that language fix mistakes, I've
> had occasion to bemoan that design choice repeatedly -- even though,
> with performance such an important priority for C++, and historical
> compatibility reasons, it's quite understandable.

Since you're invoking beginners here, let me bounce that argument
right back. Do you seriously believe that assignment to __class__ is
something you would want beginners to know about?

> > I expect that the new facilities will be seriously user-tested only
> > after 2.2 is released, and experience will learn how often people are
> > changing class variables.
>
> You think people will be changing class variables _less_ often than
> now because of 2.2's new facilities? I don't see the relevance, but
> I guess I must have missed something. The new facilities appear
> to be very deep and important, but in areas quite different from
> those in which class-object changes are useful, it seems to me.

Your mind works so differently from mine... It's frustrating for me to
try and understand what you mean, and no doubt it's the same for you.

I strongly believe that most Python users have no need for assignment
to class attributes most of the time. Class attributes are mainly
used for two purposes: (1) methods and (2) default initializations of
instance variables. Neither use requires changing the class
attribute. Overrides are done using subclassing or assignment to
instance variables. I know some people like to use class variables
instead of module globals, mimicking Java/C++ static class variables,
but I believe that's mistaken -- those languages don't have a module
namespace like Python, so the convention of putting globals in a class
makes sense from a naming perspective there.
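
A tiny example of those two uses, with overrides done exactly as described --
by subclassing or by instance assignment, never by rebinding the class
attribute (names invented):

class Connection:
    timeout = 30                  # class attribute: a shared default

    def __init__(self, host):
        self.host = host          # per-instance data

c = Connection('example.org')
assert c.timeout == 30            # falls back to the class default
c.timeout = 5                     # override for this instance only
assert Connection.timeout == 30   # the class itself is never touched

class PatientConnection(Connection):
    timeout = 300                 # override by subclassing instead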

Every time someone uses the dynamism to patch a standard library
class, I cringe -- it's asking for trouble.

Guido van Rossum

unread,
Aug 13, 2001, 9:37:06 AM8/13/01
to
Glyph Lefkowitz <gl...@twistedmatrix.com> writes:

> Can you point me (and other readers just coming to this discussion) to a
> few URLs illuminating the key differences between 'new-style' and
> 'old-style' classes? I'm going to do some reading up on the various PEPs,
> but if there are any posts on python-dev I can refer to...

The PEPs, the source code, and the unification tutorial at

http://www.python.org/2.2/descrintro.html

> > In a sense, in 2.2 the new-style classes will still be experimental,
> > and it's quite likely that based upon feedback from users they will
> > change (for the better) in later versions.
>
> Good to know. I have to say that although I am among the biggest
> detractors to change, I can appreciate the difficulty of what you're
> doing; python is the first language I know of that has ever gone through
> significant *refactoring* (not whole-scale rewriting or just adding
> things) at both the implementation and design level. I wish you good
> luck, and I hope that the voices from the "loyal opposition" are more of a
> help than a hindrance.
>
> > Subclassing built-in types,
>
> Operator overloading got me 90% of the way there, and that was really the
> only 90% I care about. aside from isinstance() working on instances that
> are 'like' integers now, what have I gained?

You can subclass dictionary, and the resulting object will be
acceptable as a dictionary where the runtime requires a "real"
dictionary. Ditto for e.g. files (once I make them subclassable).

> > get/set methods,
>
> I already *have* get/set methods, in 1.5.2; see
> twisted.python.reflect.Accessor :-)

Now everybody else gets them without having to buy into your twisted
philosophy. :-)

> > class and static methods,
>
> Those could be easily faked before, for the OO zealots; but I actually
> *like* the idea of using functions for things like that. I prefer to
> organize my code at the module level, and I find the additional option of
> these method types just clutter.

Yet, they are amongst the most frequently requested features.

> > uniform introspection...
>
> Aye, now there's the rub. If we have uniform introspection, there's a
> certain expectation that features like this become _easier_ to use, not
> harder. Introspection is a powerful feature, all the more powerful if
> it's uniform and systematic.

To me, introspection means being able to look at yourself, not
necessarily being able to modify yourself. The new scheme definitely
makes discovery of features of objects easier -- you can *always* just
look at __class__.__dict__ etc.

> [ (snip) promise pattern is easy with class assignment ]
>
> > If you know the type it's going to be eventually, you can use
> > C.__new__() to create an uninitialized C instance.
>
> The point is that sometimes you don't...
>
> Of course, you *could* create a class of which all your 'promised' objects
> are instances, and do specialization by having a reference in each
> instance to its 'real' class, but doesn't that seem a little silly given
> that we don't have to do it now? :-)

It would be relatively easy to allow __class__ assignment only if (a)
the new class is a subclass of the old class, and (b) the size of the
new instance is the same as the old instance. Would this be sufficient?
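
A rough pure-Python sketch of such a check (the helper name is invented, and
"declares no extra __slots__" is only a crude stand-in for the same-size
condition):

def checked_class_change(obj, newclass):
    oldclass = obj.__class__
    if not issubclass(newclass, oldclass):
        raise TypeError("%s is not a subclass of %s"
                        % (newclass.__name__, oldclass.__name__))
    if newclass.__dict__.get('__slots__'):
        raise TypeError("%s adds slots, so the layout may differ"
                        % newclass.__name__)
    obj.__class__ = newclass

class A(object):
    pass

class B(A):
    pass

a = A()
checked_class_change(a, B)        # allowed: subclass, no extra slots
assert isinstance(a, B)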

> > You will still be able to modify *classes* dynamically -- although you
> > have to declare this option by putting __dynamic__ = 1 in your class
> > statement.
>
> Hmm. This seems like _less_ uniform introspection to me. I do have to
> note that if this is a requirement, then a part of the Twisted coding
> standard will be to have all classes have __dynamic__ = 1; one rarely
> knows what code is going to have a bug _before_ the server is started :)

Assuming you have a few base classes from which everything else
builds, you only need to add __dynamic__ = 1 to those base classes.
You can also write a metaclass (inheriting from type) that changes the
default for __dynamic__ to 1, but it's probably easier to just seed
your base classes.
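
To illustrate the two routes (keeping in mind that __dynamic__ is the
*proposed* 2.2 switch under discussion -- in released Pythons the attribute
has no special effect, so this only shows the mechanics; modern metaclass
syntax is used, where 2.2 would spell it with a __metaclass__ attribute):

# route 1: seed a base class and let inheritance do the rest
class DynamicBase(object):
    __dynamic__ = 1

class MyWidget(DynamicBase):
    pass

assert MyWidget.__dynamic__ == 1

# route 2: a metaclass that fills in the default for every class it builds
class DynamicByDefault(type):
    def __new__(meta, name, bases, ns):
        ns.setdefault('__dynamic__', 1)
        return type.__new__(meta, name, bases, ns)

class Base(object, metaclass=DynamicByDefault):
    pass

class Frozen(Base):
    __dynamic__ = 0               # a class can still opt out explicitly

assert Base.__dynamic__ == 1 and Frozen.__dynamic__ == 0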

> Also, as an aside: one of my favorite things about Python is the ability
> to fix bugs in a library you're using without having to modify the source
> to that library (if libmodule.version == '0.6.0': fix_libmodule_bug()).
> This greatly eases deployment. Fixing bugs in a running server is also
> pretty important if you don't have the option to take the server down...

Hm, I happen to hate it when people do this, because it's so fragile:
there are lots of things that could break. It reminds me of the
horrible things people used to do in DOS all the time (and to some
extent still do in Windows) that cause endless mysterious
incompatibilities between applications.

Really, I understand you're in love with this stuff, but I feel it as
my responsibility to protect typical Python users from burning
themselves. That's why I want to require explicit declaration of
certain forms of dynamicism.

> Could we have the default be the other way 'round? (Has there already
> been a discussion of that?)

You bet. See several of my previous posts.

> The obvious solution, if I understand python's internals correctly, is to
> make a type method slot, "tp_change_class". Most types would just raise
> an exception; instances would continue to work the way they have been.
> Aside from the fact that it's icky syntactically, is there anything that
> this would break or make unpleasant?

You don't need a new slot in the type method.

You just need to declare an object type whose descriptor for the
__class__ attribute is a get/set descriptor, where the set function
implements the proper restrictions.

> And if the syntax is not dealable, it would be a perfectly reasonable
> transition to have
>
> def change_class(obj, newclass):
>     obj.__class__ = newclass
>
> be the current implementation of a function that would be implemented as a
> builtin in the future...

No need: you will be able to continue to use "obj.__class__ = newclass".

> Seriously, the dynamic nature of Python is what makes it cool. I can
> understand removing meaningless dynamic features in order to make it
> faster (write access to locals() as a dictionary, for example) but I
> suspect that many more dynamic features than you like are really really
> useful.

In the hands of a select few, yes. For most people, the dynamicism is
just more rope to hang themselves (just as the fact that variable
references aren't checked until you use them).

> I could imagine an effort to make Python less dynamic could well end up
> like the ill-fated project to make C more dynamic (C++).

The fact that you seem to believe this suggests that your
worldview is quite eccentric. :-)

> I want my Python code to execute faster, sure. But before you start
> eliminating features for the sake of speed, ask yourself -- is anyone who
> is really concerned with efficiency writing code in *python*? The speed
> freaks have long since moved over to ADA or C++ or some other similiarly
> torturous language to hate themselves at the speed of light while we're
> having fun slowly. :-)

Actually, speeding up Python is a very common request. You probably
ignored those threads because *you* don't need it. That is, until
more people use your code. :-)

> Allow me to be skeptical of the fact that there is such a thing as
> "typical" code :). Most 'typical' python code could probably be written
> in Java without much difference except a little more typing.

Exactly. But people still choose Python because there's less typing!

> It's when you get to the boundary conditions -- adding attributes
> dynamically, reflection, reloading, deploying, porting -- that's
> where Python starts to shine.

And all I'm trying to do is to make those things safer. A chainsaw is
a great tool -- and also very dangerous. Should we forbid the use of
chainsaws? Of course not. Should we try to make them safer by adding
protective devices? Of course we should!

Kirby Urner

unread,
Aug 13, 2001, 11:29:41 AM8/13/01
to

Would it make any sense for dynamic classes to be a subclass
of some new kind of Object, e.g. a class of DynamicObjectType?
Ordinary classes, as subclasses of Object, would be assumed to
be static.

This would be like making tuples a root type, with lists a subclass
of tuple -- they inherit the functionality of tuples, plus add mutability
and a host of other things.

In this sense classes would come in two varieties, immutable (default)
and mutable (different type of class).

When you want a dynamic class, you would subclass DynaObject or
something like that, plus have other parents if you needed them.

Kirby

Ng Pheng Siong

unread,
Aug 13, 2001, 11:39:06 AM8/13/01
to
According to Roeland Rengelink <r.b.ri...@chello.nl>:
> if self.state:
>     self.state = 0
> else:
>     self.state = 1

self.state = 1 - self.state

;-)

--
Ng Pheng Siong <ng...@post1.com> * http://www.post1.com/home/ngps

Quidquid latine dictum sit, altum viditur.

Toby Dickenson

unread,
Aug 13, 2001, 11:43:39 AM8/13/01
to
Guido van Rossum <gu...@python.org> wrote:

>It would be relatively easy to allow __class__ assignment only if (a)
>the new class is a subclass of the old class, and (b) the size of the
>new instance is the same as the old instance. Would this be sufficient?

class A(bases...):
    ....

class B(A):
    ....

class C(A):
    ....

Under this rule I could change class from A to B, and from A to C. Is
there an easy rule that allows changing from B to C too? Perhaps, only
if the object was originally created as an A?

(I think this would satisfy Alex's UI-Engine example)

Toby Dickenson
tdick...@geminidataloggers.com

Roman Suzi

unread,
Aug 13, 2001, 10:23:15 AM8/13/01
to Guido van Rossum, pytho...@python.org
On Mon, 13 Aug 2001, Guido van Rossum wrote:

> Glyph Lefkowitz <gl...@twistedmatrix.com> writes:
>
> > Can you point me (and other readers just coming to this discussion) to a
> > few URLs illuminating the key differences between 'new-style' and
> > 'old-style' classes? I'm going to do some reading up on the various PEPs,
> > but if there are any posts on python-dev I can refer to...
>
> The PEPs, the source code, and the unification tutorial at
>
> http://www.python.org/2.2/descrintro.html

I even think that maybe the day will come when Python will not need SWIG to use C
(and maybe C++) libraries directly and efficiently.

> > Aye, now there's the rub. If we have uniform introspection, there's a
> > certain expectation that features like this become _easier_ to use, not
> > harder. Introspection is a powerful feature, all the more powerful if
> > it's uniform and systematic.
>
> To me, introspection means being able to look at yourself, not
> necessarily being able to modify yourself. The new scheme definitely
> makes discovery of features of objects easier -- you can *always* just
> look at __class__.__dict__ etc.

> Actually, speeding up Python is a very common request. You probably
> ignored those threads because *you* don't need it. That is, until
> more people use your code. :-)

But will PEP25{2,3,4} really make Python faster?

Erik Max Francis

unread,
Aug 13, 2001, 12:23:30 PM8/13/01
to
Ng Pheng Siong wrote:

> self.state = 1 - self.state
>
> ;-)

Why not just

self.state = not self.state

--
Erik Max Francis / m...@alcyone.com / http://www.alcyone.com/max/
__ San Jose, CA, US / 37 20 N 121 53 W / ICQ16063900 / &tSftDotIotE
/ \ I am the essence of overconfidence!
\__/ Capt. Benjamin "Hawkeye" Pierce
Maths reference / http://www.alcyone.com/max/reference/maths/
A mathematics reference.

Terry Reedy

unread,
Aug 13, 2001, 1:01:10 PM8/13/01
to

"Guido van Rossum" <gu...@python.org> wrote in message
news:cpzo94d...@cj20424-a.reston1.va.home.com...
[snip]

> Actually, speeding up Python is a very common request

If a class has base class[es], it would seem that multiple lookups
would slow name resolution. If all base classes are static, so that
method references are fixed in name and target, then the class
hierarchy could be 'flattened' by putting references to all base class
methods in the local __dict__ for a time/space tradeoff. Maybe make
optional with __flatten__ attribute.
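
A rough sketch of that flattening idea (the function name and the
skip-dunders rule are invented here; it is only safe if the bases really do
stay static afterwards, and it relies on __mro__, i.e. new-style classes):

def flatten_hierarchy(cls):
    for base in cls.__mro__[1:]:                  # every base, nearest first
        for name, value in base.__dict__.items():
            if name.startswith('__') or name in cls.__dict__:
                continue                          # keep overrides and dunders alone
            setattr(cls, name, value)             # copy the reference down
    return cls

class Base(object):
    def greet(self):
        return "hello"

class Derived(Base):
    pass

flatten_hierarchy(Derived)
assert 'greet' in Derived.__dict__                # found without walking the bases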

Terry J. Reedy


Roman Suzi

unread,
Aug 13, 2001, 12:14:53 PM8/13/01
to pytho...@python.org
On Mon, 13 Aug 2001, Kirby Urner wrote:

> When you want a dynamic class, you would subclass DynaObject or
> something like that, plus have other parents if you needed them.

I thought of this too (I even started to write an email -- have I posted it
too?), but then I thought that it is overkill. GvR is right about having
it in __dynamic__ form... It's a property.

James_...@i2.com

unread,
Aug 13, 2001, 2:31:05 PM8/13/01
to pytho...@python.org

and when using 2.2

class Temp: pass
newbie = Temp()

can be replaced with

from types import ObjectType
newbie = ObjectType()

Jim



"Alex Martelli"
<ale...@yahoo.com To: pytho...@python.org
> cc:
Sent by: Subject: Re: Copy constructors
python-list-admin@
python.org


08/09/01 01:14 PM


"David Smith" <d...@labs.agilent.com> wrote in message
news:3B72DCBA...@labs.agilent.com...
...
> class I have at hand, __init__ does some real work, which I want to
> bypass -- I want to clone the results of that work. I don't want to
...
> Is there a way for __copy__ to create a bare object of the same class,
> which it can proceed to populate?

Piece of cake:

class Fleep:
    def __init__(self, x, y, z):
        print 'lots',x,'of',y,'work',z
    def __copy__(self):
        class Temp: pass
        newbie = Temp()
        newbie.__class__=self.__class__
        print "very little work"


Alex


Glyph Lefkowitz

unread,
Aug 13, 2001, 2:48:37 PM8/13/01
to Guido van Rossum, tan...@swing.co.at, pytho...@python.org

On Mon, 13 Aug 2001, Guido van Rossum wrote:

> [Christian Tanzer]
> > Would it be possible to possible to write a meta-class allowing the
> > dynamic change of __class__? And if so, how difficult would that be?
>
> That would be possible if you're willing to write it in C.

I thought we said that things were going to get _better_ than
ExtensionClass, not worse?

> > I'd like to support the proposal of Glyph and Alex to make 0 the
> > default for `__dynamic__`. IMHO, optimization should be restricted to
> > those few modules where it is really necessary.
>
> See my response earlier in this thread.

I've read several responses and none of them are really acceptable.

> > Dynamicity is one of the really strong points of Python -- eye popping
> > as Glyph just called it.
>
> Eye popping can be a negative point too. I'd prefer a warning before
> my eyes are popped.

If _you_ get a 'warning', _I_ can't do eye-popping things with your code.
Then python becomes just like Java but slower, for me.

> > > Understood. Nevertheless, all evidence suggests that Twisted is not
> > > typical Python code. :-)
> >
> > What is typical? I'd assume that a tiny percentage of Python code uses
> > the really dynamic features.
>
> Tiny indeed.

A tiny percentage of Python code uses division (FWIW, I use dynamic
features a lot more than division), but we've seen the furor over _that_
one for months. A "tiny percentage of python code" also probably takes up
90% of the execution time of "typical" python programs.

> Because __dynamic__ is inherited, if you inherit from an eye-popping
> class, your class will automatically be eye-popping too.

Every class in twisted will then have to inherit from twisted.base.Object
in order to circumvent this silliness. (That in itself reminds me
of C++, where the first thing I'll do on a project is import
workarounds_for_language_design_errors.) But we still won't be able to
assign to __class__...

> > Please be careful in this crusade... The language you *wanted* to design
> > might have been not quite as
> > excellent/wonderful/<insert-your-favorite-exclamation-of-insane-
> > greatness-here> as the one you came up instead :-)
>
> Please trust me... The changes I have in mind might not be so
> devastating for Python's beauty as you seem to be thinking. :-)

Sorry, Guido, but my trust for you goes about as far as I believe the
features of Python I like were intentional. In this thread, you're
talking about breaking or perverting quite a few of them. I'm just
waiting for the other shoe to drop and you to decide we really need
variable declarations and braces.

I have a great deal of respect for you and I thank you for giving us
Python, but I am serious when I say that you should develop this new
non-python language elsewhere; it sounds like your (surprising and new?)
design aims are contrary to what many of us are using python for in the
first place.

> > I wouldn't mind if I had to ask for some dynamic features more
> > explicitly than now but I'd really be hard hit if they went away
> > entirely. I'd love to get better performance but not at the price of
> > loosing all this dynamicism. I if wanted to use a non-dynamic language
> > I'd know way too many candidates vying to make my life unhappy <0.1
> > wink>
>
> Some people like Python for its extreme dynamicism. But there are
> other languages in that niche (like Lisp).

Yes. And many lispers have come to Python because of the similarities.
Python has a better, more standard, more friendly implementation than most
lisps. If there were a good, free, UNIX-friendly lisp implementation with
good support for AIO or microthreads, I would not be using Python right
now... except for the fact that it's much _more_ dynamic than lisp.

Consider:

* lisp is far more efficient than python can ever hope to be, at least in
its good implementations (Franz cites numbers that exceed the speed of
"typical" C++ code; its design is such that it's easier to optimize)

* lisp has a better defined object model than Python does

* lisp has a dynamic reader. I could (and will, when I start using
lisp regularly) implement a Python-like syntax in the space of a few
days.

* lisp has macros, where python doesn't.

* lisp can really treat code as an object, making metaprogramming and
certain kinds of system architecture much more feasible.

Why would anyone in their right mind use python instead of lisp? Well,
it's super-dynamic. It has a large and friendly (if somewhat ad-hoc)
standard library. It plays nice with UNIX as well as Windows. It has
sane and comprehensible FFI. The implementation is unified, well-tested,
and cheap.

If you take away reason #1, then a lot of Lisp's features can compensate
for the other implementation issues, and issues like "no macros" and
"static syntax" become instantly crippling instead of mildly
inconvenient... *ESPECIALLY* if you take away reason #1 when it was
already working fine, and there is no indication (as far as I can tell)
that anyone else is unhappy with this state of affairs!

The one thing that Python will have is "consistent syntax and
implementation strategy", but that's a mostly a cultural issue, not a
technical one. Adding macros as the way of doing things rather than
dynamicism would be more efficient, but less conducive to python's
excellent culture :-D.

> Most people like Python because it's so darn readable and
> maintainable. Unbridled dynamicism goes against that.

This is of course the reason why C++ and Java programs are so much easier
to debug than python. All those spelling errors on attribute names -- and
the compiler can't even check them for you! Sheesh. Good thing that
python's at least more efficient than those static languages, or there'd
be no reason to use it. :-P

> I am striving for a balance that allows most forms of dynamicism, but
> requires a declaration in advanced for the more extreme kinds.

Personal experience leads me to believe that the more extreme your
dynamicism, the more problems you can worm your way out of down the line,
and the more "bridled" it is, the easier it is to become locked into a
particularly bad design decision.

The excellent thing about python's dynamic nature, much like the excellent
thing about its syntax, is that it's _always_ there. I can _always_
assign to the __class__ of an instance, always stick in a __getattr__ of
its class that does what I want; always poke __del__ so that I can track
garbage collection *even if it's not my code*. (have I emphasized this
enough yet? Sometimes there are bugs in code written by programmers who
are _not_ me! ^_^) Yes, it would be more efficient to not do so; yes, it
would be more flexible if we made the syntax dynamic, or added macros.
This kind of efficiency or flexibility would make python worse for me (and
I see this sentiment echoed by everyone else who's posted to this thread.
Does anyone agree with guido? Tim? ^_^)

> > Separate explicit mechanisms would be much better than just dropping
> > dynamicism. In fact, despite the need for changing working code
> > <sigh>, making an external change of a module explicit would probably
> > be a good thing.
>
> Exactly. I'm not planning to drop dynamicism -- I'm planning to make
> it more explicit.

Making an external change of a module variable or class variable already
_is_ explicit.

import foo; foo.bar = baz

class Foo: pass
Foo.x = 1

What's implicit about either of these things?

the-dutch-might-be-blunt-but-nobody-ever-accused-them-
of-being-obvious-or-even-comprehensible-ly y'rs,

Jeff Shannon

unread,
Aug 13, 2001, 4:39:10 PM8/13/01
to

Erik Max Francis wrote:

> Ng Pheng Siong wrote:
>
> > self.state = 1 - self.state
> >
> > ;-)
>
> Why not just
>
> self.state = not self.state
>

Because these constructs are limited to representing only two states (0
and 1), whereas the original construct was easily expandable to an
arbitrary number of states, even though only two were shown for example
purposes.

Jeff Shannon
Technician/Programmer
Credit International


Michael Robin

unread,
Aug 13, 2001, 4:57:15 PM8/13/01
to
"Alex Martelli" <ale...@yahoo.com> wrote in message news:<9l4a9...@enews4.newsguy.com>...

> "Glyph Lefkowitz" <gl...@twistedmatrix.com> wrote in message
> news:mailman.997561650...@python.org...
> ...
> > Also, as far as I know, better alternatives do not exist; for example,
> > "promise" objects from a database which are latently initialized when they
> > are accessed. One module that I use *all the time*;
>
> I don't know of any better way to handle the Promise design pattern,
> either. In C++, I'm having to kludge around it all the time via
> letter/envelope idioms -- the possibility of changing classes on
> the fly makes it much simpler and more direct. Hadn't thought
> of that earlier...

Can't you use a proxy object and override __call__ and friends?
(In a sense, that's what Smalltalk does with the Object Table,
vs. Python which uses a non-changing address for the object.)
Or am I missing something...
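
Roughly something like this minimal proxy sketch, say (all names invented;
it defers the real construction and forwards attribute access to it):

class Promise:
    def __init__(self, factory):
        self.__dict__['_factory'] = factory       # write via __dict__ directly
        self.__dict__['_target'] = None

    def _force(self):
        if self.__dict__['_target'] is None:
            self.__dict__['_target'] = self.__dict__['_factory']()
        return self.__dict__['_target']

    def __getattr__(self, name):                  # anything we don't define
        return getattr(self._force(), name)

    def __call__(self, *args, **kwds):            # one of the "friends"
        return self._force()(*args, **kwds)

p = Promise(lambda: [1, 2, 3])                    # nothing built yet
assert p.count(2) == 1                            # first touch builds the list

The catch, compared with re-pointing __class__, is that the proxy never
*becomes* the real object: isinstance() and identity checks still see the
wrapper rather than what it stands for.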

thanks,
mike

>
> > twisted.python.rebuild.rebuild, is based entirely upon this "trick". One
> > of the eye-popping cool features of Python is the ability to change code
> > and have existing instances update to use the new methods automatically.
>
> That one seems to be related to _another_ capability (of which I've
> seen no indication that it's also going away) -- keeping the same
> class object. but changing that object (rebinding method attributes
> thereof). Or maybe I don't understand exactly what you mean?
>
>
> > Smalltalk's Object>>become: is highly useful for similiar reasons; is
> > there a new way to emulate this without __class__ assignment?
>
> It seems to me the functionality of become is homomorphic to that
> of class-assignment, unless I'm truly missing something major.
>
>
> Alex

Alex Martelli

unread,
Aug 13, 2001, 4:25:45 PM8/13/01
to
"Guido van Rossum" <gu...@python.org> wrote in message
news:cpzo94d...@cj20424-a.reston1.va.home.com...
...
> http://www.python.org/2.2/descrintro.html

I thought I had understood it, but still don't see where it mentions
"x.__class__ = whatever" now being forbidden or restricted?


> You can subclass dictionary, and the resulting object will be
> acceptable as a dictionary where the runtime requires a "real"
> dictionary. Ditto for e.g. files (once I make them subclassable).

This is huge (for dictionaries, definitely -- I've never yet run
into a place where the runtime smacked me because it typechecked
for a "real" file, but I can see it would be just as important).


> > > get/set methods,
> >
> > I already *have* get/set methods, in 1.5.2; see
> > twisted.python.reflect.Accessor :-)
>
> Now everybody else gets them without having to buy into your twisted
> philosophy. :-)

This is also important, for a peculiar reason -- having built-in ways
to do "the right thing" even though it wasn't all that hard to do it
before. The peculiar reason is: in a multi-person project, when the
Language blesses some idiom (such as implicit getter/setter, aka
"the property idiom", versus explicit getFoo()/setFoo() calls), it
becomes that much easier to get group consensus that, yes, this
idiom IS the right thing to adopt as the style for a certain library
or framework.


> To me, introspection means being able to look at yourself, not
> necessarily being able to modify yourself. The new scheme definitely

My extant English dictionaries support your interpretation
of this word.


> It would be relatively easy to allow __class__ assignment only if (a)
> the new class is a subclass of the old class, and (b) the size of the
> new instance is the same as the old instance. Would this be sufficient?

Does 'size' in this context mean 'number of slots'? In this case, albeit
with somewhat peculiar contortions (requiring new.classobj or the
equivalent), I think it would be sufficient for all cases that come to
my mind -- I'd just have to put any extra attributes in the __dict__
(which I do today for *every* attribute anyway:-).

The "generate empty object through __class__ assignment" trick
would also become sort-of-possible again (although of no practical
interest whatsoever:-) -- to wit:

def make_empty_copy(any_object):
    klass = any_object.__class__
    class Empty(klass):
        def __init__(self, *args, **kwds): pass
    newcopy = Empty()
    class Full(Empty):
        __init__ = klass.__init__
        __name__ = klass.__name__
    newcopy.__class__ = Full
    return newcopy

the Full class is not really the same as any_object.__class__, but
nobody's gonna find out (presumably) since it's undistinguishable
under normal use of isinstance or any behavior-test whatsoever
(or have I forgotten to copy some needed attribute for that?).

These classes Empty and Full are examples of what I mean above
as roughly 'equivalent' to new.classobj calls:-). I think I could
handle 'real' cases of class-change (worst case) through similar
means, i.e. generating on the fly a class that's basically what I
need but formally inherits from the object's original class so it
can be assigned. I realize this will not support _deleting_ any
method wrt the original class, but that's not a need in any case
that easily comes to mind.


> Assuming you have a few base classes from which everything else
> builds, you only need to add __dynamic__ = 1 to those base classes.
> You can also write a metaclass (inheriting from type) that changes the
> default for __dynamic__ to 1, but it's probably easier to just seed
> your base classes.

Good point. Yes, specifying that every class that participates in the
Super Duper Framework must directly or indirectly extend SupDupObj,
while not _quite_ as handy as not even having to specify it, seems
basically OK and pretty typical of many frameworks. I may cringe a
bit seeing Python gradually moving away from the utter simplicity
of signature-based polymorphism, but that's more of an instinctive
reaction than a reasoned one based on actual use-cases.


> > This greatly eases deployment. Fixing bugs in a running server is also
> > pretty important if you don't have the option to take the server down...
>
> Hm, I happen to hate it when people do this, because it's so fragile:
> there are lots of things that could break. It reminds me of the

Yes, but, when servers DO have to stay up, fixing them on the fly,
albeit indeed fragile, is a specified constraint. I guess in some cases
one could devise alternate strategies: putting up a new fixed server
process on a different machine, port-redirecting all new requests to
the new machine, and finally pulling down the old buggy server when
the conversations for the requests it was serving at fix-time are gone.

But that constrains your deployment possibilities enormously, and it
has its own huge fragilities (e.g., the server must be architected so
that multiple instances, old and new, can update the same database
without tripping on each other's feet -- just for starters).

> Really, I understand you're in love with this stuff, but I feel it as
> my responsibility to protect typical Python users from burning
> themselves. That's why I want to require explicit declaration of
> certain forms of dynamicism.

*blink* I had never thought Python's philosophy was protecting
"typical users" from themselves -- I thought that was the idea
of Pascal, Modula-2, &c, to Eiffel, the languages that know what's
good for you better than you know yourself, so they'll force you
to program the way Wirth (or Meyer) KNOWS is the one right
way to program. As I previously read your recent posts, I thought
that the __dynamic__ thing was about performance instead...?


> > suspect that many more dynamic features than you like are really really
> > useful.
>
> In the hands of a select few, yes. For most people, the dynamicism is
> just more rope to hang themselves (just as the fact that variable
> references aren't checked until you use them).

Can we expect variable declarations a few minor releases from
now, then? That would presumably be consistent with the new
focus on protecting typical users from themselves.


> > "typical" code :). Most 'typical' python code could probably be written
> > in Java without much difference except a little more typing.
>
> Exactly. But people still choose Python because there's less typing!

In two senses of the word 'typing', too -- both the one that's about
keyboards, and the one that's about declaring variables' types:-).


> > It's when you get to the boundary conditions -- adding attributes
> > dynamically, reflection, reloading, deploying, porting -- that's
> > where Python starts to shine.
>
> And all I'm trying to do is to make those things safer. A chainsaw is
> a great tool -- and also very dangerous. Should we forbid the use of
> chainsaws? Of course not. Should we try to make them safer by adding
> protective devices? Of course we should!

"Of course" as long as the extra devices don't significantly interfere
with the tools' previous strengths in terms of cost and power. When
the significant interference is there, there is no "of course" about it --
it becomes a highly problematical trade-off.


Alex

Roman Suzi

unread,
Aug 13, 2001, 4:17:34 PM8/13/01
to Guido van Rossum, pytho...@python.org

One more question.

Python programmers use the following idiom:

class A:
    # ...
    def __mul__(self, other):
        ...
    __rmul__ = __mul__


If I understood correctly, __rmul__ will be of different
category from __mul__? This causes asymmetry...

Sincerely yours, Roman Suzi
--
_/ Russia _/ Karelia _/ Petrozavodsk _/ r...@onego.ru _/
_/ Monday, August 13, 2001 _/ Powered by Linux RedHat 6.2 _/
_/ "I distinctly remember forgetting that." _/


Andrew Kuchling

unread,
Aug 13, 2001, 5:14:39 PM8/13/01
to
"Alex Martelli" <ale...@yahoo.com> quoted:

> Can we expect variable declarations a few minor releases from
> now, then? That would presumably be consistent with the new
> focus on protecting typical users from themselves.

It also seems unlikely that newbies run into this problem much,
because assigning __class__ is unlikely to be done by accident, and
because people coming from C++ don't expect that to even be possible.
I can't recall very many people asking about class reassignment tricks
on c.l.py.

> "Guido van Rossum" <gu...@python.org> wrote in message
> news:cpzo94d...@cj20424-a.reston1.va.home.com...

> > And all I'm trying to do is to make those things safer. A chainsaw is
> > a great tool -- and also very dangerous. Should we forbid the use of
> > chainsaws? Of course not. Should we try to make them safer by adding
> > protective devices? Of course we should!

A few weeks ago there was an article in the _Washington Post_ about
someone who's invented a safety device for table saws. It somehow
detects if the user is touching the blade (the article implies it was
through the conductivity of human flesh -- perhaps a current flows
when a finger completes the circuit) and stops the blade within a few
milliseconds, faster than the pain would travel from your finger to
your brain. A standard demonstration is to stick a hot dog into the
table saw's blade; the blade stops so quickly that the hot dog is only
nicked.

The problem is that he can't find manufacturers who want to include
it. The device costs $150, which is unacceptably high for a low-end
table saw that costs $300 now. Manufacturers could add it to their
high-end $3000 models, but then they might be sued by someone injured
on a low-end saw because a safety device existed but wasn't available
to them. So they're simply not adding the device at all.

Moral: Additional safety has costs, and sometimes that cost is too
high to pay. :)

--amk

Michael Robin

unread,
Aug 13, 2001, 5:48:41 PM8/13/01
to
I think this is a good tradeoff.
I use Python because of its source availability, clarity, and strong
run-time support. (And the possibility of using JPython in the
future.)
All IMHO, of course, but the two things that I don't like are the extreme
dynamicism and the (lack of) run-time speed. Yes, I can write a C
extension, but as fast as it is to code in Python, it's all that much
faster if I don't have to.
I like to think that *most of the time* obj.attr gets an object reference
from a slot (if possible) or a dict-like object (at worst), and that
obj.fn() does something more. I think counterexamples should be made
explicit by something like __dynamic__=1. (I know that in this case
we're dealing with attr names - similar issue.) Personally, I like to
use an explicit fn call syntax for *anything* that activates user-level
code for this reason.
Judicious use of __getattr__s and __call__s, et al., can make code
pretty unreadable when x += 1 can potentially re-format your hard
drive.
In "mini-language building languages" such as Prolog and various
LISPs, this extreme dynamic nature is more expected, and they also
have the benefits of re-configurable parsers and extensive macro
capibilities, respectivily, so that your app-specific semantics also
have their own syntax, as apposed to "overloading" Python's. In the
future more meta-language (different than meta-classes which are *in*
the language) features may also be added to python.
Python (Mr. van Rossum, tell me if I'm wrong) was never billed as a
laguage construction kit, but a "get it done" language. (Insofaras
these features do help you "get it done", that's good - but there's
usually a more explict way to do things.) The fact that you may be
able to emulate Actor or CLOS/Loops/Flavors or Scheme/Dylan or
Smalltalk, etc., may be intersting, but was never garanteed. There is
a tradeoff between referential transparency/dynamicism and
understandable code. Another issue is the ability to translate to
other languages (including a C extension).

Anyhoo, this "power" isn't being taken away, just made explicit, which
seems in line with Python's meta-goals from the start.

I understand that the reasons that people use languages are personal
and fluid - just my current 2 cents for this moment...

mike

Guido van Rossum <gu...@python.org> wrote in message news:<mailman.997702822...@python.org>...

Guido van Rossum

unread,
Aug 13, 2001, 6:32:53 PM8/13/01
to Roman Suzi, pytho...@python.org
> Python programmers use the following idiom:
>
> class A:
>     # ...
>     def __mul__(self, other):
>         ...
>     __rmul__ = __mul__
>
>
> If I understood correctly, __rmul__ will be of different
> category from __mul__? This causes asymmetry...

Sorry, you must have misunderstood. They will be the same.

Jason Asbahr

unread,
Aug 13, 2001, 7:37:03 PM8/13/01
to
Guido,

I agree with the general opinion here that having virtual
as default feels like the more 'Pythonic' solution.

Also class assignment is a very useful feature. Glyph
mentioned this in one of his posts: reassigning instances
and modifying class attributes at runtime is a key feature
of certain flexible systems (for example, massively multiplayer
virtual worlds :-) (1) For all the reasons mentioned, adding
attributes dynamically, reflection, reloading, deploying, and
porting, the flexibility of Python is an enormous benefit.

Jason Asbahr

1. That's one of the key applications for Twisted, btw.

Allen Short

unread,
Aug 13, 2001, 8:04:14 PM8/13/01
to
>>>>> "Glyph" == Glyph Lefkowitz <gl...@twistedmatrix.com> writes:

> On Mon, 13 Aug 2001, Guido van Rossum wrote:

>> Some people like Python for its extreme dynamicism. But there
>> are other languages in that niche (like Lisp).

> Yes. And many lispers have come to Python because of the
> similiarities. Python has a better, more standard, more
> friendly implementation than most lisps. If there were a good,
> free, UNIX-friendly lisp implementation with good support for
> AIO or microthreads, I would not be using Python right
> now... except for the fact that it's much _more_ dynamic than
> lisp.

As one of the aforementioned Lispers that's come to Python, I'd be
rather saddened if it started to go the way of Java. While I'm not
sure I agree that Python is more dynamic than Lisp (it _does_ have
change-class after all =), it _certainly_ has better OS
integration. (not to mention that most of the higher-quality Common
Lisp implementations aren't free...)

Python is the main reason I haven't started working on a new
(un-Common) Lisp. The main thing that attracted me to Python was that
it shared a major characteristic with Lisp -- it trusts the
programmer, as opposed to the Java/Pascal view that the programmer is
a malignant influence on the system and must be prevented from messing
things up too badly. ;) I understand the desire for efficient
execution and agree that the language should be extended to promote it
-- but please, _please_, make these optimisations *optional*, and have
them default to being disabled.

meanwhile-perl6-is-looking-better-all-the-time'ly-yrs,

Allen

--
Allen Short Programmer-Archaeologist sho...@auburn.edu
And it should be the law: If you use the word `paradigm' without knowing what
the dictionary says it means, you go to jail. No exceptions. -- David Jones

Guido van Rossum

unread,
Aug 13, 2001, 10:31:23 PM8/13/01
to
> From: Guido van Rossum <gu...@python.org>
> > I don't know how familiar you are with Python's C-level internals. If
> > you are, you'll appreciate the problem if I took a list object and
> > changed its type pointer to the dictionary type -- the instance
> > lay-out of a dictionary is different, and all the methods would be
> > using the list data as if it were dictionary data. Recipe for
> > disaster. Likewise, changing a featureless object into a list or dict
> > would at the very least require growing the size of the instance; this
> > would require a realloc(), which may move the object in memory. But
> > if there are other references to the object, these would all have to
> > be updated. Python's run-time architecture just doesn't support that.

"gzeljko" <gze...@sezampro.yu> writes:

> Is pointer indirection here too expensive?

Yes, since it would require totally rewriting all existing Python
object types -- both built-in and 3rd party extension modules -- and
that's exactly what I am trying to avoid.

Peter Hansen

unread,
Aug 13, 2001, 10:54:46 PM8/13/01
to
Jeff Shannon wrote:
>
> Erik Max Francis wrote:
>
> > Ng Pheng Siong wrote:
> >
> > > self.state = 1 - self.state
> > >
> > > ;-)
> >
> > Why not just
> >
> > self.state = not self.state
> >
>
> Because these constructs are limited to representing only two states (0
> and 1), whereas the original construct was easily expandable to an
> arbitrary number of states, even though only two were shown for example
> purposes.

Good point, although I think Erik was suggesting a
better alternative to the x = 1 - x idiom. Since it
is conceivable that x begins as a value other than
0 or 1, the x = not x idiom immediately brings it
into the 0 or 1 range, whereas the other way just
has it alternate between two values, which *might*
be zero or one...
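
In other words (tiny illustration):

x = 7
x = 1 - x          # -6: keeps alternating between 7 and -6
x = 1 - x          # back to 7

y = 7
y = not y          # False (i.e. 0): normalized on the first toggle
y = not y          # True (i.e. 1)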

--
----------------------
Peter Hansen, P.Eng.
pe...@engcorp.com

Guido van Rossum

unread,
Aug 13, 2001, 10:46:48 PM8/13/01
to
jas...@onebox.com (Jason Asbahr) writes:

> I agree with the general opinion here that having virtual
> as default feels like the more 'Pythonic' solution.

There must be a misunderstanding. Alex used C++ virtual functions as
an example where C++ went wrong, as an argument for why dynamicism
should be the default. In Python, all functions are virtual: you can
always override them in a subclass, and I am not going to change this!

All I want is to disable changes to *existing* classes by default.
You can write your own metaclass that changes the default, or you can
inherit from a dynamic base class -- thus, with very little effort,
you can make all your classes dynamic, if you want to.

If it turns out that this is not sufficient for a wide range of
applications, I'll reconsider the default; but I'd rather not, because
there is a severe run-time penalty for the generally unneeded
dynamicism.

> Also class assignment is a very useful feature.

You will be able to write a metaclass (in C) that allows __class__
assignment, when safe. If there's enough demand this metaclass will
be built-in.

Guido van Rossum

unread,
Aug 13, 2001, 11:03:30 PM8/13/01
to
Glyph, give it a rest. I hear loud and clear what you are saying. At
this point, stepping up the rhetoric will have an adverse effect on
your case. We may always disagree in how important we find the
various aspects of Python, but I know what you think is important.
Please understand what I think is important, even if you disagree -- I
don't expect to change your mind, so please do me the same favor and
stop trying to change my mind.

That said, the decisions I am making for the new type system in 2.2
are not cast in stone. I consider the new class mechanisms added to
2.2 the first experimental release of a very far-reaching new feature.

Some important things to keep in mind:

(a) Your existing dynamic classes won't break in 2.2, *unless* you
explicitly choose to enable the new class mechanisms (e.g. by
inheriting from 'object' or using an explicit __metaclass__).

(b) Based on the experiences with use of the new class mechanisms as
deployed in 2.2, I'll refine the design for 2.3.

(c) I expect that "classic" classes will remain the default until
Python 3.0, where they will disappear -- by then, almost everybody
will be using the new classes, though we may have a backwards
compatible metaclass for code that cannot be converted completely.

Guido van Rossum

unread,
Aug 13, 2001, 11:34:07 PM8/13/01
to
"Alex Martelli" <ale...@yahoo.com> writes:

> "Guido van Rossum" <gu...@python.org> wrote:
> > http://www.python.org/2.2/descrintro.html
>
> I thought I had understood it, but still don't see where it mentions
> "x.__class__ = whatever" now being forbidden or restricted?

That one's easy. :-) Assignment to __class__ is not documented for
classic classes either, so I didn't think it was necessary to mention
it.

> > It would be relatively easy to allow __class__ assignment only if (a)
> > the new class is a subclass of the old class, and (b) the size of the
> > new instance is the same as the old instance. Would this be sufficient?
>
> Does 'size' in this context mean 'number of slots'?

Close enough. In combination with the subclass requirement, the
"equal size" requirement means that the subclass doesn't add any
slots, and that guarantees the safety requirement.

But without the subclass requirement, in order for the __class__
assignment to be safe, things are more difficult. E.g. a list
instance has two "slots" beyond the basic object header: a size and a
pointer. Now this class:

class C(object): __slots__ = ['a', 'b']

also has two slots following the basic object header. But switching a
C instance to a list or back would be a disaster, because the
interpretation of the slots is different.

> In this case, albeit
> with somewhat peculiar contortions (requiring new.classobj or the
> equivalent), I think it would be sufficient for all cases that come to
> my mind -- I'd just have to put any extra attributes in the __dict__
> (which I do today for *every* attribute anyway:-).

Sorry, I think you're off on the wrong foot there.

> The "generate empty object through __class__ assignment" trick
> would also become sort-of-possible again (although of no practical
> interest whatsoever:-) -- to wit:
>
> def make_empty_copy(any_object):
>     klass = any_object.__class__
>     class Empty(klass):
>         def __init__(self, *args, **kwds): pass
>     newcopy = Empty()
>     class Full(Empty):
>         __init__ = klass.__init__
>         __name__ = klass.__name__
>     newcopy.__class__ = Full
>     return newcopy
>
> the Full class is not really the same as any_object.__class__, but
> nobody's gonna find out (presumably) since it's undistinguishable
> under normal use of isinstance or any behavior-test whatsoever
> (or have I forgotten to copy some needed attribute for that?).

I don't think this is a useful kludge. And how is newcopy going to
acquire all the other attributes of any_object.__class__?

> These classes Empty and Full are examples of what I mean above
> as roughly 'equivalent' to new.classobj calls:-). I think I could
> handle 'real' cases of class-change (worst case) through similar
> means, i.e. generating on the fly a class that's basically what I
> need but formally inherits from the object's original class so it
> can be assigned. I realize this will not support _deleting_ any
> method wrt the original class, but that's not a need in any case
> that easily comes to mind.

It would be easier to bite the bullet and write the metaclass (in C)
that does the proper safety check.

> Yes, but, when servers DO have to stay up, fixing them on the fly,
> albeit indeed fragile, is a specified constraint. I guess in some cases
> one could devise alternate strategies: putting up a new fixed server
> process on a different machine, port-redirecting all new requests to
> the new machine, and finally pulling down the old buggy server when
> the conversations for the requests it was serving at fix-time are gone.

I'm assuming you have to plan for this anyway, since you'll need a way
to doctor the server in the first place. So why not make your
planning easy by having __dynamic__ = 1 somewhere?

> *blink* I had never thought Python's philosophy was protecting
> "typical users" from themselves -- I thought that was the idea
> of Pascal, Modula-2, &c, to Eiffel, the languages that know what's
> good for you better than you know yourself, so they'll force you
> to program the way Wirth (or Meyer) KNOWS is the one right
> way to program. As I previously read your recent posts, I thought
> that the __dynamic__ thing was about performance instead...?

Here we go again. I mention one reason and it is assumed that this is
the only reason. As I've said before, my mind works a lot faster than
my fingers can type (it seems it's the opposite for you :-), and
sometimes the rationalization for an idea comes only gradually.

If you think about it, Python does a lot to protect typical users from
themselves (why otherwise do you think it's gaining success as an
educational language)? For example, an input line of a million bytes
won't cause a buffer overflow. For example, arithmetic overflow is
not silently truncated. For example, almost anything that could cause
a core dump is caught before it does (and we fix the remaining core
dumps in real time :-). For example, mixing incompatible types in
expressions causes a TypeError rather than having a random undefined
side effect. For example, using an undefined variable name raises a
NameError rather than silently being equal to zero. For example, the
whole division thing. And so on.

> Can we expect variable declarations a few minor releases from
> now, then? That would presumably be consistent with the new
> focus on protecting typical users from themselves.

I'm presuming you're being sarcastic. We may indeed see *optional*
variable declarations -- but not required ones. But realistically,
even the optional declarations seem far away -- the types-sig is only
active for about 6 weeks per year, and most of that time is used
rediscovering where we were a year ago... :-(.

> "Of course" as long as the extra devices don't significantly
> interfere with the tools' previous strengths in terms of cost and
> power. When the significant interference is there, there is no "of
> course" about it -- it becomes a highly problematical trade-off.

Exactly. What's going on here is that I'm trying to tease out how far
I should go with the safety device without making the tool unusable.
The new classes in Python 2.2 (which are entirely optional -- by
default you get the same classic classes as in 2.1 and before) are one
step of a new design. We'll see how it needs to be tweaked.

Terry Reedy

unread,
Aug 13, 2001, 11:35:31 PM8/13/01
to

"Alex Martelli" <ale...@yahoo.com> wrote in message
news:9l9d4...@enews4.newsguy.com...

> "Guido van Rossum" <gu...@python.org> wrote in message
> news:cpzo94d...@cj20424-a.reston1.va.home.com...
> ...
> > http://www.python.org/2.2/descrintro.html
>
> I thought I had understood it, but still don't see where it mentions
> "x.__class__ = whatever" now being forbidden or restricted?

I rechecked. It does not. Did find
"For instance of built-in types, x.__class__ is now the same as
type(x): "
which does hint that it might be problematical for subtypes.

Is the proposed restriction for subtype instances only or also for
subclass instances?

> > It would be relatively easy to allow __class__ assignment only if
(a)
> > the new class is a subclass of the old class, and (b) the size of
the
> > new instance is the same as the old instance. Would this be
sufficient?

Given that there are sensible reasons to mutate instances (to reflect
real or virtual mutations - see "Another two examples of using changing
classes" by Itamar Shtull-Trauring and my reply) this would seem
desirable.

Terry J. Reedy

Guido van Rossum

unread,
Aug 13, 2001, 11:54:03 PM8/13/01
to
[Alex]

> > I thought I had understood it, but still don't see where it mentions
> > "x.__class__ = whatever" now being forbidden or restricted?

[Terry]


> I rechecked. It does not. Did find "For instance of built-in
> types, x.__class__ is now the same as type(x): " which does hint
> that it might be problematical for subtypes.
>
> Is the proposed restriction for subtype instances only or also for
> subclass instances?

If you mean the distinction between "classic classes" and new-style
classes (those derived from 'object' or using an explicit metaclass),
the restriction does not apply to classic classes. Classic classes
don't change in 2.2 -- they are slated for change much later (around
3.0).

Roman Suzi

unread,
Aug 14, 2001, 12:31:44 AM8/14/01
to Guido van Rossum, pytho...@python.org
On Tue, 14 Aug 2001, Guido van Rossum wrote:

>jas...@onebox.com (Jason Asbahr) writes:
>
>> I agree with the general opinion here that having virtual
>> as default feels like the more 'Pythonic' solution.
>
>There must be a misunderstanding. Alex used C++ virtual functions as
>an example where C++ went wrong, as an argument for why dynamicism
>should be the default. In Python, all functions are virtual: you can
>always override them in a subclass, and I am not going to change this!
>
>All I want is to disable changes to *existing* classes by default.
>You can write your own metaclass that changes the default, or you can
>inherit from a dynamic base class -- thus, with very little effort,
>you can make all your classes dynamic, if you want to.

But not at run-time? Can I make

cls.__dynamic__ = 1

at runtime and then, for example, patch the classes of a running system? Or
do I need to stop the system, make __dynamic__ changes and restart it
again?

>If it turns out that this is not sufficient for a wide range of
>applications, I'll reconsider the default; but I'd rather not, because
>there is a severe run-time penalty for the generally unneeded
>dynamicism.

Sincerely yours, Roman Suzi

Guido van Rossum

unread,
Aug 14, 2001, 12:45:00 AM8/14/01
to Roman Suzi, pytho...@python.org
[me]

> >All I want is to disable changes to *existing* classes by default.
> >You can write your own metaclass that changes the default, or you can
> >inherit from a dynamic base class -- thus, with very little effort,
> >you can make all your classes dynamic, if you want to.

[Roman]


> But not at run-time? Can I make
>
> cls.__dynamic__ = 1
>
> at runtime and then, for example, patch the classes of a running
> system?

No. If the class is immutable, you can't change its __dynamic__
attribute either, of course. :-)

> Or do I need to stop the system, make __dynamic__ changes and
> restart it again?

Yes.

Alex Martelli

unread,
Aug 14, 2001, 4:15:10 AM8/14/01
to
"Guido van Rossum" <gu...@python.org> wrote in message
news:cp3d6wf...@cj20424-a.reston1.va.home.com...
...
> No, in the new system, __getattr__ is called for all attributes. See

Oh! OK, that *does* change everything (and I *DO* mean
everything). If most of what I wrote in the post you
were answering didn't make sense to you, it may be simply
because it was based on how __getattr__ works today, as
I was considering the case of a "normal", "classic" class.
http://www.python.org/2.2/descrintro.html says the new
__getattr__ behavior only works for subtypes of built-in
types, not for normal classes...

For example:

> > Of course, if the initialization is done by __getattr__ anyway, it's
> > quite futile to use __new__ to "return an empty instance of the
> > class" -- the necessarily-empty __init__ does just as well, no?
>
> Sorry, I'm not following you at all here. I suggest that you try
> again.

Re-read this under the hypothesis that __getattr__ is only called
for attributes not found by other means (like today), and it
should be clearer: under this hypothesis, to ensure __getattr__
can trap every access, there can be no other attributes in the
class (if there were, __getattr__ wouldn't be called when those
attributes are accessed).
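
A tiny sketch of that classic behavior, where only failed lookups ever
reach __getattr__ (illustration only):

class Classic:
    color = "red"                    # found normally; __getattr__ never sees it
    def __getattr__(self, name):
        return 42                    # only reached when normal lookup fails

obj = Classic()
assert obj.color == "red"            # no __getattr__ call
assert obj.missing == 42             # lookup failed, so __getattr__ was called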


> value = _EmptyClass()
> value.__class__ = klass # The class it should be
>
> and then it initializes the instance variables.
>
> I claim that it's more elegant to write
>
> value = klass.__new__()

Sure, or value = new.instance(klass) today -- one line
is "more elegant" than two. (Avoiding module new was
in my case primarily motivated by the stern warnings in
the doc "care must be exercised when using this module"
etc -- not sure what klass.__new__() is supposed to
provide better than new.instance(klass), although it
does save a whopping 4 characters:-).
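
Side by side, the spellings under discussion would look something like
this (a sketch only: Fleep and its expensive __init__ are made up, and
the 2.2 spelling is left as a comment since it applies to new-style
classes only):

import new

class Fleep:
    def __init__(self):
        self.expensive = "lots of work"    # the work we want to skip when copying

# today, for classic classes:
bare1 = new.instance(Fleep)                # empty instance, __init__ not run

# the equivalent hand-rolled trick:
class _EmptyClass: pass
bare2 = _EmptyClass()
bare2.__class__ = Fleep

# proposed 2.2 spelling discussed above: bare3 = klass.__new__()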


> Zope's persistency support manages this without assignment to
> __class__. A persistent object has three states: ghost, valid, and

Yes, and AMK has, it seems to me, posted eloquently as to
how and why this is a kludge (or at least that's how I
read AMK's posts on this thread), made necessary by the
extension-class disabling (effectively) __class__ changes.


> I really doubt that Python 1.4 was a much worse language than 1.5, and

I wouldn't know first-hand, not having used Python 1.4 myself.
But if 1.5 was so feeble an improvement, why did you bother
creating it at all...?-)

> I doubt that Python's popularity has much to do with the ability to
> assign to __class__.

It's one of, say, 100 things that are different in Python compared
to less-dynamic languages, so it may account for (null hypothesis)
1% of somebody choosing Python rather than something less dynamic.
(Similarly, since, e.g., Ruby does have the equivalent feature,
then, if it's taken away from Python, it may make Ruby preferable
where now Python would be chosen -- here the differences are just,
say, 10, so the null hypothesis here would be to have the lack
account for 10% of the migration).

> Besides, I'm not so much forbidding it as resisting the work it takes
> to implement safely. You can write a metaclass that allows __class__
> assignment in C, and then its safety is your own responsibility.

I understand I can program everything and a half via metaclasses,
but that's nowhere near having a feature as a built-in part of
the language. I made just the same argument about the get/set
property-idioms you've now added: they could easily be implemented
in the past (with __getattr__ or metaclasses), but having them
readily available in the language itself *IS* still important --
I'm not going to repeat that part, but it applies just as well
to other features such as change-of-__class__.

> > But it's ironic -- there, I used an indirector to work around the
> > inability of inheriting from a built-in object. Now, I look forward
> > to pensioning off that idiom... and I may have to resurrect it to work
> > around the inability of changing an object's __class__...!-)
>
> I'd like to see which is more common -- the need to change __class__
> or the need to inherit from a built-in type. I'm betting the latter,
> by a large amount.

Most likely, yes. People with significant Ruby or Smalltalk experience,
having both features available, could perhaps offer opinions with
a more solid, first-hand basis.


> > I guess I'll have to look at the source to understand why that's
> > so inevitable -- right now it escapes me. (Is it going to be part
> > of 2.2alpha2, or do I have to get a CVS tree?)
>
> The architectural restrictions that make it hard to change __class__
> in some situations have been part of Python since it was first
> released. Unless I misunderstand what you're asking for, 2.2a1 should
> exhibit this just fine.

But 2.2a1 lets me change __class__ "in some situations" -- just
not in all of them. What you're now proposing (or so it appears
to me, and, it seems, to other discussants) is to take those "some
situations" away -- forbidding __class__ change except for the
last-ditch escape route of having a very special metaclass. And
I still don't see why this should be inevitable from the 2.2a1
sources. Even the restrictions on __class__ change you proposed
on another message would be better from my POV than a prohibition --
forcing the new class to be a subclass of the old one that adds
no slots (not so hard to check, not so hard to explain either).
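
Such a check might look roughly like this in Python itself (a
hypothetical helper, not anything present in 2.2a1):

def safe_class_change(obj, newclass):
    oldclass = obj.__class__
    # the restriction sketched above: newclass must derive from oldclass...
    if not issubclass(newclass, oldclass):
        raise TypeError("new class must be a subclass of the old class")
    # ...and must not add any slots, so the memory layout stays the same
    # (crude test; a real check would compare instance sizes instead)
    if newclass.__dict__.get('__slots__'):
        raise TypeError("new class must not add any slots")
    obj.__class__ = newclass
    return obj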

> > Of course, just as there are other ways of accomplishing raising-to-
> > power, besides having it as a built-in -- it IS just repeated
> > multiplication, after all. That's no argument for removing **
> > from the language, it seems to me. It IS no doubt rarely used
> > AND easily mis-used [code such as x**3+2.2*x**2+1.4*x+3.6
> > rather than 3.6+x*(1.4+x*(2.2+x)) is most likely slower] -- but
> > that's no real argument for taking ** away, as it may well be the
> > most natural/clear/readable way to express some problems, and
> > also (if and when used appropriately) faster than alternatives.
>
> This is the most bizarre argument for __class__ assignment that I have
> seen so far.

It's called an "analogy", and it's not so much an argument for
allowing class change as a demolition of your "argument" that
class change can be removed with no worries because "there are
other ways of accomplishing" similar goals.

Having shown that "there being other ways" is not a good reason
for taking the direct-way away from the language (even if the
direct-way may have performance impact for naive users that
don't understand things too well -- just as ** typically tricks
naive users away from the more-efficient Horner polynomial:-) --
I haven't made an argument FOR retaining the existing feature,
but I _have_ removed an argument for _removing_ it.
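
For reference, the two spellings of the polynomial contrasted above,
side by side (pure illustration):

def poly_with_pow(x):
    return x**3 + 2.2*x**2 + 1.4*x + 3.6

def poly_horner(x):
    # the same polynomial as nested multiplications (Horner's rule)
    return 3.6 + x*(1.4 + x*(2.2 + x))

assert abs(poly_with_pow(1.5) - poly_horner(1.5)) < 1e-12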


> > make users request virtuality explicitly. Like anybody who's ever
> > taught C++ or helped beginners in that language fix mistakes, I've
...
> Since you're invoking beginners here, let me bounce that argument
> right back. Do you seriously believe that assignment to __class__ is
> something you would want beginners to know about?

Programmers who are experienced in certain other languages but beginners
to Python -- sure! In particular, I've been evangelizing Python to
reasonably-experienced C++ programmers, and the ability to do away
with letter/envelope idioms has invariably drawn a LOT of interest.


> > > I expect that the new facilities will be seriously user-tested only
> > > after 2.2 is released, and experience will learn how often people are
> > > changing class variables.
> >
> > You think people will be changing class variables _less_ often than
> > now because of 2.2's new facilities? I don't see the relevance, but
> > I guess I must have missed something. The new facilities appear
> > to be very deep and important, but in areas quite different from
> > those in which class-object changes are useful, it seems to me.
>
> Your mind works so different from mine... It's frustrating for me to
> try and understand what you mean, and no doubt it's the same for you.

Yep -- when I look at the language you've designed and the code
you've written, I feel I understand you well and admire you, but
when I read the English discourse with which you explain things,
I often find it infuriating -- not so much ununderstandable, but
apparently irrelevant, or restating the obvious, or ignoring what
you're seemingly answering to, for example. [Good thing there's
Tim to channel you for the rest of us, sometimes!-)]


> I strongly believe that most Python users have no need for assignment
> to class attributes most of the time. Class attributes are mainly
> used for two purposes: (1) methods and (2) default initializations of
> instance variables. Neither use requires changing the class
> attribute. Overrides are done using subclassing or assignment to
> instance variables.

But subclassing gives me new and different objects -- it does not
change the behavior of all *existing* instances of the class. To
change the behavior of every instance by assignment to instance
variables, besides the circular-reference issue that Moshe Zadka
mentioned, I have to locate every instance of the class, which
requires substantial "plumbing" (a list of weak references, say),
needing design-foresight (and thus front-loading the design effort,
a very unpleasant need in methodologies such as XP) as well as
expending substantial amounts of memory (for the plumbing, AND
for the duplicated __dict__ entries in every instance).
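
A tiny illustration of that difference, as Python works today
(hypothetical class and attribute):

class Widget:
    color = "red"              # class attribute: the shared default

a = Widget()
b = Widget()

Widget.color = "blue"          # one assignment changes every existing instance
assert a.color == b.color == "blue"

a.color = "green"              # per-instance assignment shadows the class value
assert b.color == "blue"       # ...but only for that one instance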

I agree -- there's no need to assign to class attributes *most*
of the time. But knowing whether this is one such time requires
lots of foresight, a front-loaded design effort, designing-in
infrastructure in my application for potential future needs, and
other such costly/anti-XP practices. A smoothly-dynamic language
such as Python used to be is a great facilitator for lightweight
just-in-time-design methodologies such as XP -- I'd refactor
when needed but meanwhile I could try out new ideas *RIGHT NOW*...
because dynamic-configurability is the DEFAULT.

Another analogy: most system administrators don't need to take
network interfaces up and down dynamically, or reconfigure their
parameters, most of the time. But when the need arises, it's
*WONDERFUL* to be able to do so without having to reboot each
and every time. Having ifconfig and route and iptables &c thus
makes operating systems such as Linux vastly superior to ones
not _allowing_ for such dynamic reconfiguration -- because you
know you'll have the tools at hand on those rare but possible
situations where they ARE needed. Sure, a typical administrator
may need to use those tools dynamically and interactively maybe
once in a few months, just for emergencies, once system
configuration is well-tuned -- but they also make it much
easier and faster to REACH a well-tuned configuration at the
start, as well as providing a safety net just by existing.

Most pilots of personal airplanes have no need for parachutes
most of the time. It's still a good idea to have them around.

> I know some people like to use class variables
> instead of module globals, mimicking Java/C++ static class variables,
> but I believe that's mistaken -- those languages don't have a module
> namespace like Python, so the convention of putting globals in a class
> makes sense from a naming perspective there.

I'm not entering this specific debate -- I do agree with the
general sense that classes are overrated/overused as the basic
unit of organization, where modules might make more sense. Do
note, however, that C++ *DOES* have an excellent namespace
system -- Java, like Eiffel, may be totally class-centered,
but that's one criticism [or praise:-)] that you cannot fairly
level to C++ as a language.


> Every time someone uses the dynamism to patch a standard library
> class, I cringe -- it's asking for trouble.

I think I understand this specific concern -- in classic Python
terms, the class might become a type (for speed reasons) in some
new release tomorrow, and then the patching would stop working
(if what was an attribute held in the class's __dict__ becomes
a hard-wired 'slot' of the type).

But you don't need to have __dynamic__ *DEFAULT* to 0 to ward
off THIS concern, Guido: since YOU control the standard library
classes, you may choose to put __dynamic__=0 directly or
indirectly into each and every one of them if you wish. This
would be a close parallel to what the designers of Java did,
by tagging as "final" the standard library classes where they
did not want users to be able to extend them -- they didn't
need to have each and every class *DEFAULT* to 'final'!-)

You might also get as many grumblings for this as the Java
guys do from people who'd really love to extend, say, String,
but that's another issue -- they just forgot to add for each
'final' class an immediate ancestor, e.g. ExtendableString,
*without* the 'final'... similarly I'd hope Python would keep
allowing dynamic-versions of standard classes which have a
__dynamic__ of 0, although, if I understand you correctly,
this would need no special effort in the library -- users
can still subclass standard classes and redefine __dynamic__
in their own subclasses thereof.


Alex

Alex Martelli

unread,
Aug 14, 2001, 6:54:16 AM8/14/01
to
"Guido van Rossum" <gu...@python.org> wrote in message
news:cpk807e...@cj20424-a.reston1.va.home.com...
...

> "Alex Martelli" <ale...@yahoo.com> writes:
>
> > "Guido van Rossum" <gu...@python.org> wrote:
> > > http://www.python.org/2.2/descrintro.html
> >
> > I thought I had understood it, but still don't see where it mentions
> > "x.__class__ = whatever" now being forbidden or restricted?
>
> That one's easy. :-) Assignment to __class__ is not documented for
> classic classes either, so I didn't think it was necessary to mention
> it.

True. Indeed, __class__ is specifically (and falsely) mentioned
as being read-only (ever since 1.5). Wish I had noticed that much
earlier and submitted a doc bug about it -- oh well!-)


> > > It would be relatively easy to allow __class__ assignment only if (a)
> > > the new class is a subclass of the old class, and (b) the size of the
> > > new instance is the same as the old instance. Would this be sufficient?
> >
> > Does 'size' in this context mean 'number of slots'?
>
> Close enough. In combination with the subclass requirement, the
> "equal size" requirement means that the subclass doesn't add any
> slots, and that guarantees the safety requirement.

Which is what I assumed in the rest of my post.

> But without the subclass requirement, in order for the __class__
> assignment to be safe, things are more difficult. E.g. a list
> instance has two "slots" beyond the basic object header: a size and a
> pointer. Now this class:
>
> class C(object): __slots__ = ['a', 'b']
>
> also has two slots following the basic object header. But switching a
> C instance to a list or back would be a disaster, because the
> interpretation of the slots is different.

Yes, that's quite clear.


> > In this case, albeit
> > with somewhat peculiar contortions (requiring new.classobj or the
> > equivalent), I think it would be sufficient for all cases that come to
> > my mind -- I'd just have to put any extra attributes in the __dict__
> > (which I do today for *every* attribute anyway:-).
>
> Sorry, I think you're off on the wrong foot there.

Care to explain why? You don't, in the following.


> > The "generate empty object through __class__ assignment" trick
> > would also become sort-of-possible again (although of no practical

***************


> > interest whatsoever:-) -- to wit:

*******************


> >
> > def make_empty_copy(any_object):
> >     klass = any_object.__class__
> >     class Empty(klass):
> >         def __init__(self, *args, **kwds): pass
> >     newcopy = Empty()
> >     class Full(Empty):
> >         __init__ = klass.__init__
> >         __name__ = klass.__name__
> >     newcopy.__class__ = Full
> >     return newcopy
> >
> > the Full class is not really the same as any_object.__class__, but
> > nobody's gonna find out (presumably) since it's undistinguishable
> > under normal use of isinstance or any behavior-test whatsoever
> > (or have I forgotten to copy some needed attribute for that?).
>
> I don't think this is a useful kludge.

Do you READ the posts you're answering? I *did* say and you quoted
(and I've underlined it with asterisks above) that this specific
trick is "of no practical interest whatsoever" -- so why are you
paraphrasing what I just wrote as if you were saying something
different?

> And how is newcopy going to
> acquire all the other attributes of any_object.__class__?

That's up to the caller, just like it's up to the caller of
any_object.__class__.__new__() # in the new Python
or
new.instance(any_object.__class__) # in good old Python
which this useless trick sort-of-mimics. Isn't this obvious?


> > These classes Empty and Full are examples of what I mean above
> > as roughly 'equivalent' to new.classobj calls:-). I think I could
> > handle 'real' cases of class-change (worst case) through similar
> > means, i.e. generating on the fly a class that's basically what I
> > need but formally inherits from the object's original class so it
> > can be assigned. I realize this will not support _deleting_ any
> > method wrt the original class, but that's not a need in any case
> > that easily comes to mind.
>
> It would be easier to bite the bullet and write the metaclass (in C)
> that does the proper safety check.

Except that this requires ahead-of-need design of infrastructure,
or deep refactoring even just to try something out. Having a
working kludge to try things out (refactoring later if need be)
is much better for incremental, just-in-time design.


> > Yes, but, when servers DO have to stay up, fixing them on the fly,
> > albeit indeed fragile, is a specified constraint. I guess in some cases
> > one could devise alternate strategies: putting up a new fixed server
> > process on a different machine, port-redirecting all new requests to
> > the new machine, and finally pulling down the old buggy server when
> > the conversations for the requests it was serving at fix-time are gone.
>
> I'm assuming you have to plan for this anyway, since you'll need a way
> to doctor the server in the first place. So why not make your
> planning easy by having __dynamic__ = 1 somewhere?

No, I don't have to plan *in advance* for this functionality, *in
Python as it stands today*. This is thanks to Python's rich and
powerful dynamic features, of course.

Consider. I release PlikServerFramework (PSW) 1.0 -- programmers
who program on top of PSW can write their own plug-in classes and
register with a factory-registrar. I do not constrain plug-in
classes so they have to inherit from PSWobject -- I just specify
they have to provide a certain signature (certain methods with
certain signatures), since signature-based polymorphism is much
more general and handier.

Then later on I release PSW 1.5, which adds features (rarely do
new releases REMOVE features, though Python's an exception:-0).
[Note: this IS a quip, but actually I consider Python's removal
of 'wrong' old features a peculiar strength -- it's just when
the features slated for removal are 'good' ones that I worry:-)].

One of the new features is, a running server can be upgraded.
PSW-using programmers don't have to change any of their existing
PSW plug-ins -- they can just specify a few new flags, if they
wish, when they submit a plug-in to the factory-registrar, to
specify whether the new plug-in must override an old one and
to which extent (only for new connections, or even for ones
that already exist). Beautiful, isn't it? A programmer X may
be using some PSW plug-ins that were separately released by
programmer Y (and *of course* X doesn't want to fork Y's code,
that's a seriously dangerous anti-pattern) and get the new
PSW 1.5 benefits without worries. Cool!

Only, I can't do it any more in Python 3.0 (or whenever it is
that __dynamic__=0 becomes the default) -- I have to have the
foresight to specify infrastructure, that may come in useful
in future framework releases, right from the very first 1.0
release -- or else I can't offer these new functionalities
without requiring PSW users to modify existing plug-ins (which
might be a serious problem for plug-ins written by others).

This lowers Python's utility for such tasks to closer to that
of (e.g.) Java, and makes other languages (competing for the
same niche as Python, but clever enough to retain a richer
dynamic behavior, such as Ruby) more relatively attractive
compared to Python.


> > *blink* I had never thought Python's philosophy was protecting
> > "typical users" from themselves -- I thought that was the idea
> > of Pascal, Modula-2, &c, to Eiffel, the languages that know what's
> > good for you better than you know yourself, so they'll force you
> > to program the way Wirth (or Meyer) KNOWS is the one right
> > way to program. As I previously read your recent posts, I thought
> > that the __dynamic__ thing was about performance instead...?
>
> Here we go again. I mention one reason and it is assumed that this is
> the only reason. As I've said before, my mind works a lot faster than
> my fingers can type (it seems it's the opposite for you :-), and
> sometimes the rationalization for an idea comes only gradually.

My fingers are pretty fast indeed, but I seriously doubt if you're
qualified to judge on the working speed of my mind; although you're
of course free to disagree, it appears that many readers enjoy my
posts, which suggests a reasonable amount of working-mind in them.

Your (relative) inability to rationalize your (often excellent)
ideas is indeed well-known. But does that mean we should never
respond to what you say, because what you say is known to be a
probably-imperfect verbalization of reality?


> If you think about it, Python does a lot to protect typical users from
> themselves (why otherwise do you think it's gaining success as an
> educational language)? For example, an input line of a million bytes
> won't cause a buffer overflow.

Not in Java, either.

> For example, arithmetic overflow is
> not silently truncated.

Not in Java, either.

> For example, almost anything that could cause
> a core dump is caught before it does (and we fix the remaining core
> dumps in real time :-).

Ditto for Java.

> For example, mixing incompatible types in
> expressions causes a TypeError rather than having a random undefined
> side effect. For example, using an undefined variable name raises a
> NameError rather than silently being equal to zero. For example, the
> whole division thing. And so on.

"Better" still, in Java, mixing incompatible types is often
caught at compile-time, and using undefined identifiers is
_always_ caught at compile-time. Since typical users do not
test their programs anywhere like nearly enough, it's crucial
to provide compile-time catches for their typical mistakes,
particularly typos, isn't it?

So, if protecting "typical users" against their own mistakes
is a priority, it seems that Java fills that niche 'better'
than Python does.

So where's Python's strength? In *NOT* going as far as Java
in this 'protection' racket (notoriously a lucrative one, but
shady:-) -- in *selectively* protecting against stuff that's
worth protecting against, but no more than that -- and thus
not *getting in the way* the way Java does.

Say I'm using class X from some library. The library author
didn't think of providing instances of X with a 'fleep'
attribute, yet I need that attribute in X instances for
my use. In Java, I need much work -- I must subclass X
with my own Y, and arrange for Y instances to be generated
instead of X instances (assuming that's possible -- if
the instances are generated from within some framework,
I'm SOL if the framework author didn't make the factories
user-overridable). In Python, it's a breeze -- if nothing
else, I can add a fleep to X instances on the fly; but,
better still, I can also (in Python as it stands today)
provide a *DEFAULT* value of fleep for X instances for
which it's not been explicitly set otherwise, just by
doing X.fleep=23.

Ooops -- now, in the new Python, I'll be SOL again (to
some extent) if the framework author lacked foresight:
I'll still be able (presumably, and hopefully) to add
an instance-attribute fleep to specific instances, but
rather than the handy default X.fleep I'll have to
surround every use of x.fleep for some instance x of
X with hasattr tests or try/except statements, so as
to be able to set x.fleep to the default 23 if fleep
was never set for this specific x as yet. A pity.
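
A sketch of the two situations (X and fleep are of course made-up
names):

class X:            # stands in for a class from somebody else's library
    pass

x = X()

# today: one assignment gives every X instance a default value
X.fleep = 23
assert x.fleep == 23              # x never set fleep itself

# if the class object were frozen, each use would need its own fallback:
value = getattr(x, 'fleep', 23)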

In some cases I'll be lucky -- if X instances are
usable as dictionary keys, for example. Or maybe I
can concoct a grandiose scheme for "non-invasive
fleep adjunction" based on weak references and id().
Again, these kludges are roughly equivalent to what
Java provides -- if X is hashable I can use that in
Java too, and, in Java too, I can (and have to)
concoct grandiose weak-references-based schemes.

I'd rather not have to, for such an unfortunately
frequent task as "customize to my needs a library/
framework whose author didn't have the foresight
to provide the customizability features I need".

I wish you worked for a few months with, say, the
Microsoft MFC library/framework: I think you'd
come out of the experience with a new, powerful
respect for the way Python (accidentally, it seems)
lets one naturally and effortlessly avoid such
huge wastes of effort.


> > Can we expect variable declarations a few minor releases from
> > now, then? That would presumably be consistent with the new
> > focus on protecting typical users from themselves.
>
> I'm presuming you're being sarcastic. We may indeed see *optional*
> variable declarations -- but not required ones. But realistically,
> even the optional declarations seem far away -- the types-sig is only
> active for about 6 weeks per year, and most of that time is used
> rediscovering where we were a year ago... :-(.

Definitely not a recipe for fast progress.


> > "Of course" as long as the extra devices don't significantly
> > interfere with the tools' previous strengths in terms of cost and
> > power. When the significant interference is there, there is no "of
> > course" about it -- it becomes a highly problematical trade-off.
>
> Exactly. What's going on here is that I'm trying to tease out how far
> I should go with the safety device without making the tool unusable.

*unusable* is a *VERY* strong word. Java is not "unusable".
Neither is C++, or, for that matter, C, or Fortran IV. In
each case, this is proven by a lot of important and very
successful software systems written and deployed in the various
languages. So, a language completely bereft of dynamic
features, or very poor in them, can still to some extent
be 'usable' -- just not as useful as it would be with
simpler and richer dynamic features, such as Python's today.

I don't think your test should be, whether removing features/
"adding safety devices" makes Python "unusable". Making it
"less useful than before" should hopefully be enough to block
the feature-removal/safety-addition.

> The new classes in Python 2.2 (which are entirely optional -- by
> default you get the same classic classes as in 2.1 and before) are one
> step of a new design. We'll see how it needs to be tweaked.

The problem from my POV is therefore not with Python 2.2 -- if
the default still allows the dynamic aspects of today's Python,
providing a way for such aspects to be explicitly turned off
is not a disaster from my point of view. Just like Java's "final"
keyword to turn off class extensibility or method overridability,
it can be abused, but it takes positive malice (what's the
opposite of 'forethought'?-) to abuse it, as it's NOT the
default. I doubt you remember it, but a potential feature
I've often chatted about in this group is the ability to
explicitly 'lock' a dictionary in various ways (and I'm
tickled pink that I'll now be able to experiment with it
by subclassing built-in dictionaries!-) -- that's optional
explicit stopping of dynamic behavior, too, and without
even the positive performance side-effects that your new
ideas have. The keywords from my POV in the sentence
"optional explicit stopping of dynamic behavior" are:
1. most important: OPTIONAL -- there must still be
a way to get the good old dynamic behavior
2. very important: EXPLICIT -- ideally, dynamic
behavior should be the default, with a way to
EXPLICITLY turn it off, rather than vice-versa

"Optional, explicit stopping" would mean a smooth progress
towards a Greater And Better Python. Changing defaults,
or, worse, removing some of today's dynamic aspects, even
though they had "just slipped in by mistake", has darker
characteristics... that 'mistake' was serendipity, the
Goddess Fortune smiling upon you. Don't turn your back
to the Goddess when She smiles -- She's pretty fickle!-)


Alex

Peter Hansen

unread,
Aug 14, 2001, 8:31:16 AM8/14/01
to
Alex Martelli wrote:
>
> So, if protecting "typical users" against their own mistakes
> is a priority, it seems that Java fills that niche 'better'
> than Python does.
>
> So where's Python's strength? In *NOT* going as far as Java
> in this 'protection' racket (notoriously a lucrative one, but
> shady:-) -- in *selectively* protecting against stuff that's
> worth protecting against, but no more than that -- and thus
> not *getting in the way* the way Java does.

Hear hear! I have a bevy of "typical" Python programmers
(let's say that means fairly new to Python, not new to
programming, but in any event they all have brains,
as most "typical" Python programmers do :) and I have
*not* felt the slightest need to protect them from
themselves. In fact, my choice of Python for this
team versus something like Java is based largely on
the lack of implicit protection and on the default
dynamicism which Python provides, perhaps almost as much
as it is on Python's vaunted readability.

> The keywords from my POV in the sentence
> "optional explicit stopping of dynamic behavior" are:
> 1. most important: OPTIONAL -- there must still be
> a way to get the good old dynamic behavior
> 2. very important: EXPLICIT -- ideally, dynamic
> behavior should be the default, with a way to
> EXPLICITLY turn it off, rather than vice-versa
>
> "Optional, explicit stopping" would mean a smooth progress
> towards a Greater And Better Python. Changing defaults,
> or, worse, removing some of today's dynamic aspects, even
> though they had "just slipped in by mistake", has darker
> characteristics...

I agree completely that this change, ostensibly being
made (a) to protect "typical" programmers, and (b) to
make Python faster, should be made in such a way that
it is both optional and not the default.

No "typical" programmer uses this feature anyway. Nor
would one accidentally bump into it and need to be
protected from doing so, even if it were really a
good thing to protect typical Python programmers.

As far as optimizing/tuning performance, I think the
best points made on this subject have been (a) Python
can't possibly compete with many other languages and
shouldn't feel the need to and (b) make it an _option_
to select better performance, the same way it is an
option even to try optimizing one's code. Don't try to
push optimized Python down my throat; I'm quite happy
having an option to improve performance, which I
can choose not to exercise (or ignore without risk).

I should be able to say, after everything is working
and I'm long past the point of needing some dynamicism,
hmmm, this part of my code clearly will never need
this dynamicism, _and_ it needs to run faster, so I'm
going to disable that capacity, to improve performance.
Please don't make it so I have to think, for *all* the
code I write, about whether or not I will ever need this
feature: you'll just slow me down.

Terry Reedy

unread,
Aug 14, 2001, 10:20:15 AM8/14/01
to

"Alex Martelli" <al...@aleax.it> wrote in message
news:9lb0k...@enews1.newsguy.com...

> "Guido van Rossum" <gu...@python.org> wrote in message
> news:cpk807e...@cj20424-a.reston1.va.home.com...
> > Exactly. What's going on here is that I'm trying to tease out how far
> > I should go with the safety device without making the tool unusable.
>
> *unusable* is a *VERY* strong word. Java is not "unusable".

I agree that 'unusable' is way too strong. The bottom line is
preventing core dumps or otherwise trashing memory. Given that
instances will no longer all have the same memory layout, some
restriction on class reassignment is necessary. But I would stop with
that.

Terry J. Reedy

Ng Pheng Siong

unread,
Aug 14, 2001, 10:46:11 AM8/14/01
to
According to Erik Max Francis <m...@alcyone.com>:

> Ng Pheng Siong wrote:
> > self.state = 1 - self.state
>
> Why not just
>
> self.state = not self.state

Good point. "x = 1 - x" was what I used back in the days when I first
encountered programming and Basic. Does "x = not x" work in Basic? I no
longer remember... (Presumably so.)


--
Ng Pheng Siong <ng...@post1.com> * http://www.post1.com/home/ngps

Quidquid latine dictum sit, altum viditur.

Alex

unread,
Aug 14, 2001, 11:10:59 AM8/14/01
to

> meanwhile-perl6-is-looking-better-all-the-time'ly-yrs,

Is perl 6 available for download somewhere?

Alex.

Aahz Maruch

unread,
Aug 14, 2001, 11:23:20 AM8/14/01
to
In article <9l9d4...@enews4.newsguy.com>,

Alex Martelli <ale...@yahoo.com> wrote:
>
>Yes, but, when servers DO have to stay up, fixing them on the fly,
>albeit indeed fragile, is a specified constraint. I guess in some cases
>one could devise alternate strategies: putting up a new fixed server
>process on a different machine, port-redirecting all new requests to
>the new machine, and finally pulling down the old buggy server when
>the conversations for the requests it was serving at fix-time are gone.
>
>But that constrains your deployment possibilities enormously, and it
>has its own huge fragilities (e.g., the server must be architected so
>that multiple instances, old and new, can update the same database
>without tripping on each other's feet -- just for starters).

After spending 1.5 years on a medium-sized project (roughly 50K lines of
mostly Python code), I've come to the conclusion that if your server
architecture is designed for only one server instance, there's something
wrong with the architecture. That goes triple or quadruple if 100%
uptime is a stated goal. We got bit several times because we violated
that in our initial design, and most of our 2.0 work was fixing that.

Fortunately, the one truly uptime-critical part of our application *was*
initially designed to have multiple servers, and for precisely that
reason.

Patching running code is just so ... so ... mainframe. ;-)
--
--- Aahz <*> (Copyright 2001 by aa...@pobox.com)

Hugs and backrubs -- I break Rule 6 http://www.rahul.net/aahz/
Androgynous poly kinky vanilla queer het Pythonista

Internet $tartup$: Arbeit ueber alles

Neil Schemenauer

unread,
Aug 14, 2001, 12:21:51 PM8/14/01
to pytho...@python.org
Guido van Rossum wrote:
> All I want is to disable changes to *existing* classes by default.
> You can write your own metaclass that changes the default, or you can
> inherit from a dynamic base class -- thus, with very little effort,
> you can make all your classes dynamic, if you want to.

This sounds reasonable to me. In my experience, while assigning to
__class__ is cool, there is usually a good alternative that achieves the
same effect.
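
One such alternative -- a guess at what I mean, not a recipe -- is to
keep the changeable behaviour in an ordinary attribute instead of in
__class__:

class GhostState:
    def describe(self):
        return "not loaded yet"

class ValidState:
    def describe(self):
        return "loaded"

class PersistentThing:
    def __init__(self):
        self.state = GhostState()      # swap this attribute instead of __class__
    def describe(self):
        return self.state.describe()   # delegate to whatever the current state is

p = PersistentThing()
p.state = ValidState()                 # the "class change", without touching __class__
assert p.describe() == "loaded"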

Neil

Neil Schemenauer

unread,
Aug 14, 2001, 12:29:25 PM8/14/01
to pytho...@python.org
Alex Martelli wrote:
> > Zope's persistency support manages this without assignment to
> > __class__. A persistent object has three states: ghost, valid, and
>
> Yes, and AMK has, it seems to me, posted eloquently as to
> how and why this is a kludge (or at least that's how I
> read AMK's posts on this thread), made necessary by the
> extension-class disabling (effectively) __class__ changes.

You can have the ability to efficiently subclass builtin
types or the ability to assign to __class__. Pick one. I would choose
the first any day. The effect of the second can usually be achieved in some
other way.

Regarding Andrew's problem with extension-class disabling __class__
changes: I worked around it. It wasn't too bad.

Neil

Marcin 'Qrczak' Kowalczyk

unread,
Aug 14, 2001, 1:42:56 PM8/14/01
to
Sun, 12 Aug 2001 01:35:16 GMT, Guido van Rossum <gu...@python.org> writes:

> I guess I have a bit of a hidden agenda: Python is more dynamic than
> the language I *wanted* to design. Some of the dynamicism was simply
> an implementation trick. Some of the dynamicism is getting in the way
> of optimizing code, because the optimizer can never prove that certain
> variables won't be changed. So I'm trying to look for ways that pin
> down things a bit more. I'm making assumptions about how "typical"
> Python code uses the dynamic features, and I'm slowly trying to
> introduce restrictions in the language that make the optimizer's life
> easier without affecting "typical" code.

This will not be directly relevant for Python, although maybe somebody
will find a use of parts of this. I need some help with making things
less dynamic.

Crossposting to comp.lang.misc. I really wanted to talk with Python
people, so followups are set to comp.lang.python; feel free to change
to comp.lang.misc. I'm sorry for the off-topic. Python people just
appear to have good minds for these things.

Many years ago I wanted to design an ideal programming language.

Later I learned many more existing languages, realized how naive my
ideas were, how languages for different purposes should be different,
what diverse principles languages are based on, and that the technical
details of the language core don't matter that much in practice -- other
reasons are far more important.

I was no longer working on my own language, and even discouraging some
people from repeating my mistakes. More important for practice and also
fun is helping teams maintaining existing language implementations,
which often drive language design of non-mainstream languages. I
adopted Haskell for the purpose of being called "my language",
and recently watched closely Python development and took part in
discussions about it.

But my old dreams are crying, wanting to become true. I couldn't
resist. Now, with different feelings, I quickly designed and
implemented a little language, only for enjoying working on it.

[...]

The language is similar in spirit to Lisp and Python, with some of ML,
Haskell and Smalltalk added. Design goal: simple core, still reasonably
convenient and readable (unlike pure lambda calculus or Forth for
example) but more idealistic than pragmatic, with interesting stuff
implemented in the language itself. Here is a description.

The main syntactic entity is an expression. An expression denotes
a function. A function takes a sequence of functions as arguments,
performs some action, and returns a value (or throws an exception
which is also a value). There are no static types but scoping is
completely static.

It follows that all arguments are passed by name (arguments are
functions themselves). Most functions however begin with evaluating
their parameters once (applying them to empty sequences) and
remembering the results. This effect is obtained by the default syntax
of function definition. It's thus equivalent to passing arguments by
reference. Since all values are immutable, the default effect is also
equivalent to passing them by value.

Values are either functions (as described above), or terms, or
boring primitives (integers and strings for now). A term has a symbol
(an unique tag for distinguishing a family of terms) and a sequence
of values.

That's all of the execution model. More sophisticated objects,
including imperative features, are expressed as functions, perhaps
builtin. Objects don't have intrinsic identity.

[...]

I have a prototype interpreter written in Haskell:
24 lines of abstract syntax,
191 lines of lexer,
114 lines of parser,
45 lines of the runtime system,
423 lines of builtins (evolving much),
226 lines of the interpreter of the core,
60 lines of the main program (read-eval-print loop),
379 lines of the standard library (evolving much),
38 lines of the Makefile,
----
1500 in total.

The standard library introduces the concept of a type. The type of an
object is used to choose implementations of generic functions applied
to it.

A type is represented as a symbol. Here is how the type of an object
is determined:
- If it's a function, it's called with one argument: the symbol Type,
and is expected to return its type.
- If it's a term, its symbol is looked up in the dictionary of types
of symbols.
- Types of integers and strings are fixed to Integer_t and String_t.

Moreover there is a dictionary which maps types to tuples of things
called its supertypes (a tuple is a term with the symbol being Tuple).
Perhaps they should be functions returning tuples, such that supertypes
can depend on the considered object (e.g. some proxy would have
type Proxy itself but its supertypes would include the type of the
wrapped object).

A generic function is created from a dictionary which maps types to
functions. When such function is called, it determines the type of
its first argument and looks up the implementation in the dictionary.
If it's not found for the given type, its supertypes are examined
recursively (depth first search). If it's still not found and the
first argument is a function or term, a generic type Function_t or
Term_t is tried. Finally Object_t is tried; an entry at Object_t is
the default implementation for all objects not having their own.

I think this is similar to how CLOS expresses methods. Unfortunately
I have almost no experience in Lisp and I never read complete Lisp
references. But I'm sure that my language has much in common with Lisp.

I had a chicken-and-egg problem here. Hashing and equality are good
candidates for generic functions. They are needed for implementing
generic dictionaries. In order to look up types in the dictionary,
it must be determined how types are hashed, so the hashing function
should be extracted from the relevant dictionary at Type_t. How to do
it? The type of Type_t is Type_t, so one has to determine how to hash
Type_t by looking up a dictionary at Type_t, oops! Even to determine
that the type of Integer_t is Type_t one has to look up Integer_t in
a dictionary, which needs its type to hash it...

I solved this by using dictionaries specialized for symbols (they
use no generic functions) and declaring that types must be symbols
and not arbitrary objects (unless they are not used by the standard
overloading machinery, or at least a different kind of dictionary is
used for their interfaces).

Example, how equality is defined:

prim_equal <- :(==)
# Save the builtin definition of (==) for using it below (the builtin
# version compares only symbols, integers and string).

equal_dict <- const dict_of_symbols
(==) <- method equal_dict

# A generic indexing operator is not defined yet at this place and
# operator (.) works only for terms now, so I'm using an ugly direct
# function call to set the entry in the dictionary (which is
# implemented as a function):
equal_dict! Set Term_t [x y]
# [x y] starts a lambda abstraction.
is y Term_t & prim_equal (symbol x) (symbol y) &
len <- const (length x)
(len == length y) &
check i = (i >= len) | (((x.i) == (y.i)) & check (i+1))
check 0

equal_dict! Set Integer_t: prim_equal
equal_dict! Set String_t: prim_equal

End of example. Further types may insert additional entries.

The overloading machinery is similar to Haskell classes. Not only
further types may insert additional entries, but existing types may
be put into new dictionaries of new overloaded operations. Thus an
interface of a type is extensible.

There is a problem I don't know how to solve: I want the language
to have a potential of efficient implementation. It works only to
some extent...

Static binding means that identifiers like length and (&) refer to
statically known functions. They are not looked up in a dictionary at
runtime. An optimizing compiler can see their definitions and inline
their calls. The fact that my prototype implementation looks them up
by names at runtime is irrelevant :-)

It's important that a compiler can be sure that
x <- var 0
refers to the standard function called var which creates a variable, so
also uses of x (getting and setting its value) can be inlined as well.
It's also important to really pass by value what would be evaluated
anyway.

Note that evaluating x as an expression reads the current value of this
variable. Every identifier is bound to a function, and mutable variable
is implemented as a function (a constant too). The syntax of calling
a function bound to an identifier doesn't need parentheses: just the
name and parametres separated by spaces. The <- construct evaluates
the rhs which must return a function which is bound to the identifier.

One can't assign to length for example. This function is not a
variable, won't understand the assignment message, and certainly won't
ever change its internal state (it doesn't have any): it will always do
the same thing. That's why calls to length can in principle be inlined.

Unless the standard library defines a generic length which will be
used instead of the builtin one - and it surely will! The problem
is with overloading. The dispatching code could be statically analysed
and inlined, but not the result of the dispatch.

There is in practice no way a compiler can infer that (==) defined
as above, applied to integers, really uses the above definition. At
any time somebody could replace the equal_dict.Integer_t entry with
something entirely different.

Even if dict_of_symbols disallowed overwriting existing values and
a compiler were taught about it, it doesn't work for example in
the following case: equality on Tuple_t uses the generic definition
for Term_t, but someone later inserts an explicit definition for
Tuple_t. Or adds a supertype of tuples and an implementation of length
for them which is searched before Term_t. Or whatever.

I don't know how to solve it. It seems a silly problem, but the
prototype implementation is really slow. Even though I don't intend
to let this language have serious uses, I'm worried that I can't
design it well. Having a few mandatory dictionary lookups on using
(==) on integers is not a good option.

The intent of code calling (==) on integers is to use the concrete
standard implementation! The intent is to not allow somebody to define how
integers are compared, and the compiler should in theory take advantage
of that. I already captured the intent for non-overloaded functions.

It could be solved in an ad-hoc way by special-casing important
operations, but I want a general solution. And a solution which would
not complicate the core language.

The core and syntax are really simple. No reserved words at all and
only the following symbols have special meaning (besides literals
and comments):

; sequencing, separation of declarations
= recursive binding
<- side-effecting binding
: reference, i.e. a special case of lambda abstraction
[ ] lambda abstraction
, insert all values of the tuple as arguments
! get the function to which it evaluates
( ) grouping

Even the while loop is defined in the library. The syntax feels
a bit Pythonic, thanks to using layout:

i <- var 0
while (i < 10):
print i ", "
i :+ 1

There are two arguments of while here: a condition (which is evaluated
multiple times) and a lambda abstraction (which is evaluated once,
but the function it evaluates to is the body of the loop and is called
on each iteration).

Of course :+ is also just an operator, defined thus (after + is
defined to do the right thing):
x! :+ y = x! := (x! + y)
The exclamation mark turns off the evaluation of the argument on entry,
such that the identifier x returns a function which was denoted by the
expression passed as the first argument, instead of its result. Passing
variables by reference is thus unified with delayed evaluation.

Parentheses are necessary here because there is no operator precedence
and operators associate in the opposite direction. You could save the
parentheses by putting the rhs indented in the lines which follow.

Even assignment is defined in the standard library:
x! := y = x! Assign y
Finally the Assign symbol is born as a builtin because it's referred to
by another builtin, namely var, and var is a builtin because creation
of new mutable references can't be defined in terms of something
more primitive.

The core is so simple that I don't want to introduce a builtin concept
of polymorphic dispatching. The whole fun is bootstrapping something
powerful from small. But I still want it to be potentially efficient,
so the compiler can often infer what function to call. Help!

--
__("< Marcin Kowalczyk * qrc...@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK

Andrew Dalke

unread,
Aug 14, 2001, 7:05:00 AM8/14/01
to
Guido:

>No, in the new system, __getattr__ is called for all attributes. See
>my response to Roman.

I hadn't noticed this __getattr__ change on my previous readings.

I confess I couldn't find that response - perhaps my newsserver
dropped it? But I do see a post of yours with no attribution of
who you are responding to, which says:

> new-style classes allow you to
> overload __getattr__ for all attribute accesses -- classic classes
> only call __getattr__ when "normal" attribute access fails, which is
> less flexible.

I want to point out that the current ("old-style") behaviour is
very useful for caching results. I have written several classes
of the form

class Obj:
    def __getattr__(self, name):
        if name == "prop1":
            compute prop1
            self.prop1 = prop1
            return prop1
        elif name == "prop2":
            ...
        raise AttributeError(name)

For example, one is a system where the "compute prop" calls a
C function, and another version talks to a database to get the
requested information.

In both cases, I liked the simplicity of implementation and the
fact that once cached there is no performance loss.

With the new-style __getattr__ this means the code will be
slightly more complicated, as in

def __getattr__(self, name):
    if name in self.__dict__:
        return self.__dict__[name]
    ... code as usual ...

and definitely slower.

Yes, I am concerned about the performance. Instead of using
__getattr__/__setattr__ to mimic attribute lookup I could also
have done C++/Java style accessor methods

def getProp1(self):
    ...
def setProp1(self):
    ...

I happen to think accessor functions look ugly - things that
are attributes should look like attributes. I've been able to
argue against them because __*attr__ exists partially because I
can say "and the results can be cached with no performance loss."
But now it appears that getProp1()/setProp1() methods will never
be slower than __getattr__, so I will only have an esthetic
argument to justify my preference.

I do understand that not having a __getattr__ hook for all attribute
lookups is less flexible. I just note that I've much more often
used this caching lookup ability than needed that flexibility. In
fact, I'm hard pressed to think of a case where I have needed more
flexibility.

>I'd like to see which is more common -- the need to change __class__
>or the need to inherit from a built-in type. I'm betting the latter,
>by a large amount.

I've derived from UserList and UserDict many more times than I've
changed __class__. I've also derived from them many more than I've
used the dis, netrc, or mailcap modules, or used complex numbers
or unicode, or specified a PYTHONSTARTUP variable for interactive use.
Should those features also be removed?

Andrew
da...@dalkescientific.com

Alex Martelli

unread,
Aug 13, 2001, 5:47:46 PM8/13/01
to
"Michael Robin" <m...@mikerobin.com> wrote in message
news:52e5ab5f.01081...@posting.google.com...

> "Alex Martelli" <ale...@yahoo.com> wrote in message
news:<9l4a9...@enews4.newsguy.com>...
> > "Glyph Lefkowitz" <gl...@twistedmatrix.com> wrote in message
> > news:mailman.997561650...@python.org...
> > ...
> > > Also, as far as I know, better alternatives do not exist; for example,
> > > "promise" objects from a database which are latently initialized when
they
> > > are accessed. One module that I use *all the time*;
> >
> > I don't know of any better way to handle the Promise design pattern,
> > either. In C++, I'm having to kludge around it all the time via
> > letter/envelope idioms -- the possibility of changing classes on
> > the fly makes it much simpler and more direct. Hadn't thought
> > of that earlier...
>
> Can't you use a proxy object and override __call__ and friends?
> (In a sense, that's what Smalltalk does with the Object Table,
> vs. Python which uses a non-changing address for the object.)
> Or am I missing something...

Yes, I can use an 'indirector' (that's basically what letter/envelope
is: Coplien, "Advanced C++", an oldie but goldie). I'd rather not
have to code it up -- a weak reference (an almost-transparent,
built-in indirector) would do, if it just wasn't _weak_:-).


Alex

Alex Martelli

unread,
Aug 14, 2001, 4:14:26 PM8/14/01
to
"Neil Schemenauer" <n...@python.ca> wrote in message
news:mailman.997806503...@python.org...

> Alex Martelli wrote:
> > > Zope's persistency support manages this without assignment to
> > > __class__. A persistent object has three states: ghost, valid, and
> >
> > Yes, and AMK has, it seems to me, posted eloquently as to
> > how and why this is a kludge (or at least that's how I
> > read AMK's posts on this thread), made necessary by the
> > extension-class disabling (effectively) __class__ changes.
>
> You can have the ability to efficiently subclass builtin
> types or the ability to assign to __class__. Pick one. I would choose

Or pick (e.g.) Ruby and get both. What makes you think it HAS
to be "pick one" when many languages have neither (C++, say)
and several have both (Ruby, Smalltalk [with 'becomes'], Dylan...)?

> the first any day. The effect of second can usually be achieved in some
> other way.

So can "the effect of first", or, how have we ever been able to
live with Python for so long? If I *had* to choose, because some
evil witch hexed me to, I guess I'd choose subclassing builtins,
too -- more frequent opportunities. But surely *some* ability
to keep a subset of the existing class-changing possibilities is a
possibility -- Guido himself mentioned "a subclass that doesn't
add slots" as one which could be an easily-checked safe case to
assign in lieu of an existing instance's __class__, for example.


Alex

Markus Schaber

unread,
Aug 14, 2001, 5:02:32 PM8/14/01
to
Hi,

Kirby Urner <ur...@alumni.princeton.edu> wrote:

> This would be like making tuples a root type, with lists a subclass
> of tuple -- they inherit the functionality of tuples, plus add
> mutability and a host of other things.
>
> In this sense classes would come in two varieties, immutable (default)
> and mutable (different type of class).

I don't think it is a good thing to make mutable subclasses of
immutable classes.

When we think of classes as defining common behaviour of a set of
objects, then subclasses should

--
1) Customers cause problems.
2) Marketing is trying to create more customers.
Therefore:
3) Marketing is evil. (Grant Edwards in comp.lang.python)

Marcin 'Qrczak' Kowalczyk

unread,
Aug 14, 2001, 6:21:15 PM8/14/01
to
Tue, 14 Aug 2001 22:14:26 +0200, Alex Martelli <ale...@yahoo.com> writes:

> What makes you think it HAS to be "pick one" when many languages
> have neither (C++, say) and several have both (Ruby, Smalltalk
> [with 'becomes'], Dylan...)?

IIRC Smalltalk's 'becomes' is a horrible hack: it used to be easy to
implement in the system where it first appeared, because every object
was reached through an extra indirection, but in modern implementations
it requires scanning the whole world and changing all references to the
given object.
And it causes a disaster when an important object like nil is replaced.

So maybe it is possible, but it's certainly a bad fit for Smalltalk's
object model, discouraged and maybe deprecated.

Disclaimer: I never used Smalltalk and may remember wrong.

Guido van Rossum

unread,
Aug 14, 2001, 6:48:40 PM8/14/01
to
Fortunately, Alex's message scrolled off my screen before I could
capture it, so I don't have to reply point-by-point. :-)

After talking to some more people and running an experiment
(converting Jeremy's compiler package to use the new class system), I
am convinced that both dynamic class features under attack recently
are important. Making __dynamic__ the default slows things down by
about 25% for one particular benchmark (compiling the compiler with
itself), so I'll have to work on that.

I've also figured out the correct rule for allowing safe assignment to
__class__. The underlying rule is that instances of the old and new
class have the same memory lay-out. A necessary and sufficient
condition for this is (in our case) that the two classes have
instances of the same size, and that there is a common base class that
also has that same size. In addition, the GC flag bits must be the
same.

Now I'm going to withdraw from Usenet again. I prefer python-dev as a
platform for discussing language changes -- I get the same results
without being flamed, and I waste much less time.
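
Roughly what that rule means in practice, as a toy example with invented classes (the exact behaviour and error message are implementation details):

class A(object):
    pass

class B(A):                        # adds behaviour only; instances keep A's layout
    def hello(self):
        print 'hello'

class C(A):
    __slots__ = ['extra']          # changes the instance layout

a = A()
a.__class__ = B                    # same size, common base: the safe case
a.hello()
try:
    a.__class__ = C                # different layout: the kind of assignment to be refused
except TypeError, e:
    print e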

Peter Hansen

unread,
Aug 14, 2001, 7:26:55 PM8/14/01
to
Ng Pheng Siong wrote:
>
> According to Erik Max Francis <m...@alcyone.com>:
> > Ng Pheng Siong wrote:
> > > self.state = 1 - self.state
> >
> > Why not just
> >
> > self.state = not self.state
>
> Good point. "x = 1 - x" was what I used back in the days when I first
> encountered programming and Basic. Does "x = not x" work in Basic? I no
> longer remember... (Presumably so.)

I don't really remember, but like you I used that only back
in the days of BASIC. I think BASIC would often use -1 for
TRUE, so I don't remember whether x = 1 - x really produced
values that were testable as booleans, or whether we always
had to make it explicit with "IF x = 1 THEN GOSUB 1000".
Does that mean BASIC had the "better explicit than implicit"
part of Python years before? Guido stole from BASIC! The nerve!
;)

Martijn Faassen

unread,
Aug 14, 2001, 9:59:27 PM8/14/01
to
Andrew Dalke <da...@acm.org> wrote:
[snip]

> I want to point out that the current ("old-style") behaviour is
> very useful for caching results.

Just wanted to chip in to say that this style of caching is pretty nice;
I just used it in a project of mine in the same way.

Some alternative in the future system to regain the speed benefits of
this would be nice.

Regards,

Martijn
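
A rough sketch of the caching idiom being referred to, with invented names: the object does its expensive work once, then flips itself into a class whose methods simply return the cached result.

def expensive_parse(filename):             # stand-in for the real work
    print 'parsing', filename
    return {'size': 42}

class LazyFile:
    def __init__(self, filename):
        self.filename = filename
    def data(self):
        self._data = expensive_parse(self.filename)
        self.__class__ = LoadedFile        # cache the result by changing class
        return self._data

class LoadedFile(LazyFile):
    def data(self):                        # all later calls take this cheap path
        return self._data

f = LazyFile('big.xml')
print f.data()                             # parses once
print f.data()                             # served from the cache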

Alex Shindich

unread,
Aug 14, 2001, 10:32:40 PM8/14/01
to
As always, Mr. Martelli's posts are long and full of fascinating
detail. This time though, I have managed to read it all the way
through without forgetting the beginning of the post. In fact, I am
fully and entirely in support of Alex's position! I do care for
the new features in PEP 252, but I think that the default behavior
should not change. Here are my reasons:
1. I do believe that "premature" optimizations are evil. They usually
lead to really clumsy design decisions with very little performance
gain.
2. I am worried about incompatibility with older versions of Python.
As a person who does have some amount of production code written in
Python, I am worried about the effort of making my code compatible
with this change. Even though I have unit tests for most of my code,
it is still an effort to fix all the places where I need to turn on
"dynamism".
3. Most Python developers did not choose Python over C++ for speed.
Python is attractive because it is a very clean OO language with
dynamic behavior. Making Python less dynamic and "allegedly"
faster will not attract more developers. Making it less dynamic will
be a turn off for some members of the existing Python community.

Just thought I would offer my 2 cents on this issue.

Aahz Maruch

unread,
Aug 15, 2001, 1:27:23 AM8/15/01
to
In article <etdvgjq...@w20-575-7.mit.edu>, Alex <new_...@mit.edu> wrote:
>
>> meanwhile-perl6-is-looking-better-all-the-time'ly-yrs,
>
>Is perl 6 available for download somewhere?

Is joke. Perl6 is still in the design stages; alpha is probably a year
or two (maybe even three) away.


--
--- Aahz <*> (Copyright 2001 by aa...@pobox.com)

Hugs and backrubs -- I break Rule 6 http://www.rahul.net/aahz/
Androgynous poly kinky vanilla queer het Pythonista

"I used to have a .sig but I found it impossible to please everyone..." --SFJ

Robin Becker

unread,
Aug 15, 2001, 8:22:51 AM8/15/01
to
In article <9l7t...@enews2.newsguy.com>, Alex Martelli
<ale...@yahoo.com> writes
>Guido's counter-argument is that classes that need to be dynamic
>will emerge in testing, while having dynamism as the default might
>lead to it being used wantonly and making later optimization hard.
>On this issue as well as unchangeability of __class__, he seems to
>be very motivated towards performance/optimization possibilities,
>which may explain the eerie C++ parallels (performance was always
>paramount in C++'s design, while, so far, it doesn't seem to have
>had all that strong an influence on Python).
>
>He may be right, of course (he generally is) -- perhaps we've all
>grown used to a Python that's in good part defined by dynamic
>possibilities that "just happened" to fall out of implementation
>techniques and were never a design-intent (if somebody knows
>the design-intent, it should be him:-), and giving up on those
>will have a general benefit in terms of performance increase. I
>still believe that when I need performance I code or recode in
>C++, or in C, and I'd like Python to stay simple and wonderful.
>But many (particularly those who have not exploited the dynamic
>possibilities, are not familiar with C++, etc) may agree with Guido.
>
>It will surely be a very different language when this type/class
>unification thing gets finished...!
>
>
>Alex
>
I'm with Alex on this. Many of the latest changes seem to be in the
direction of making Python less and less dynamic in favour of speed or
regularity. I believe Python should be more dynamic not less.

The unification was supposed to bring the types into the class world.
This thread seems to indicate classes being made more like types.

The problem with dynamic not being the default is that the requirement
for dynamic behaviour is not very obvious.

If this class assignment is disallowed, what about messing with __bases__
as in the injection/mixin methodologies?
--
Robin Becker
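
A small sketch, with invented names, of the __bases__ fiddling mentioned above: injecting a mixin into an already-defined class after the fact. (This works with classic, pre-unification classes; whether it survives the new class system is exactly the question being raised.)

class Widget:
    def draw(self):
        print 'drawing a plain widget'

class LoggingMixin:
    def log(self, msg):
        print 'log:', msg

Widget.__bases__ = Widget.__bases__ + (LoggingMixin,)   # inject the mixin
w = Widget()
w.draw()
w.log('every existing and future Widget now logs')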

Christian Tanzer

unread,
Aug 15, 2001, 9:31:51 AM8/15/01
to Guido van Rossum, pytho...@python.org

Guido van Rossum <gu...@python.org> wrote:

> > I'd like to support the proposal of Glyph and Alex to make 0 the
> > default for `__dynamic__`. IMHO, optimization should be restricted to
> > those few modules where it is really necessary.
>
> See my response earlier in this thread.

I read that and was not convinced. I'll try to explain why.

Unless one has a time machine, the most important property of a
program is being able to evolve. Lack of evolvability means it's hard
to fix bugs, it's hard to refactor to improve reusability, it's harder
to satisfy new requirements...

As one can't know which part(s) of a program will need to be changed,
it's best to try to keep them all changeable. One means to do that is
to do bindings as late as possible and to allow rebindings at
run-time. This hurts performance-wise but results in better software.
As most of the code won't be performance-critical anyway, the
performance hit might be negligible and it should be easy to optimize
the hot spots by making them less dynamic if necessary.

The current dynamicism of Python allows late bindings and even
rebindings at run-time. Just one example where Python really saved my
day: a customer uses a frozen Python program. After a while, he
decides that he wants something done slightly differently. Even if I
think the request unreasonable I can easily send him a tiny script
which changes one method in the frozen application to do what the
customer wants. With no dynamicism as default this wouldn't be
possible. Of course, I could have created a customer-specific version
of the application -- the effort would have been days instead of
twenty minutes plus an extra branch in the code (instant killer of any
such proposal).
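
A bare-bones sketch of that kind of run-time fix, with invented names standing in for the frozen application's real classes:

# what the frozen application already contains (simplified stand-in)
class ReportGenerator:
    def report(self, data):
        return 'standard layout: %s' % data

# customer_patch.py -- the tiny script sent to the customer
def customer_report(self, data):
    return 'customer-specific layout: %s' % data

ReportGenerator.report = customer_report        # rebind the method at run-time
print ReportGenerator().report([1, 2, 3])       # now behaves the customer's way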

There is yet another aspect to this. If a language forces you to
choose between performance and flexibility at design-time it eases the
work of the language implementer but at the same time makes the life
of all language users much more difficult. If you want to keep your
code flexible and/or reusable you forgo performance -- the language
will not try to optimize the flexible code. If you decide to choose
performance, later changes might be much more difficult and, on top of
that, you might still end up with poor performance.

> > Dynamicity is one of the really strong points of Python -- eye popping
> > as Glyph just called it.
>
> Eye popping can be a negative point too. I'd prefer a warning before
> my eyes are popped.

:-)

> > I wouldn't mind if I had to ask for some dynamic features more
> > explicitly than now but I'd really be hard hit if they went away
> > entirely. I'd love to get better performance but not at the price of
> > loosing all this dynamicism. I if wanted to use a non-dynamic language
> > I'd know way too many candidates vying to make my life unhappy <0.1
> > wink>
>
> Some people like Python for its extreme dynamicism. But there are
> other languages in that niche (like Lisp).

IMO, those aren't really contenders (certainly not Lisp [for me]).
Python is just the right thing: simple, readable, maintainable,
dynamic. (And the dynamicism can help to make the code more
maintainable).

> Most people like Python
> because it's so darn readable and maintainable. Unbridled dynamicism
> goes against that. I am striving for a balance that allows most forms
> of dynamicism, but requires a declaration in advance for the more
> extreme kinds.

Agreed.

> > Maybe module objects could even grow `get-set` magic in the process
> > <duck>.
>
> You probably don't realize it, but you can write modules with get/set
> magic now, by stuffing an instance in sys.modules[__name__].

Maybe the intended <wink> was too implicit in my mail?

I knew about putting instances into the module dictionary -- somebody
brought that up on c.l.py a few months ago. As this is not a
documented feature (IIRC), I would have been worried about using it,
though.
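
For reference, a minimal sketch of the sys.modules trick mentioned above (module name invented; whether to rely on it is another matter):

# at the bottom of mymodule.py
import sys

class _ModuleWrapper:
    def __getattr__(self, name):
        # get-magic: compute attributes of the "module" on demand
        return 'value of %s computed lazily' % name

sys.modules[__name__] = _ModuleWrapper()   # "import mymodule" now yields this instance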

Hope-this-didn't-add-to-the-flames, y'rs

--
Christian Tanzer tan...@swing.co.at
Glasauergasse 32 Tel: +43 1 876 62 36
A-1130 Vienna, Austria Fax: +43 1 877 66 92


Christian Reis

unread,
Aug 18, 2001, 10:50:50 AM8/18/01
to
Guido van Rossum <gu...@python.org> wrote in message news:<cpzo969...@cj20424-a.reston1.va.home.com>...
> Glyph Lefkowitz <gl...@twistedmatrix.com> writes:
>
> > Am I correct in understanding from this thread that there is an intent to
> > remove the ability to assign an instance's __class__ attribute?
>
> Yes, I'd like to remove this. See my previous post in this thread for
> more of an explanation of the problem. But I haven't decided yet!
> This thread will help me figure out how big of a deal it will be.

I can present a different use-case so the decision can take these
things into account. I've written a set of wrappers for the basic GTK
widgets, and they're done in pure Python. Now Gtk has a companion
library called libglade which can query an XML file and retrieve
widget definitions, which it then instantiates and returns into
Pythonland through a libglade.py module. The libglade C module does
some fancy work with the widgets, and in the end they show up as
regular Python GTK instances. However, because I want them to be
"cast" into my wrappers, I just swap attribute my wrapper class to the
instances' __class__ attributes. This is, AFAICS, the only solution
without modifying the C module or re-wrapping the Python code.
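
In outline, the "cast" looks something like this toy sketch (the real classes come from the GTK wrappers, of course; these names are invented):

class PlainButton:                    # stand-in for the instance the C module hands back
    def __init__(self, label):
        self.label = label

class WrappedButton(PlainButton):     # pure-Python wrapper adding convenience methods
    def shout(self):
        print self.label.upper()

b = PlainButton('ok')                 # what libglade would return
b.__class__ = WrappedButton           # the "cast": same object, richer class
b.shout()                             # prints: OK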

This is a very valuable ability in python, and I'd very much like to
know, in the case it won't be preserved, if something functionally
equivalent could be proposed as a future addition.

> But you're getting so much in return! Subclassing built-in types,
> get/set methods, class and static methods, uniform introspection...

Yep, these are valuable. But it's tough giving up something nice
you've grown fond of :)

> If you know the type it's going to be eventually, you can use
> C.__new__() to create an uninitialized C instance.

This doesn't help in my case, unfortunately. :/
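
For completeness, a tiny sketch of the __new__ approach mentioned above, using an invented new-style class; __init__ is simply never run:

class Expensive(object):
    def __init__(self):
        print 'costly __init__ work'
        self.value = 42

bare = Expensive.__new__(Expensive)   # allocate without calling __init__
bare.value = 42                       # populate it by hand
print isinstance(bare, Expensive)     # 1 (true)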

Take care,
--
Christian Reis, Senior Engineer, Async Open Source, Brazil.
http://async.com.br/~kiko/ | [+55 16] 272 3330 | NMFL
