The inspiration for the code I am about to mention is a simple and
beautiful snippet posted a bit over a month ago by Tim Hochberg. I
played with that, and enhanced it considerably at:
http://www.gnosis.cx/secret/multimethods.py
I welcome comments of any sort on the code.
There are a few things that I had particularly in mind. In reading
about Dylan, I found that it has a method named 'next-method()' to
propagate dispatch from the closest match to more distant ones. You can
control whether you want this to happen by including that method, or
not. Take a look at:
http://www.tpk.net/~ekidd/dylan/multiple-dispatch.html
In my code, I added the facility to propagate dispatch. Using the
simple example that was a topic of conversation here 6 weeks ago, you
can make a call like:
beats.add_rule((Fire,Thing), firepower, next_meth=AT_END)
I give you options to propagate dispatch either before or after the code
within the current method execution.
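For concreteness, a rough sketch of the interface just described follows.
This is NOT the code at the URL above: the Dispatcher class, its crude
"total mro distance" specificity measure, and the Fire/Thing rules are all
invented here just to show how add_rule() and the SKIP/AT_START/AT_END
propagation options could fit together.

SKIP, AT_START, AT_END = range(3)

class Dispatcher(object):
    def __init__(self):
        self.rules = []                    # (signature, function, propagation)

    def add_rule(self, signature, function, next_meth=SKIP):
        self.rules.append((signature, function, next_meth))

    def __call__(self, *args):
        # Applicable rules, most specific first (crudely: smallest total
        # mro distance of the argument types from the signature types).
        def distance(sig):
            return sum(type(a).mro().index(k) for a, k in zip(args, sig))
        applicable = sorted(
            [r for r in self.rules
             if len(r[0]) == len(args)
             and all(isinstance(a, k) for a, k in zip(args, r[0]))],
            key=lambda r: distance(r[0]))

        results = []
        def run(i):
            if i >= len(applicable):
                return
            signature, function, propagation = applicable[i]
            if propagation == AT_START:        # less specific methods run first
                run(i + 1)
                results.append(function(*args))
            elif propagation == AT_END:        # this method first, then the rest
                results.append(function(*args))
                run(i + 1)
            else:                              # SKIP: no propagation at all
                results.append(function(*args))
        run(0)
        return results

class Thing(object): pass
class Fire(Thing): pass

beats = Dispatcher()
beats.add_rule((Fire, Thing), lambda a, b: 'Fire always wins!', next_meth=AT_END)
beats.add_rule((Thing, Thing), lambda a, b: 0)
print(beats(Fire(), Fire()))   # ['Fire always wins!', 0]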
As well, I decided to make dispatch resolution order configurable.
Hochberg's example was perhaps naive in fixing resolution order
according to definition order. That seems a little fussier than I would
want. So I made that configurable as well, and provide four sample
resolution functions. Two are naively definition-order based (either
forward or reversed). But two others are perhaps better candidates:
'lexicographic_mro()' and 'weighted_mro()'. If I understand correctly
(which I may not), the first follows Dylan, the second follows Damian
Conway's Class::Multimethods. I would be interested to understand other
examples, particularly CLOS. I have little intuition about what is the
"right" answer.
Since I allow propagated dispatch, I had to decide what to do with
results from the function calls. I decided to accumulate all returned
values into a list, and let the user pick what they want. So for
example:
<fire, fire> ['Fire always wins!', 'Fire always wins!', 0]
But for a non-propogating rule, the list has a single member:
<rock, scissors> [1]
If you stick to either 'SKIP' or 'AT_END' propagation, you can count on
index 0 of the return being the most specific function's value. Or,
conversely, if you like 'AT_START', you can use index -1. If you mix the
two styles, it could become complicated to find relevant return values.
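Continuing the hypothetical sketch above (same Dispatcher, Fire, and Thing),
picking out "the" result under that convention might look like this:

results = beats(Fire(), Fire())
most_specific = results[0]     # safe when every rule uses SKIP or AT_END
# most_specific = results[-1]  # use this instead when every rule uses AT_START
print(most_specific)           # 'Fire always wins!'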
Yours, David...
--
Keeping medicines from the bloodstreams of the sick; food from the bellies
of the hungry; books from the hands of the uneducated; technology from the
underdeveloped; and putting advocates of freedom in prisons. Intellectual
property is to the 21st century what the slave trade was to the 16th.
> I've got to thinking about multimethods/multiple dispatch lately. I
> wonder if Pythonistas have some further opinions on the use of these.
> Actually, part of it is that I'd -really- like to better understand use
> cases for multiple dispatch. As a trick, I can see it has a certain
> elegance, but little rock/paper/scissors toys are not, finally,
> compelling.
1) You can think of operators as multimethods, vs. the __add__/__radd__
protocol.
2) Think about the visitor pattern once you have multiple dispatch.
3) Suppose you want the presentation of objects to change depending on
"context" (see the sketch below):
pres(ctxt,obj)
4) If you also have dispatch on singletons and not only on types (both Dylan
and CLOS have that too):
e.g. PEP 246 or variations are trivially implemented
...
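As a hedged illustration of point 3: presentation dispatched on the pair
(context, object). Everything here (the classes, the helper names, the tiny
exact-lookup dispatcher) is made up for the example and is not from any of
the implementations discussed in this thread.

class Context(object): pass
class HTMLContext(Context): pass
class TextContext(Context): pass

class Document(object):
    def __init__(self, body):
        self.body = body

_PRES_RULES = {}

def add_pres_rule(signature, function):
    _PRES_RULES[signature] = function

def pres(ctxt, obj):
    # Most-specific-first search over both arguments' mros; ties are broken
    # in favour of the context argument (a CLOS-like left-to-right choice).
    for c in type(ctxt).mro():
        for o in type(obj).mro():
            function = _PRES_RULES.get((c, o))
            if function is not None:
                return function(ctxt, obj)
    raise TypeError('no applicable pres() method')

add_pres_rule((HTMLContext, Document), lambda c, d: '<p>%s</p>' % d.body)
add_pres_rule((TextContext, Document), lambda c, d: d.body)

print(pres(HTMLContext(), Document('hello')))   # <p>hello</p>
print(pres(TextContext(), Document('hello')))   # hello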
> There are a few things that I had particularly in mind. In reading
> about Dylan, I found that it has a method named 'next-method()' to
> propagate dispatch from the closest match to more distant ones. You can
> control whether you want this to happen by including that method, or
> not. Take a look at:
>
> http://www.tpk.net/~ekidd/dylan/multiple-dispatch.html
>
> In my code, I added the facility to propagate dispatch. Using the
> simple example that was a topic of conversation here 6 weeks ago, you
> can make a call like:
>
> beats.add_rule((Fire,Thing), firepower, next_meth=AT_END)
>
> I give you options to propagate dispatch either before or after the code
> within the current method execution.
I think a multi-argument generalization of super (similar also to Cecil's
resend,
http://www.cs.washington.edu/research/projects/cecil/www/pubs/cecil-spec.html
), would be the most flexible, Pythonic solution.
> As well, I decided to make dispatch resolution order configurable.
> Hochberg's example was perhaps naive in fixing resolution order
> according to definition order. That seems a little fussier than I would
> want. So I made that configurable as well, and provide four sample
> resolution functions. Two are naively definition-order based (either
> forward or reversed). But two others are perhaps better candidates:
> 'lexicographic_mro()' and 'weighted_mro()'. If I understand correctly
> (which I may not), the first follows Dylan, the second follows Damian
> Conway's Class::Multimethods. I would be interested to understand other
> examples, particularly CLOS. I have little intuition about what is the
> "right" answer.
lexicographic is default CLOS approach, in Dylan:
given B isa A, then the signatures
(B,A)
(A,B)
would be ambiguous. (But if I recall correctly, some Dylan implementations use
the CLOS ordering and not the Dylan one, oh well.)
Unless you have only single inheritance and so you don't need linearization
of class hierarchies, I don't think the weighted approach
is manageable.
> Since I allow propagated dispatch, I had to decide what to do with
> results from the function calls. I decided to accumulate all returned
> values into a list, and let the user pick what they want. So for
> example:
>
> <fire, fire> ['Fire always wins!', 'Fire always wins!', 0]
>
> But for a non-propogating rule, the list has a single member:
>
> <rock, scissors> [1]
>
> If you stick to either 'SKIP' or 'AT_END' propagation, you can count on
> index 0 of the return being the most specific function's value. Or,
> conversely, if you like 'AT_START', you can use index -1. If you mix the
> two styles, it could become complicated to find relevant return values.
>
Which applicable methods to call, and how to combine the results, is dealt
with in CLOS through method combinations.
A summary of CLOS can be found here:
http://www.dreamsongs.com/NewFiles/ECOOP.pdf
regards.
Theoretically multimethods are a great idea. Being an elegant
generalization of object-orientation, having multimethods available
may save you from a clumsy workaround every once in a blue moon.
The problem is that designs that take full advantage of multimethods
are all too often bad.
> There are a few things that I had particularly in mind. In reading
> about Dylan, I found that it has a method named 'next-method()' to
> propagate dispatch from the closest match to more distant ones. You can
> control whether you want this to happen by including that method, or
> not. Take a look at:
>
> http://www.tpk.net/~ekidd/dylan/multiple-dispatch.html
Excellent example.
I would describe the purpose of inspect-vehicle like this:
Perform the relevant inspections sequentially.
which would naturally translate into:
for inspection in vehicle.relevant_inspections():
    inspection.perform(vehicle)
    # or perhaps: vehicle.perform(inspection)
Unfortunately the next-method() example code looks nothing like this.
'relevant_inspections', instead of being a malleable data structure, is
hardwired into the class hierarchy. You can't rearrange the order of
the inspections according to some criterion, you can't print a list of
inspections to be done, you can't collect a list of inspections that
failed without adding boilerplate code to each and every component
method, and you can't skip inspections already performed successfully.
All things that come naturally in the straightforward single-dispatch
solution.
- Anders
The problem is that designs that take full advantage of X are all too often
bad.
[I don't find the counter-analysis of the dull example particularly
convincing, super and inheritance are easily abused also with single
dispatch]
Whatever you're trying to say here, I don't get it.
> [I don't find the counter-analysis of the dull example particularly
> convincing, super and inheritance are easily abused also with single
> dispatch]
Suit yourself. I have no particular need to convince you; I used the
example for illustration, not as evidence.
- Anders
First let me thank you for all your great articles and your
on-line book in progress.
I am currently writing a multi-disk sector editor for
use with storage devices such as digital film cards. These
devices are flash-memory based and require complex business
logic to move data around, because jpg files tend to be smaller
than erase blocks and people tend to erase files one at a time.
Because erase cycles in flash tend to be slow, files not to
be deleted are mapped to other sectors before the sector is
erased. This all leads to why it is interesting to read/write
the same data to multiple drives and then have a sector editor
that can work with multiple drives referencing the same physical
sector for each device (think map() et al.). Drives are slow
with respect to CPU time and therefore, if one is working with a
'bunch' of devices at the same time, I believe your multiple dispatch
logic will be of some help. As the project progresses I will
incorporate your code and report back my findings to this thread,
although it may be a month or two before I do so.
Thanks again,
yaipa.h
me...@gnosis.cx (David Mertz) wrote in message news:<mailman.1041492423...@python.org>...
Where's the inspector in your reformulation?
DM> I've got to thinking about multimethods/multiple dispatch lately. I
DM> wonder if Pythonistas have some further opinions on the use of these.
DM> Actually, part of it is that I'd -really- like to better understand use
DM> cases for multiple dispatch. As a trick, I can see it has a certain
DM> elegance, but little rock/paper/scissors toys are not, finally,
DM> compelling.
In CAD there is an intersection multimethod - line/line, line/arc,
line/bezier ...
regards,
Niki Spahiev
> > As well, I decided to make dispatch resolution order configurable.
> > Hochberg's example was perhaps naive in fixing resolution order
> > according to definition order. That seems a little fussier than I would
> > want. So I made that configurable as well, and provide four sample
> > resolution functions. Two are naively definition-order based (either
> > forward or reversed). But two others are perhaps better candidates:
> > 'lexicographic_mro()' and 'weighted_mro()'. If I understand correctly
> > (which I may not), the first follows Dylan, the second follows Damian
> > Conway's Class::Multimethods. I would be interested to understand other
> > examples, particularly CLOS. I have little intuition about what is the
> > "right" answer.
>
> lexicographic is default CLOS approach, in Dylan:
>
> given B isa A, then the signatures
>
> (B,A)
> (A,B)
>
> would be ambiguous. (But if I recall correctly, some Dylan implementations
> use the CLOS ordering and not the Dylan one, oh well.)
Functional Objects' Dylan system started with a lot of code that is
based closely on libraries from CLOS and depends on the CLOS ordering.
I think they try to use the Dylan ordering, and if it is ambiguous then
they fall back to CLOS.
-- Bruce
Oh, you're right, I read the original too hastily, focused on
next-method and missed the role of the inspector subtype completely.
I'm used to argument types being a mostly-redundant mechanism for
catching errors, not a mechanism for specifying execution semantics.
I'll try again: The specs say:
"However, different types of vehicle inspectors may have different
policies."
def inspect(car, inspector):
    for inspection in car.relevant_inspections():
        if inspector.policy_includes(inspection):
            inspection.perform(car)
My, that was hard. The lack of multimethods forced me to add an entire
if-statement ;-)
- Anders
The results of relevant_inspections and the bodies of policy_includes are
not orthogonal, so that's far from ideal.
Are you sure you have illustrated what you wanted to illustrate?
I would go for:
for inspection in relevant_inspections(car, inspector):  # multimethod
    ...
and use a union combination or filter the results through the super-call
chain.
Nevertheless, the example is too much of a toy to show anything for either
side.
You could try with something larger.
Thanks.
I could do a lot of things, but I have work to do.
cheers,
Anders
The interesting question this brings up is how replacing the current
set of rules by a true multiple dispatch mechanism would change adding
types/methods to the existing type lattice. Anyone have any thoughts
on that?
<mike
--
Mike Meyer <m...@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Mike Meyer <m...@mired.org> wrote previously:
|The interesting question this brings up is how replacing the current
|set of rules by a true multiple dispatch mechanism would change adding
|types/methods to the existing type lattice. Anyone have any thoughts
|on that?
I'm not sure I understand why much of anything would have to change. My
'lexicographic_mro()' and 'weighted_mro()' linearizations utilize the
existing method resolution order (object.mro()) for the heavy work.
What improvement would a change to this offer?
Of course, the way I set up my sample, it is simple to plug in your
favorite linearization technique. This is a rough area in my knowledge,
so I'm not religious in my preference for a particular style.
Yours, David...
--
mertz@ _/_/_/_/_/_/_/ THIS MESSAGE WAS BROUGHT TO YOU BY:_/_/_/_/ v i
gnosis _/_/ Postmodern Enterprises _/_/ s r
.cx _/_/ MAKERS OF CHAOS.... _/_/ i u
_/_/_/_/_/ LOOK FOR IT IN A NEIGHBORHOOD NEAR YOU_/_/_/_/_/ g s
Well... I think Damian Conway is a pretty smart guy. And the weighted
approach is what he did in his Perl version. I'm not necessarily in
favor of that linearization over others, but I don't see anything
obviously wrong about it.
|I think a multi-argument generalization of super (similar also to Cecil
|resend), would be the most flexible, pythonic solution.
My first feeling on reading Pedroni's suggestion was that I should
really implement this. But as I reflect on it, I have trouble seeing an
actual benefit.
It seems to me that when you want to propagate dispatch to a generic
"less specific" method, you would -always- want to do so either at the
start or end of the current method's code body.
Now I can easily imagine that a method body would want to call on some
-specific- function that happened to be one of the multimethods. But
AFAICS, it wouldn't be simply "some more general method". Especially
given that no more general method might be defined at all, and the call
would have to be guarded by a try/except in case nothing existed.
E.g.
def very_general(...): ...
def slightly_general(...): ...
def pretty_specific(...):
    ...do stuff
    val1 = very_general(...)  # this makes sense!
    val2 = next_method(...)   # what will this be??!
    ...do more stuff...
multi.add_rule((Thing,), very_general)
multi.add_rule((Foo,), slightly_general)
multi.add_rule((MyOwnFoo,), pretty_specific)
multi(myownfoo)
Depending only on which rules are defined, the 'next_method()' in
'pretty_specific()' can change its meaning in unpredictable ways. E.g.
multi.remove_rule((Foo,))
changes the whole thing. Maybe even the type of value returned by the
'next_method()' call. If "more stuff" depends on that value...
Then again, I am -delighted- to be shown wrong with an example of a
useful use of this sort of thing.
|http://www.dreamsongs.com/NewFiles/ECOOP.pdf
Thanks for this reference. It's quite interesting. But I think I'm
more interested in programming (and demonstrating) what is concretely
useful than merely "everything CLOS can do"... y'know what I mean?
Yours, David...
--
---[ to our friends at TLAs (spread the word) ]--------------------------
Echelon North Korea Nazi cracking spy smuggle Columbia fissionable Stego
White Water strategic Clinton Delta Force militia TEMPEST Libya Mossad
---[ Postmodern Enterprises <me...@gnosis.cx> ]--------------------------
Damian Conway on his method:
"some languages (e.g. CLOS) take a different approach -- breaking the tie by
a recount on the inheritance distance of each argument starting from the
left. In this example, that would mean that the call would be dispatched to
put_peg(RoundPeg,Hole), since the left-most parameter of that variant is a
``closer'' match than the left-most parameter of the put_peg(Peg,SquareHole)
variant.
In the author's opinion, this approach is appalling, since it favours one
parameter above all others for no better reason than it comes first in the
argument list. But there is no reason why pegs should be more significant
than holes. Moreover, arbitrarily resolving the dispatch in this way will
often mask a fundamental design flaw in the multimethod."
If those are the worries then I would go the Dylan route: the applicable
methods' signatures should be all comparable given the class precedence list
of the arguments otherwise the dispatch is ambiguous.
This avoids masked flaws and avoids guessing. This is an ideal for something
like an arithmetic operator, figure intersection, etc.
In practice, sometimes choosing to break ties using some order on the
arguments can be useful; what one gets is like chaining single-inheritance
dispatch. Also, in this case it is not hard to have a grasp on what will
happen.
So I think that a multi method dispatch mechanism should provide support for
both approaches.
About the weighted approach:
- Python uses linearization (the mro) for single dispatch, your
implementation uses that to compute the "inheritance distance", and I don't
know what the Perl counterpart does
- "inheritance distance" in the presence of multiple inheritance is for sure
a tricky thing to get right and manageable.
The problem is that it is hard to have a firm grasp on dispatching and the
results can feel rather arbitrary; consider
class A(object): pass
class B(A): pass
class C(B): pass
the signatures
(A,B)
(B,A)
are ambiguous for Dylan no matter what, now with the weighted approach if we
dispatch on (C,C) for example
[I'm using a modified version of weighted_mro that returns matches]
>>> m=[[(A,B),None,None],[(B,A),None,None]]
>>> scratch.weighted_mro((C,C),m)
[(3, [(<class '__main__.A'>, <class '__main__.B'>), None, None]), (3,
[(<class '__main__.B'>, <class '__main__.A'>), None, None])]
they are ambiguous.
Now:
class D(A): pass
class E(C,D): pass
(E's mro (C3 or 2.2) is E,C,B,D,A,object )
>>> m=[[(A,B),None,None],[(B,A),None,None]]
>>> scratch.weighted_mro((C,E),m)
[(4, [(<class '__main__.A'>, <class '__main__.B'>), None, None]), (5,
[(<class '__main__.B'>, <class '__main__.A'>), None, None])]
go figure.
regards.
oops, there's a minimal applicable method's signature that is comparable
with all applicable methods' ones...
It really seems that the notion of multiple dispatching fills your mind with
FUD <wink>.
Your line of reasoning could be equally applied to single dispatch:
we don't write things as:
def Amethod(self, ...):
    ...
class A:
    method = Amethod
class B(A):
    def method(self, ...):
        ...
        Amethod(self)  # not in general the same thing as A.method,
                       # even less as super(B,self).method
        ...
Btw, Damian Conway supports this <wink>, as do CLOS, Dylan, and Cecil.
Where did you find this quote? I don't think I saw it in the POD, but
maybe I overlooked it.
|If those are the worries then I would go the Dylan route: the applicable
|methods' signatures should be all comparable given the class precedence list
|of the arguments otherwise the dispatch is ambiguous.
I think for Python, both the Perl approach and the Dylan approach are
slightly wrong. I guess my 'lexicographic_mro()' is essentially the
same as CLOS... which maybe leads back to that whole canard/thread about
Python being a Lisp dialect :-).
I think Conway's approach is very Perl-ish: try to figure out what the
user was most likely to mean, then execute it willy-nilly. In that
sense, he definitely does the -right- thing for the language the module
is in. But Python doesn't make such assumptions.
On the other hand, Dylan seems much more bondage-and-discipline than
Python is. Covariance/contravariance concerns, and generally an
obsession with type hierarchies, are not very Pythonic, IMO. I wouldn't
say that raising a descriptive error in case of method ambiguities would
be totally contrary to a Python attitude... but I think better is a rule
which simply decides matters for every case, regardless of inheritance
fussiness. lexicographic_mro() fills that criterion.
After all, plain inheritance in Python doesn't raise compilation/runtime
errors if the graph isn't pretty enough--as it does in some languages.
MRO never blows up. Multimethod dispatch should follow that same
general attitude.
|So I think that a multi method dispatch mechanism should provide support
|for both approaches.
I think this remark is Pedroni, not Conway. I wasn't positive what was
quoted. But I agree with this. I'll probably add some more
linearization functions--if only for demonstration. The problem is just
to think of some that seem plausible. I'm leaning towards
lexicographic_mro() as the right default one (not reverse_def(), which I
only used for compatibility with Tim Hochberg's original code). But
options are nice.
Btw, very nice example of the ambiguities in weighted_mro().
"Samuele Pedroni" <pedr...@bluewin.ch> wrote previously:
|It really seems that the notion of multiple dispatching fills your mind
|with FUD <wink>.
Nah... my mind already becomes filled with FUD when I think of single
dispatch. My naivety doesn't need the extra wrinkle.
|Your line of reasoning could be equally applied to single dispatch:
In my defense though, there IS a difference. Every new-style child
class HAS a superclass--if nothing else, 'object' (or 'type'). If I
define some rules for multiple dispatch, I have no way of knowing within
the function bodies whether there even *is* a 'multi_super()' method to
call. And whether there is can change at runtime (actually, I could
change it at runtime within that very function body, but that would be
perverse).
The way I handle things in my current implementation (only propagating
dispatch AT_END or AT_START) already includes generic guards to prevent
non-existing less specific methods from being called. With a user
controlled multi_super(), you would need to add your own guards, and
change the flow to account for contingencies:
def semi_specific(this, that):
    ...do stuff...
    try:  # maybe there is *some* less specific method
        val = multi_super()
    except NothingLessSpecificError:
        try_to_fix_things()
    ...more stuff...
Calling super() doesn't run into this problem. Now I -could- try to do
something "safe" in multi_super() as a fallback. A 'pass' is pretty
safe from an execution perspective. But if 'semi_specific()' wants a
return value, there is no way a fallback can guess a usable value.
Still, I don't see what the difference is:
>>> class A(object): pass
...
>>> class B(A):
... def method(self):
... return super(B,self).method()
...
>>> B().method()
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "<stdin>", line 3, in method
AttributeError: 'super' object has no attribute 'method'
It is just like single dispatch: one does not write methods in a vacuum; one
has some knowledge about which signatures are covered,
and about the hierarchies.
multi_super should probably have a signature like:
multi_super(multimethod, signature, args...)
or be a method-as-usual of Dispatcher (?).
And yes, I could produce Lisp fragments (((....))) using call-next-method.
And indeed there's also next-method-p
http://www-2.cs.cmu.edu/Groups/AI/html/hyperspec/HyperSpec/Body/locfun_next-method-p.html
but it is rarely used (see the vacuum argument); e.g. the Free CLIM codebase
does not use it.
regards.
It's from his Perl 6 RFC, inspired by his previous work:
http://dev.perl.org/rfc/256.html
>
> I think Conway's approach is very Perl-ish: try to figure out what the
> user was most likely to mean, then execute it willy-nilly. In that
> sense, he definitely does the -right- thing for the language the module
> is in. But Python doesn't make such assumptions.
I would agree on that.
> On the other hand, Dylan seems much more bondage-and-discipline than
> Python is. Covariance/contravariance concerns, and generally an
> obsession with type hierarchies, are not very Pythonic, IMO.
Once you take the linearization and cooperative-methods path, what is
reasonable, intuitive, and usable changes.
> I wouldn't
> say that raising a descriptive error in case of method ambiguities would
> be totally contrary to a Python attitude... but I think better is a rule
> which simply decides matters for every case, regardless of inheritance
> fussiness. lexicographic_mro() fills that criterion.
>
> After all, plain inheritance in Python doesn't raise compilation/runtime
> errors if the graph isn't pretty enough--as it does in some languages.
> MRO never blows up.
those were the days <wink>
Python 2.3a1 (#38, Dec 31 2002, 17:53:59) [MSC v.1200 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> class A(object): pass
...
>>> class B(object): pass
...
>>> class C(A,B): pass
...
>>> class D(B,C): pass
...
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: MRO conflict among bases B, C
Thanks. BTW, I have found your direction on multimethods very helpful
generally (and I see the point of super() being unsafe all by itself...
it scares me too :-)).
|> After all, plain inheritance in Python doesn't raise compilation/runtime
|> errors if the graph isn't pretty enough--as it does in some languages.
|> MRO never blows up.
|those were the days <wink>
|Python 2.3a1 (#38, Dec 31 2002, 17:53:59)
|Traceback (most recent call last):
| File "<stdin>", line 1, in ?
|TypeError: MRO conflict among bases B, C
Hmmm... I DO NOT like that (nor did I know it). After all:
Python 2.2.2 (#0, Oct 24 2002, 20:53:04) [EMX GCC 2.8.1] on os2emx
>>> class A(object): pass
...
>>> class B(object): pass
...
>>> class C(A,B): pass
...
>>> class D(B,C): pass
...
>>>
Ah, that halcyon day of Oct 24, 2002...
Is there a (good) reason for this change, or could it be an artifact of
the alpha version? I don't have 2.3 installed myself.
Yours, David...
--
_/_/_/ THIS MESSAGE WAS BROUGHT TO YOU BY: Postmodern Enterprises _/_/_/
_/_/ ~~~~~~~~~~~~~~~~~~~~[me...@gnosis.cx]~~~~~~~~~~~~~~~~~~~~~ _/_/
_/_/ The opinions expressed here must be those of my employer... _/_/
_/_/_/_/_/_/_/_/_/_/ Surely you don't think that *I* believe them! _/_/
See
http://www.python.org/doc/2.3a1/whatsnew/node14.html
"* The method resolution order used by new-style classes has changed, ..."
http://www.python.org/2.2.2/descrintro.html#mro
If I understand what you're talking about, multiple dispatch is
simply another name for method overloading - or is it?
The only use cases I've seen that are at all compelling for
method overloading fall into the pattern of eliminating
type tests on parameters - that is, let the compiler figure
out what method to invoke based on the parameter types.
In statically typed languages, I've seen any number of applications
of that, and properly used, they make programs much easier to
read than either having to create special names for the different
methods, or having to put a nest of type tests and special case
code in one method.
As far as arcane discussions of exactly how to mix this
with multiple inheritance hierarchies, I'm all in favor of the old
"parenthesis rule:" when there's any possible doubt about the
precedence, add parentheses. If I have to think about the class
hierarchy to figure out what method overloading is going to do
in a particular case, I don't want to do it. Maintainable code is
blatantly obvious code.
John Roth
> If I understand what you're talking about, multiple dispatch is
> simply another name for method overloading - or is it?
No, it isn't.
--
When C++ is your hammer, everything looks like a thumb.
-- Steven M. Haflich
(setq reply-to
(concatenate 'string "Paul Foley " "<mycroft" '(#\@) "actrix.gen.nz>"))
I originally conceived the idea of multiple dispatch on my own when
trying to program a video game. (OO is a good paradigm for video
games because all the characters and blocks and goodies map well to
the concept of an object.)
I wanted an overloaded function that determined what would happen if
two objects collided. Since the two objects could be any two objects,
it really didn't work with single dispatch.
About the best you could do is define an abstract method collide,
which the derived classes overrides to call a certain method of the
collidee, like this:
class Bullet: public Object
{
    virtual void collide(Object* that)
    {
        that->be_collided_with_bullet(this);
    }
};
That was a lot of work, and caused an ugly imbalance. Half the time,
the be_collided_with_* method simply reciprocated the call; for
example:
class Human: public Object
{
    virtual void be_collided_with_bullet(Bullet* that)
    {
        that->be_collided_with_human(this);
    }
};
Also, I was disturbed by the fact that such calls bypassed data
abstraction, since Bullet and Human would have to be friends for them
to affect each other. Those were the days when I thought data
abstraction was important. Still, multiple dispatch could have held
my hand through my period of ignorance and allowed me to define a
single method that could access the private parts of Human and Bullet,
while keeping them well apart otherwise.
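For comparison, a hedged Python sketch of what the same collision logic
might look like with a single collide() multimethod in place of the
reciprocal be_collided_with_* methods. The class names and the tiny
mro-walking dispatcher are invented for the example.

class GameObject(object): pass
class Bullet(GameObject): pass
class Human(GameObject): pass
class Wall(GameObject): pass

_COLLIDE_RULES = {}

def add_collide_rule(signature, function):
    _COLLIDE_RULES[signature] = function
    # register the mirror image too, so neither argument is privileged
    _COLLIDE_RULES[signature[::-1]] = lambda a, b: function(b, a)

def collide(a, b):
    # walk both arguments' mros, most specific combination first
    for ka in type(a).mro():
        for kb in type(b).mro():
            rule = _COLLIDE_RULES.get((ka, kb))
            if rule is not None:
                return rule(a, b)
    return None      # objects with no rule just pass through each other

add_collide_rule((Bullet, Human), lambda bullet, human: 'ouch')
add_collide_rule((Bullet, Wall), lambda bullet, wall: 'ricochet')

print(collide(Human(), Bullet()))   # 'ouch' -- same single rule, either order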
--
CARL BANKS
OK, I read the thread at:
http://mail.python.org/pipermail/python-dev/2002-October/029035.html
I can now say with absolute certainty that -single- dispatch fills me
with Fear, Uncertainty, and Doubt.
Since the BDFL has spoken, I assume 2.3 is going with C3 MRO. But I
don't like that. I think it will be a wart for two reasons. One is
that--in contrast to the so-called "naive ordering"--you cannot explain
C3 to anyone without making their brain explode. In a way, the
consequences will be less surprising, but almost no one will be able to
comprehend exactly what the rule is. That doesn't match my idea of
Python (it's not Lisp :-)).
But the idea of raising an error on inconsistencies also strikes me as a
wart for the same reason that I think the incommensurability of complex
numbers with other objects is a wart (and Python's biggest wart). It is
a nod to purity over practicality. Sure complex numbers aren't -really-
bigger or smaller than integers, but setting an (arbitrary)
well-ordering makes sorting easy. Likewise, inconsistency in the class
graph isn't -really- well founded, but that's a fact about graph theory,
not about everyday programming. In this case, I'd rather have something
that is unintuitive at the margins than I would an exception.
Oh well. A lot fewer people will ever -notice- the MRO change than did
the complex comparisons. So in a sense, my concern with "practicality"
is itself overly pure.
Yours, David...
--
mertz@ | The specter of free information is haunting the `Net! All the
gnosis | powers of IP- and crypto-tyranny have entered into an unholy
.cx | alliance...ideas have nothing to lose but their chains. Unite
| against "intellectual property" and anti-privacy regimes!
-------------------------------------------------------------------------
You merge, *preserving their order*, the mros of the superclasses plus the
list of the superclasses themselves.
If there is more than one choice, privilege the mros of the superclasses
left-to-right.
If order cannot be preserved, give up.
+ Examples
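For concreteness, a hedged Python sketch of that merge (it follows the
published C3 description; the actual 2.3 implementation is in C and differs
in form, and the helper names here are invented):

def c3_merge(sequences):
    sequences = [list(s) for s in sequences]
    result = []
    while any(sequences):
        # take the leftmost head that appears in no other sequence's tail
        for seq in sequences:
            if not seq:
                continue
            head = seq[0]
            if not any(head in s[1:] for s in sequences):
                break
        else:
            raise TypeError('MRO conflict: order cannot be preserved')
        result.append(head)
        for s in sequences:
            if s and s[0] is head:
                del s[0]
    return result

def c3_mro(cls):
    return [cls] + c3_merge([c3_mro(base) for base in cls.__bases__]
                            + [list(cls.__bases__)])

class A(object): pass
class B(object): pass
class C(A, B): pass

print([k.__name__ for k in c3_mro(C)])    # ['C', 'A', 'B', 'object']

# The thread's conflicting case, fed to the merge directly (the 2.3
# interpreter would already reject "class D(B, C)" at class-creation time):
try:
    c3_merge([c3_mro(B), c3_mro(C), [B, C]])
except TypeError as exc:
    print(exc)                             # MRO conflict: order cannot be preserved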
> In a way, the
> consequences will be less surprising,
indeed most of the advantages of C3 can be shown through examples.
> but almost no one will be able to
> comprehend exactly what the rule is.
See above. For the irony value of it, still nobody is able to understand
exactly what the 2.2 rule was.
> That doesn't match my idea of
> Python (it's not Lisp :-)).
It comes with the territory. Guido explains why we need some non-trivial
linearization (the __setattr__ problem), and that we need such a thing at all
is the complexity issue.
> In this case, I'd rather have something
> that is unintuitive at the margins than I would an exception.
You can argue for that on python-dev. Or for the other options not requiring
non-trivial linearizations:
- one could argue that the __setattr__ problem is really that with new-style
classes we do not have pure mixins anymore because everything has a (solid)
superclass. (I thought about this just more-or-less now). I don't know what
options this line could open.
- the Cecil route. That means a dispatch rule where only true subclassing
relationships are considered. All hierarchies go, but some method calls
would throw ambiguous dispatch exceptions.
regards.
Paul Foley <s...@below.invalid> wrote previously:
|When you write
| "foo" + 7
|and get an exception, is that a "wart"? You'd rather get "something
|unintuitive" than an exception, right? :-)
The thing is, the only way that I can even begin to comprehend what Paul
thinks he is arguing is by putting on my "purity-beats-all-else"
blinders/glasses. Python isn't (supposed to be) a bondage-and-discipline
language... which only from a total purity perspective would mean it
never raised exceptions.
That said, I'll finish reading "A Monotonic Superclass Linearization for
Dylan" today... maybe I'll be more convinced then. But I definitely see
the introduction of brand new exceptions for the same actions that used
to work as a *bad thing* that needs an awfully strong justification. I
don't find "because the new way is theoretically pure" to be such a
reason.
I'm actually still not convinced that using this is a good idea in
practice. But then, I've come to believe that 'super()' cannot be
trusted either (even though I am starting to appreciate the elegance of
C3 linearization) *wink*. Even though I've never really been a C++
programmer, I sense a virtue in explicitly naming the superclass you
want to call.
Nonetheless, I don't want to disappoint Sam and Paul, so I've added this
functionality to:
http://www.gnosis.cx/secret/multimethods.py
I decided to call the capability 'dispatch.next_method()' rather than
the 'multi_super()' I had toyed with. This naming follows Dylan, and
would seem to serve the same purpose.
There are some more improvements too. I gave an example of using
(simulated) positional and keyword arguments in multimethods. I made
lexicographic_mro() the default linearization.
One thing I don't yet have is a linearization that matches Dylan, i.e.
rejecting ambiguities. If anyone feels like contributing that, I'd be
thankful.
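As a starting point, something like the following might serve. The name
dylan_order and the AmbiguousDispatch exception are invented here, and this
only approximates the "reject incomparable signatures" idea (strictly, as
Pedroni notes above, Dylan only needs the most specific applicable method to
be comparable with the rest); it is not part of multimethods.py.

class AmbiguousDispatch(TypeError):
    pass

def dylan_order(call_types, signatures):
    mros = [t.mro() for t in call_types]

    def at_least_as_specific(sig1, sig2):
        # sig1 matches at least as closely as sig2 in *every* argument position
        return all(mro.index(a) <= mro.index(b)
                   for mro, a, b in zip(mros, sig1, sig2))

    for s1 in signatures:
        for s2 in signatures:
            if not (at_least_as_specific(s1, s2) or at_least_as_specific(s2, s1)):
                raise AmbiguousDispatch('%r / %r: ambiguous for %r'
                                        % (s1, s2, call_types))
    return sorted(signatures,
                  key=lambda sig: [mro.index(k) for mro, k in zip(mros, sig)])

class A(object): pass
class B(A): pass
class C(B): pass

print(dylan_order((C, C), [(A, A), (B, B)]))    # [(B, B), (A, A)] -- unambiguous
try:
    dylan_order((C, C), [(A, B), (B, A)])
except AmbiguousDispatch as exc:
    print(exc)                                  # the classic ambiguous pair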
To be more specific, method overloading can be viewed as a static
form of multiple dispatch. That is, while syntactically the call
looks like a call to a single method, the compiler determines one
of several methods which should be used and generates code to call
that method.
On the other hand, what is being discussed here would be a
dynamic multiple dispatch, in that the determination of which method
to call must be delayed until run time. A virtual method call is
considered a single dynamic dispatch, in that it only uses a single
piece of information (i.e. the target object class) at run time to
determine the actual method to invoke. A dynamic multiple dispatch
requires examining multiple pieces of information (e.g. the target
object class and the type/class of the parameters.)
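A small illustration of that distinction, tying in Niki Spahiev's CAD
intersection example from earlier in the thread: the single-dispatch version
can only branch on type(self) and must type-test its argument by hand, while
the multiple-dispatch lookup consults the runtime types of both arguments.
The classes and the exact-type rule table below are hypothetical.

class Shape(object): pass
class Line(Shape): pass
class Arc(Shape): pass

# Single dynamic dispatch: the body is picked from type(self) alone, so the
# other argument has to be type-tested inside the method.
class DispatchingLine(Line):
    def intersect(self, other):
        if isinstance(other, Arc):
            return 'line/arc intersection'
        return 'line/line intersection'

# Dynamic multiple dispatch: the runtime types of *both* arguments select
# the body (exact types only here, to keep the lookup table tiny).
INTERSECT = {
    (Line, Line): lambda a, b: 'line/line intersection',
    (Line, Arc):  lambda a, b: 'line/arc intersection',
    (Arc, Arc):   lambda a, b: 'arc/arc intersection',
}

def intersect(a, b):
    rule = INTERSECT.get((type(a), type(b)))
    if rule is not None:
        return rule(a, b)
    rule = INTERSECT.get((type(b), type(a)))
    if rule is not None:
        return rule(b, a)              # intersection is symmetric, so swap
    raise TypeError('no intersect rule for %s/%s'
                    % (type(a).__name__, type(b).__name__))

print(DispatchingLine().intersect(Arc()))  # 'line/arc intersection', via type test
print(intersect(Arc(), Line()))            # 'line/arc intersection', via the table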
You can read an interesting paper about adding dynamic multiple
dispatch to Java here:
http://stillwater.csd.uwo.ca/~wade/Research/MDJ/
Gary Duzan
BBN Technologies
A Verizon Company
> OK, I read the thread at:
>
> http://mail.python.org/pipermail/python-dev/2002-October/029035.html
>
> I can now say with absolute certainty that -single- dispatch fills me
> with Fear, Uncertainty, and Doubt.
:)
> Since the BDFL has spoken, I assume 2.3 is going with C3 MRO. But I
> don't like that. I think it will be a wart for two reasons. One is
> that--in contrast to the so-called "naive ordering"--you cannot explain
> C3 to anyone without making their brain explode.
The constraints it preserves aren't that hard.
> In a way, the consequences will be less surprising, but almost no
> one will be able to comprehend exactly what the rule is. That
> doesn't match my idea of Python (it's not Lisp :-)).
As I see it, the advantages of C3 are that you find out about problems
earlier, rather than when super() calls an unexpected method
some arbitrary time later. Seems perfectly Pythonic to me.
> But the idea of raising an error on inconsistencies also strikes me as a
> wart for the same reason that I think the incommensurability of complex
> numbers with other objects is a wart (and Python's biggest wart).
I think you'll have to argue that one with Guido.
Cheers,
M.
--
MARVIN: What a depressingly stupid machine.
-- The Hitch-Hikers Guide to the Galaxy, Episode 7