
Another newbie question


solaris_1234

Dec 7, 2005, 8:56:31 PM
to pytho...@python.org
I am a Python newbie and have been trying to learn Python. To this
end, I have coded the following program, which:
- creates an 8 by 8 checker board
- places two checkers on the board
- checks the board and prints out which squares have a checker on them.

It works. But I have one question:

1) The stmt "board.Blist[10].DrawQueen(board.Blist[10].b1)" seems
awkward. Is there another way (cleaner, more intuitive) to get the
same thing done?

I appreciate any and all input on how the following program could be
improved.

from Tkinter import *
import time
totalSolutionCount = 0

class MyBox:
    def __init__(self, myC, myrow, mycolumn, color):
        self.b1 = Canvas(myC, background=color, width=50, height=50)
        self.b1.grid(row=myrow, column=mycolumn)
        self.occupied = 0

    def ChangebgColor(self, box):
        box.config(bg="black")

    def DrawQueen(self, box):
        box.item = box.create_oval(4,4,50,50,fill="black")
        self.occupied = 1
        box.update()

    def unDrawQueen(self, box):
        box.delete(box.item)
        self.occupied = 0
        box.update()

class MyBoard(MyBox):
    def __init__(self, myC):
        self.Blist = []
        count = 0
        for i in range(8):
            count += 1
            for j in range(8):
                count += 1
                if (count%2):
                    self.Blist.append(MyBox(myContainer,i,j, "red"))
                else:
                    self.Blist.append(MyBox(myContainer,i,j, "green"))


root=Tk()
myContainer = Frame(root)
myContainer.pack()

board=MyBoard(myContainer)

board.Blist[10].DrawQueen(board.Blist[10].b1)
board.Blist[22].DrawQueen(board.Blist[22].b1)

raw_input() # A Hack debug statement

for i in range(64):
    if board.Blist[i].occupied == 1:
        print i, "is occupied"

raw_input() # A Hack debug statement
print "\n"*3

Mike Meyer

Dec 7, 2005, 11:58:02 PM
"solaris_1234" <solari...@yahoo.com> writes:

> 1) The stmt "board.Blist[10].DrawQueen(board.Blist[10].b1)" seems
> awkward. Is there another way (cleaner, more intuitive) to get the
> same thing done?

Yes. Reaching through objects to do things is usually a bad idea. Some
languages don't allow you to do that at all; they require you to
provide methods for manipulating the state of the object. For instance,
you can extend your MyBoard class with an extra method:

def DrawQueen(self, cell):
    square = self.Blist[cell]
    square.DrawQueen(square.b1)


And then those two lines become:

board.DrawQueen(10)
board.DrawQueen(22)

Except that's still ugly - you probably want something like
board.DrawQueen(1, 2).

Basically, Blist should be part of MyBoard's implementation, not a
visible attribute. You should define methods for MyBoard that let
clients manipulate the board, without needing to know how it's
represented internally.

Along the same lines, why does MyBoard inherit from MyBox? It's not
using any of the features of MyBox. The code still works if you don't
do that. And why do you pass instances of Canvas to the methods of
MyBox - it's got a canvas already! Do you really expect a MyBox to
draw onto Canvases other than its own? (If so, that's a bad design as
well.)

Here's an updated version of your code. I've used the convention of an
_ prefix on attributes to indicate implementation details, and made
the classes inherit from object, as well as using "box" instead of
"b1", and changed the interface to MyBoard squares to use standard
2d-array indexing instead of forcing the clients to do array
index calculations. You may have a good reason for doing these things
that doesn't appear in your code fragment, but I consider these to be
improvements in the fragment.

Hmm. "b1" seems to indicate that you will eventually have more than
one canvas, which is why you passed in the canvas? In which case, the
distinguishing feature would be the number (or maybe the "b1"). In
that case, have your clients pass in the number (or name), and look up
the canvas in an internal structure.
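
For instance, a rough (untested) sketch of that last idea - the
canvas name "b1" and the _canvases dict here are made up purely for
illustration, assuming the same Tkinter setup as above:

class MyBox(object):
    def __init__(self, myC, myrow, mycolumn, color):
        # hypothetical: keep the canvases in a dict, keyed by name
        self._canvases = {"b1": Canvas(myC, background=color,
                                       width=50, height=50)}
        self._canvases["b1"].grid(row=myrow, column=mycolumn)
        self.occupied = False

    def DrawQueen(self, name="b1"):
        # the client passes a name; the box looks up its own canvas
        box = self._canvases[name]
        box.item = box.create_oval(4, 4, 50, 50, fill="black")
        self.occupied = True
        box.update()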

<mike

from Tkinter import *
import time
totalSolutionCount = 0

class MyBox(object):

    def __init__(self, myC, myrow, mycolumn, color):
        self._box = Canvas(myC, background=color, width=50, height=50)
        self._box.grid(row=myrow, column=mycolumn)
        self.occupied = False

    def ChangebgColor(self):
        self._box.config(bg="black")

    def DrawQueen(self):
        self._box.item = self._box.create_oval(4,4,50,50,fill="black")
        self.occupied = True
        self._box.update()

    def unDrawQueen(self):
        self._box.delete(self._box.item)
        self.occupied = False
        self._box.update()

class MyBoard(object):
    def __init__(self, myC):
        self._blist = {}
        for i in range(8):
            for j in range(8):
                self._blist[i, j] = MyBox(myContainer, i, j,
                                          ("green", "red")[(i * 8 + j) % 2])

    def DrawQueen(self, i, j):
        square = self._blist[i, j]
        square.DrawQueen()

    def occupied(self, i, j):
        return self._blist[i, j].occupied


root=Tk()
myContainer = Frame(root)
myContainer.pack()

board=MyBoard(myContainer)

board.DrawQueen(1, 2)
board.DrawQueen(2, 6)

raw_input() # A Hack debug statement

for i in range(8):
    for j in range(8):
        if board.occupied(i, j):
            print "%d, %d is occupied" % (i, j)

raw_input() # A Hack debug statement
print "\n"*3


--
Mike Meyer <m...@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.

solari...@yahoo.com

Dec 8, 2005, 1:13:09 AM
Mike,

Thanks for your insight. It has been a big help.

I guess I was trying to learn too much with my original code. Trying to
implement inheritance, object creation, calling methods via inheritance
made the code harder than it needed to be.

I'm off to study the code. (Hmm.. how does python parse ("green",
"red")[(i * 8 + j) % 2] command ... he says while reaching for "python
for the semi-illiterate" ...)

Again, thanks for your help.


Jpl

John Bushnell

Dec 8, 2005, 5:42:43 AM
I think that's supposed to be [(i + j) % 2] for the index to the
("green","red") tuple
(since i*8 is always even).
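
A quick way to see the difference is to print a few rows produced by
each index expression (rough illustration):

# i * 8 is always even, so (i * 8 + j) % 2 reduces to j % 2:
for i in range(3):
    print [("green", "red")[(i * 8 + j) % 2] for j in range(4)]
# every row comes out the same - vertical stripes, not a checkerboard

# with (i + j) % 2 the pattern shifts on every row - a real checkerboard:
for i in range(3):
    print [("green", "red")[(i + j) % 2] for j in range(4)]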

Steven D'Aprano

Dec 8, 2005, 4:51:01 PM
On Wed, 07 Dec 2005 23:58:02 -0500, Mike Meyer wrote:

> "solaris_1234" <solari...@yahoo.com> writes:
>
>> 1) The stmt "board.Blist[10].DrawQueen(board.Blist[10].b1)" seems
>> awkward. Is there another way (cleaner, more intuitive) to get the
>> same thing done?
>
> Yes. Reaching through objects to do things is usually a bad idea.

I don't necessarily disagree, but I don't understand why you say this. Why
is it bad?

> Some languages don't allow you to do that at all;

Fortunately we aren't using "some languages", we're using Python, and so
we aren't forced to fill our classes with helper functions when we can
simply call the object methods directly.


--
Steven.

Paul Rubin

Dec 8, 2005, 5:29:11 PM
Steven D'Aprano <st...@REMOVETHIScyber.com.au> writes:
> > Yes. Reaching through objects to do things is usually a bad idea.
> I don't necessarily disagree, but I don't understand why you say this. Why
> is it bad?

The traditional OOP spirit is to encapsulate the object's entire
behavior in the class definition.

Kent Johnson

Dec 8, 2005, 5:57:12 PM
Steven D'Aprano wrote:
> On Wed, 07 Dec 2005 23:58:02 -0500, Mike Meyer wrote:
>>>1) The stmt "board.Blist[10].DrawQueen(board.Blist[10].b1)" seems
>>>awkward. Is there another way (cleaner, more intuitive) to get the
>>>same thing done?
>>
>>Yes. Reaching through objects to do things is usually a bad idea.
>
>
> I don't necessarily disagree, but I don't understand why you say this. Why
> is it bad?

http://en.wikipedia.org/wiki/Law_of_Demeter

Kent

Mike Meyer

Dec 8, 2005, 6:25:36 PM
Steven D'Aprano <st...@REMOVETHIScyber.com.au> writes:
> On Wed, 07 Dec 2005 23:58:02 -0500, Mike Meyer wrote:
>> "solaris_1234" <solari...@yahoo.com> writes:
>>> 1) The stmt "board.Blist[10].DrawQueen(board.Blist[10].b1)" seems
>>> awkward. Is there another way (cleaner, more intuitive) to get the
>>> same thing done?
>> Yes. Reaching through objects to do things is usually a bad idea.
> I don't necessarily disagree, but I don't understand why you say this. Why
> is it bad?

Such behavior couples you to the objects you use very tightly. This
makes it harder to adapt those objects to changing needs. One popular
rule of thumb is the "Law of Demeter". Googling for that will turn up
lots of information.

My standard object interface is modeled after Meyer's presentation in
OOSC: an object's state is manipulated with methods and examined with
attributes; manipulating attributes doesn't change the internal state
of the object. This makes it possible to change the internal
representation of a class without having to change all the clients of
the class to match.
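
A trivial illustration of that convention (a made-up example):

class Counter(object):
    def __init__(self):
        self.count = 0            # state is examined via an attribute

    def increment(self, by=1):    # state is changed via a method
        self.count += by

Clients read c.count freely, but they change the state by calling
c.increment(), never by assigning to c.count.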

>> Some languages don't allow you to do that at all;
> Fortunately we aren't using "some languages", we're using Python, and so
> we aren't forced to fill our classes with helper functions when we can
> simply call the object methods directly.

Yup. I'm not pushing for a change in Python to suit my design
goals. If I feel the need for languages that enforce my design
decisions, I know where to find them.

<mike

Bruno Desthuilliers

Dec 8, 2005, 8:10:08 PM
solari...@yahoo.com wrote:

>
> I'm off to study the code. (Hmm.. how does python parse ("green",
> "red")[(i * 8 + j) % 2] command ...

("green", "red")[0] == "green"
("green", "red")[1] == "red"

(i * 8 + j) is somewhat trivial (just take care of precedence order),
and will return an integer.
% is the modulo operator.
The modulo 2 of any integer x is 0 if x is even and 1 if x is odd
(that's in fact the reversed definition !-)

So this expression[1] will return "green" if (i * 8 + j) is even and
"red" if it is odd.

Using computed indexed access for dispatch is a common python idiom.
Instead of:

if condition:
    result = iftrue
else:
    result = iffalse

you can simply write:
result = (iffalse, iftrue)[condition]
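
For example (True and False count as 1 and 0 when used as indexes):

print ("even", "odd")[7 % 2]    # prints: odd
print ("no", "yes")[7 > 5]      # prints: yes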


[1] 'expression', not 'command'. An expression has a value and can be
used on the right hand side of the assignment operator.

Steven D'Aprano

Dec 8, 2005, 7:40:41 PM
Paul Rubin wrote:
> The traditional OOP spirit is to encapsulate the object's entire
> behavior in the class definition.

Uh huh. Say I have:

class Coordinate:
    def __init__(self, x=0.0, y=0.0):
        self.x = x
        self.y = y

pt = Coordinate(1.0, 2.5)

Then, somewhere in my application, I need twice the
value of the y ordinate. I would simply say:

value = 2*pt.y

But you claim that this is bad practice, and to be true
to the OOP spirit I should encapsulate the objects
behaviour by creating a method like this:

# add this to the class definition
def mult_y_ord(self, other):
    "Return the y ordinate multiplied by other."
    return other*self.y

Presumably then I also need add_y_ord, sub_y_ord,
rsub_y_ord, div_y_ord, and so on for every method that
floats understand, plus *another* set of methods that
do the same thing for the x ordinate. And in every
case, these Coordinate methods are trivial one-liners.

Do people really do this?

Yes, I could encapsulate the lot with a factory
function that applied a specified operator to a
specified attribute, and populate the class at runtime.
But why would I want to?

Now, as I see it, the whole point of encapsulation is
that you *don't* need to fill your class definition
with meaningless helper functions. If an attribute of a
instance is a float, you can just call float methods on
the attribute and it should work. If the attribute is a
list, list methods will work. If the attribute is an
instance of a custom class, the same general technique
will still work.

--
Steven.

Mike Meyer

Dec 8, 2005, 8:46:33 PM
Steven D'Aprano <st...@REMOVEMEcyber.com.au> writes:
> Paul Rubin wrote:
>> Steven D'Aprano <st...@REMOVETHIScyber.com.au> writes:
>>>> Yes. Reaching through objects to do things is usually a bad idea.
>>>I don't necessarily disagree, but I don't understand why you say this. Why
>>>is it bad?
>> The traditional OOP spirit is to encapsulate the object's entire
>> behavior in the class definition.
> Uh huh. Say I have:
>
> class Coordinate:
>     def __init__(self, x=0.0, y=0.0):
>         self.x = x
>         self.y = y
>
> pt = Coordinate(1.0, 2.5)
> Presumably then I also need add_y_ord, sub_y_ord, rsub_y_ord,
> div_y_ord, and so on for every method that floats understand, plus
> *another* set of methods that do the same thing for the x
> ordinate. And in every case, these Coordinate methods are trivial
> one-liners.
> Do people really do this?

Yes, but usually with better API design. For instance, your Coordinate
class might have scale, translate, rotate and move methods. These are
still relatively simple, but they aren't one-liners. The important thing
is that these methods capture common, Coordinate-level operations and
bundle them up so that clients don't have to, for instance, do the
trig needed to rotate a point themselves. They can just use the rotate
method.
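
Something along these lines, say (an untested sketch - the exact
signatures are just my guess at a reasonable API):

import math

class Coordinate(object):
    def __init__(self, x=0.0, y=0.0):
        self.x = x
        self.y = y

    def translate(self, delta_x=0.0, delta_y=0.0):
        # move the point so clients don't adjust x and y themselves
        self.x += delta_x
        self.y += delta_y

    def rotate(self, angle):
        # rotate about the origin; the trig stays inside the class
        x, y = self.x, self.y
        self.x = x * math.cos(angle) - y * math.sin(angle)
        self.y = x * math.sin(angle) + y * math.cos(angle)

    def scale(self, factor):
        self.x *= factor
        self.y *= factor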

> Yes, I could encapsulate the lot with a factory function that applied
> a specified operator to a specified attribute, and populate the class
> at runtime. But why would I want to?

You don't. You're thinking about things at the wrong level. You don't
want to think about "things you do to a Coordinate's attribute". You
want to think about "things you do to a Coordinate".

> Now, as I see it, the whole point of encapsulation is that you *don't*
> need to fill your class definition with meaningless helper
> functions.

Correct. That's where you went wrong - your methods were essentially
meaningless. They just manipulated the attributes, not the
Coordinate. Your methods should be meaningful for the object, not just
the attributes.

> If an attribute of a instance is a float, you can just call
> float methods on the attribute and it should work. If the attribute is
> a list, list methods will work. If the attribute is an instance of a
> custom class, the same general technique will still work.

So, if we expand your Coordinate class to have attributes r and theta
(the same coordinate expressed in polar form), which are also floats,
does it make sense to write: pt.r *= 2? For that matter, does it
*still* make sense to write pt.x *= x?

Ok, my design philosophy is that you don't do those things. However,
Python is powerful enough that you can do those things and have them
work right. You just have to make x, y, theta and r properties, so you
can run code when someone sets or reads them.
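
Roughly like this, for the r half of it (a sketch; theta would be
analogous, and I'm ignoring the degenerate point at the origin):

import math

class Coordinate(object):
    def __init__(self, x=0.0, y=0.0):
        self.x = x              # cartesian view: plain attributes
        self.y = y

    def _get_r(self):
        return math.hypot(self.x, self.y)

    def _set_r(self, new_r):
        # rescale the stored x and y so both views stay consistent
        factor = new_r / math.hypot(self.x, self.y)
        self.x *= factor
        self.y *= factor

    r = property(_get_r, _set_r)

With that, pt.r *= 2 doubles the length of the vector, and pt.x and
pt.y follow along.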

And finally, there are times when your class is really just a
convenient way to bundle data together (because it's easier to write
"pt.x" than pt['x']), so there really aren't any applicable high-level
methods. In that case, all you do is tweak the attributes; you might
as well do it directly.

<mike

Alex Martelli

Dec 8, 2005, 10:32:19 PM
Mike Meyer <m...@mired.org> wrote:

> My standard object interface is modeled after Meyer's presentation in
> OOSC: an object's state is manipulated with methods and examined with
> attributes; manipulating attributes doesn't change the internal state
> of the object. This makes it possible to change the internal
> representation of a class without having to change all the clients of
> the class to match.

Note that properties enable you to obtain the goal in your final
sentence while letting attributes still be freely assigned -- a vastly
preferable solution. As far as I recall, Meyer's Eiffel allows this
syntactic transparency for accessing attributes (since it does not
require parentheses in function calls) but not for setting them; so you
might end up writing boilerplate setThis, setThat, and so on, although
at least you're spared the half of the boilerplate that goes getThis,
getThat in typical Java...


Alex

Mike Meyer

Dec 8, 2005, 11:31:53 PM

Yup, properties are powerful things - I was very happy to see them
added to Python. And Eiffel indeed gets the same transparency by
allowing obj.feature to be a parameterless method invocation (without
having to go through the dance required to create a property), an
attribute reference, or a reference to a computed constant (a "once"
feature).

On the other hand, Eiffel specifically forbids setting attributes
directly, except for the object they belong to. This is done to
enforce the design goals I stated above: attributes are the "readouts"
for an object, and methods are the knobs/dials/etc. This also ties in
with the compiler having facilities to check class invariants. If you
allow assignments to attributes in other classes, the assignments have
to generate code to check the invariants every time you have such an
assignment, otherwise a future attribute read may come from an object
in an invalid state. If you only allow attributes to be set by the
owning object's methods, then you only have to check the invariants on
method exit.

Since I like that object model, this restriction doesn't bother me. On
the other hand, I don't find myself writing set_* boilerplate very
often - my methods tend to be written to manipulate the class rather
than the attributes. Most of the time I write such methods, it's
because Eiffel doesn't have default values for arguments. So what I'd
write in Python as:

obj = My_class(1, foo = 23)

I write in Eiffel as:

obj = MY_CLASS(1)
obj.set_foo(23)

Different idioms for different languages. Trying to write Python in
Eiffel doesn't work any better than trying to write C++ in Python.

BartlebyScrivener

Dec 9, 2005, 9:03:07 AM
>>http://en.wikipedia.org/wiki/Law_of_Demeter <<

That was fun. Thanks, Kent.

<rd>

Alex Martelli

Dec 9, 2005, 10:44:06 AM
Mike Meyer <m...@mired.org> wrote:

> al...@mail.comcast.net (Alex Martelli) writes:
> > Mike Meyer <m...@mired.org> wrote:
> >> My standard object interface is modeled after Meyer's presentation in
> >> OOSC: an object's state is manipulated with methods and examined with
> >> attributes; manipulating attributes doesn't change the internal state
> >> of the object. This makes it possible to change the internal
> >> representation of a class without having to change all the clients of
> >> the class to match.

...


> On the other hand, Eiffel specifically forbids setting attributes
> directly, except for the object they belong to. This is done to
> enforce the design goals I stated above: attributes are the "readouts"

We must have a different meaning in mind for the word "goal"; I take it
to mean, as a web definition says, "the state of affairs that a plan is
intended to achieve". By this definition, your second quoted sentence
identifies a goal, but the former one doesn't -- it's a MEANS, not an
END, as strongly confirmed by the words "This makes it possible to...".

Enabling internals' changes is a goal, and an important one; doing so by
forbidding the setting of attributes is one way to help achieve it, but
properties are another way, and as a matter of language design it
appears to me to be a better approach. I haven't used Eiffel "in
anger", just evaluated and rejected it years ago for an employer, but I
have used other languages such as ObjectiveC which follow a similar
approach, so my preference is based on some experience (Ruby, though
strongly influenced by Smalltalk, does let you define attribute-setter
methods, with a net effect similar to Delphi's or Python's properties).

> for an object, and methods are the knobs/dials/etc. This also ties in
> with the compiler having facilities to check class invariants. If you
> allow assignments to attributes in other classes, the assignments have
> to generate code to check the invariants every time you have such an
> assignment, otherwise a future attribute read may come from an object
> in an invalid state. If you only allow attributes to be set by the
> owning object's methods, then you only have to check the invariants on
> method exit.

What classes' invariants do you have to check in those cases? E.g.,
consider zim.foo.bar.baz() -- you do have to check the invariants of
bar, foo AND zim, right? And you must do it with code placed inline
after this specific call, since not all calls to that baz method must
check invariants in foo and zim. So, what's different about having to
generate just the same inline code for, say, zim.foo.bar=baz ? Since
your compiler must be ready to generate such checks anyway, forbidding
the second form appears to have no pluses. Or, in other words, having
to spell zim.foo.bar=baz as zim.foo.set_bar(baz) [[and having to code
boilerplate to implement set_bar]] is a style-choice (common to Eiffel
and Smalltalk, and others) which I consider inferior to the alternative
choice you find in Ruby or Python [[allowing autogeneration of the
implied attribute-setter method if necessary]].

> Different idioms for different languages. Trying to write Python in
> Eiffel doesn't work any better than trying to write C++ in Python.

Absolutely, but here I was discussing language design, not usage of
languages whose design is an external "given".


Alex

Mike Meyer

Dec 9, 2005, 5:42:52 PM
al...@mail.comcast.net (Alex Martelli) writes:
> Mike Meyer <m...@mired.org> wrote:
>> for an object, and methods are the knobs/dials/etc. This also ties in
>> with the compiler having facilities to check class invariants. If you
>> allow assignments to attributes in other classes, the assignments have
>> to generate code to check the invariants every time you have such an
>> assignment, otherwise a future attribute read may come from an object
>> in an invalid state. If you only allow attributes to be set by the
>> owning object's methods, then you only have to check the invariants on
>> method exit.
> What classes' invariants do you have to check in those cases? E.g.,
> consider zim.foo.bar.baz() -- you do have to check the invariants of
> bar, foo AND zim, right?

Nope, just bar. Attributes display state, they don't let you change
it. Nothing you do with zim.foo or zim.foo.bar can change the state of
zim. The only invariants you need to check are bar's, which you do at
the exit to its baz method.

> Or, in other words, having
> to spell zim.foo.bar=baz as zim.foo.set_bar(baz) [[and having to code
> boilerplate to implement set_bar]] is a style-choice (common to Eiffel
> and Smalltalk, and others) which I consider inferior to the alternative
> choice you find in Ruby or Python [[allowing autogeneration of the
> implied attribute-setter method if necessary]].

It's a style choice derived from the designer's beliefs about what
constitutes "good design". It's not really any different from python
requiring you to spell fetch_tuple().index(val) as
list(fetch_tuple()).index(val): the language designers have decided
what's "proper" usage of some element of the language, and the
features of the language support such usage, and not other usage.

You may not agree with what Meyer believes - in which case you would
be right to reject any language he designed.

Alex Martelli

Dec 9, 2005, 10:02:09 PM
Mike Meyer <m...@mired.org> wrote:
...

> > What classes' invariants do you have to check in those cases? E.g.,
> > consider zim.foo.bar.baz() -- you do have to check the invariants of
> > bar, foo AND zim, right?
>
> Nope, just bar. Attributes display state, they don't let you change
> it. Nothing you do with zim.foo or zim.foo.bar can change the state of
> zim. The only invariants you need to check are bar's, which you do at
> the exit to its baz method.

So foo's class is not allowed to have as its invariant any formula
depending on the attributes of its attribute bar, such as "bar.x>23" or
the like? Wow! How does Eiffel enforce this prohibition? Forbidding
access to any attribute's attribute in an invariant? I sure don't
remember that from my study of Eiffel, but admittedly that was in the
past. I'm also quite dubious as to how you can then express some
invariants that can be very important, but before I delve into that I
would ask you to confirm that there's a prohibition of access to
attributes' attributes in an invariant (ideally with some URLs,
thanks!).

> You may not agree with what Meyer believes - in which case you would
> be right to reject any language he designed.

Of course I don't agree with such absurd statements of his as the fact
that you can have OO only with static typing, which would rule out
Smalltalk (for example!) from the set of OO languages; I'm hard put to
see how anybody could _agree_ with them, in fact. Further, I believe
static typing is appropriate only in a functional language (unmodifiable
data), and untenable, as a matter of principle, in languages which let
you alter data on the fly; the IS-A relationship conjoined with
modifiable data is just too strict (e.g., you cannot state "a Circle
IS-A Ellipse" if modification exists and Ellipse has been reasonably
designed with methods set_x and set_y whose postconditions include the
fact that they modify ONLY the named axis; you typically want
covariance, but that's mathematically unsound, as it would violate
Liskov's substitution principle; etc, etc) so you inevitably end up with
runtime checks instead (so that the stance of "things must be checked at
compile time" gets broken -- but then, the whole concept of contract
programming implies lots of *runtime* checks, anyway). Still, if I
_had_ to use a statically typed OO language without being allowed to
pick a functional one, I guess I would not consider Eiffel (as a
language, net of any practical issues of implementation) necessarily
worse than, say, C++ or Java, which have their own problems.

Fortunately, for most of my work, I do get to use Python, Objective-C,
or Haskell, which, albeit in very different ways, are all "purer" (able
to stick to their principles)...


Alex

Mike Meyer

Dec 9, 2005, 10:35:44 PM
al...@mail.comcast.net (Alex Martelli) writes:
> Mike Meyer <m...@mired.org> wrote:
>> > What classes' invariants do you have to check in those cases? E.g.,
>> > consider zim.foo.bar.baz() -- you do have to check the invariants of
>> > bar, foo AND zim, right?
>> Nope, just bar. Attributes display state, they don't let you change
>> it. Nothing you do with zim.foo or zim.foo.bar can change the state of
>> zim. The only invariants you need to check are bar's, which you do at
>> the exit to its baz method.
> So foo's class is not allowed to have as its invariant any formula
> depending on the attributes of its attribute bar, such as "bar.x>23" or
> the like?

Of course you can do such things. But it's a silly thing to do. That
invariant should be written as x > 23 for the class bar is an instance
of. Invariants are intended to be used to check the state of the
class, not the state of arbitary other objects. Doing the latter
requires that you have to check the invariants of every object pretty
much every time anything changes.

Invariants are a tool. Used wisely, they make finding and fixing some
logic bugs much easier than it would be otherwise. Used unwisely, they
don't do anything but make the code bigger.

> I'm also quite dubious as to how you can then express some
> invariants that can be very important

Not all invariants, pre-conditions or post-conditions can be
expressed.

> Fortunately, for most of my work, I do get to use Python, Objective-C,
> or Haskell, which, albeit in very different ways, are all "purer" (able
> to stick to their principles)...

I think Eiffel is fairly pure. But practicality beats purity, so there
are places where it has to give in and deviate from its
principles. Clearly, you don't agree with the underlying
philosophy. So don't use it.

Steven D'Aprano

Dec 10, 2005, 12:32:31 AM
On Thu, 08 Dec 2005 20:46:33 -0500, Mike Meyer wrote:

> Steven D'Aprano <st...@REMOVEMEcyber.com.au> writes:
>> Paul Rubin wrote:
>>> Steven D'Aprano <st...@REMOVETHIScyber.com.au> writes:
>>>>> Yes. Reaching through objects to do things is usually a bad idea.
>>>>I don't necessarily disagree, but I don't understand why you say this. Why
>>>>is it bad?
>>> The traditional OOP spirit is to encapsulate the object's entire
>>> behavior in the class definition.
>> Uh huh. Say I have:
>>
>> class Coordinate:
>>     def __init__(self, x=0.0, y=0.0):
>>         self.x = x
>>         self.y = y
>>
>> pt = Coordinate(1.0, 2.5)
>> Presumably then I also need add_y_ord, sub_y_ord, rsub_y_ord,
>> div_y_ord, and so on for every method that floats understand, plus
>> *another* set of methods that do the same thing for the x
>> ordinate. And in every case, these Coordinate methods are trivial
>> one-liners.
>> Do people really do this?
>
> Yes, but usually with better API design. For instance, your Coordinate
> class might have scale, translate, rotate and move methods.

Which I obviously left as exercises for the reader. Did I really need to
specify the entire API for an example like this?


> These are
> still relatively simple, but they aren't one-liners. The important thing
> is that these methods capture common, Coordinate-level operations and
> bundle them up so that clients don't have to, for instance, do the
> trig needed to rotate a point themselves. They can just use the rotate
> method.

Of course, if you have to do an operation on a *coordinate* then it makes
perfect sense to create coordinate methods. I'm not disputing that. But in
my example, I'm not doing things to a coordinate, I'm doing things to the
entities which make up a coordinate -- the x and y ordinates.


>> Yes, I could encapsulate the lot with a factory function that applied
>> a specified operator to a specified attribute, and populate the class
>> at runtime. But why would I want to?
>
> You don't. You're thinking about things at the wrong level. You don't
> want to think about "things you do to a Coordinate's attribute". You
> want to think about "things you do to a Coordinate".

I've done my thinking about coordinates. That is why I wrote a Coordinate
class. But sometimes you don't want to do an operation on a coordinate, you
want to do something to the x or y ordinate alone. It is utter nonsense to
suggest that you should abstract coordinates to the point that you no
longer know -- or pretend that you don't -- that a coordinate is a pair of
ordinates. What, we're supposed to guard against the possibility that
coordinates might be implemented by a B-tree or something?

(Yes, I'm aware there are 3D coordinates, or even n-dimensional ones, and
all sorts of special purpose coordinates with, e.g. ordinates limited to
integral values. I'm not writing a general anything-coordinate class, I'm
writing a simple 2D coordinate pair class.)


>> Now, as I see it, the whole point of encapsulation is that you *don't*
>> need to fill your class definition with meaningless helper functions.
>
> Correct. That's where you went wrong - your methods were essentially
> meaningless. They just manipulated the attributes, not the Coordinate.
> Your methods should be meaningful for the object, not just the
> attributes.

*Exactly* my point -- and demonstrating that you've missed that point.
Writing special purpose methods to manipulate object attributes when you
can just as easily manipulate the object attributes is a bad idea. Methods
should be meaningful for the object.

According to the Law of Demeter (more of a guideline really), each level
should only talk to the immediate level next to it. Fine: I have a name
"pt", which is bound to a Coordinate object -- so "pt" can call Coordinate
methods. But it shouldn't call float methods on the attributes of that
Coordinate object, because those float methods are two levels away.

The bad side of the Guideline of Demeter is that following it requires
you to fill your class with trivial, unnecessary getter and setter
methods, plus methods for arithmetic operations, and so on.

Or just say No to the "law" of Demeter.

As a guideline, to make you think about what you're doing ("am I doing too
much work? should my class implement a helper function for this common
task?") it is perfectly fine. But when you find yourself writing trivial
methods that do nothing but call methods on an attribute, you are another
victim of a bad law.


>> If an attribute of a instance is a float, you can just call float
>> methods on the attribute and it should work. If the attribute is a
>> list, list methods will work. If the attribute is an instance of a
>> custom class, the same general technique will still work.
>
> So, if we expand your Coordinate class to have attributes r and theta
> (the same coordinate expressed in polar form), which are also floats,
> does it make sense to write: pt.r *= 2?

If I expand the Coordinate class to allow both polar coordinates and
Cartesian coordinates, I need some mechanism for keeping the two views in
sync. If I can do that (say, with properties) then sure it makes sense to
double the length of the coordinate vector.

If I *can't* keep the two views in sync, then I have some choices to make.
For instance, I might have a Cartesian class and a Polar class, with
methods to convert from one to the other, and give up on the desire for
one object to encapsulate both. My Coordinate class doesn't encapsulate
writing a point in English "one point two, three point four five"
either -- you shouldn't expect a single class to encapsulate every
imaginable representation of that class ("but what if I want to write my
binary tree in Morse code?").


--
Steven.

Mike Meyer

Dec 10, 2005, 1:28:52 AM

Given that we're talking about API design, then yes, you do. With only
partial examples, you'll only get partial conclusions. In particular,
you can get most of your meaningless methods out of a properly
designed Coordinate API. For example, add/sub_x/y_ord can all be
handled with move(delta_x = 0, delta_y = 0).
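
That is, roughly:

    def move(self, delta_x=0, delta_y=0):
        self.x += delta_x
        self.y += delta_y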

>>> Yes, I could encapsulate the lot with a factory function that applied
>>> a specified operator to a specified attribute, and populate the class
>>> at runtime. But why would I want to?
>> You don't. You're thinking about things at the wrong level. You don't
>> want to think about "things you do to a Coordinate's attribute". You
>> want to think about "things you do to a Coordinate".
> I've done my thinking about coordinates. That is why I wrote a Coordinate
> class. But sometimes you don't want to do an operation on a coordinate, you
> want to do something to the x or y ordinate alone. It is utter nonsense to
> suggest that you should abstract coordinates to the point that you no
> longer know -- or pretend that you don't -- that a coordinate is a pair of
> ordinates.

Which demonstrates that *you've* missed the point. It's not utter
nonsense to abstract coordinates - or any other class - to the point
that you don't know what the underlying implementation is. That's what
abstraction is all about. Yup, a coordinate has a pair of ordinates. So
you have attributes to get them.

>>> Now, as I see it, the whole point of encapsulation is that you *don't*
>>> need to fill your class definition with meaningless helper functions.
>> Correct. That's where you went wrong - your methods were essentially
>> meaningless. They just manipulated the attributes, not the Coordinate.
>> Your methods should be meaningful for the object, not just the
>> attributes.
> *Exactly* my point -- and demonstrating that you've missed that point.
> Writing special purpose methods to manipulate object attributes when you
> can just as easily manipulate the object attributes is a bad idea. Methods
> should be meaningful for the object.

And you've once again missed the point. The reason you don't
manipulate the attributes directly is because it violates
encapsulation, and tightens the coupling between your class and the
classes it uses. It means you see the implementation details of the
classes you are using, meaning that if that changes, your class has to
be changed to match.

> The bad side of the Guideline of Demeter is that following it requires
> you to fill your class with trivial, unnecessary getter and setter
> methods, plus methods for arithmetic operations, and so on.

No, it requires you to actually *think* about your API, instead of
just allowing every class to poke around inside your implementation.

> As a guideline, to make you think about what you're doing ("am I doing too
> much work? should my class implement a helper function for this common
> task?") it is perfectly fine. But when you find yourself writing trivial
> methods that do nothing but call methods on an attribute, you are another
> victim of a bad law.

More likely, you're just committing a bad design.

> If I *can't* keep the two views in sync, then I have some choices to make.
> For instance, I might have a Cartesian class and a Polar class, with
> methods to convert from one to the other, and give up on the desire for
> one object to encapsulate both.

Those are all valid choices. They have their own tradeoffs. Following
the LoD lowers the coupling between classes, making it easier to
change/extend/etc. them; on the down side, you have to take more care
in your API design, and some operations take a bit more typing. You
may not like that trade off. That's fine; I've already mentioned some
cases where it makes sense to do otherwise. Don't do things that way -
but do know what the tradeoffs are.

Xavier Morel

Dec 10, 2005, 6:07:13 AM
Mike Meyer wrote:
> And you've once again missed the point. The reason you don't
> manipulate the attributes directly is because it violates
> encapsulation, and tightens the coupling between your class and the
> classes it uses. It means you see the implementation details of the
> classes you are using, meaning that if that changes, your class has to
> be changed to match.
>
One of Python's great strengths is that a property is, for all intents
and purposes, a fully virtual instance attribute/member.

If you follow the KISS principle, as long as your (naive? probably)
implementation of the class has "real" attributes and their manipulation
is meaningful & makes sense from an external point of view, just leave
it at that. If you happen to change the implementation for whatever
reason and happen to remove the real attributes, just create virtual
attributes with a property and be done with it.
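
For instance (a toy sketch, names made up), code that was using pt.x
keeps working even if the implementation later moves the data into a
tuple:

class Point(object):
    # first, naive implementation: a real attribute
    def __init__(self, x):
        self.x = x

class Point2(object):
    # later implementation: the data now lives in a tuple, but a
    # property keeps the old attribute interface alive
    def __init__(self, x):
        self._coords = (x,)
    def _get_x(self):
        return self._coords[0]
    def _set_x(self, value):
        self._coords = (value,)
    x = property(_get_x, _set_x)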

Wrapping everything just because you can and considering that
encapsulation is only truly done if you never happen to touch what's
under the hood (even without knowing it) is the Java Way; this is Python.

In Python, the interface of an object instance is always virtualized
because you can never know if you're manipulating "real" attributes or
property-spawned virtual attributes. _That_, in my opinion, is not
knowing about the implementation details.

While a Java object is mineral (set in stone) and trying to abstract
everything from the start (and generate 50 pages documentation for each
class to be sure that you didn't miss anything) kind of makes sense, a
Python object is an organic, morphing, living entity. Don't abstract
everything and over-engineer from the start just because you can and
because you'd do it in Java or C#, only abstract (from your point of
view) when you *have to*. And remember that "Java's Object Oriented
Programming" is not the only one worth using, even though some people
would like to make you believe it.

Steven D'Aprano

Dec 10, 2005, 8:52:42 AM

I think you and I are coming at this problem from different directions.

To my mind, the Coordinate class was complete in potentia, and did not
need to be listed because I was not operating on a Coordinate instance as
a whole. If you go back and look at my example, I was explicit about
wanting to do something with a single ordinate, not a coordinate pair.
Listing the entire API for the Coordinate class would be a waste of time,
since I wasn't concerned about Coordinate.rotate(), Coordinate.reflect()
or any other method applying to a coordinate pair.


> In particular,
> you can get most of your meaningless methods out of a properly
> designed Coordinate API. For example, add/sub_x/y_ord can all be
> handled with move(delta_x = 0, delta_y = 0).

Here is my example again:

[quote]


Then, somewhere in my application, I need twice the
value of the y ordinate. I would simply say:

value = 2*pt.y
[end quote]

I didn't say I wanted a coordinate pair where the y ordinate was double
that of the original coordinate pair. I wanted twice the y ordinate, which
is a single real number, not a coordinate pair.


[snip]

> And you've once again missed the point. The reason you don't
> manipulate the attributes directly is because it violates
> encapsulation, and tightens the coupling between your class and the
> classes it uses. It means you see the implementation details of the
> classes you are using, meaning that if that changes, your class has to
> be changed to match.

Yes. And this is a potential problem for some classes. The wise programmer
will recognise which classes have implementations likely to change, and
code defensively by using sufficient abstraction and encapsulation to
avoid later problems.

The not-so-wise programmer takes abstraction as an end itself, and
consequently spends more time and effort defending against events which
almost certainly will never happen than it would have taken to deal with
it if they did.

Do you lie awake at nights worrying that in Python 2.6 sys.stdout will be
renamed to sys.standard_output, and that it will no longer have a write()
method? According to the "law" of Demeter, you should, and the writers of
the sys module should have abstracted the fact that stdout is a file away
by providing a sys.write_to_stdout() function.

That is precisely the sort of behaviour which I maintain is unnecessary.

>> The bad side of the Guideline of Demeter is that following it requires
>> you to fill your class with trivial, unnecessary getter and setter
>> methods, plus methods for arithmetic operations, and so on.
>
> No, it requires you to actually *think* about your API, instead of
> just allowing every class to poke around inside your implementation.

But I *want* other classes to poke around inside my implementation.
That's a virtue, not a vice. My API says:

"In addition to the full set of methods which operate on the coordinate as
a whole, you can operate on the individual ordinates via instance.x and
instance.y which are floats."

Your API says:

"In addition to the full set of methods which operate on the coordinate as
a whole, you can operate on the individual ordinates via methods add_x,
add_y, mult_x, mult_y, sub_x, sub_y, rsub_x, rsub_y, div_x, div_y, rdiv_x,
rdiv_y, exp_x, exp_y, rexp_x, rexp_y...; the APIs of these methods are: ... "

My class is written, tested and complete before you've even decided on
your API. And you don't even really get the benefit of abstraction: I have
two public attributes (x and y) that I can't change without breaking other
people's code, you've got sixteen-plus methods that you can't change
without breaking other people's code.

(It goes without saying that these are in addition to the full set of
methods which operate on the coordinate as a whole -- our classes are
identical for those.)

The end result is that your code is *less* abstract than mine: your code
has to specify everything about ordinates: they can be added, they can be
subtracted, they can be multiplied, they can be printed, and so on. That's
far more concrete and far less abstract than mine, which simply says
ordinates are floats, and leave the implementation of floats up to Python.

--
Steven.

Antoon Pardon

Dec 10, 2005, 10:46:35 AM
On 2005-12-10, Steven D'Aprano <st...@REMOVETHIScyber.com.au> wrote:
> On Sat, 10 Dec 2005 01:28:52 -0500, Mike Meyer wrote:
>
> The not-so-wise programmer takes abstraction as an end itself, and
> consequently spends more time and effort defending against events which
> almost certainly will never happen than it would have taken to deal with
> it if they did.
>
> Do you lie awake at nights worrying that in Python 2.6 sys.stdout will be
> renamed to sys.standard_output, and that it will no longer have a write()
> method? According to the "law" of Demeter, you should, and the writers of
> the sys module should have abstracted the fact that stdout is a file away
> by providing a sys.write_to_stdout() function.

I find this a strange interpretation.

sys is a module, not an instance. Sure you can use the same notation
and there are similarities but I think the differences are more
important here.

> That is precisely the sort of behaviour which I maintain is unnecessary.
>
>
>
>>> The bad side of the Guideline of Demeter is that following it requires
>>> you to fill your class with trivial, unnecessary getter and setter
>>> methods, plus methods for arithmetic operations, and so on.
>>
>> No, it requires you to actually *think* about your API, instead of
>> just allowing every class to poke around inside your implementation.
>
> But I *want* other classes to poke around inside my implementation.
> That's a virtue, not a vice. My API says:
>
> "In addition to the full set of methods which operate on the coordinate as
> a whole, you can operate on the individual ordinates via instance.x and
> instance.y which are floats."

Yikes. I would never do that. Doing so would tie my code unnecessarily
close to yours and would make it too difficult to change to another
class with a different implementation, like one using tuples or
lists instead of separate x and y instances.

> Your API says:
>
> "In addition to the full set of methods which operate on the coordinate as
> a whole, you can operate on the individual ordinates via methods add_x,
> add_y, mult_x, mult_y, sub_x, sub_y, rsub_x, rsub_y, div_x, div_y, rdiv_x,
> rdiv_y, exp_x, exp_y, rexp_x, rexp_y...; the APIs of these methods are: ... "

Who in heaven's name would need those? Maybe there is no x or y because
the implementation uses a list or a tuple; maybe the implementation
uses polar coordinates because that is more useful for the application
it was planned for.

Sure, a way to unpack your coordinate into a number of individual
ordinate variables could be useful for when you want to manipulate
such an individual number.

> My class is written, tested and complete before you've even decided on
> your API. And you don't even really get the benefit of abstraction: I have
> two public attributes (x and y) that I can't change without breaking other
> people's code, you've got sixteen-plus methods that you can't change
> without breaking other people's code.

No he would have none.

--
Antoon Pardon

Alex Martelli

Dec 10, 2005, 11:15:28 AM
Mike Meyer <m...@mired.org> wrote:
...
> >> it. Nothing you do with zim.foo or zim.foo.bar can change the state of
> >> zim. The only invariants you need to check are bar's, which you do at
> >> the exit to its baz method.
> > So foo's class is not allowed to have as its invariant any formula
> > depending on the attributes of its attribute bar, such as "bar.x>23" or
> > the like?
>
> Of course you can do such things. But it's a silly thing to do. That

I guess this is the crux of our disagreement -- much like, it seems to
me, your disagreement with Xavier and Steven on the other half of this
thread, as I'll try to explain in the following.

> invariant should be written as x > 23 for the class bar is an instance

Let's, for definiteness, say that bar is an instance of class Bar. Now,
my point is that absolutely not all instances of Bar are constrained to
always have their x attribute >23 -- in general, their x's can vary all
over the place; rather, the constraint applies very specifically to this
one instance of Bar -- the one held by foo (an instance of Foo) as foo's
attribute bar.

Let's try to see if I can make a trivially simple use case. Say I'm
using a framework to model statical structures in civil engineering.
I have classes such as Truss, Beam, Pier, Column, Girder, and so forth.

So in a given structure (class Foo) I might have a certain instance of
Beam, attribute beam1 of instances of Foo, which carries a certain load
(dependent on the overall loads borne by each given instance of Foo),
and transfers it to an instance of Pier (attribute pier1 of instances of
Foo) and one of Girder (attribute girder1 ditto).

Each of these structural elements will of course be able to exhibit as
attributes all of its *individual* characteristics -- but the exact
manner of *relationship* between the elements depends on how they're
assembled in a given structure, and so it's properly the business of the
structure, not the elements.

So, one invariant that had better hold to ensure a certain instance foo
of Foo is not about to crash, may be, depending on how Foo's detailed
structural geometry is, something like:

foo.beam1.force_transferred_A <= foo.pier1.max_load_top AND
foo.beam1.force_transferred_B <= foo.girder1.max_load_A

The natural place to state this invariant is in class Foo, by expressing
'foo' as 'self' in Python (or omitting it in languages which imply such
a lookup, of course).

If I'm not allowed (because you think "it's silly"!) to express a class
invariant in terms of attributes of the attributes of an instance of
that class, I basically have to write tons of boilerplate, violating
encapsulation, to express what are really attributes of attributes of
foo "as if" they were attributes of foo directly, e.g.

def beam1_force_transferred_A(self): return self.beam1.force_transferred_A

(or other syntax to the same purpose). After going through this
pointless (truly silly) exercise I can finally code the invariant as

self.beam1_force_transferred_A <= self.pier1_max_load_top AND

(etc). Changing a lot of dots into underscores -- what a way to waste
programmer time! And all to NO advantage, please note, since:

> of. Invariants are intended to be used to check the state of the
> class, not the state of arbitrary other objects. Doing the latter
> requires that you have to check the invariants of every object pretty
> much every time anything changes.

...in the end the invariant DOES have to be checked when anything
relevant changes, anyway, with or without the silly extra indirection.
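
(In Python, with no built-in contract machinery, the direct version is
simply something like this - a sketch using the attribute names from
the example above:

class Foo(object):
    def _check_invariant(self):
        # reads attributes of attributes directly, no delegation needed
        assert self.beam1.force_transferred_A <= self.pier1.max_load_top
        assert self.beam1.force_transferred_B <= self.girder1.max_load_A

with no beam1_force_transferred_A-style forwarding methods in sight.)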

But besides the wasted work, there is a loss of conceptual integrity: I
don't WANT Foo to have to expose the internal details that beam1's
reference point A transfers the force to pier1's top, etc etc, elevating
all of these internal structural details to the dignity of attributes of
Foo. Foo should expose only its externally visible attributes: loads
and forces on all the relevant points, geometric details of the
exterior, and structural parameters that are relevant for operating
safety margins, for example.

The point is that the internal state of an object foo which composes
other objects (foo's attributes) beam1, pier1, etc, INCLUDES some
attributes of those other objects -- thus stating that the need to check
those attributes' relationships in Foo's class invariant is SILLY,
strikes me as totally out of place. If the state is only of internal
relevance, important e.g. in invariants but not to be externally
exposed, what I think of as very silly instead is a style which forces
me to build a lot of "pseudoattributes" of Foo (not to be exposed) by
mindless delegation to attributes of attributes.


> Invariants are a tool. Used wisely, they make finding and fixing some
> logic bugs much easier than it would be otherwise. Used unwisely, they
> don't do anything but make the code bigger.

I disagree, most intensely and deeply, that any reference to an
attribute of an attribute of self in the body of an invariant is
necessarily "unwise".

> > I'm also quite dubious as to how you can then express some
> > invariants that can be very important
>
> Not all invariants, pre-conditions or post-conditions can be
> expressed.

Not all can be sensibly CHECKED, but most definitely all can be
EXPRESSED. Another one of my long-standing contentions with Eiffel is
the inability to express invariants (and pre- and post- conditions)
because the compiler is unable to figure out a decent way to check them;
Z and the VDL, just to name very old design languages, show easy ways to
allow full expression. Of course, if a condition is of the form, say,
"all items of potentially infinite iterable X satisfy predicate P", it
may not be runtime-checkable -- big furry deal, I want to be able to
EXPRESS it anyway, because apart from runtime checking there are other
precious uses of such conditions (e.g., the compiler might be able to
DEDUCE from such a condition some important optimization, or the
compiletime proof of other assertions, when run in the appropriate
mode).

But that has little to do with the use case I covered here. Surely
you're not claiming that the structural invariants showing that foo
isn't about to crash *can't be expressed*, just because the obvious way
to express them violates your stylistic preferences (which I find
totally unjustified in this case) and the way that meets your style
preferences, without offering any advantages, requires lots of pesky
useless boilerplate?! That's not the DEFINITION of "can't"!-)

> > Fortunately, for most of my work, I do get to use Python, Objective-C,
> > or Haskell, which, albeit in very different ways, are all "purer" (able
> > to stick to their principles)...
>
> I think Eiffel is fairly pure. But practicality beats purity, so there
> are places where it has to give in and deviate from its
> principles. Clearly, you don't agree with the underlying
> philosophy. So don't use it.

I don't, but I also occasionally take the time to explain, as I've done
here, where it (or, in this case, a specific style you claim it requires
-- not using attributes of attributes in a class invariant -- I'd
appreciate URLs to where Meyer dictates this restriction, btw)
interferes with purity, practicality, or both.


Alex

Mike Meyer

Dec 10, 2005, 1:33:25 PM
Steven D'Aprano <st...@REMOVETHIScyber.com.au> writes:
>> In particular,
>> you can get most of your meaningless methods out of a properly
>> designed Coordinate API. For example, add/sub_x/y_ord can all be
>> handled with move(delta_x = 0, delta_y = 0).
>
> Here is my example again:
>
> [quote]
> Then, somewhere in my application, I need twice the
> value of the y ordinate. I would simply say:
>
> value = 2*pt.y
> [end quote]
>
> I didn't say I wanted a coordinate pair where the y ordinate was double
> that of the original coordinate pair. I wanted twice the y ordinate, which
> is a single real number, not a coordinate pair.

Here you're not manipulating the attribute to change the class -
you're just using the value of the attribute. That's what they're
there for.

>> And you've once again missed the point. The reason you don't
>> manipulate the attributes directly is because it violates
>> encapsulation, and tightens the coupling between your class and the
>> classes it uses. It means you see the implementation details of the
>> classes you are using, meaning that if that changes, your class has to
>> be changed to match.
> Yes. And this is a potential problem for some classes. The wise programmer
> will recognise which classes have implementations likely to change, and
> code defensively by using sufficient abstraction and encapsulation to
> avoid later problems.

Except only the omniscient programmer can do that perfectly. The
experienced programmer knows that requirements change over the lifetime
of a project, including things that the customer swears on a stack of
holy books will never change.

> Do you lie awake at nights worrying that in Python 2.6 sys.stdout will be
> renamed to sys.standard_output, and that it will no longer have a write()
> method? According to the "law" of Demeter, you should, and the writers of
> the sys module should have abstracted the fact that stdout is a file away
> by providing a sys.write_to_stdout() function.
> That is precisely the sort of behaviour which I maintain is unnecessary.

And that's not the kind of behavior I'm talking about here, nor is it
the kind of behavior that the LoD is designed to help you with (those
are two different things).

>>> The bad side of the Guideline of Demeter is that following it requires
>>> you to fill your class with trivial, unnecessary getter and setter
>>> methods, plus methods for arithmetic operations, and so on.
>>
>> No, it requires you to actually *think* about your API, instead of
>> just allowing every class to poke around inside your implementation.
>
> But I *want* other classes to poke around inside my implementation.
> That's a virtue, not a vice. My API says:
>
> "In addition to the full set of methods which operate on the coordinate as
> a whole, you can operate on the individual ordinates via instance.x and
> instance.y which are floats."

That's an API which makes changing the object more difficult. It may
be the best API for the case at hand, but you should be aware of the
downsides.

> Your API says:

Actually, this is *your* API.

> "In addition to the full set of methods which operate on the coordinate as
> a whole, you can operate on the individual ordinates via methods add_x,
> add_y, mult_x, mult_y, sub_x, sub_y, rsub_x, rsub_y, div_x, div_y, rdiv_x,
> rdiv_y, exp_x, exp_y, rexp_x, rexp_y...; the APIs of these methods are: ... "

That would be a piss-poor API design. Any designer who knows what they
are doing should be able to turn out a better API than that given a
reasonable set of real-world requirements.

> My class is written, tested and complete before you've even decided on
> your API. And you don't even really get the benefit of abstraction: I have
> two public attributes (x and y) that I can't change without breaking other
> people's code, you've got sixteen-plus methods that you can't change
> without breaking other people's code.

> The end result is that your code is *less* abstract than mine: your code
> has to specify everything about ordinates: they can be added, they can be
> subtracted, they can be multiplied, they can be printed, and so on. That's
> far more concrete and far less abstract than mine, which simply says
> ordinates are floats, and leave the implementation of floats up to Python.

Again, this is *your* API, not mine. You're forcing an ugly, obvious
API instead of assuming the designer has some smidgen of ability. I've
already pointed out one trivial way to deal with this, and there are
others.

Mike Meyer

Dec 10, 2005, 1:59:09 PM
al...@mail.comcast.net (Alex Martelli) writes:
> Mike Meyer <m...@mired.org> wrote:
> ...
>> >> it. Nothing you do with zim.foo or zim.foo.bar can change the state of
>> >> zim. The only invariants you need to check are bar's, which you do at
>> >> the exit to its baz method.
>> > So foo's class is not allowed to have as its invariant any formula
>> > depending on the attributes of its attribute bar, such as "bar.x>23" or
>> > the like?
>> Of course you can do such things. But it's a silly thing to do. That
> I guess this is the crux of our disagreement -- much like, it seems to
> me, your disagreement with Xavier and Steven on the other half of this
> thread, as I'll try to explain in the following.
>> invariant should be written as x > 23 for the class bar is an instance
> Let's, for definiteness, say that bar is an instance of class Bar. Now,
> my point is that absolutely not all instances of Bar are constrained to
> always have their x attribute >23 -- in general, their x's can vary all
> over the place; rather, the constraint applies very specifically to this
> one instance of Bar -- the one held by foo (an instance of Foo) as foo's
> attribute bar.

Well, the hard-core solution is to note that your class doesn't really
deal with the type Bar, but deals with a subtype of Bar for which x >
23 in all cases. Since types are represented by classes, you should
subclass Bar so you have a class that represents this subtype. The
class is trivial (with Eiffel conventions):

class RESTRICTED_BAR
inherits BAR
invariant x > 23
END
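
A rough Python analogue -- untested, with made-up names, and assuming a
plain Bar class whose instances carry an x attribute (Python has no
built-in contract support, so the invariant becomes an explicit check in
__setattr__) -- might be:

class Bar(object):
    # minimal stand-in for the Bar under discussion
    def __init__(self, x):
        self.x = x

class RestrictedBar(Bar):
    """A Bar whose instances must always satisfy x > 23."""
    def __setattr__(self, name, value):
        if name == 'x' and not value > 23:
            raise ValueError("invariant violated: x must be > 23")
        self.__dict__[name] = value

Foo would then hold a RestrictedBar rather than a plain Bar, and the
x > 23 condition is enforced wherever x gets assigned.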

> So, one invariant that had better hold to ensure a certain instance foo
> of Foo is not about to crash, may be, depending on how Foo's detailed
> structual geometry is, something like:
>
> foo.beam1.force_transferred_A <= foo.pier1.max_load_top AND
> foo.beam1.force_transferred_B <= foo.girder1.max_load_A
>
> The natural place to state this invariant is in class Foo, by expressing
> 'foo' as 'self' in Python (or omitting it in languages which imply such
> a lookup, of course).

I don't think that's the natural place. It's certainly one place to
consider, and may be the best one. However, it might work equally well
to use preconditions on the methods that add the beam and pier to Foo
to verify that the beam and pier in question are valid. If the
attributes of the beam and pier can't change, this would be the right
way to do it.
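
In Python terms that approach might look roughly like this (an untested
sketch; Foo, add_structure and the parameter names are illustrative,
and the beam/pier/girder objects are assumed to expose the attributes
used in the invariant above):

class Foo(object):
    def add_structure(self, beam, pier, girder):
        # preconditions: reject incompatible parts before they go in
        assert beam.force_transferred_A <= pier.max_load_top
        assert beam.force_transferred_B <= girder.max_load_A
        self.beam1, self.pier1, self.girder1 = beam, pier, girder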

> If I'm not allowed (because you think "it's silly"!) to express a class
> invariant in terms of attributes of the attributes of an instance of
> that class, I basically have to write tons of boilerplate, violating
> encapsulation, to express what are really attributes of attributes of
> foo "as if" they were attributes of foo directly, e.g.

[...]


> (etc). Changing a lot of dots into underscores -- what a way to waste
> programmer time! And all to NO advantage, please note, since:

If you knew it was going to be to no advantage, why did you write the
boilerplate? That's also pretty silly. Care to provide reasons for
your wanting to do this?

>> of. Invariants are intended to be used to check the state of the
>> class, not the state of arbitrary other objects. Doing the latter
>> requires that you have to check the invariants of every object pretty
>> much every time anything changes.
> ...in the end the invariant DOES have to be checked when anything
> relevant changes, anyway, with or without the silly extra indirection.

No, it doesn't have to be checked. Even invariants that don't suffer
from this don't have to be checked. It would be nice if every
invariant was checked every time it might be violated, but that's not
practical. If checking relationships between attributes' attributes is
the best you can do, you do that, knowing that instead of an invariant
violation raising an exception after the code that violates it, the
exception may be raised after the first method of your class that is
called after the invariant is violated. That's harder to debug than
the other way, but if it's the best you can get, it's the best you can
get.

> But besides the wasted work, there is a loss of conceptual integrity: I
> don't WANT Foo to have to expose the internal details that beam1's
> reference point A transfers the force to pier1's top, etc etc, elevating
> all of these internal structural details to the dignity of attributes of
> Foo. Foo should expose only its externally visible attributes: loads
> and forces on all the relevant points, geometric details of the
> exterior, and structural parameters that are relevant for operating
> safety margins, for example.

You're right. Choosing to do that would be a bad idea. I have no idea
why you would do that in any language I'm familiar with. I'd be
interested in hearing about the language you use that requires you to
do that.

>> Invariants are a tool. Used wisely, they make finding and fixing some
>> logic bugs much easier than it would be otherwise. Used unwisely, they
>> don't do anything but make the code bigger.
> I disagree, most intensely and deeply, that any reference to an
> attribute of an attribute of self in the body of an invariant is
> necessarily "unwise".

So do I.

>> > I'm also quite dubious as to how you can then express some
>> > invariants that can be very important
>> Not all invariants, pre-conditions or post-conditions can be
>> expressed.
> Not all can be sensibly CHECKED, but most definitely all can be
> EXPRESSED. Another one of my long-standing contentions with Eiffel is
> the inability to express invariants (and pre- and post- conditions)
> because the compiler is unable to figure out a decent way to check them;
> Z and the VDL, just to name very old design languages, show easy ways to
> allow full expression. Of course, if a condition is of the form, say,
> "all items of potentially infinite iterable X satisfy predicate P", it
> may not be runtime-checkable -- big furry deal, I want to be able to
> EXPRESS it anyway, because apart from runtime checking there are other
> precious uses of such conditions (e.g., the compiler might be able to
> DEDUCE from such a condition some important optimization, or the
> compiletime proof of other assertions, when run in the appropriate
> mode).

So you think that a tool being imperfect means you shouldn't use it
at all? So you don't test your code, because testing can't reveal all
bugs? That's odd - I always thought that testing was a critical part
of program development. I'd be interested in hearing about any
research that justifies doing development with testing, preferably
with URLs.

Alex Martelli

Dec 10, 2005, 3:21:56 PM
Mike Meyer <m...@mired.org> wrote:
...
> Well, the hard-core solution is to note that your class doesn't really
> deal with the type Bar, but deals with a subtype of Bar for which x >
> 23 in all cases. Since types are represented by classes, you should
> subclass Bar so you have a class that represents this subtype. The
> class is trivial (with Eiffel conventions):
>
> class RESTRICTED_BAR
> inherits BAR
> invariant x > 23
> END

Yes, but then once again you have to "publicize" something (an aspect of
a class invariant) which should be dealt with internally; also, this
approach does not at all generalize to "bar1.x>23 OR bar2.x>23" and any
other nontrivial constraint involving anything more than the attributes
of a single instance's attribute and compile-time constants.
So, besides "hard-coreness", this is just too limited to serve.


> > So, one invariant that had better hold to ensure a certain instance foo
> > of Foo is not about to crash, may be, depending on how Foo's detailed
> > structual geometry is, something like:
> >
> > foo.beam1.force_transferred_A <= foo.pier1.max_load_top AND
> > foo.beam1.force_transferred_B <= foo.girder1.max_load_A
> >
> > The natural place to state this invariant is in class Foo, by expressing
> > 'foo' as 'self' in Python (or omitting it in languages which imply such
> > a lookup, of course).
>
> I don't think that's the natural place. It's certainly one place to
> consider, and may be the best one. However, it might work equally well
> to use preconditions on the methods that add the beam and pier to Foo
> to verify that the beam and pier in question are valid. If the
> attributes of the beam and pier can't change, this would be the right
> way to do it.

What ever gave you the impression that the loads on beams and piers (and
therefore the forces they transfer) "can't change"? That would be a
pretty weird way to design a framework for structural modeling in any
language except a strictly functional (immutable-data) one, and I've
already pointed out that functional languages, thanks to their immutable
data approach, are very different from ones (like Eiffel or Python)
where data routinely does get changed.


> > If I'm not allowed (because you think "it's silly"!) to express a class
> > invariant in terms of attributes of the attributes of an instance of
> > that class, I basically have to write tons of boilerplate, violating
> > encapsulation, to express what are really attributes of attributes of
> > foo "as if" they were attributes of foo directly, e.g.
> [...]
> > (etc). Changing a lot of dots into underscores -- what a way to waste
> > programmer time! And all to NO advantage, please note, since:
>
> If you knew it was going to be to no advantage, why did you write the
> boilerplate? That's also pretty silly. Care to provide reasons for
> your wanting to do this?

If I had to program under a styleguide which enforces the style
preferences you have expressed, then the stupid boilerplate would allow
my program to be accepted by the stylechecker, thus letting my code be
committed into the source control system; presumably that would be
necessary for me to keep my job (thus drawing a salary) or getting paid
for my consultancy billed hours. Just like, say, if your styleguide
forbade the use of vowels in identifiers, I might have a tool to convert
such vowels into consonants before I committed my code. I'm not saying
there cannot be monetary advantage for me to obey the deleterious and
inappropriate rules of any given arbitrary styleguide: it may be a
necessary condition for substantial monetary gains or other preferments.
I'm saying there is no advantage whatsoever to the organization as a
whole in imposing arbitrary constraints such as, "no vowels in
identifiers", or, "no access to attributes of attributes in invariants".


> >> of. Invariants are intended to be used to check the state of the
> >> class, not the state of arbitrary other objects. Doing the latter
> >> requires that you have to check the invariants of every object pretty
> >> much every time anything changes.
> > ...in the end the invariant DOES have to be checked when anything
> > relevant changes, anyway, with or without the silly extra indirection.
>
> No, it doesn't have to be checked. Even invariants that don't suffer
> from this don't have to be checked. It would be nice if every
> invariant was checked every time it might be violated, but that's not
> practical. If checking relationships between attributes' attributes is
> the best you can do, you do that, knowing that instead of an invariant
> violation raising an exception after the code that violates it, the
> exception may be raised after the first method of your class that is
> called after the invariant is violated. That's harder to debug than
> the other way, but if it's the best you can get, it's the best you can
> get.

Let's silently gloss over the detail that calling "invariant" something
that is in fact not guaranteed not to vary (or at least not to vary
without raising exceptions) is a recipe for semantic confusion;-) The
point remains that forcing me to define a beam1_load method, which just
delegates to beam1.load, and use beam1_load in my invariant's code
instead of beam1.load, is a silly rule -- yet it follows from the
stylistic prohibition on using beam1.load directly there, which
highlights the fact that said stylistic prohibition is silly in its
turn.


> > But besides the wasted work, there is a loss of conceptual integrity: I
> > don't WANT Foo to have to expose the internal details that beam1's
> > reference point A transfers the force to pier1's top, etc etc, elevating
> > all of these internal structural details to the dignity of attributes of
> > Foo. Foo should expose only its externally visible attributes: loads
> > and forces on all the relevant points, geometric details of the
> > exterior, and structural parameters that are relevant for operating
> > safety margins, for example.
>
> You're right. Choosing to do that would be a bad idea. I have no idea
> why you would do that in any language I'm familiar with. I'd be
> interested in hearing about the language you use that requires you to
> do that.

Eiffel PLUS your constraint against using attributes' attributes in an
invariant (language + additional constraint you desire) induces me to
wrap each beam1.load access as an attribute of my class under the name
beam1_load (and so on for all attributes' attributes which I need to
access internally in my invariants). I guess I can put band-aids on the
self-inflicted wounds by keeping those attributes private, but it would
be better to avoid the wounds in the first place by ditching the style
constraint against using attributes' attributes in invariants.


> >> Invariants are a tool. Used wisely, they make finding and fixing some
> >> logic bugs much easier than it would be otherwise. Used unwisely, they
> >> don't do anything but make the code bigger.
> > I disagree, most intensely and deeply, that any reference to an
> > attribute of an attribute of self in the body of an invariant is
> > necessarily "unwise".
>
> So do I.

Yet you called it "silly" -- which DOES imply "unwise" (and more).


> >> > I'm also quite dubious as to how you can then express some
> >> > invariants that can be very important
> >> Not all invariants, pre-conditions or post-conditions can be
> >> expressed.
> > Not all can be sensibly CHECKED, but most definitely all can be
> > EXPRESSED. Another one of my long-standing contentions with Eiffel is
> > the inability to express invariants (and pre- and post- conditions)
> > because the compiler is unable to figure out a decent way to check them;
> > Z and the VDL, just to name very old design languages, show easy ways to
> > allow full expression. Of course, if a condition is of the form, say,
> > "all items of potentially infinite iterable X satisfy predicate P", it
> > may not be runtime-checkable -- big furry deal, I want to be able to
> > EXPRESS it anyway, because apart from runtime checking there are other
> > precious uses of such conditions (e.g., the compiler might be able to
> > DEDUCE from such a condition some important optimization, or the
> > compiletime proof of other assertions, when run in the appropriate
> > mode).
>
> So you think that a tool being imperfect means you shouldn't use it
> at all? So you don't test your code, because testing can't reveal all
> bugs? That's odd - I always thought that testing was a critical part
> of program development. I'd be interested in hearing about any
> research that justifies doing development with testing, preferably
> with URLs.

Start with:

http://www.mtsu.edu/~storm/
http://www.softwareqatest.com/
http://www.testing.com/
http://www.faqs.org/faqs/software-eng/testing-faq/

and feel free to come back and ask for more once you've exhausted the
wealth of pointers, articles, books and surveys these URLs will direct
you to.

As far as I know, the only outstanding figure in the history of
programming who decisively condemned testing because "it can only show
the presence of bugs, never their absence" was Djikstra; it's funny that
he failed to notice the parallel with Popper's epistemology -- by the
same thought-structure, we should condemn scientific experiments,
because, per Popper, they can only show the falsity of a scientific
theory, never its truth.

I never said nor implied that a tool's imperfections must prohibit its
use: I was just pointing out that your assertion about important
conditions which *can't be expressed* is simply false (as well as
totally inapplicable to the specific examples being discussed).


Alex

Alex Martelli

Dec 10, 2005, 3:37:34 PM
Mike Meyer <m...@mired.org> wrote:

> > "In addition to the full set of methods which operate on the coordinate as
> > a whole, you can operate on the individual ordinates via instance.x and
> > instance.y which are floats."
>
> That's an API which makes changing the object more difficult. It may
> be the best API for the case at hand, but you should be aware of the
> downsides.

Since x and y are important abstractions of a "2-D coordinate", I
disagree that exposing them makes changing the object more difficult, as
long of course as I can, if and when needed, change them into properties
(or otherwise obtain similar effects -- before we had properties in
Python, __setattr__ was still quite usable in such cases, for example,
although properties are clearly simpler and more direct).

You could make a case for a "2D coordinate" class being "sufficiently
primitive" to have immutable instances, of course (by analogy with
numbers and strings) -- in that design, you would provide no mutators,
and therefore neither would you provide setters (with any syntax) for x
and y, obviously. However, a framework for 2D geometry entirely based
on immutable-instance classes would probably be unwieldy (except in a
fully functional language); as long as we have a language whose normal
style allows data mutation, we'll probably fit better into it by
allowing mutable geometrical primitives at some level -- and as soon as
the mutable primitives are reached, "settable attributes" and their
syntax and semantics come to the fore again...
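
To make that concrete (an untested sketch, names purely illustrative):
client code that reads or writes pt.x keeps working unchanged when a
plain attribute is later turned into a property:

class Point(object):
    def __init__(self, x, y):
        self.x = x            # goes through the property setter below
        self.y = y            # still a plain attribute
    def getX(self):
        return self._x
    def setX(self, x):
        self._x = float(x)    # room for validation, caching, whatever
    x = property(getX, setX)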


Alex

Mike Meyer

Dec 10, 2005, 4:32:42 PM
al...@mail.comcast.net (Alex Martelli) writes:
> Mike Meyer <m...@mired.org> wrote:
> ...
>> Well, the hard-core solution is to note that your class doesn't really
>> deal with the type Bar, but deals with a subtype of Bar for which x >
>> 23 in all cases. Since types are represented by classes, you should
>> subclass Bar so you have a class that represents this subtype. The
>> class is trivial (with Eiffel conventions):
>> class RESTRICTED_BAR
>> inherits BAR
>> invariant x > 23
>> END
> Yes, but then once again you have to "publicize" something (an aspect of
> a class invariant) which should be dealt with internally

Contracts are intended to be public; they are part of the class's
short form, which is the part that's intended for public consumption.
If your vision of invariants is that they are for internal use only,
and clients don't need to know them, then you probably ought to be
considering another language.

> also, this approach does not at all generalize to "bar1.x>23 OR
> bar2.x>23" and any other nontrivial constraint involving expressions
> on more than attributes of a single instance's attribute and
> compile-time constants. So, besides "hard-coreness", this is just
> too limited to serve.

I believe it's the best solution for the case at hand. It causes the
violation of the invariant to be caught as early as possible. As I
mentioned elsewhere, it's not suitable for all cases, so you have to
use other, possibly less effective, tools.

>> > So, one invariant that had better hold to ensure a certain instance foo
>> > of Foo is not about to crash, may be, depending on how Foo's detailed
>> > structual geometry is, something like:
>> >
>> > foo.beam1.force_transferred_A <= foo.pier1.max_load_top AND
>> > foo.beam1.force_transferred_B <= foo.girder1.max_load_A
>> >
>> > The natural place to state this invariant is in class Foo, by expressing
>> > 'foo' as 'self' in Python (or omitting it in languages which imply such
>> > a lookup, of course).
>>
>> I don't think that's the natural place. It's certainly one place to
>> consider, and may be the best one. However, it might work equally well
>> to use preconditions on the methods that add the beam and pier to Foo
>> to verify that the beam and pier in question are valid. If the
>> attributes of the beam and pier can't change, this would be the right
>> way to do it.
>
> What ever gave you the impression that the loads on beams and piers (and
> therefore the forces they transfer) "can't change"?

Your incomplete specification of the problem. You didn't say whether
or not they could change, so I pointed out what might - key word,
"might" - be a better solution for a more complete specification.

True. But if you think this is an arbitrary constraint, why did you
impose it in the first place?

Yes, it's a silly rule. Why did you impose it?

>> > But besides the wasted work, there is a loss of conceptual integrity: I
>> > don't WANT Foo to have to expose the internal details that beam1's
>> > reference point A transfers the force to pier1's top, etc etc, elevating
>> > all of these internal structural details to the dignity of attributes of
>> > Foo. Foo should expose only its externally visible attributes: loads
>> > and forces on all the relevant points, geometric details of the
>> > exterior, and structural parameters that are relevant for operating
>> > safety margins, for example.
>> You're right. Choosing to do that would be a bad idea. I have no idea
>> why you would do that in any language I'm familiar with. I'd be
>> interested in hearing about the language you use that requires you to
>> do that.
> Eiffel PLUS your constraint against using attributes' attributes in an
> invariant (language + additional constraint you desire)

No, that's *your* constraint. I can think of no rational reason you
would want to impose it, but you have.


>> >> Invariants are a tool. Used wisely, they make finding and fixing some
>> >> logic bugs much easier than it would be otherwise. Used unwisely, they
>> >> don't do anything but make the code bigger.
>> > I disagree, most intensely and deeply, that any reference to an
>> > attribute of an attribute of self in the body of an invariant is
>> > necessarily "unwise".
>> So do I.
> Yet you called it "silly" -- which DOES imply "unwise" (and more).

No, I called one specific example silly. Yesterday, I fixed a bit of
python that did:

datetime = str(datetime)
year = int(datetime[x:y])
# and so on to pull month, day, hour, and minute out

I'd call this use of str, int and slicing silly as well. You would
apparently therefore conclude that I think *any* use of str, int and
slicing is silly. You'd be wrong to do so, just as you were wrong to
conclude, from my thinking that one particular example of referencing
an attribute's attribute in an invariant was silly, that any use of an
attribute's attribute in an invariant is silly. That particular straw man is
strictly *your* creation. If you want it justified, you're going to
have to do it yourself.

>> >> > I'm also quite dubious as to how you can then express some
>> >> > invariants that can be very important
>> >> Not all invariants, pre-conditions or post-conditions can be
>> >> expressed.
>> > Not all can be sensibly CHECKED, but most definitely all can be
>> > EXPRESSED. Another one of my long-standing contentions with Eiffel is
>> > the inability to express invariants (and pre- and post- conditions)
>> > because the compiler is unable to figure out a decent way to check them;
>> > Z and the VDL, just to name very old design languages, show easy ways to
>> > allow full expression. Of course, if a condition is of the form, say,
>> > "all items of potentially infinite iterable X satisfy predicate P", it
>> > may not be runtime-checkable -- big furry deal, I want to be able to
>> > EXPRESS it anyway, because apart from runtime checking there are other
>> > precious uses of such conditions (e.g., the compiler might be able to
>> > DEDUCE from such a condition some important optimization, or the
>> > compiletime proof of other assertions, when run in the appropriate
>> > mode).
>>
>> So you think that a tool being imperfect means you shouldn't use it
>> at all? So you don't test your code, because testing can't reveal all
>> bugs? That's odd - I always thought that testing was a critical part
>> of program development. I'd be interested in hearing about any
>> research that justifies doing development with testing, preferably
>> with URLs.
>
> Start with:

[...]


> and feel free to come back and ask for more once you've exhausted the
> wealth of pointers, articles, books and surveys these URLs will direct
> you to.

Sorry, I misspoke. I meant to ask you to justify your belief in doing
development *without* testing.

Mike Meyer

Dec 10, 2005, 6:19:42 PM
al...@mail.comcast.net (Alex Martelli) writes:
> Mike Meyer <m...@mired.org> wrote:
>> > "In addition to the full set of methods which operate on the coordinate as
>> > a whole, you can operate on the individual ordinates via instance.x and
>> > instance.y which are floats."
>> That's an API which makes changing the object more difficult. It may
>> be the best API for the case at hand, but you should be aware of the
>> downsides.
> Since x and y are important abstractions of a "2-D coordinate", I
> disagree that exposing them makes changing the object more difficult, as
> long of course as I can, if and when needed, change them into properties
> (or otherwise obtain similar effects -- before we had properties in
> Python, __setattr__ was still quite usable in such cases, for example,
> although properties are clearly simpler and more direct).

Exposing them doesn't make making changes more difficult. Allowing
them to be used to manipulate the object makes some changes more
difficult. Properties make the set of such changes smaller, but they
don't make them vanish.

Take our much-abused coordinate example, and assume you've exposed the
x and y coordinates as attributes.

Now we have a changing requirement - we want to make the polar
coordinates available. To keep the API consistent, they should be
another pair of attributes, r and theta. Thanks to Python's nice
properties, we can implement these with a pair of getters, and compute
them on the fly.

If x and y can't be manipulated individually, you're done. If they
can, you have more work to do. If nothing else, you have to decide
that you're going to provide an incomplete interface, in that users
will be able to manipulate the object with some attributes but not
others for no obvious good reason. To avoid that, you'll have to add
code to run the coordinate transformations in reverse, which wouldn't
otherwise be needed. Properties make this possible, which is a great
thing.
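
Concretely, something like this (untested, purely illustrative):

import math

class Coordinate(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
    # polar coordinates computed on the fly -- the easy half
    def getR(self):
        return math.hypot(self.x, self.y)
    def getTheta(self):
        return math.atan2(self.y, self.x)
    # the extra work: inverse transforms, needed only because x and y
    # (and hence r and theta) are settable
    def setR(self, r):
        theta = self.getTheta()
        self.x, self.y = r * math.cos(theta), r * math.sin(theta)
    def setTheta(self, theta):
        r = self.getR()
        self.x, self.y = r * math.cos(theta), r * math.sin(theta)
    r = property(getR, setR)
    theta = property(getTheta, setTheta)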

Paul Rubin

Dec 10, 2005, 7:41:21 PM
al...@mail.comcast.net (Alex Martelli) writes:
> You could make a case for a "2D coordinate" class being "sufficiently
> primitive" to have immutable instances, of course (by analogy with
> numbers and strings) -- in that design, you would provide no mutators,
> and therefore neither would you provide setters (with any syntax) for x
> and y, obviously. However, a framework for 2D geometry entirely based
> on immutable-instance classes would probably be unwieldy

I could imagine using Python's built-in complex numbers to represent
2D points. They're immutable, last I checked. I don't see a big
conflict.

Bernhard Herzog

Dec 10, 2005, 7:50:58 PM

al...@mail.comcast.net (Alex Martelli) writes:
> You could make a case for a "2D coordinate" class being "sufficiently
> primitive" to have immutable instances, of course (by analogy with
> numbers and strings) -- in that design, you would provide no mutators,
> and therefore neither would you provide setters (with any syntax) for x
> and y, obviously. However, a framework for 2D geometry entirely based
> on immutable-instance classes would probably be unwieldy

Skencil's basic objects for 2d geometry, points and transformations, are
immutable. It works fine. Immutable objects have the great advantage of
making reasoning about the code much easier, as they can't change behind
your back.

More complex objects such as poly bezier curves are mutable in Skencil,
and I'm not sure anymore that that was a good design decision. In most
cases where a bezier curve is modified, the best approach is to simply
build a new bezier curve anyway. Sort of like list-comprehensions make
it easier to "modify" a list by creating a new list based on the old
one.

Bernhard

--
Intevation GmbH http://intevation.de/
Skencil http://skencil.org/
Thuban http://thuban.intevation.org/

Alex Martelli

Dec 10, 2005, 9:11:20 PM
Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
...

> I could imagine using Python's built-in complex numbers to represent
> 2D points. They're immutable, last I checked. I don't see a big
> conflict.

No big conflict at all -- as I recall, last I checked, computation on
complex numbers was optimized enough to make them an excellent choice
for 2D points' internal representations. I suspect you wouldn't want to
*expose* them as such (e.g. by inheriting) but rather wrap them, because
referring to the .real and .imag "coordinates" of a point (rather than
.x and .y) IS rather weird. Wrapping would also leave you the choice of
making 2D coordinates a class with mutable instances, if you wish,
reducing the choice of a complex rather than two reals to a "mere
implementation detail";-).

The only issue I can think of: I believe (I could be wrong) that a
Python implementation might be built with complex numbers disabled (just
like, e.g., it might be built with unicode disabled). If that's indeed
the case, I might not want to risk, for the sake of a little
optimization, my 2D geometry framework not working on some little
cellphone or PDA or whatever...;-)


Alex

Alex Martelli

Dec 10, 2005, 9:11:20 PM
Bernhard Herzog <b...@intevation.de> wrote:
...

> > and y, obviously. However, a framework for 2D geometry entirely based
> > on immutable-instance classes would probably be unwieldy
>
> Skencil's basic objects for 2d geometry, points and transformations, are
> immutable. It works fine. Immutable objects have the great advantage of
> making reasoning about the code much easier, as they can't change behind
> your back.

Yes, that's important -- on the flip side, you may, in some cases, wish
you had mutable primitives for performance reasons (I keep daydreaming
about adding mutable-number classes to gmpy...;-)


> More complex objects such as poly bezier curves are mutable in Skencil,
> and I'm not sure anymore that that was a good design decision. In most
> cases where bezier curve is modified the best approach is to simply
> build a new bezier curve anyway. Sort of like list-comprehensions make
> it easier to "modify" a list by creating a new list based on the old
> one.

True, not for nothing were list comprehensions copied from the
functional language Haskell -- they work wonderfully well with immutable
data, unsurprisingly;-). However, what if (e.g.) one anchor point
within the spline is being moved interactively? I have no hard data,
just a suspicion that modifying the spline may be more efficient than
generating and tossing away a lot of immutable splines...


Alex

Alex Martelli

Dec 10, 2005, 9:11:20 PM
Mike Meyer <m...@mired.org> wrote:
...
> Take our much-abused coordinate example, and assume you've exposed the
> x and y coordinates as attributes.
>
> Now we have a changing requirement - we want to get to make the polar
> coordinates available. To keep the API consistent, they should be
> another pair of attributes, r and theta. Thanks to Pythons nice
> properties, we can implement these with a pair of getters, and compute
> them on the fly.
>
> If x and y can't be manipulated individually, you're done. If they
> can, you have more work to do. If nothing else, you have to decide
> that you're going to provide an incomplete interface, in that users
> will be able to manipulate the object with some attributes but not
> others for no obvious good reason. To avoid that, you'll have to add
> code to run the coordinate transformations in reverse, which wouldn't
> otherwise be needed. Properties make this possible, which is a great
> thing.

Properties make this _easier_ (but you could do it before properties
were added to Python, via __setattr__ -- just less conveniently and
directly) -- just as easy as setX, setY, setRho, and setTheta would (in
fact, we're likely to have some of those methods under our properties,
so the difference is all in ease of USE, for the client code, not ease
of IMPLEMENTATION, compared to setter-methods).

If we keep the internal representation in cartesian coordinates
(attributes x and y), and decide that it would interfere with the
class's usefulness to have rho and theta read-only (i.e., that it IS
useful for the user of the class to be able to manipulate them
directly), we do indeed need to "add code" -- the setter methods setRho
and setTheta. But let's put that in perspective. If we instead wanted
to make the CoordinatePair class immutable, we'd STILL have to offer an
alternative constructor or factory-function -- if it's at all useful to
manipulate rho and theta in a mutable class, it must be at least as
useful to be able to construct an immutable version from rho and theta,
after all. So, we ARE going to have, say, a classmethod (note: all the
code in this post is untested)...:

class CoordinatePair(object):
    def fromPolar(cls, rho, theta):
        assert rho>=0
        return cls(rho*math.cos(theta), rho*math.sin(theta))
    fromPolar = classmethod(fromPolar)
    # etc etc, the rest of this class

well, then, how much more code are we adding, to implement setRho and
setTheta when we decide to make our class mutable? Here...:

def setRho(self, rho):
    c = self.fromPolar(rho, self.getTheta())
    self.x, self.y = c.x, c.y
def setTheta(self, theta):
    c = self.fromPolar(self.getRho(), theta)
    self.x, self.y = c.x, c.y

That's the maximum possible "difficulty" (...if THIS was a measure of
real "difficulty" in programming, I doubt our jobs would be as well paid
as they are...;-) -- it's going to be even less if we need anyway to
have a method to copy a CoordinatePair instance from another, such as

def copyFrom(self, other):
    self.x, self.y = other.x, other.y

since then the above setters also become no-brainer oneliners a la:

def setRho(self, rho):
    self.copyFrom(self.fromPolar(rho, self.getTheta()))

and you might choose to further simplify this method's body to

    self.copyFrom(self.fromPolar(rho, self.theta))

since self.theta is going to be a property whose accessor half is the
above-used self.getTheta (mostly a matter of style choice here).
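
(i.e., somewhere in the class body, presumably just:

    rho = property(getRho, setRho)
    theta = property(getTheta, setTheta)

-- again untested, but that is all the "wiring" involved).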


Really, I don't think this makes a good poster child for your "attribute
mutators make life more difficult" campaign...;-)


Alex

Paul Rubin

Dec 10, 2005, 9:42:15 PM
al...@mail.comcast.net (Alex Martelli) writes:
> > I could imagine using Python's built-in complex numbers to represent
> > 2D points. They're immutable, last I checked. I don't see a big
> > conflict.
>
> No big conflict at all -- as I recall, last I checked, computation on
> complex numbers was optimized enough to make them an excellent choice
> for 2D points' internal representations. I suspect you wouldn't want to
> *expose* them as such (e.g. by inheriting) but rather wrap them, because
> referring to the .real and .imag "coordinates" of a point (rather than
> .x and .y) IS rather weird. Wrapping would also leave you the choice of
> making 2D coordinates a class with mutable instances, if you wish,
> reducing the choice of a complex rather than two reals to a "mere
> implementation detail";-).

Right, you could use properties to make point.x get the real part of
an internal complex number. But now you're back to point.x being an
accessor function; you've just set things up so you can call it
without parentheses, like in Perl. E.g.

a = point.x
b = point.x
assert (a is b) # can fail

for that matter

assert (point.x is point.x)

can fail. These attributes aren't "member variables" any more.

Alex Martelli

Dec 10, 2005, 10:20:39 PM
Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
...
> Right, you could use properties to make point.x get the real part of
> an internal complex number. But now you're back to point.x being an
> accessor function; you've just set things up so you can call it
> without parentheses, like in Perl. E.g.
>
> a = point.x
> b = point.x
> assert (a is b) # can fail

Sure -- there's no assurance of 'is' (although the straightforward
implementation in today's CPython would happen to satisfy the assert).

But similarly, nowhere in the Python specs is there any guarantee that
for any complex number c, c.real is c.real (although &c same as above).
So what? 'is', for immutables like floats, is pretty useless anyway.


> for that matter
>
> assert (point.x is point.x)
>
> can fail. These attributes aren't "member variables" any more.

They are *syntactically*, just like c.real for a complex number c: no
more, no less. I'm not sure why you're so focused on 'is', here. But
the point is, you could, if you wished, enable "point.x=23" even if
point held its x/y values as a complex -- just, e.g.,
def setX(self, x):
    self.c = complex(x, self.y)
[[or use self.c.imag as the second argument if you prefer, just a style
choice]].


Alex

Erik Max Francis

Dec 10, 2005, 10:25:27 PM
Paul Rubin wrote:

> Right, you could use properties to make point.x get the real part of
> an internal complex number. But now you're back to point.x being an
> accessor function; you've just set things up so you can call it
> without parentheses, like in Perl. E.g.
>
> a = point.x
> b = point.x
> assert (a is b) # can fail
>
> for that matter
>
> assert (point.x is point.x)
>
> can fail. These attributes aren't "member variables" any more.

Which is perfectly fine, since testing identity with `is' in this
context is not useful.

--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
San Jose, CA, USA && 37 20 N 121 53 W && AIM erikmaxfrancis
Never use two words when one will do best.
-- Harry S. Truman

Mike Meyer

Dec 10, 2005, 10:56:12 PM
al...@mail.comcast.net (Alex Martelli) writes:
> def setRho(self, rho):
>     c = self.fromPolar(rho, self.getTheta())
>     self.x, self.y = c.x, c.y
> def setTheta(self, theta):
>     c = self.fromPolar(self.getRho(), theta)
>     self.x, self.y = c.x, c.y
>
> That's the maximum possible "difficulty" (...if THIS was a measure of
> real "difficulty" in programming, I doubt our jobs would be as well paid
> as they are...;-) -- it's going to be even less if we need anyway to
> have a method to copy a CoordinatePair instance from another, such as

It's a trivial example. Incremental extra work is pretty much
guaranteed to be trivial as well.

> Really, I don't think this makes a good poster child for your "attribute
> mutators make life more difficult" campaign...;-)

The claim is that there exist cases where that's true. This case
demonstrates the existence of such cases. That the sample is trivial
means the difficulty is trivial, so yeah, it's a miserable poster
child. But it's a perfectly adequate existence proof.

Steven D'Aprano

Dec 10, 2005, 11:23:19 PM
On Sat, 10 Dec 2005 13:33:25 -0500, Mike Meyer wrote:

> Steven D'Aprano <st...@REMOVETHIScyber.com.au> writes:
>>> In particular,
>>> you can get most of your meaningless methods out of a properly
>>> designed Coordinate API. For example, add/sub_x/y_ord can all be
>>> handled with move(delta_x = 0, delta_y = 0).
>>
>> Here is my example again:
>>
>> [quote]
>> Then, somewhere in my application, I need twice the
>> value of the y ordinate. I would simply say:
>>
>> value = 2*pt.y
>> [end quote]
>>
>> I didn't say I wanted a coordinate pair where the y ordinate was double
>> that of the original coordinate pair. I wanted twice the y ordinate, which
>> is a single real number, not a coordinate pair.
>
> Here you're not manipulating the attribute to change the class -
> you're just using the value of the attribute. That's what they're
> there for.

[bites tongue to avoid saying a rude word]

That's what I've been saying all along!

But according to the "Law" of Demeter, if you take it seriously, I
mustn't/shouldn't do that, because I'm assuming pt.y will always have a
__mul__ method, which is "too much coupling". My Coordinate class
must/should create wrapper functions like this:

class Coordinate:
    def __init__(self, x, y):
        self.x = x; self.y = y
    def mult_y(self, other):
        return self.y * other

so I am free to change the implementation (perhaps I stop using named
attributes, and use a tuple of two items instead).

I asked whether people really do this, and was told by you that they not
only do but that they should ("only with a better API design").

So we understand each other, I recognise that abstraction is a valuable
tool, and can be important. What I object to is taking a reasonable
guideline "try to keep coupling to the minimum amount practical" into an
overblown so-called law "you should always write wrapper functions to hide
layers more than one level deep, no matter how much extra boilerplate code
you end up writing".

>> The wise programmer
>> will recognise which classes have implementations likely to change, and
>> code defensively by using sufficient abstraction and encapsulation to
>> avoid later problems.
>
> Except only the omniscient programmer can do that perfectly.

I'm not interested in perfection, because it is unattainable. I'm
interested in balancing the needs of many different conflicting
requirements. The ability to change the implementation of my class after
I've released it is only one factor out of many. Others include initial
development time and cost, bugs, ease of maintenance, ease of
documentation, how complex an API do I expect my users to learn,
convenience of use, and others.

[snip]

>> Do you lie awake at nights worrying that in Python 2.6 sys.stdout will
>> be renamed to sys.standard_output, and that it will no longer have a
>> write() method? According to the "law" of Demeter, you should, and the
>> writers of the sys module should have abstracted the fact that stdout
>> is a file away by providing a sys.write_to_stdout() function. That is
>> precisely the sort of behaviour which I maintain is unnecessary.
>
> And that's not the kind of behavior I'm talking about here, nor is it
> the kind of behavior that the LoD is designed to help you with (those
> are two different things).

How are they different? Because one is a class and the other is a module?
That's a meaningless distinction: you are still coupled to a particular
behaviour of something two levels away. If the so-called Law of Demeter
makes sense for classes, it makes sense for modules too.

[snip]

>> "In addition to the full set of methods which operate on the coordinate
>> as a whole, you can operate on the individual ordinates via instance.x
>> and instance.y which are floats."
>
> That's an API which makes changing the object more difficult. It may be
> the best API for the case at hand, but you should be aware of the
> downsides.

Of course. We agree there. But it is a trade-off that can (not must, not
always) be a good trade-off, for many (not all) classes. One of the
advantages is that it takes responsibility for specifying every last thing
about ordinates within a coordinate pair out of my hands. They are floats,
that's all I need to say.

If you think I'm arguing that abstraction is always bad, I'm not. But it
is just as possible to have too much abstraction as it is to have too
little.


[snip]

> Again, this is *your* API, not mine. You're forcing an ugly, obvious API
> instead of assuming the designer has some smidgen of ability.

But isn't that the whole question? Should programmers follow slavishly the
so-called Law of Demeter to the extremes it implies, even at the cost of
writing ugly, unnecessary, silly code, or should they treat it as a
guideline, to be obeyed or not as appropriate?

Doesn't Python encourage the LoD to be treated as a guideline, by allowing
class designers to use public attributes instead of forcing them to write
tons of boilerplate code like some other languages?

> I've
> already pointed out one trivial way to deal with this, and there are
> others.

Mike, the only "trivial way to deal with this" that you have pointed out
was this:

"For example, add/sub_x/y_ord can all be handled with move(delta_x = 0,
delta_y = 0)."

That's a wonderful answer *for the wrong question*. I thought I explained
that already.

--
Steven.

Alex Martelli

Dec 10, 2005, 11:43:47 PM
Mike Meyer <m...@mired.org> wrote:

> al...@mail.comcast.net (Alex Martelli) writes:
> > def setRho(self, rho):
> >     c = self.fromPolar(rho, self.getTheta())
> >     self.x, self.y = c.x, c.y
> > def setTheta(self, theta):
> >     c = self.fromPolar(self.getRho(), theta)
> >     self.x, self.y = c.x, c.y
> >
> > That's the maximum possible "difficulty" (...if THIS was a measure of
> > real "difficulty" in programming, I doubt our jobs would be as well paid
> > as they are...;-) -- it's going to be even less if we need anyway to
> > have a method to copy a CoordinatePair instance from another, such as
>
> It's a trivial example. Incremental extra work is pretty much
> guaranteed to be trivial as well.

You appear not to see that this triviality generalizes. Given any set
of related attributes that among them determine non-redundantly and
uniquely the value of an instance (mathematically equivalent to forming
a primary key in a normal-form relational table), if it's at all
interesting to let those attributes be manipulated for a mutable
instance, it must be at least as important to offer an alternative ctor
or factory to create the instance from those attributes (and that
applies at least as strongly to the case of immutable instances).

Given that you have such a factory, *whatever its internal complexity*,
the supplementary amount of work to produce a setter for any subset of
the given attributes, whatever the internal representation of state used
for the instance, is bounded and indeed trivial:
a. create a new instance by calling the factory with the values of the
attributes being those of the existing instance (for attributes which
are not being changed by the current method) and the new value being set
(for attributes which are being set by the current method);
b. copy the internal state (whatever its representation may be) from the
new instance to the existing one (for many cases of classes with mutable
instances, you will already have a 'copyFrom' method doing this, anyway,
because it's useful in so many other usage contexts).

That's it -- you're done. No *DIFFICULTY* -- indeed, a situation close
enough to boilerplate that if I found myself writing a framework with
multiple such classes I'd seriously consider refactoring it upwards into
a custom metaclass or the like, just because I dislike boilerplate as a
matter of principle.
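
A sketch of what that factoring-out might look like (untested, names
made up; it relies only on the factory, the getters, and copyFrom
already discussed):

def make_setter(factory_name, getter_names, which):
    # Build a setter for the which-th "key" attribute: rebuild the
    # value via the factory from the current key attributes (step a),
    # then copy the state back into self (step b).
    def setter(self, value):
        args = [getattr(self, g)() for g in getter_names]
        args[which] = value
        self.copyFrom(getattr(self, factory_name)(*args))
    return setter

# e.g., in CoordinatePair's class body, after fromPolar and copyFrom:
#     setRho = make_setter('fromPolar', ('getRho', 'getTheta'), 0)
#     setTheta = make_setter('fromPolar', ('getRho', 'getTheta'), 1)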


> > Really, I don't think this makes a good poster child for your "attribute
> > mutators make life more difficult" campaign...;-)
>
> The claim is that there exist cases where that's true. This case
> demonstrates the existence of such cases. That the sample is trivial
> means the difficulty is trivial, so yeah, it's a miserable poster
> child. But it's a perfectly adequate existence proof.

You appear to be laboring under the serious misapprehension that you
have demonstrated any DIFFICULTY whatsoever in writing mutators
(specifically attribute-setters). Let me break the bad news to you as
diplomatically as I can: you have not. All that your cherished example
demonstrates is: if you're going to write a method, that method will
need a body of at least one or two statements - in this case, I've shown
(both in the single concrete example, and in the generalized code) that
IF a set of attributes is interesting enough to warrant building a new
instance based on them (if it is totally uninteresting instead, then
imagining that you have to allow such attributes to be MUTATED on an
existing instance, while forbidding them to be ORIGINALLY SET to create
a new instance, borders on the delusional -- what cases would make the
former but not the latter an important functionality?!), THEN
implementing mutators (setters for those attributes) is trivially EASY
(the converse of DIFFICULT) -- the couple of statements in the attribute
setters' bodies are so trivial that they're obviously correct, assuming
just correctness of the factory and the state-copying methods.


Alex

Steven D'Aprano

Dec 11, 2005, 12:44:41 AM
On Sat, 10 Dec 2005 22:56:12 -0500, Mike Meyer wrote:

[snip]

>> Really, I don't think this makes a good poster child for your "attribute
>> mutators make life more difficult" campaign...;-)
>
> The claim is that there exist cases where that's true. This case
> demonstrates the existence of such cases. That the sample is trivial
> means the difficulty is trivial, so yeah, it's a miserable poster
> child. But it's a perfectly adequate existence proof.

Huh?

As I see it:

Claim: doing X makes Y hard.
Here is an example of doing X where Y is easy.
Therefore that example proves that doing X makes Y hard.

Perhaps I've missed some subtle meaning of the terms "demonstrates" and
"existence proof".

--
Steven.

Steven D'Aprano

Dec 11, 2005, 1:08:34 AM
On Sat, 10 Dec 2005 15:46:35 +0000, Antoon Pardon wrote:

>> Do you lie awake at nights worrying that in Python 2.6 sys.stdout will be
>> renamed to sys.standard_output, and that it will no longer have a write()
>> method? According to the "law" of Demeter, you should, and the writers of
>> the sys module should have abstracted the fact that stdout is a file away
>> by providing a sys.write_to_stdout() function.
>
> I find this a strange interpretation.
>
> sys is a module, not an instance. Sure you can use the same notation
> and there are similarities but I think the differences are more
> important here.

The fact that sys is a module and not a class is a red herring. If the
"Law" of Demeter makes sense for classes, it makes just as much sense for
modules as well -- it is about reducing coupling between pieces of code,
not something specific to classes.

The point of the "Law" of Demeter is to protect against changes in objects
more than one step away from the caller. You have some code that wants to
write to stdout, which you get from the sys module -- that puts sys one
step away, so you are allowed to rely on the published interface to sys,
but not anything further away than that: according to the so-called "law",
you shouldn't/mustn't rely on things more than one step away from the
calling code.

One dot good, two dots bad.

Assuming that stdout will always have a write() method is "bad" because it
couples your code to a particular implementation of stdout: it assumes
that it will always be a file-like object with a write method. What if the
maintainer of sys decides to change it?

Arguing that "this will never happen, it would break too much code" is
*not* good enough, not for the Law of Demeter zealots -- they will argue
that the only acceptable way to code is to create an interface to the
stdout object one level away from the calling code. Instead of calling
sys.stdout.write() (unsafe, what if the stdout object changes?) you must
use something like sys.write_to_stdout() (only one level away).
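
(Concretely, the wrapper they have in mind would amount to nothing more
than, say:

    def write_to_stdout(text):
        sys.stdout.write(text)

-- a made-up helper, shown only to make plain how thin the extra layer
is.)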

The fact that people can and do break the "Law" of Demeter all the time,
with no harmful effects, shows that it is no law at all. It is a
*guideline*, and as a guideline I've heard worse ideas than "keep your
options open". That's what it really boils down to: if you specify an
interface of helper functions, you can change your implementation, at the
expense of doing a lot of extra work now just in case you will need it later.
But that's not a virtue in and of itself -- it is only good if you
actually intend to change your implementation, or at least think you might
want to some day, and then only if the work needed to write your
boilerplate is less than the work needed to adapt to the changed
implementation.

[snip]

>> But I *want* other classes to poke around inside my implementation.
>> That's a virtue, not a vice. My API says:
>>
>> "In addition to the full set of methods which operate on the coordinate
>> as a whole, you can operate on the individual ordinates via instance.x
>> and instance.y which are floats."
>
> Yikes. I would never do that. Doing so would tie my code unnecessarily
> close to yours and would make it too difficult to change to another
> class with a different implementation like one using tuples or lists
> instead of separate x and y instances.

Do you really think that my class and some other class written by
another person will have the same API? If you change from my class to
another class, the chances are that the interfaces will be different
unless the second class writer deliberately emulated my class interface.

To class users, there is *no* difference in consequences between me
changing my published API by removing named attributes x and y from my
class, and me changing my published API by removing or changing methods.


>> Your API says:
>>
>> "In addition to the full set of methods which operate on the coordinate
>> as a whole, you can operate on the individual ordinates via methods
>> add_x, add_y, mult_x, mult_y, sub_x, sub_y, rsub_x, rsub_y, div_x,
>> div_y, rdiv_x, rdiv_y, exp_x, exp_y, rexp_x, rexp_y...; the APIs of
>> these methods are: ... "
>
> Who in heaven's name would need those? Maybe there is no x or y because
> the implementation uses a list or a tuple, maybe the implementation uses
> polar coordinates because that is more useful for the application it
> was planned for.

And maybe it isn't a Coordinate class at all, hmmm?

An ordinary, Cartesian, real-valued Coordinate is a pair of ordinates, an
X and a Y ordinate. That's what it *is* -- a coordinate class without X and
Y ordinates isn't a coordinate class, regardless of how they are
implemented (via properties, named attributes, or a whole bucketful of
helper functions).

I'm not interested in polar coordinates, lists, dicts, red-black trees,
complex-valued infinite dimensional vectors, byte streams or any other
class. If I wanted one of those, I'd write *that* class and I wouldn't
need to access the X and Y ordinates. But since I want a two dimensional
Cartesian coordinate class, I must have *some* way of accessing the X and
Y ordinates, otherwise it isn't a two dimensional Cartesian coordinate
class.

The question is, MUST I write a whole pile of boilerplate functions?
According to the Law of Demeter, I must, just in case somebody changes the
definition of float and suddenly code like value = 2*pt.y stops working.
In my opinion, that's taking abstraction to ridiculous extremes.

I'm not saying that there is never any reason to write getters and
setters or similar boilerplate. If I suspect (or fear) that the
implementation is going to change after my API is nailed down, then it is
a good idea to write an intermediate public level so I can change the
internal implementation at a later date. That's good practice. Bad practice
is to pretend that the boilerplate code making that intermediate level is
cost-free, and that therefore one must always use it.

[snip]

>> My class is written, tested and complete before you've even decided on
>> your API. And you don't even really get the benefit of abstraction: I
>> have two public attributes (x and y) that I can't change without
>> breaking other people's code, you've got sixteen-plus methods that you
>> can't change without breaking other people's code.
>
> No he would have none.

Do you really mean to tell me that the class writer can change their
public interface without breaking code?

--
Steven.

Paul Rubin

unread,
Dec 11, 2005, 1:16:04 AM12/11/05
to
Steven D'Aprano <st...@REMOVETHIScyber.com.au> writes:
> The fact that sys is a module and not a class is a red herring. If the
> "Law" of Demeter makes sense for classes, it makes just as much sense for
> modules as well -- it is about reducing coupling between pieces of code,
> not something specific to classes.

I don't see that. If a source line refers to some module you can get
instantly to the module's code. But you can't tell where any given
class instance comes from. That's one of the usual criticisms of OOP,
that the flow of control is obscured compared with pure procedural
programming.

> One dot good, two dots bad.

Easy to fix. Instead of sys.stdout.write(...) say

from sys import stdout

From then on you can use stdout.write(...) instead of sys.stdout.write(...).

Mike Meyer

unread,
Dec 11, 2005, 1:16:45 AM12/11/05
to

I think you've misunderstood the LoD. In particular, 2 * pt.y doesn't
necessarily involve violating the LoD, if it's (2).__mul__(pt.y). If
it's pt.y.__rmul__(2), then it would. But more on that later.

>>> Do you lie awake at nights worrying that in Python 2.6 sys.stdout will
>>> be renamed to sys.standard_output, and that it will no longer have a
>>> write() method? According to the "law" of Demeter, you should, and the
>>> writers of the sys module should have abstracted the fact that stdout
>>> is a file away by providing a sys.write_to_stdout() function. That is
>>> precisely the sort of behaviour which I maintain is unnecessary.
>>
>> And that's not the kind of behavior I'm talking about here, nor is it
>> the kind of behavior that the LoD is designed to help you with (those
>> are two different things).
>
> How are they different? Because one is a class and the other is a module?
> That's a meaningless distinction: you are still coupled to a particular
> behaviour of something two levels away. If the so-called Law of Demeter
> makes sense for classes, it makes sense for modules too.

And here's where I get to avoid saying a rude word. I'm not going to
chase down my original quote, but it was something along the lines of
"You shouldn't reach through multiple levels of attribute to change
things like that, it's generally considered a bad design". You asked
why, and I responded by pointing to the LoD, because it covers that,
and the underlying reasons are mostly right. I was being lazy, and
took an easy out - and got punished for it by winding up in the
position of defending the LoD.

My problem with the original code wasn't that it violated the LoD; it
was that it was reaching into the implementation in the process, and
manipulating attributes to do things that a well-designed API would do
via methods of the object.

The LoD forces you to uncouple your code from your clients, and
provide interfaces for manipulating your object other than by mucking
around with your attributes. I consider this a good thing. However, it
also prevents perfectly reasonable behavior, and there we part
company.

And of course, it doesn't ensure good design. As you demonstrated, you
can translate the API "manipulate my guts by manipulating my
attributes" into an LoD compliant API by creating a collection
meaningless methods. If the API design was bad to begin with, changing
the syntax doesn't make it good. What's a bad idea hefre is exposing
parts of your implementation to clients so they can control your
state. Whether you do that with a slew of methods for mangling the
implementation, or just grab the attribute and use it is
immaterial. The LoD tries to deal with this by outlawing such
manipulation. People respond by mechanically translating the design
into a form that follows the law. Mechanically translating a bad
design into compliance with a design law doesn't make it a good
design.

Instead of using a vague, simple example with a design we don't agree
on, let's try taking a solid example that we both (I hope) agree is
good, and changing it to violate encapsulation.

Start with dict. Dict has an argumentless method, which means we could
easily express it as an attribute: keys. I'm going to treat it as an
attribute for this discussion, because whether it's a method or an
attribute is really immaterial to the point (and would work as an
attribute in some languages), though not to people's perceptions.

Given that, what should mydict.keys.append('foobar') do? Given the
current implementation, it appends 'foobar' to a list that started
life as a list of the keys of mydict. It doesn't do anything to
mydict; in particular, the next time you reference mydict.keys, you
won't get that list back. This is a good design. If
mydict.keys.append('foobar') were the same as "mydict['foobar'] =
None", that would be a bad design.

Now suppose you want the keys in sorted order? That's a common enough
thing to want. The obvious way to get it is to get the list of keys
and to sort them. The LoD isn't clear on that (or maybe I failed to
read it properly), as you're allowed to call methods on objects that
you created. Did you create the list of keys? Did mydict? Which is
allowed? I dunno.

On the other hand, I don't have a problem with it. The keys feature
gives you a picture of part of the dictionary. What you do with the
picture after you get it is up to you - it isn't going to change
mydict. Once you've got the list, it's no longer part of mydict, so
invoking methods on it doesn't violate encapsulation, so there's no
problem with it.
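
To make that concrete, here's a rough sketch using the ordinary keys()
call (since keys isn't actually an attribute in today's Python):

mydict = {'a': 1, 'b': 2}

ks = mydict.keys()          # a snapshot list of the keys
ks.append('foobar')         # mutates the snapshot only
ks.sort()                   # likewise fine: the list is ours now

print ks                    # ['a', 'b', 'foobar']
print 'foobar' in mydict    # False - mydict itself is untouched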

Back to your question about sys.stdout. I said the LoD says it's ok
because I think a module is a collection, meaning sys.stdout is an
element of a collection, and it's ok to call methods on them. Others
may disagree about modules being collections. I say the call is ok
because sys.stdout is a value from sys, and manipulating it doesn't
change the internal state of the module sys.

I think I've explained the difference between what I'm saying and what
the LoD says. I think there's a relationship between the two; I'm
just not sure what it is.

> [snip]
>
>> Again, this is *your* API, not mine. You're forcing an ugly, obvious API
>> instead of assuming the designer has some smidgen of ability.
> But isn't that the whole question? Should programmers follow slavishly the
> so-called Law of Demeter to the extremes it implies, even at the cost of
> writing ugly, unnecessary, silly code, or should they treat it as a
> guideline, to be obeyed or not as appropriate?

I believe the correct answer is "practicality beats purity". On the
other hand, I'll continue to argue that following the LoD - or at
least parts of it - only leads to ugly, unnecessary, silly code if
your design was bad in the first place. Not following the LoD doesn't
make the design good - it just means you write a lot less code in
creating your bad design.

> Doesn't Python encourage the LoD to be treated as a guideline, by allowing
> class designers to use public attributes instead of forcing them to write
> tons of boilerplate code like some other languages?

Python encourages damn near everything to be treated as a
guideline. It's one of the things I like about the language - if I
need a hack *now* that fixes a problem, I don't have to fight the
language, I can just do it. I argue with people who try and create
classes that break that because they think "it enforces good style".

The thing about the tons of boilerplate code is that it's enforcing an
arbitrary rule in the name of enforcing "good design". But it doesn't
make the design good. In particular, if letting someone write
"obj.foo.mutate(value)" to manipulate obj is bad design, then making
them write "obj.mutate_foo(value)" doesn't mean the design is good.
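
Here's a rough sketch of the mechanical translation I'm talking about
(all the names are made up for illustration):

class Exposed:
    def __init__(self):
        self.foo = []                  # implementation detail, but public

# clients reach through the implementation:
#     obj.foo.append(value)

class LawAbiding:
    def __init__(self):
        self._foo = []                 # same implementation, nominally hidden

    def mutate_foo(self, value):       # boilerplate forwarding method
        self._foo.append(value)

# clients now write obj.mutate_foo(value) - one dot, same design underneath

The second version passes the LoD; it isn't any better a design than
the first.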

>> I've
>> already pointed out one trivial way to deal with this, and there are
>> others.
> Mike, the only "trivial way to deal with this" that you have pointed out
> was this:
> "For example, add/sub_x/y_ord can all be handled with move(delta_x = 0,
> delta_y = 0)."
> That's a wonderful answer *for the wrong question*. I thought I explained
> that already.

If you did, I must have missed it. But maybe I've been answering the
wrong question all along.

Mike Meyer

unread,
Dec 11, 2005, 1:25:22 AM12/11/05
to
al...@mail.comcast.net (Alex Martelli) writes:
> Mike Meyer <m...@mired.org> wrote:
>> > Really, I don't think this makes a good poster child for your "attribute
>> > mutators make life more difficult" campaign...;-)
>> The claim is that there exist cases where that's true. This case
>> demonstrates the existence of such cases. That the sample is trivial
>> means the difficulty is trivial, so yeah, it's a miserable poster
>> child. But it's a perfectly adequate existence proof.
> You appear to be laboring under the serious misapprehension that you
> have demonstrate any DIFFICULTY whatsoever in writing mutators
> (specifically attribute-setters). Let me break the bad news to you as
> diplomatically as I can: you have not. All that your cherished example
> demonstrates is: if you're going to write a method, that method will
> need a body of at least one or two statements - in this case, I've shown
> (both in the single concrete example, and in the generalized code) that
> IF a set of attributes is interesting enough to warrant building a new
> instance based on them (if it is totally uninteresting instead, then
> imagining that you have to allow such attributes to be MUTATED on an
> existing instance, while forbidding them to be ORIGINALLY SET to create
> a new instance, borders on the delusional -- what cases would make the
> former but not the latter an important functionality?!), THEN
> implementing mutators (setters for those attributes) is trivially EASY
> (the converse of DIFFICULT) -- the couple of statements in the attribute
> setters' bodies are so trivial that they're obviously correct, assuming
> just correctness of the factory and the state-copying methods.

It's not my cherished example - it actually came from someone
else. That you can change the requirements so that there is no extra
work is immaterial - all you've done is shown that there are examples
that don't require extra work. I never said that such examples
didn't exist. All you've shown - in both the single concrete example
and in a generalized case - is that any requirement can be changed so
that it doesn't require any extra work. This doesn't change the fact
that such cases exist, which is all that I claimed was the case.

Mike Meyer

unread,
Dec 11, 2005, 1:27:58 AM12/11/05
to
Steven D'Aprano <st...@REMOVETHIScyber.com.au> writes:
> On Sat, 10 Dec 2005 22:56:12 -0500, Mike Meyer wrote:
>>> Really, I don't think this makes a good poster child for your "attribute
>>> mutators make life more difficult" campaign...;-)
>> The claim is that there exist cases where that's true. This case
>> demonstrates the existence of such cases. That the sample is trivial
>> means the difficulty is trivial, so yeah, it's a miserable poster
>> child. But it's a perfectly adequate existence proof.
> Huh?
> As I see it:
> Claim: doing X makes Y hard.

Harder, not hard.

> Here is an example of doing X where Y is easy.

Y is very easy in any case. Making it incrementally harder doesn't
make it hard - it's still very easy.

> Perhaps I've missed some subtle meaning of the terms "demonstrates" and
> "existence proof".

I think you missed the original claim.

Steve Holden

unread,
Dec 11, 2005, 5:38:26 AM12/11/05
to pytho...@python.org

The fact that you are prepared to argue for one of two mechanisms rather
than the other based simply on the syntax of two semantically equivalent
references is unworthy of someone who knows as much about programming as
you appear to do.

The "Law" of Demeter isn't about *how* you access objects, it's about
what interfaces to objects you can "legally" manipulate without undue
instability across refactoring. In other words, it's about semantics,
not syntax. And it's led a lot of Java programmers down a path that
makes their programs less, not more, readable.

Python's ability to let the programmer decide how much encapsulation is
worthwhile is one of its beauties, not a wart.

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/

Alex Martelli

unread,
Dec 11, 2005, 11:04:16 AM12/11/05
to
Mike Meyer <m...@mired.org> wrote:
...
> It's not my cherished example - it actually came from someone

You picked it to (try and fail to) show that there is DIFFICULTY, which
I showed there isn't.

> else. That you can change the requirements so that there is no extra
> work is immaterial - all you've done is shown that there are examples
> that don't require extra work. I never said that such examples
> didn't exist. All you've shown - in both the single concrete example
> and in a generalized case - is that any requirement can be changed so
> that it doesn't require any extra work. This doesn't change the fact
> that such cases exist, which is all that I claimed was the case.

Untrue: you claimed that the specific API (allowing attribute-setting)
"makes changing the object more difficult", not the obvious fact that
"there exist APIs so badly designed that they make changing more
difficult". And I showed that, in the GENERAL case, since attributes
worth being made assignable are obviously also worth being made settable
in a constructor of some kind, having settable attributes doesn't and
cannot introduce any DIFFICULTY -- the API with settable attributes only
requires trivial methods, ones presenting no difficulty whatsoever,
which delegate all the real work to methods you'd need anyway
(particularly the obviously-needed constructor or factory).

So, I claim I have totally disproven your claims about difficulty
("extra work", as you're trying to weaselword your way out, might be
writing one or two trivial lines of code, but that's not DIFFICULT, and
the claim you originally made was about DIFFICULTY, not tiny amounts of
trivially easy "extra work" -- as I already mentioned, obviously ANY
method you add is "extra work" for you compared to not adding it, but
the interesting question is whether that entails any DIFFICULTY).

My claim hinges on the fact that constructors are important -- more so,
of course, for immutable instances, but even in the mutable case it's
pretty bad design if the ONLY way to have an instance in the state you
know is right is to make it in a state that's wrong and then call
mutators on it until its state is finally right... obviously it's
important to avoid imposing this busywork on all users of the class. If
you further weaken your claim to "it's possible to design so badly that
everybody involved faces more work and difficulties", I'll most
obviously agree -- but such bad designs need not involve any
attribute-setters, nor does including attribute-setters imply or even
suggest that a design is bad in this way!


Alex

Alex Martelli

unread,
Dec 11, 2005, 11:17:43 AM12/11/05
to
Mike Meyer <m...@mired.org> wrote:
...
> > Claim: doing X makes Y hard.
>
> Harder, not hard.

The specific wording you used was "MORE DIFFICULT".


> > Here is an example of doing X where Y is easy
>

> Y is very easy in any case. Making it incrementally harder doesn't
> make it hard - it's still very easy.

If it's very easy, then going out of your way, as you did, to claim it's
"MORE DIFFICULT" (you did not use the words "incrementally harder") is
rather weird. There's no DIFFICULTY -- sure, if you have ANY one extra
trivial method there IS ``extra work'' (a few seconds to write the
trivial method and its unittest), but no extra DIFFICULTY.

Moreover, I believe I vindicated attribute-setters (and their lack of
difficulty) specifically by pointing out the trivial pattern which lets
you make them easily, by using a constructor that you must have anyway
in a good design (if attributes are worth setting on the fly, they're
worth setting at the birth of an instance) and a state-copying method
(which is always a good idea for mutable-instance classes). Assuming
this wasn't obvious at the start to all readers, I may thus have hoped
to have taught something to some reader -- an unusual and pleasant
fallout from a mostly "polemical" thread, since often such threads are
not very instructive.
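
For concreteness, the pattern has roughly this shape (the names are
purely illustrative, not the exact code posted earlier in the thread):

class Rectangle(object):
    def __init__(self, width, height):       # the constructor you need anyway
        self._set_state(width, height)

    def _set_state(self, width, height):     # the state-copying method
        self._width = float(width)
        self._height = float(height)

    # each setter is a trivial one-liner delegating the real work:
    def _setw(self, value): self._set_state(value, self._height)
    def _seth(self, value): self._set_state(self._width, value)

    width = property(lambda self: self._width, _setw)
    height = property(lambda self: self._height, _seth)

The setters are obviously correct as long as _set_state is, which is
exactly the point.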

On that vein, I'll continue by pointing out that there may well be
opportunities for optimization -- constructing a new instance is easy,
but in some cases, depending on the implementation details, there may be
faster approaches. That's most of the case for our using languages with
modifiable data rather than pure functional ones, after all...: that
changing data (rather than always having to make new objects) sometimes
affords better performance. Still, let's not optimize *prematurely*!-)


Alex

Mike Meyer

unread,
Dec 11, 2005, 6:34:13 PM12/11/05
to
al...@mail.comcast.net (Alex Martelli) writes:
> Mike Meyer <m...@mired.org> wrote:
> ...
>> It's not my cherished example - it actually came from someone
> You picked it to (try and fail to) show that there is DIFFICULTY, which
> I showed there isn't.

No, you showed you could change the example so there is no extra
difficulty.

>> else. That you can change the requirements so that there is no extra
>> work is immaterial - all you've done is shown that there are examples
>> that don't require extra work. I never said that such examples
>> didn't exist. All you've shown - in both the single concrete example
>> and in a generalized case - is that any requirement can be changed so
>> that it doesn't require any extra work. This doesn't change the fact
>> that such cases exist, which is all that I claimed was the case.
> Untrue: you claimed that the specific API (allowing attribute-setting)
> "makes changing the object more difficult", not the obvious fact that
> "there exist APIs so badly designed that they make changing more
> difficult".

Except you haven't shown that the API was badly designed. You can't
show that it's badly designed, because you don't know the requirements
that the API is meeting.

> And I showed that, in the GENERAL case, since attributes
> worth being made assignable are obviously also worth being made settable
> in a constructor of some kind,

But we're not dealing with a general case, we're dealing with a
specific case. Just because you can't think of cases where an
attribute being settable doesn't mean it needs to be settable in a
constructor doesn't mean they don't exist.

> So, I claim I have totally disproven your claims about difficulty
> ("extra work", as you're trying to weaselword your way out, might be
> writing one or two trivial lines of code, but that's not DIFFICULT, and
> the claim you originally made was about DIFFICULTY, not tiny amounts of
> trivially easy "extra work" -- as I already mentioned, obviously ANY
> method you add is "extra work" for you compared to not adding it, but
> the interesting question is whether that entails any DIFFICULTY).

Actually, the original claim was "more difficult". You've done your
usual trick of reaching an invalid conclusion from what someone said,
then acting as if that's what they said. Congratulations, you've
successfully beaten up the straw man you created.

Alex Martelli

unread,
Dec 11, 2005, 7:46:25 PM12/11/05
to
Mike Meyer <m...@mired.org> wrote:
...
> Except you haven't shown that the API was badly designed. You can't
> show that it's badly designed, because you don't know the requirements
> that the API is meeting.

I can show that an API is badly designed *whatever requirements it might
be intended for* if it's self-contradictory: containing a way to CHANGE
an attribute to some different value, but not containing any way to SET
THAT ATTRIBUTE TO THE RIGHT VALUE from the beginning, is inherently an
indicator of bad design, because it needlessly imposes more work on the
API's user and forces objects to pass through a transient state in which
their attributes are WRONG, or MEANINGLESS.


> > And I showed that, in the GENERAL case, since attributes
> > worth being made assignable are obviously also worth being made settable
> > in a constructor of some kind,
>
> But we're not dealing with a general case, we're dealing with a
> specific case. Just because you can't think of cases where an
> attribute being settable doesn't mean it needs to be settable in a
> constructor doesn't mean they don't exist.

The burden of proof is on you, of course: show a design situation
where it's RIGHT to force API users to do extra work and lead objects
through states they're *NOT* meant to be in, because there is no way to
build the object correctly from the start, but rather the object must be
built in a wrong state and then later coerced into the state you knew
was right from the beginning.

There may be languages which are so feeble as to force such behavior
(e.g., languages where every new instance has every attribute forced to
null even where it makes no sense for a certain attribute to ever be
null) but that applies to neither Eiffel nor Python, and all it shows is
that some languages are seriously lacking in the tools to allow proper
designs to be implemented, not that "all objects must always be
generated in the WRONG state" can ever be the RIGHT design.


> > So, I claim I have totally disproven your claims about difficulty
> > ("extra work", as you're trying to weaselword your way out, might be
> > writing one or two trivial lines of code, but that's not DIFFICULT, and
> > the claim you originally made was about DIFFICULTY, not tiny amounts of
> > trivially easy "extra work" -- as I already mentioned, obviously ANY
> > method you add is "extra work" for you compared to not adding it, but
> > the interesting question is whether that entails any DIFFICULTY).
>
> Actually, the original claim was "more difficult". You've done your
> usual trick of reaching an invalid conclusion from what someone said,
> then acting as if that's what they said. Congratulations, you've
> successfully beaten up the straw man you created.

Right: I claim, on the other hand, that YOU are weaselwording, by trying
to claim that any class with one extra method is thereby "MORE
DIFFICULT" to write -- equating having to write one or two lines of
trivial code with "MORE DIFFICULT" would make the "more difficult"
totally bereft of any useful meaning in whatever context.

I'm currently in an interesting job role, known as "uber technical
lead", which is meant to be a sort of a cross between technical manager
and ubergeek-guru. Fortunately, my reports are all people of technical
excellence as well as personal integrity, so, should I ever ask one of
them to explain why he or she did X and not Y, I fully trust they won't
try to explain that "doing Y would have been more difficult" when the
reality is that it would have involved a line or two of trivial code...
if they did, I can assure you that the consequences might be
interesting. (Good thing I can and do trust them to say, should such a
situation ever arise, "DUH! -- I just didn't think of it!", and go fix
their code forthwith... just as they've often heard ME say,
apologetically, in the much more frequent situations where my objections
to some design were misconceived... so, my modest management abilities
will not be put to such a difficult test in the foreseeable future;-).


Alex

Mike Meyer

unread,
Dec 11, 2005, 8:57:31 PM12/11/05
to
al...@mail.comcast.net (Alex Martelli) writes:
> Mike Meyer <m...@mired.org> wrote:
> ...
>> Except you haven't shown that the API was badly designed. You can't
>> show that it's badly designed, because you don't know the requirements
>> that the API is meeting.
> I can show that an API is badly designed *whatever requirements it might
> be intended for* if it's self-contradictory: containing a way to CHANGE
> an attribute to some different value, but not containing any way to SET
> THAT ATTRIBUTE TO THE RIGHT VALUE from the beginning, is inherently an
> indicator of bad design, because it needlessly imposes more work on the
> API's user and forces objects to pass through a transient state in which
> their attributes are WRONG, or MEANINGLESS.

Nope. If the requirements are that all objects start in the same
meaningful state, then you simply create them in that state. There's
no need to provide a facility to set the initial state, and they
never go through a meaningless state either.
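
A toy sketch of the kind of thing I mean (entirely made up, of course):

class Counter(object):
    def __init__(self):
        self.count = 0          # every instance starts in the same state

    def increment(self, by=1):  # the state changes later through normal use
        self.count += by

The object is never in a wrong or meaningless state, and there's
nothing self-contradictory about the API.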

>> > And I showed that, in the GENERAL case, since attributes
>> > worth being made assignable are obviously also worth being made settable
>> > in a constructor of some kind,
>> But we're not dealing with a general case, we're dealing with a
>> specific case. Just because you can't think of cases where an
>> attribute being settable doesn't mean it needs to be settable in a
>> constructor doesn't mean they don't exist.
> The burden of proof is on you, of course: show a design situation
> where it's RIGHT to force API users to do extra work and lead objects
> through states they're *NOT* meant to be in, because there is no way to
> build the object correctly from the start, but rather the object must be
> built in a wrong state and then later coerced into the state you knew
> was right from the beginning.

You're doing it again - I never claimed that there was any such API
requirement. You've reached this conclusion on your own, by adding
requirements to the design that I never discussed. If you want
someone to prove this straw man, you'll have to do it yourself.

>> > So, I claim I have totally disproven your claims about difficulty
>> > ("extra work", as you're trying to weaselword your way out, might be
>> > writing one or two trivial lines of code, but that's not DIFFICULT, and
>> > the claim you originally made was about DIFFICULTY, not tiny amounts of
>> > trivially easy "extra work" -- as I already mentioned, obviously ANY
>> > method you add is "extra work" for you compared to not adding it, but
>> > the interesting question is whether that entails any DIFFICULTY).
>> Actually, the original claim was "more difficult". You've done your
>> usual trick of reaching an invalid conclusion from what someone said,
>> then acting as if that's what they said. Congratulations, you've
>> successfully beaten up the straw man you created.
> Right: I claim, on the other hand, that YOU are weaselwording, by trying
> to claim that any class with one extra method is thereby "MORE
> DIFFICULT" to write -- equating having to write one or two lines of
> trivial code with "MORE DIFFICULT" would make the "more difficult"
> totally bereft of any useful meaning in whatever context.

As I already explained, the entire change was trivial, so any extra
work is of course trivial. This extra work is exactly what I meant
when I said "more difficult". You want to play semantic games, and
argue that one trivial change can't be "more difficult" than another,
feel free to do so. But do realize that you're disproving your
strawmen, not my statement.

Antoon Pardon

unread,
Dec 12, 2005, 7:12:46 AM12/12/05
to
Op 2005-12-11, Steven D'Aprano schreef <st...@REMOVETHIScyber.com.au>:

> On Sat, 10 Dec 2005 15:46:35 +0000, Antoon Pardon wrote:
>
>>> But I *want* other classes to poke around inside my implementation.
>>> That's a virtue, not a vice. My API says:
>>>
>>> "In addition to the full set of methods which operate on the coordinate
>>> as a whole, you can operate on the individual ordinates via instance.x
>>> and instance.y which are floats."
>>
>> Yikes. I would never do that. Doing so would tie my code unnecessarily
>> close to yours and would make it too difficult to change to another
>> class with a different implementation, like one using tuples or lists
>> instead of separate x and y instances.
>
> Do you really think that my class and some other class written by
> another person will have the same API?

If both writers try to implement the same kind of object I would
think the API would be very similar, yes.

> If you change from my class to
> another class, the chances are that the interfaces will be different
> unless the second class writer deliberately emulated my class interface.

So, let's say I have one class where you can do P1 + P2 and another
class where you have to do P1.move(P2). If it is basically the
same kind of class but with a different API, I just write a wrapper
and I am done, unless of course I messed with the internals and
the internals of the second class are vastly different.
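
Something as small as this would do, for example (assuming the second
class's move() returns a new point rather than mutating in place):

class PointWrapper:
    def __init__(self, point):
        self._point = point                  # instance of the other class

    def __add__(self, other):
        # translate the P1 + P2 interface into the P1.move(P2) one
        return PointWrapper(self._point.move(other._point))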

> To class users, there is *no* difference in consequences between me
> changing my published API by removing named attributes x and y from my
> class, and me changing my published API by removing or changing methods.

Yes, there is. Methods are just names; if you just have different names
for the same functionality, all you need to do is write a wrapper to
translate one name into another.

If you no longer have x and y attributes but a 2-element tuple,
then things aren't that easy to repair.

>>> Your API says:
>>>
>>> "In addition to the full set of methods which operate on the coordinate
>>> as a whole, you can operate on the individual ordinates via methods
>>> add_x, add_y, mult_x, mult_y, sub_x, sub_y, rsub_x, rsub_y, div_x,
>>> div_y, rdiv_x, rdiv_y, exp_x, exp_y, rexp_x, rexp_y...; the APIs of
>>> these methods are: ... "
>>
>> Who in heaven's name would need those? Maybe there is no x or y because
>> the implementation uses a list or a tuple, maybe the implementation uses
>> polar coordinates because that is more useful for the application it
>> was planned for.
>
> And maybe it isn't a Coordinate class at all, hmmm?

Indeed it isn't. It is usually a Point class.

> An ordinary, Cartesian, real-valued Coordinate is a pair of ordinates, an
> X and a Y ordinate. That's what it *is* -- a coordinate class without X and
> Y ordinates isn't a coordinate class, regardless of how they are
> implemented (via properties, named attributes, or a whole bucketful of
> helper functions).

That is why a coordinate class is a bad idea. It mentions an
implementation in what should be an abstract idea like a 2D point.

In that case, if you find out that you are manipulating your objects
in ways for which polar coordinates are better, you can transparently
change the implementation.
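
A rough sketch of what I mean; the stored representation is polar, but
clients still just see x and y (error handling and the rest of the API
left out):

import math

class Point(object):
    def __init__(self, x, y):
        self._r = math.hypot(x, y)          # stored internally in polar form
        self._theta = math.atan2(y, x)

    x = property(lambda self: self._r * math.cos(self._theta))
    y = property(lambda self: self._r * math.sin(self._theta))

    def rotate(self, angle):
        self._theta += angle                # cheap with this representation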

--
Antoon Pardon

Steven D'Aprano

unread,
Dec 12, 2005, 8:17:13 AM12/12/05
to
On Mon, 12 Dec 2005 12:12:46 +0000, Antoon Pardon wrote:

>> And maybe it isn't a Coordinate class at all, hmmm?
>
> Indeed it isn't. It is usually a Point class.
>
>> An ordinary, Cartesian, real-valued Coordinate is a pair of ordinates, an
>> X and a Y ordinate. That's what it *is* -- a coordinate class without X and
>> Y ordinates isn't a coordinate class, regardless of how they are
>> implemented (via properties, named attributes, or a whole bucketful of
>> helper functions).
>
> That is why a coordinate class is a bad idea. It mentions an
> implementation in what should be an abstract idea like a 2D point.
>
> In that case, if you find out that you are manipulating your objects
> in ways for which polar coordinates are better, you can transparently
> change the implementation.

That's a great idea Antoon, but you don't go far enough. Why limit
yourself to something as concrete as a pair of floats? What we actually
need is an even more abstract class, one which can hold an arbitrary
number of ordinates, not just two. And why limit ourselves to floats? What
if the user decides that he wants to specify ordinates as written English
numbers like the Morse Code for "thirty-seven point three four", coded in
base64?

For that matter, now that we have an arbitrary number of ordinates, why
limit yourself to list implementation? Perhaps a better implementation is
a tree structure, or an orchard, or some sort of mapping? Or some hybrid
of all three.

And the methods, well, the methods. It is so limiting to be forced into
one specific API, with names like instance.move(), rotate(), reflect() and
so forth. What if I should change my mind, and decide what I really need
is a message-passing model instead? We'd better write some more code
isolating the methods from the user, making the class even more abstract
again, just in case we should ever choose to change those methods'
interface.

Heaven forbid that we should actually decide on a model for our class,
ever. Sure, we'll end up having to implement a Turing-complete programming
language as our class, but I think we'll all agree that that cost is a
small price to pay for something which is sufficiently abstract.

--
Steven

Antoon Pardon

unread,
Dec 12, 2005, 9:03:41 AM12/12/05
to
Op 2005-12-12, Steven D'Aprano schreef <st...@REMOVETHIScyber.com.au>:

> On Mon, 12 Dec 2005 12:12:46 +0000, Antoon Pardon wrote:
>
>>> And maybe it isn't a Coordinate class at all, hmmm?
>>
>> Indeed it isn't. It is usually a Point class.
>>
>>> An ordinary, Cartesian, real-valued Coordinate is a pair of ordinates, an
>>> X and a Y ordinate. That's what it *is* -- a coordinate class without X and
>>> Y ordinates isn't a coordinate class, regardless of how they are
>>> implemented (via properties, named attributes, or a whole bucketful of
>>> helper functions).
>>
>> That is why a coordinate class is a bad idea. It mentions an
>> implementation in what should be an abstract idea like a 2D point.
>>
>> In that case, if you find out that you are manipulating your objects
>> in ways for which polar coordinates are better, you can transparently
>> change the implementation.
>
> That's a great idea Antoon, but you don't go far enough. Why limit
> yourself to something as concrete as a pair of floats? What we actually
> need is an even more abstract class, one which can hold an arbitrary
> number of ordinates, not just two.

In point of fact, the class I have can do just that.

> And why limit ourselves to floats? What
> if the user decides that he wants to specify ordinates as written English
> numbers like the Morse Code for "thirty-seven point three four", coded in
> base64?

How the user specifies his values and how they are internally stored
are two entirely different issues. The fact that the user specifies
his numbers in Morse Code or written out in words doesn't imply
they have to be stored in that form. Just as the user supplying his
points with x,y coordinates doesn't imply the implementation has
to work with cartesian coordinates.

> For that matter, now that we have an arbitrary number of ordinates, why
> limit yourself to list implementation? Perhaps a better implementation is
> a tree structure, or an orchard, or some sort of mapping? Or some hybrid
> of all three.

Indeed, different kinds of applications work better with different
kinds of implementations. That is the whole point: use the same
API for the same functionality even if the implementation is
different, so I can solve the same kind of problem with the same
code, independent of whether I have 2D points, 3D points or maybe
sparse 10,000,000D points.

> And the methods, well, the methods. It is so limiting to be forced into
> one specific API, with names like instance.move(), rotate(), reflect() and
> so forth. What if I should change my mind, and decide what I really need
> is a message-passing model instead? We better write some more code
> isolating the methods from the user, making the class even more abstract
> again, just in case we should ever choose to change those methods'
> interface.
>
> Heaven forbid that we should actually decide on a model for our class,
> ever.

There is a difference between deciding on a model and exposing your
model. If you are working with certain kinds of objects, the solution
should more or less be independent of the model chosen to implement
the object. If you need to expose the model in order to solve particular
problems with your objects, I would think you either have chosen the
wrong kind of objects or a bad implementation of them to solve your
problem.

--
Antoon Pardon

james....@sunderland.ac.uk

unread,
Dec 12, 2005, 2:27:32 PM12/12/05
to

Mike Meyer wrote:
> al...@mail.comcast.net (Alex Martelli) writes:
> > Mike Meyer <m...@mired.org> wrote:
> >> > "In addition to the full set of methods which operate on the coordinate as
> >> > a whole, you can operate on the individual ordinates via instance.x and
> >> > instance.y which are floats."
> >> That's an API which makes changing the object more difficult. It may
> >> be the best API for the case at hand, but you should be aware of the
> >> downsides.
> > Since x and y are important abstractions of a "2-D coordinate", I
> > disagree that exposing them makes changing the object more difficult, as
> > long of course as I can, if and when needed, change them into properties
> > (or otherwise obtain similar effects -- before we had properties in
> > Python, __setattr__ was still quite usable in such cases, for example,
> > although properties are clearly simpler and more direct).
>
> Exposing them doesn't make making changes more difficult. Allowing
> them to be used to manipulate the object makes some changes more
> difficult. Properties makes the set of such changes smaller, but it
> doesn't make them vanish.

>
> Take our much-abused coordinate example, and assume you've exposed the
> x and y coordinates as attributes.
>
> Now we have a changing requirement - we want to make the polar
> coordinates available. To keep the API consistent, they should be
> another pair of attributes, r and theta. Thanks to Pythons nice
> properties, we can implement these with a pair of getters, and compute
> them on the fly.

But the API cannot be consistent. :-) If setting r is expensive
because it requires several trig calculations but setting x is cheap,
that's an inconsistency. It would be a vital one for any application
where I'd be likely to use a point. You certainly couldn't change the
internal representation of a point from cartesian to polar without
breaking my code.
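
Concretely: with cartesian storage the x and y side stays cheap, but a
setter for r has to do the trig every time (a rough sketch of what Mike
describes, plus the setter):

import math

class Point(object):
    def __init__(self, x, y):
        self.x, self.y = float(x), float(y)     # cartesian storage

    def _get_r(self):
        return math.hypot(self.x, self.y)

    def _set_r(self, r):
        # setting r means rescaling both ordinates: atan2, cos, sin
        theta = math.atan2(self.y, self.x)
        self.x, self.y = r * math.cos(theta), r * math.sin(theta)

    r = property(_get_r, _set_r)
    theta = property(lambda self: math.atan2(self.y, self.x))

Setting x costs one float assignment; setting r costs three trig calls.
That asymmetry is the inconsistency I mean.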

Good example from the C++ standard library; string only specifies the
'literal' interface. The internal representation is left totally
undefined... and so you can only program to a specific implementation
of string. (Which, of course, can and does change between different
versions of a compiler, let alone between compilers.) The STL got
things right, by contrast.

Sometimes these issues don't matter much. Other times they do.
Perhaps they matter more to me because if the Python version is not
sufficiently fast, then I have to recode the thing in C++. ;-)

Anyway, my point: some types of things fundamentally cannot be cleanly
separated into an implementation and an API. Whether a point is
cartesian or polar is one of these issues, IMO.

This is obviously not to diss the whole idea of encapsulation and
modularisation. I've worked on horrible code and beautiful code, and I
know what a difference these things make. However, you also cannot
program blindly by general rules. The toughest code I've ever had to
modify would probably have passed quite a few OO-style guides; the
author was really trying to adhere to a 'philosophy', he just didn't
get it.

James M

>
> If x and y can't be manipulated individually, you're done. If they
> can, you have more work to do. If nothing else, you have to decide
> that you're going to provide an incomplete interface, in that users
> will be able to manipulate the object with some attributes but not
> others for no obvious good reason. To avoid that, you'll have to add
> code to run the coordinate transformations in reverse, which wouldn't
> otherwise be needed. Properties make this possible, which is a great
> thing.
>

John J. Lee

unread,
Dec 12, 2005, 5:31:11 PM12/12/05
to pytho...@python.org
Steve Holden <st...@holdenweb.com> writes:
[...]

> The "Law" of Demeter isn't about *how* you access objects, it's about
> what interfaces to objects you can "legally" manipulate without undue
> instability across refactoring. In other words, it's about semantics,
> not syntax. And it's led a lot of Java programmers down a path that
> makes their programs less, not more, readable.

Not only Java programmers -- I know I've mis-applied LoD many times.
When should it (not) be applied? I don't find any inaccuracies in
'How to apply the LawOfDemeter successfully' at c2.com (link below),
but I couldn't call it a decent explanation of the problem:

http://c2.com/cgi/wiki?LawOfDemeterIsTooRestrictive


[...]


John

Mike Meyer

unread,
Dec 13, 2005, 4:55:14 PM12/13/05
to
Antoon Pardon <apa...@forel.vub.ac.be> writes:
> Op 2005-12-11, Steven D'Aprano schreef <st...@REMOVETHIScyber.com.au>:
>> On Sat, 10 Dec 2005 15:46:35 +0000, Antoon Pardon wrote:
>> Do you really think that my class and some other class written by
>> another person will have the same API?
> If both writers try to implement the same kind of object I would
> think the API would be very similar, yes.

That's why we have one great web applications platform, right?
