
Python 1.6 The balanced language


Manuel Gutierrez Algaba

Aug 28, 2000, 3:00:00 AM

Python 1.6 The balanced language (I'd say 1.5.2, but...)
---------------------------------

Python has many features from many languages:
* Functional features:
- lambda, filter, map, eval, and "for i in list:" constructs.
I've heard some OCaml fans say: "I like Python because of these
features."

* OO features:
- Classes, objects.

* Simple programming features:
- print statement, simple assignments, lists and tuples, asserts,
docstrings and comments, import mechanism, "input"... (some others),
open file...

* A small but big enough set of functions : max, min, range, pow,
string.join, re.match ...


And that's python, no more and no less.

And that's the greatness of python. Of course, we could add in:
* Functional features:
- Patterns for lists: l [ head : lists]

* OO features:
- Design by contract
- Strong "typing"


* Simple programming features:
- Coroutines
- A bigger set of operators ( << ...)
- Lexical closures
...

The point is that the current balance of "powers" inside Python
prevents people from any one family from writing Python that is
incomprehensible to the rest of the families.

So the functional guys,
although they may do a lot of _harm_ with dark lambda-map expressions,
can't add those "awful" patterns to _their_
expression-confusion.

So the Java-C++ guys, although they can write 200-line-long functions
(a la Java), can't do too much harm because they can't use
their most fearful weapon: hard typing. So their code is
still understandable.

So the academic researchers and learners, although they can do
all sorts of tricky dict constructs, can't do nasty coroutines or lexical
scoping (a la Scheme), which would be incomprehensible to the C++
guys.


So, as you see, Python is more a balance of forces than anything
else.

What will happen in Python 3000 ?
Well, that balance will be broken and Python won't make sense
any more. Sure, it'll be the same language: Python. But people
will __always__ experience some kind of unrest, because advanced
features of some of the families will pop up, and nobody
belongs to all the families. So the Java-C++ guys will have a hard
time with functional features, and the functional guys will have a hard
time with hard types...


The current state of forces is an achievement.

And the work to do should be directed toward:
- (Still) better documentation. Better systems for organizing
examples (snippets), but seriously.

- Provide clear ways (with clear docs) to extend the language for
any of the families or for particular programs: Oracle, mail...

- Sort all the apps available and try to organize them as examples
for reusing the code.

I've heard a Macintosh user say this:
"I still consider Tcl the language of choice, but Python
fits so well on the Mac... I can write GUIs and call Mac
primitives... everything is so smooth."

Well, let's continue down that path, and let's make improvements in
the library/app/documentation layer, instead of in Python itself.

Thinking that Python can be:
Eiffel + Haskell + Java
is simply ___DUMB___ and unrealistic.

Sometimes there are even technical drawbacks: if you use Stackless
Python (coroutines), you lose JPython compatibility!

---
MGA

Suchandra Thapa

Aug 28, 2000, 10:54:48 PM
Manuel Gutierrez Algaba <th...@localhost.localdomain> wrote:
>And that's python, no more and no less.
>
>And that's the greatness of python. Of course, we could add in:
>* Functional features:
>- Patterns for lists: l [ head : lists]
>
>* OO features:
>- Design by contract
>- Strong "typing"

Actually, I would argue that strong typing is a functional language
feature. The major functional languages like ML and Haskell all have
strong typing. OTOH, C and C++ are not strongly typed, although they
are mostly there.

> So, functional-guys,
>although may do a lot of _harm_ in dark lambda-map expressions,
>they can't add to _their_ expression-confussion those "awful"
> patterns.

What's so bad about patterns?

>So, Java-C++-guys, although they can write 200 lines-long-functions
>(a la Java ), they can't do too much harm because they can't use
>their most fearful weapon : hard typing. So , their code is
>still understable.

I think a strong static type system is pretty useful. With it you're
guaranteed that if the compiler/interpreter accepts the program then
you'll have no type errors at all. In conjunction with an expressive
type system, you can encode program information into types. For example,
with ML or Haskell you are able to create types that can only accept
odd integers or lower-case letters, so that if you're using a variable of
that type then you're guaranteed certain properties. Another example
would be to define a list of type x as an element of type x prepended
to a list of type x. This guarantees that any list using this type will
only have elements of a certain type.
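Python can only approximate such a constraint at runtime rather than at compile time. A minimal sketch, assuming a hypothetical OddInt wrapper class (invented for illustration, not part of any real library):

```python
# Hypothetical OddInt wrapper: rejects even values when constructed.
# ML/Haskell would enforce this statically; Python can only check at runtime.
class OddInt:
    def __init__(self, n):
        if n % 2 == 0:
            raise ValueError("OddInt requires an odd integer")
        self.n = n

three = OddInt(3)          # accepted
print(three.n)             # 3
try:
    OddInt(4)              # rejected at runtime, not at compile time
except ValueError as e:
    print(e)
```

The check happens per value at construction time, which is the key difference from the static guarantee described above.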


>So, academic-researchers-learners, although they can do
>of dict-tricky constructs, they can't do nasty corroutines, lexical
>scopes (a la Scheme), which would be ununderstable for the C++
>guys.

Do you mean closures when you talk about lexical scope?
C, C++, and Java all implement lexical scoping. They don't implement
closures, however, but closures aren't that difficult to understand. Just
think of them as state that follows the object in question.
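As a rough sketch in Python itself (note that full nested scopes only arrived later, in Python 2.1/2.2; the names here are invented for the example), a closure is just state travelling with a function:

```python
# A closure: step() carries make_counter's local state with it.
def make_counter():
    count = [0]               # boxed in a list so the inner function can update it
    def step():
        count[0] += 1         # refers to make_counter's local variable
        return count[0]
    return step

c = make_counter()
print(c(), c(), c())          # 1 2 3 -- the state follows the function
```

Each call to make_counter() produces an independent counter with its own state.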


>What will happen in Python 3000 ?
> Well, that balance will be broken and python won't make sense
>any more. Sure, it'll be the same language : python. But, people
>will experience __always__ some kind unrest, because advanced
>features of some of the families will pop up, and nobody
>belongs to all the families. So Java-C++ guys will have hard
>times with functional, Functional-guys will have hard times
>with hard types...

I disagree with you there. Pulling in strict or static typing
would be useful for large projects and wouldn't confuse anyone with
C, C++, or functional language exposure (okay, polymorphic types
and parametric types would confuse C and C++ people a bit). The functional
extensions to Python seem to consist mainly of a few additions and refinements
to the current features. I don't see any calls for adding the more "advanced"
features such as lazy evaluation or monads.

>- Provide clear ways (with clear doc) to expand the language for
>any of the families or for some programs : Oracle, mails

What do you mean by this? There are already database and SMTP
interfaces available. I wouldn't want Python to start adding SQL or
Oracle's version of SQL to its core.

>Thinking that python can be :
>Eiffel + Haskell + Java
>is simply ___DUMB___ and irrealistic.
>
>Sometimes there are even technical drawbacks: if you use Stackless
>Python (coroutines), you lose JPython compatibility!

I don't think anyone, including Guido, wants to have Python become an
amalgam of Eiffel, Haskell, and Java, since their paradigms don't mesh. I'm
not even sure how you would implement a pure functional object-oriented
language. Although I think our BDFL would lean more towards a
Modula-3+Haskell mixture =).

Peter Schneider-Kamp

Aug 29, 2000, 3:00:00 AM
to s-t...@uchicago.edu
Suchandra Thapa wrote:
>
> Manuel Gutierrez Algaba <th...@localhost.localdomain> wrote:
> >
> > What will happen in Python 3000 ?
> > Well, that balance will be broken and python won't make sense
> > any more. Sure, it'll be the same language : python. But, people
> > will experience __always__ some kind unrest, because advanced
> > features of some of the families will pop up, and nobody
> > belongs to all the families. So Java-C++ guys will have hard
> > times with functional, Functional-guys will have hard times
> > with hard types...
>
> I disagree with you there. Pulling in strict or static typing
> would be useful for large projects and wouldn't confuse anyone with
> with c, c++, or functional language exposure (okay, polymorphic types
> and parametric types would confuse c and c++ people a bit). The functional
> extensions to python seem to mainly consist of a few additions and refinements
> to the current features. I don't see any calls for adding the more "advanced"
> features such as lazy evaluation or monads.

Lazy evaluation? Guido seems less reluctant to add this some
time in the future, if I channel him correctly (which, of course, only
Tim can do right <wink>). There have always been a lot of people proposing/
advocating lazy evaluation.

Monads? It took me almost two weeks to understand the concept. Anyone
seriously proposing to add monads to Python should have to implement
a Python compiler with Haskell monads first. Note that I had to
implement a compiler for a Pascal subset using *a lot* of monads.

> >Sometimes there are even technical drawbacks: if you use Stackless
> >Python (coroutines), you lose JPython compatibility!

It is not impossible to implement coroutines for JPython. It's just
hard (or impossible) to implement them efficiently without changing
the VM beneath.

> I don't think anyone, including Guido, wants to have Python become an
> amalgam of Eiffel, Haskell, and Java since their paradigms don't mesh. I'm
> not even sure how you would implement a pure functional object oriented

There are stranger things under this sky. E.g. a functional
language without lambda
(http://www.eleves.ens.fr:8080/home/madore/programs/unlambda/).

> language. Although, I think our BDFL would lean more towards a
> Modula3+Haskell mixture =).

Which only shows his good taste.

pure-functional-programming-in-Modula3-is-a-pain-though-ly y'rs
Peter
--
Peter Schneider-Kamp ++47-7388-7331
Herman Krags veg 51-11 mailto:pe...@schneider-kamp.de
N-7050 Trondheim http://schneider-kamp.de


Suchandra Thapa

Aug 30, 2000, 3:00:00 AM
Peter Schneider-Kamp <nowo...@nowonder.de> wrote:
>Lazy evaluation? Guido seems to be less reluctant to add this some
>time in the future, if I channel him correctly (what, of course, only
>Tim can do right<wink>). There has always been a lot of people proposing/
>advocating lazy evaluation.

I don't think lazy evaluation works too well in a language that
isn't purely functional. You'll never really be sure where or when
a variable will be evaluated. With Python's lack of referential
transparency, this would become a nightmare. E.g.
def swap(x, y):
    t = y[x - 1]
    y[x - 1] = x
    x = t
    print x, ',', y[x - 1]

x = 3
y = [3, 2, 1]
swap(x, y)

would print 3 , 3 under lazy evaluation, which is probably not what the
programmer intended. Finding these kinds of errors would be a nightmare.
ML's use of imperative constructs is probably one of the reasons why it
doesn't employ lazy semantics.
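This interaction of deferred evaluation with imperative updates can be imitated in ordinary Python with an explicit thunk (a zero-argument function standing in for a deferred expression); everything below is an illustrative sketch, not real lazy semantics:

```python
# A thunk delays the evaluation of y[0] until it is forced with t().
y = [3, 2, 1]
t = lambda: y[0]   # "lazy" reference to y[0]; nothing is read yet
y[0] = 99          # an imperative update happens before the thunk is forced
print(t())         # forcing now sees the update: prints 99, not 3
```

With eager evaluation the reference would have captured 3; laziness makes the result depend on when the expression is finally forced, which is exactly the confusion described above.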

>
>Monads? It took me almost two weeks to understand the concept. Anyone
>seriously proposing to add monads to Python should have to implement
>a Python compiler with Haskell monads first. Note that I had to
>implement a compiler for a Pascal subset using *a lot* of monads.

I don't think you really need monads in Python since they're used
mainly to keep Haskell a pure functional language. Python's not aspiring
to that goal, so monads would just confuse the hell out of everyone.

>
>> >Sometimes there are even technical drawbacks: if you use Stackless
>> >Python (coroutines), you lose JPython compatibility!
>

>It is not impossible to implement coroutines for JPython. It's just
>hard (or impossible) to implement them efficiently without changing
>the VM beneath.

That comment was by the first poster, but I didn't realize that
continuations were possible on the current JVMs. Or is the way to get
them on the current VMs just to emulate a machine on top of it?

>> I don't think anyone, including Guido, wants to have Python become an
>> amalgam of Eiffel, Haskell, and Java since their paradigms don't mesh. I'm
>> not even sure how you would implement a pure functional object oriented
>

>There are stranger things under this sky. E.g. a functional
>language without lambda
>(http://www.eleves.ens.fr:8080/home/madore/programs/unlambda/).

Actually, I think Clean is a pure functional OO language. It's just
that being OO seems to imply that you have some state in your objects. I'm
not sure how anyone would get around this paradox. As for Unlambda, I don't see
anyone writing anything significant in it. I don't think anyone familiar with
functional languages would take a job requiring them to work in it. But you
never know.

Neel Krishnaswami

Aug 30, 2000, 10:59:49 PM
Suchandra Thapa <sst...@harper.uchicago.edu> wrote:
>
> Actually I think clean is a pure functional oo language. It's just
> that being oo seems to imply that you have some state to your
> objects. Not sure anyone would get around this paradox.

No, the distinguishing features of OO are that a) subtyping exists,
and b) the types of objects are used to do dynamic dispatch. Mutable
state is not necessary -- in both Dylan and Ocaml, it's common to
write code in an "object-functional" style, with lots of objects but
no mutation.
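In Python terms (a hedged sketch with an invented Point class, not actual Dylan or OCaml code), the object-functional style means methods build new objects instead of mutating self:

```python
# "Object-functional" style: lots of objects, no mutation.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def moved(self, dx, dy):
        # Returns a NEW Point; self is never modified.
        return Point(self.x + dx, self.y + dy)

p = Point(0, 0)
q = p.moved(1, 2)
print(p.x, p.y)    # 0 0 -- the original is untouched
print(q.x, q.y)    # 1 2
```

Dispatch still goes through the object's type, but no state is ever changed after construction.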

In a language like ML, you can write a function to find the
length of a list like so:

fun len lst = case lst of
    [] => 0
  | x :: xs => 1 + len xs;

val len = fn : 'a list -> int


The type 'a list -> int means that the len function can accept a list
of any type of object -- it can find the length of a list of ints, or
strings or any other type. IOW, the 'a is a *type variable*, which
parameterizes the type declaration.

Now, if you look at the code for len, you'll notice that there is no
code that depends on the type of the elements of the list. It doesn't
matter what the type of the elements are, so you can let them be
anything. (This is called the type erasure property.) The flip side of
this is that you can't write polymorphic code that varies with the
type of the elements.

In an OO language, you might write the code for a list class like so:

class EmptyList:
    def __len__(self):
        return 0

class List:
    def __init__(self, head, tail):
        self.head = head
        self.tail = tail
    def __len__(self):
        return 1 + len(self.tail)

lst = List(3, List(1, List(2, EmptyList())))

So if we call len on lst, we get

>>> len(lst)
3

So far, this is the same as in ML. Now, suppose we want to write
a linked list class that maintains its size as a field, so that
finding the length is fast. We could define in Python:

class SizedList(List):
    def __init__(self, head, tail):
        self.head = head
        self.tail = tail
        self.size = 1 + len(self.tail)
    def __len__(self):
        return self.size

lst2 = SizedList('a', SizedList('b', SizedList('c', EmptyList())))

Now, if we call

>>> len(lst) # lst is of type List
3

>>> len(lst2) # lst2 is of type SizedList
3

The equivalent definition in ML would be something like

datatype 'a Sizedlist = Empty | Cons(int, 'a, 'a Sizedlist);

but it is impossible[*] to extend the len function to give you the
length of a Sizedlist in ML because a different method must be called,
depending on the type of the argument to len, and the overloading must
be resolved -at runtime- by the precise type of the argument. You
would need to define a new function (say sizedlist_len) in order to
get the size of a sized list.

But if subtyping and overloading *did* exist in ML, then it would be
perfectly feasible to write this definition -- and note that there is
no mutation anywhere in it. And in fact it *is* possible in OCaml.


Neel

[*] I'm aware of functors, but they are a subtlety that doesn't
matter for the main thrust of this post.

lob...@my-deja.com

Aug 31, 2000, 12:44:45 AM
I'm commenting on this particular exchange:

> > language. Although, I think our BDFL would lean more towards a
> > Modula3+Haskell mixture =).
>
> Which only shows his good taste.
>
> pure-functional-programming-in-Modula3-is-a-pain-though-ly y'rs
> Peter

1) I'm very glad that Haskell seems to be in the picture. I'm even more
happy that list comprehensions have been added to 2.0. Adding
functional elements to Python - without sacrificing its imperative
roots - helps differentiate it from other offerings. I hope this
trend will be continued.

2) Despite my predilection for it, functional programming is not
everything... Python is an imperative language at heart, and an
interpreted one at that. It would make sense to have a quick look at
other higher-level imperative languages and see what can be borrowed
from them for Python.

3) My particular favourite is Icon (see www.cs.arizona.edu/icon).
Particular aspects of interest to Python would be goal-directed
evaluation [saves a helluva lot of lines of code...] and, less
importantly, generators and co-expressions. I have no idea how difficult
it would be to move these ideas to Python. But I (and many other
people) can vouch that these mechanisms are very effective for writing
very 'large' programs in a surprisingly small number of lines. I must
stress that this happens not the way Perl does it - but rather
like in functional languages, that is, because of the built-in
mechanisms of evaluation. So it is readable :-), not just terse.
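Goal-directed evaluation can be loosely imitated in Python with generators: an expression produces a stream of candidate results, and the consumer "backtracks" simply by asking for the next one. A sketch under that assumption (find below is an invented helper that mimics, but is not, Icon's string-scanning find):

```python
# Emulating Icon-style goal-directed evaluation with a generator:
# find yields EVERY position where sub occurs, one at a time.
def find(sub, s):
    i = s.find(sub)
    while i >= 0:
        yield i                 # suspend with one result
        i = s.find(sub, i + 1)  # resume here to try the next one

# "Every position of 'a' in 'banana' that is before position 4":
hits = [i for i in find("a", "banana") if i < 4]
print(hits)   # [1, 3]
```

Where Icon backtracks automatically inside an expression, here the iteration protocol plays that role explicitly.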




Sent via Deja.com http://www.deja.com/
Before you buy.

Steve Holden

Aug 31, 2000, 1:17:57 AM
lob...@my-deja.com wrote:
>
[Haskell praise snipped]

>
> 2) Despite my predilection for it, functional programming is not
> everything... Python is an imperative language at heart and an
> interpreted one at that. It would make sense to have a quick look at
> other higher level imperative languages and see what can be borrowed
> from them for python.
>
Possibly, though the language already has many features which would
suggest a wide-ranging review of the territory was undertaken in the
original design. I understand the BDFL takes usability seriously.

> 3) My particular favourite is Icon (see www.cs.arizona.edu/icon).

Wonderful language. For a long time Icon was my Perl-equivalent, as
the syntax is pretty clean and the things you can do in small programs
are amazing.

I even wrote a commercial system in it, involving data
extracted from a relational accounting system being formatted into
the FrameMaker markup language. Got paid good money for doing it,
too!

> Particular aspects of some interest to Python would be goal-directed
> evaluation [saves helluva lot of lines of code...] and, less
> importantly, generators and coexpressions. I have no idea how difficult
> it would be to move these ideas to Python. But I (and many other
> people) can vouch that these mechanisms are very effective in writing
> very 'large' programs in a surprisingly small number of lines. I must
> stress here that this happens not the way Perl does it - but rather
> like in functional languages. That is: because of the built in
> mechanisms of evaluation. So it is readable :-), not just terse.
>

Much as I like Icon (and Python too, for that matter), I would humbly
suggest that these features of Icon just don't belong in Python. The
overhead of supporting them in the many contexts in which they aren't
really required would reduce efficiency.

Plus, despite continued demand for many new features, I believe there
has to be a point beyond which you must accept you are changing the basic
nature of the language.

Surely the best approach accepts that some problems are best suited to one
language, others to another, so we should just use the appropriate
language for each problem.

A better way would be to work towards allowing Python code to be mixed
with Icon, so we could easily write programs which combine the best
features of both without having to turn either into a monster unrecognisable
even to its mother :-). Then we could produce object-oriented programs
which used backtracking, generators and coexpressions.

regards
Steve



--
Helping people meet their information needs with training and technology.
703 967 0887 sho...@bellatlantic.net http://www.holdenweb.com/

Alex Martelli

Aug 31, 2000, 3:00:00 AM
"Steve Holden" <sho...@holdenweb.com> wrote in message
news:39ADEA85...@holdenweb.com...
[snip]

> Surely the best approach accepts that some problems are best for one
> language, others for another, so we should just use the appropriate
> language for each problem.

Oh yes. And in many real applications, we can mix components that
address different aspects and can thus benefit from different
languages' strengths, thus...:

> A better way would be to work towards allowing Python code to be mixed
> with Icon, so we could easily write programs which combine the best
> features of both without having to turn either into a monster
> unrecognisable
> even to its mother :-). Then we could produce object-oriented programs
> which used backtracking, generators and coexpressions.

As long as we take "combined" to mean "in the same application",
rather than "inside the same source-text", I fully agree. So,
what we need, for this task as well as many others, is a
well-balanced, cross-language, cross-platform componentization scheme.

EJB is probably too Java-centric; .NET is probably too Windows-
centric. Corba (the classic variant) seems to focus too much on
issues of distributed computing, rather than of mixing components
and deploying/reusing components. Maybe the new 'Corba 3' offers
hope? I know little about it, except that it explicitly targets
components (at long last) and introduces a new dedicated scripting
language (boo, hiss). Are there some previews/prototypes allowing
one to play with it (particularly with Python)...? What about
XPCOM -- is it enough of a COM clone to let one do components
powerfully and portably, or (as is my impression) does it lack
even more on the scripting/runtime-use-of-components than COM
proper does; and if the latter, is it still worth taking as a
starting point...?


Alex


Quinn Dunkan

Aug 31, 2000, 3:00:00 AM
On Thu, 31 Aug 2000 04:44:45 GMT, lob...@my-deja.com <lob...@my-deja.com>
wrote:

>3) My particular favourite is Icon (see www.cs.arizona.edu/icon).
>Particular aspects of some interest to Python would be goal-directed
>evaluation [saves helluva lot of lines of code...] and, less
>importantly, generators and coexpressions. I have no idea how difficult
>it would be to move these ideas to Python. But I (and many other
>people) can vouch that these mechanisms are very effective in writing
>very 'large' programs in a surprisingly small number of lines. I must
>stress here that this happens not the way Perl does it - but rather
>like in functional languages. That is: because of the built in
>mechanisms of evaluation. So it is readable :-), not just terse.

Well, with Stackless we get continuations, which can be used to build things
like generators, coroutines, and backtracking. They still won't be 'built in'
like they are in Icon, but Python has an evaluation mechanism :) Of course,
who knows if Stackless will get the nod, but lotsa people like it, so maybe
maybe...

In any case (as Tim pointed out a while back when I was making Haskell
noises), Python classes can already do a lot. For instance, generators and
backtracking can often be obviated with a lazy list: just return a list of
successes or an empty list for failure. And a lazy list can be built with a
Python class (although Python's lack of lexical scoping can make thunking
harder... ouch :)
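Such a class-based lazy list might look like the following sketch (the names are invented for illustration): a cons cell whose tail is a thunk, forced and cached only on first access, so even an infinite list is fine:

```python
class LazyList:
    """A cons cell whose tail is computed on demand (a thunk)."""
    def __init__(self, head, tail_thunk):
        self.head = head
        self._thunk = tail_thunk
        self._tail = None
    def tail(self):
        if self._tail is None:
            self._tail = self._thunk()   # force the thunk once, then cache
        return self._tail

def integers_from(n):
    # An infinite lazy list 1, 2, 3, ... -- only built as far as we look.
    return LazyList(n, lambda: integers_from(n + 1))

nums = integers_from(1)
print(nums.head, nums.tail().head, nums.tail().tail().head)   # 1 2 3
```

The thunk here relies on closing over n, which is precisely the "thunking harder" pain point when lexical scoping is absent.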

Aahz Maruch

Aug 31, 2000, 11:56:58 AM
In article <8oknrp$aho$1...@nnrp1.deja.com>, <lob...@my-deja.com> wrote:
>
>3) My particular favourite is Icon (see www.cs.arizona.edu/icon).
>Particular aspects of some interest to Python would be goal-directed
>evaluation [saves helluva lot of lines of code...] and, less
>importantly, generators and coexpressions.

Generators are pretty likely to make it into Python through the
Stackless implementation. In addition, they're reasonably likely to
make it in even without Stackless, assuming someone comes up with a good
implementation -- they'd be incredibly useful for generic lazy "for"
loops.
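Such a lazy "for" loop might look like this sketch, using the generator syntax Python later adopted (PEP 255, Python 2.2); countdown is an invented example:

```python
def countdown(n):
    # Each value is produced lazily, only when the loop asks for it.
    while n > 0:
        yield n
        n -= 1

for i in countdown(3):
    print(i)        # prints 3, then 2, then 1
```

No intermediate list is ever built; the loop body and the generator alternate control.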
--
--- Aahz (Copyright 2000 by aa...@pobox.com)

Androgynous poly kinky vanilla queer het <*> http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Goodbye dinner for Netcom shell, Thurs 9/7, 7:30pm, Mountain View, CA
e-mail me for details

Darren New

Aug 31, 2000, 12:43:41 PM
Neel Krishnaswami wrote:
> No, the distinguishing features of OO are that a) subtyping exists,
> and b) the types of objects are used to do dynamic dispatch.

Alan Kay argues that garbage collection is necessary as well. Which makes
sense, since otherwise you have to know the implementation of the class to
know when it's safe to collect some reference you passed to the class
earlier.

Just so's ya know. ;-)

--
Darren New / Senior MTS & Free Radical / Invisible Worlds Inc.
San Diego, CA, USA (PST). Cryptokeys on demand.
"No wonder it tastes funny.
I forgot to put the mint sauce on the tentacles."

Neel Krishnaswami

Aug 31, 2000, 10:21:46 PM
Neel Krishnaswami <ne...@brick.cswv.com> wrote:
>
> The equivalent definition in ML would be something like
>
> datatype 'a Sizedlist = Empty | Cons(int, 'a, 'a Sizedlist);

This should really be

datatype 'a Sizedlist = Empty | Cons of (int * 'a * 'a Sizedlist)

I originally wrote this in Haskell and failed to convert to ML
correctly. Sorry for the error.


Neel

Neil Schemenauer

Aug 31, 2000, 10:27:16 PM
Darren New <dn...@san.rr.com> wrote:
>Alan Kay argues that garbage collection is necessary as well. Which makes
>sense, since otherwise you have to know the implementation of the class to
>know when it's safe to collect some reference you passed to the class
>earlier.

I agree. C++ is not object oriented. :)

--
Neil Schemenauer <n...@arctrix.com> http://www.enme.ucalgary.ca/~nascheme/

Suchandra Thapa

Aug 31, 2000, 11:10:22 PM
Neel Krishnaswami <ne...@brick.cswv.com> wrote:
>Suchandra Thapa <sst...@harper.uchicago.edu> wrote:
>>
>> Actually I think clean is a pure functional oo language. It's just
>> that being oo seems to imply that you have some state to your
>> objects. Not sure anyone would get around this paradox.
>
>No, the distinguishing features of OO are that a) subtyping exists,
>and b) the types of objects are used to do dynamic dispatch. Mutable
>state is not necessary -- in both Dylan and Ocaml, it's common to
>write code in an "object-functional" style, with lots of objects but
>no mutation.

I forgot about the existence of OCaml when I was writing the post,
but ML has imperative features and so is not a pure functional language.
The example you gave just illustrated dispatch on types. I haven't used
ML or OCaml that much, but I believe they can resolve having two functions
with the same name but different type signatures. E.g. one len function
would have a List->Int signature and the other would have a SizedList->Int
signature, and the compiler should be able to determine which to use based
on the type information it has. In ML, there are at least two different +
functions that take either two integers or two reals and add them. The
compiler has no problem determining which to use at compile time. I don't
see what requires leaving the choice of len until runtime.
Incidentally, what do you mean by dynamic dispatch? ML and its
dialect OCaml are statically typed, and the compiler chooses which function
to call at compile time. Would you consider OCaml to be OO then? And isn't
Dylan an imperative language, but with first-class functions?
But getting back to OO, I'm not sure how the essentials of OO that you
put forward mesh with a pure functional approach. In other words, suppose I
had a method bar belonging to class foo. How would I distinguish using
foo.bar from using another function baz that took the same parameters and
returned the same results? It's possible to create such a baz by cutting
and pasting the function body from bar. I don't see what using objects
gains in this context aside from some syntactic sugar. Also, what would the
use be in declaring two objects of the same class? They wouldn't behave any
differently.
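For what "dynamic dispatch" usually means, here is a small Python sketch (the classes are invented for the example): which area method runs is decided by the runtime class of each object in the list, not by any type known where the call is written:

```python
# Dynamic dispatch: the same call sh.area() runs different code
# depending on the runtime class of sh.
class Square:
    def __init__(self, s): self.s = s
    def area(self): return self.s * self.s

class Circle:
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r * self.r

shapes = [Square(2), Circle(1)]
for sh in shapes:
    print(sh.area())   # 4, then 3.14159 -- chosen at runtime per object
```

ML's overloading of + is resolved at compile time from the static types; here the call site is one piece of code serving heterogeneous objects.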

Tim Roberts

Sep 1, 2000, 1:22:43 AM
sst...@harper.uchicago.edu (Suchandra Thapa) wrote:
>With python's lack of referential
>transparency, this will become a nightmare. E.g.
> def swap(x, y):
>     t = y[x - 1]
>     y[x - 1] = x
>     x = t
>     print x, ',', y[x - 1]
>
> x = 3
> y = [3, 2, 1]
> swap(x, y)
>
>prints 3 , 3 which is probably not what the programmer intended.

When I type this into Python 1.5.2, I get 1 , 3. This is what I expected.
The outer value of x is not altered, although y is. Is that what you're
referring to?

>>> swap(x,y)
1 , 3
>>> x,y
(3, [3, 2, 3])

--
- Tim Roberts, ti...@probo.com
Providenza & Boekelheide, Inc.

Alex Martelli

Sep 1, 2000, 3:50:08 AM
"Darren New" <dn...@san.rr.com> wrote in message
news:39AE8B3E...@san.rr.com...

> Neel Krishnaswami wrote:
> > No, the distinguishing features of OO are that a) subtyping exists,
> > and b) the types of objects are used to do dynamic dispatch.
>
> Alan Kay argues that garbage collection is necessary as well. Which makes
> sense, since otherwise you have to know the implementation of the class to
> know when it's safe to collect some reference you passed to the class
> earlier.

Actually, I need to know the *interface semantics*, not the way the
class chooses to implement it. And garbage collection does not save
me the need to specify & understand those interface semantics.

In a language where objects can be mutable, it's semantically crucial
to know whether a certain class is allowed to keep a reference that
is passed to it: it determines whether I can freely call modifier
methods on that reference later, without side-effects on the "certain
class"'s behaviour, or whether I have to ask for copies (or deep
copies) for the purpose.
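The distinction can be sketched in Python (hypothetical Holder classes, invented for the example): one keeps the caller's reference, so later mutations show through; the other takes a deep copy up front:

```python
import copy

class Holder:
    def __init__(self, items):
        self.items = items                 # keeps the caller's reference

class SafeHolder:
    def __init__(self, items):
        self.items = copy.deepcopy(items)  # defensive deep copy

data = [1, 2]
h = Holder(data)
s = SafeHolder(data)
data.append(3)                 # mutate through the original reference
print(h.items, s.items)        # [1, 2, 3] [1, 2]
```

Whether a class behaves like Holder or SafeHolder is exactly the interface-semantics question: it must be documented either way, garbage collection or no.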

So, this needs to be specified anyway, quite apart from the "safe
to collect" issue; it's a more general question of whether it's
safe to *alter* objects through certain references ("collecting"
being just one way to perform such alterations, albeit a very
particular one).

Garbage collection may be a very desirable feature (and we can
keep talking on the neverending issues of mark-&-sweep and its
variants versus the precisely-timed-finalization enabled by
reference counting & its costs, ad nauseam) but tying the "OO"ness
of a language to it seems to be a typical case of "hidden agenda"
(I want to "sell" concept A to an audience that's all fired up on
concept B, so, I try to make it appear as if B needs/implies A).

"subtyping exists" is another contentious issue, of course; if I
removed "extends" from Java, requiring all polymorphous use to
go through "interface" and "implement" (as, e.g., Coad has suggested
as best design practice), would I "totally lose OO-ness"? Are
languages based on 'prototype objects' (with no classes at all)
"not OO"? Pah...


Alex

Darren New

Sep 1, 2000, 12:19:40 PM
Alex Martelli wrote:
> So, this needs to be specified anyway, quite apart from the "safe
> to collect" issue; it's a more general question of whether it's
> safe to *alter* objects through certain references ("collecting"
> being just one way to perform such alterations, albeit a very
> particular one).

Depends on what you mean by "safe". If by "safe" you mean "according to
the specification of the problem", then yes. If by "safe" you mean what
language designers mean, which is "defined behavior according to the
language specification", then no.

In other words, if you change an object to which I'm holding a reference,
I'll still do what you told me to. If you GC an object to which I'm still
holding a reference, you can't define the semantics of that operation. It's
like the difference between saying "indexing off an array throws an
exception" and "indexing off an array overwrites the interrupt vectors with
random garbage". The first is still "safe", while the second isn't.

> but tying the "OO"ness
> of a language to it seems to be a typical case of "hidden agenda"

Uh... As I said, that was an Alan Kay quote. If you don't know, Alan Kay
was arguably the guy who *invented* OO. It's hard to see how he could have a
hidden agenda against languages that weren't invented at the time.

> "subtyping exists" is another contentious issue, of course; if I
> removed "extends" from Java, requiring all polymorphous use to
> go through "interface" and "implement" (as, e.g., Coad has suggested
> as best design practice), would I "totally lose OO-ness"? Are
> languages based on 'prototype objects' (with no classes at all)
> "not OO"? Pah...

Well, Kay figured it was dynamic dispatch and GC. I personally think you
also need some sort of "class" concept, but I'm ambivalent on how much
inheritance you'd need. I've never used a system with classes and without
inheritance.

Manuel Gutierrez Algaba

unread,
Sep 1, 2000, 12:18:00 PM9/1/00
to
On 31 Aug 2000 Neel Krishnaswami <ne...@brick.cswv.com> wrote:
>
>No, the distinguishing features of OO are that a) subtyping exists,
>and b) the types of objects are used to do dynamic dispatch.

If you think about it, Python is rather "functional" (typeless)
in its mood (perhaps because it followed in the steps of Tcl and Perl),
and the way it produces "polymorphism" is different from Java's:
Python uses optional params while Java uses different functions,
real polymorphism.

def a(b, c=None):
    if c: print b, c
    else: print b

while

void a(stupidtype b) {
    System.out.println(b);
}

void a(stupidtype b, stupidtype c) {
    System.out.println(b + " " + c);
}

Both systems have advantages and disadvantages.


>Mutable
>state is not necessary -- in both Dylan and Ocaml, it's common to
>write code in an "object-functional" style, with lots of objects but
>no mutation.

I don't understand this.

[snip heavy machinery for me]
--
MGA

Manuel Gutierrez Algaba

unread,
Sep 1, 2000, 12:29:54 PM9/1/00
to
On lob...@my-deja.com <lob...@my-deja.com> wrote:
>i'm commenting on this particular exchange:
>> > language. Although, I think our BDFL would lean more towards a
>> > Modula3+Haskell mixture =).
>>
>> Which only shows his good taste.
>>
>> pure-functional-programming-in-Modula3-is-a-pain-though-ly y'rs
>> Peter
>
>1) I'm very glad that Haskell seem to be in the picture. I'm even more
>happy that list comprehensions have been added to 2.0. Adding
>functional elements to Python - without sacrificing its imperative
>roots

Sometimes "imperative" is preferred to
"functional declarativeness". The only problem with lambda functions,
comprehensions and so on is that you can build such a big and nasty
expression that you have to _run_ it several times in your
mind to understand what it does.

This goes against the very nature of Python: utter simplicity.

Using comprehensions, sometimes, you can write in 10 lines
something you'd need 100 lines for in imperative style. But, for maintenance
purposes, those 10 lines may take two hours to understand while
100 lines of "normal" Python take 10 minutes.
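A tiny (made-up) instance of that trade-off: the same filtering loop as a dense comprehension one-liner and spelled out imperatively. The data and names here are purely illustrative.

```python
words = ["alpha", "beta", "gamma", "delta"]

# dense: a nested comprehension in one line
dense = [c for w in words if len(w) > 4 for c in w.upper()]

# expanded: the same computation, spelled out imperatively
expanded = []
for w in words:
    if len(w) > 4:
        for c in w.upper():
            expanded.append(c)

print(dense == expanded)   # True
```

Both produce the characters of the long words uppercased; which version is easier to maintain is exactly the judgment call discussed above.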

That's why Haskell, ML and so on will continue being "academic"
for _a long time_. Not everybody is a Ph.D., nor so "relaxed" as
to play with long pseudo-mathematical expressions.

Even so, Haskell, ML and Lisp... should be liked by anyone with
some love for Computer Science.

>interpreted one at that. It would make sense to have a quick look at
>other higher level imperative languages and see what can be borrowed
>from them for python.

From which languages? And what?


>3) My particular favourite is Icon (see www.cs.arizona.edu/icon).
>Particular aspects of some interest to Python would be goal-directed
>evaluation [saves helluva lot of lines of code...] and, less
>importantly, generators and coexpressions. I have no idea how difficult
>it would be to move these ideas to Python.

Again, although this sounds nice, the Python spirit is:
- no magic

If I remember well, Icon treats exceptions and returns from
functions equally. OK, this seems a bit of magic, and it kills exceptions.
And exceptions are important enough in OO to be respected.

Again, you can't have three ways to deal with exceptions (a la
Icon, a la Java and a la Python) without getting a messy and
spiritless language.

> But I (and many other
>people) can vouch that these mechanisms are very effective in writing
>very 'large' programs in a surprisingly small number of lines.

Except in COBOL, everything can be written in a very small number
of lines; it depends on whether the programmer knows "the art" or is
a hardwirer!

> I must
>stress here that this happens not the way Perl does it - but rather
>like in functional languages. That is: because of the built in
>mechanisms of evaluation. So it is readable :-), not just terse.
>

--

MGA

Alex Martelli

unread,
Sep 1, 2000, 5:38:18 PM9/1/00
to
"Suchandra Thapa" <sst...@harper.uchicago.edu> wrote in message
news:slrn8qu0jh...@hepcat.uchicago.edu...
[snip]

> Incidentally, what do you mean by dynamic dispatch? ML and its
> dialect OCaml are statically typed and the compiler chooses which function
> to call at compile time. Would you consider OCaml to be OO then? And isn't

See http://caml.inria.fr/ocaml/htmlman/manual004.html for an
introduction to the object-oriented features of O'Caml. While the
static typing still holds, the behaviour over class methods is
polymorphic -- much as in Eiffel, C++, Sather, Java, and other
compile-time-typed OO languages, what version of a method is
actually called depends on the runtime type of the object instance
being used. Thus, dynamic dispatch, aka runtime dispatch.


Alex

Alex Martelli

unread,
Sep 1, 2000, 5:18:07 PM9/1/00
to
"Darren New" <dn...@san.rr.com> wrote in message
news:39AFD71D...@san.rr.com...

> Alex Martelli wrote:
> > So, this needs to be specified anyway, quite apart from the "safe
> > to collect" issue; it's a more general question of whether it's
> > safe to *alter* objects through certain references ("collecting"
> > being just one way to perform such alterations, albeit a very
> > particular one).
>
> Depends on what you mean by "safe". If by "safe" you mean "according to
> the specification of the problem", then yes. If by "safe" you mean what
> language designers mean, which is "defined behavior according to the
> language specification", then no.

"safe" as in "will not make the program do things it shouldn't". Whether
a program crashes because the language specification is violated, or
because a semantic constraint (precondition) is, it's still a crash (and
not quite as bad as a program NOT crashing but corrupting persistent
data in subtly-wrong ways...).


> In other words, if you change an object to which I'm holding a reference,
> I'll still do what you told me to. If you GC an object to which I'm still
> holding a reference, you can't define the semantics of that operation. It's

But neither can I change the semantics of what happens in other
cases, e.g. a divide by zero; whether that raises a catchable exception
or not depends on the language (and possibly on the implementation,
if the language leaves it as implementation-defined).

Will you therefore argue that a language, to be "object-oriented", must
give certain fixed semantics to divide-by-zero...?

Note that this ties back to the issue about whether I can mutate a
reference after giving it out -- i.e. whether the receiving class will
hold to that reference (and if so whether it will rely on it to stay
as it was) or make its own copies. That's part of the receiving
class's semantics; I need to know the contract obligations of the
receiving class to perform my part of the bargain. It's NOT an issue
of having to know the _implementation_ of the receiving class,
but one of design-by-contract... and please don't tell me, or B.M.,
that dbc is not OO:-).

Consider:

class datastuff:
    def __init__(self, denominator, formatstring="%s"):
        self.denominator = denominator
        self.formatstring = formatstring

class divider:
    def __init__(self, datastuff):
        self.__denom = getattr(datastuff, 'denominator', 0)
        if not self.__denom:
            raise PreconditionViolation
        self.format = validateformat(getattr(
            datastuff, 'formatstring', "%s"))
        self.theds = datastuff
    def oper1(self, numer):
        return self.format % (numer / self.__denom)
    def oper2(self, numer):
        return self.format % (numer / self.theds.denominator)

Can I rely on divider.oper1 not dividing-by-zero? Yes,
presumably (assuming I haven't violated contracts e.g.
by messing with the private __denom field): __init__
tests to ensure the __denom it saves is not zero, and
that's what oper1 divides by. But can I rely on oper2?
No, that's subject to what I may have been doing to
the .denominator field of the datastuff -- because the
divider class is keeping a reference to that specific
instance and using it without re-testing.


So, I need to know what divider's contract says about
keeping a reference to the datastuff instance passed to
__init__, or not. If divider is free to keep such a
reference, and to assume it stays valid by its semantic
criteria (non-zero denominator), then the client code is
not free to muck with that field; and vice versa. The
situation is not in the least different if the "mucking" is
the 'freeing' of the instance, or if it's changing the
nature of the object or its fields (a del on the
denominator field could cause an AttributeError, etc).
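A runnable Python 3 sketch of that point (class and method names adapted from the example above; the format-string machinery is trimmed): oper1 is safe because it divides by the value validated at construction, while oper2 re-reads the live field, so mutating the shared instance violates the contract.

```python
class PreconditionViolation(Exception):
    pass

class DataStuff:
    def __init__(self, denominator):
        self.denominator = denominator

class Divider:
    def __init__(self, ds):
        # validate once, and keep a private copy of the denominator...
        self.__denom = getattr(ds, 'denominator', 0)
        if not self.__denom:
            raise PreconditionViolation
        self.theds = ds          # ...but also keep a live reference
    def oper1(self, numer):
        return numer / self.__denom            # divides by the validated copy
    def oper2(self, numer):
        return numer / self.theds.denominator  # re-reads the shared field

ds = DataStuff(4)
d = Divider(ds)
print(d.oper1(8))        # 2.0: safe
ds.denominator = 0       # client mutates the instance it handed out
print(d.oper1(8))        # still 2.0: oper1 kept its own validated value
try:
    d.oper2(8)
except ZeroDivisionError:
    print("oper2 fails: the contract about mutation was violated")
```

Whether the client was *allowed* to set `ds.denominator = 0` after handing it to `Divider` is exactly the contract question, not an implementation detail.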

It's sure nice if all errors are specified to cause some
trappable exception rather than an outright crash, but
surely that's not a part of "object-orientedness" -- it's
just a nice feature that a language can specify (with a
cost, that can be huge, for its ports to platforms without
good hardware/OS support for trapping certain kinds of
invalid operations, to be sure).

But the point is that "having to know about how a
class is implemented" is not connected with the ability
to free/dereference/change an object, its fields, etc.
Rather, such freedom is ensured to client-software by
a mix of language-behaviour *and semantic contract
of the class being used*. Not *implementation*, note:
*contract*. An important difference.


> like the difference between saying "indexing off an array throws an
> exception" and "indexing off an array overwrites the interrupt vectors
> with random garbage". The first is still "safe", while the second isn't.

But neither issue defines whether a language is OO.

> > but tying the "OO"ness
> > of a language to it seems to be a typical case of "hidden agenda"
>
> Uh... As I said, that was an Alan Kay quote. If you don't know, Alan Kay
> was arguably the guy who *invented* OO. It's hard to see how he could
> have a hidden agenda against languages that weren't invented at the time.

He co-invented Smalltalk, but Simula had been around for a while (and
Kay had used it at University). And, is the quote dated from before the
birth of yet other languages which I'd call object-oriented? It's so
suspiciously similar to the FAQ that some guy from Geodesic used (no
more than 10 years ago, I'm sure, and _without_ quoting Kay) in a
not-so-hidden attempt to sell his company's product (rather than just
making his cases, for GC and against the RC approach to GC, he
claimed [and didn't prove] that GC was *NECESSARY* [note: there is
a HUGE gulf between "extremely useful, handy, and all-around good",
and "necessary"; I get peeved very easily by such attempts to play
semantic legerdemain...] to object-orientation).


> > "subtyping exists" is another contentious issue, of course; if I
> > removed "extends" from Java, requiring all polymorphous use to
> > go through "interface" and "implement" (as, e.g., Coad has suggested
> > as best design practice), would I "totally lose OO-ness"? Are
> > languages based on 'prototype objects' (with no classes at all)
> > "not OO"? Pah...
>
> Well, Kay figured it was dynamic dispatch and GC. I personally think you
> also need some sort of "class" concept, but I'm ambivalent on how much
> inheritance you'd need. I've never used a system with classes and without
> inheritance.

See, e.g., http://www-staff.mcs.uts.edu.au/~cotar/proto/shah.txt
"""
The terms class-based approach and object-oriented paradigm have
become almost synonymous in the computer science community and
many people think of every object-oriented system in terms of the
class-based approach. ... in fact the prototype-based approach is
simpler, and more flexible and powerful than the class-based approach.
"""

You may disagree, but I think the onus of proof is definitely on you.

Why do you think the "Self" programming language is not object
oriented, although it self-describes as such and is generally accepted
as such? http://www.sun.com/research/self/language.html for more.

I don't mind the class-ic approach to OO, but prototype-based OO
would also have its advantages, and it seems peculiar to rule it
out as being OO at all because it lacks 'some sort of class concept'.

Dynamic dispatch, polymorphism, is what I'd call *the* one and
only real discriminant of 'OO'.


Alex

Hartmann Schaffer

unread,
Sep 1, 2000, 10:13:32 PM9/1/00
to
In article <39AFD71D...@san.rr.com>, Darren New <dn...@san.rr.com> wrote:
> ...

>Uh... As I said, that was an Alan Kay quote. If you don't know, Alan Kay
>was arguably the guy who *invented* OO. It's hard to see how he could have a
>hidden agenda against languages that weren't invented at the time.

When? Before Simula?

> ...

Hartmann Schaffer

Grant Griffin

unread,
Sep 2, 2000, 3:38:00 AM9/2/00
to
lob...@my-deja.com wrote:
>
...

> 3) My particular favourite is Icon (see www.cs.arizona.edu/icon).
> Particular aspects of some interest to Python would be goal-directed
> evaluation [saves helluva lot of lines of code...] and, less
> importantly, generators and coexpressions. I have no idea how difficult
> it would be to move these ideas to Python. But I (and many other
> people) can vouch that these mechanisms are very effective in writing
> very 'large' programs in a surprisingly small number of lines. I must
> stress here that this happens not the way Perl does it - but rather
> like in functional languages. That is: because of the built in
> mechanisms of evaluation. So it is readable :-), not just terse.

Speaking as an experienced programmer who has no idea what all that
schtuff is, I personally hope Python continues to do without it.
Although Python isn't explicitly intended as a "learning language", I
think one of its great strengths is its status as "executable pseudo
code". An experienced programmer can pretty-much "learn" Python in one
day, via Guido's tutorial. (Of course, like any complex system, Python
might still take years to "master".) To the extent that very powerful,
very elegant constructs creep into the language (which--remember--is
only now getting augmented assignment operators!), Python will become
harder to learn and to read.

Python has never placed any value on terseness (as far as I can tell).
This is a necessary result of one of Tim's most essential "Python
Philosophy" points: "There should be one--and preferably only
one--obvious way to do it."

minimalism-of-code-comes-at-the-expense-of-maximalism-of-constructs;
-this-is-the-fundamental-pathogen-of-Perl-ly y'rs,

=g2
--
_____________________________________________________________________

Grant R. Griffin g...@dspguru.com
Publisher of dspGuru http://www.dspguru.com
Iowegian International Corporation http://www.iowegian.com

Aahz Maruch

unread,
Sep 2, 2000, 10:47:24 AM9/2/00
to
In article <8opehp$ret$1...@nnrp1.deja.com>, <lob...@my-deja.com> wrote:
>
>Now, what about goal directed evaluation :-) ?

It could be done with Stackless, but I don't think anyone is likely to
do it any time soon.

Just van Rossum

unread,
Sep 2, 2000, 11:55:40 AM9/2/00
to
Grant Griffin wrote:
>
> lob...@my-deja.com wrote:
> >
> ...
> > 3) My particular favourite is Icon (see www.cs.arizona.edu/icon).
> > Particular aspects of some interest to Python would be goal-directed
> > evaluation [saves helluva lot of lines of code...] and, less
> > importantly, generators and coexpressions. I have no idea how difficult
> > it would be to move these ideas to Python. But I (and many other
> > people) can vouch that these mechanisms are very effective in writing
> > very 'large' programs in a surprisingly small number of lines. I must
> > stress here that this happens not the way Perl does it - but rather
> > like in functional languages. That is: because of the built in
> > mechanisms of evaluation. So it is readable :-), not just terse.
>
> Speaking as an experienced programmer who has no idea what all that
> schtuff is, I personally hope Python continues to do without it.

I have no idea what "goal-directed evaluation" is, but generators
and co-routines are pretty straightforward. They're not any harder
to understand and use than threads. Co-routines are in fact a lot
like threads, except that context switching is explicit. One
co-routine simply tells another one "ok, now *you* continue, while
I get some rest". The best thing about them is that you can pass a
value from one co-routine to the other when you switch. In terms of
threads: if you have two threads working together, but one is
always just waiting for the other to deliver something, you really
have two co-routines. Except that when you'd write that using
actual co-routines, you minimize two things: context switching
overhead (almost zero with a proper co-routine implementation) and
thread synchronization overhead (say, the overhead of a queue
object). Your code will look cleaner while it will be faster at the
same time.
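This thread predates them, but the generator methods later added to Python (`.send()`, in 2.5) give a small runnable flavour of the value-passing explicit switch described above; the names here are illustrative.

```python
def consumer():
    # One "co-routine": waits for a value, hands back a running total.
    total = 0
    while True:
        item = yield total   # pause here until the other side switches to us
        total += item

def produce(items):
    c = consumer()
    next(c)                  # prime the consumer up to its first yield
    # Each send() is an explicit context switch that also passes a value.
    return [c.send(i) for i in items]

print(produce([1, 2, 3]))    # [1, 3, 6]
```

There is no thread scheduler and no queue: control moves only when one side explicitly hands it (plus a value) to the other, which is the overhead saving the post describes.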

And of course, Stackless Python makes it possible..

Just

Suchandra Thapa

unread,
Sep 2, 2000, 3:18:00 PM9/2/00
to
Tim Roberts <ti...@probo.com> wrote:
>sst...@harper.uchicago.edu (Suchandra Thapa) wrote:
>>With python's lack of referential
>>transparency, this will become a nightmare. E.g.
>> def swap(x, y):
>>     t = y[x - 1]
>>     y[x - 1] = x
>>     x = t
>>     print x, ',', y[x - 1]
>>
>> x = 3
>> y = [3, 2, 1]
>> swap(x, y)
>>
>>prints 3 , 3 which is probably not the programmer intended.
>
>When I type this into Python 1.5.2, I get 1 , 3. This is what I expected.
>The outer value of x is not altered, although y is. Is that what you're
>referring to?
>
> >>> swap(x,y)
> 1 , 3
> >>> x,y
> (3, [3, 2, 3])

The example I gave shows the types of problems that would happen
if Python had lazy evaluation. Currently Python has eager evaluation, so
the code works correctly. The discussion was in the context of extensions to
Python.
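For reference, the quoted swap example rendered as runnable Python 3, showing today's eager, call-by-object-reference behaviour: the list is mutated in place, while rebinding the parameter x leaves the caller's x untouched.

```python
def swap(x, y):
    t = y[x - 1]
    y[x - 1] = x     # mutates the caller's list in place
    x = t            # rebinds only the local name x
    print(x, ',', y[x - 1])

x = 3
y = [3, 2, 1]
swap(x, y)           # prints: 1 , 3
print(x, y)          # prints: 3 [3, 2, 3]
```

This reproduces the "1 , 3" and `(3, [3, 2, 3])` results Tim Roberts reported from Python 1.5.2.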

Suchandra Thapa

unread,
Sep 2, 2000, 3:17:58 PM9/2/00
to
Manuel Gutierrez Algaba <th...@localhost.localdomain> wrote:
>On 31 Aug 2000 Neel Krishnaswami <ne...@brick.cswv.com> wrote:
>>
>>No, the distinguishing features of OO are that a) subtyping exists,
>>and b) the types of objects are used to do dynamic dispatch.
>
>If you think about it, Python is rather "functional" (typeless)

Functional languages are usually defined by the lack of state in
the code. In other words, you cannot, or are heavily discouraged from,
modifying the values of variables. However, the most widely used
functional languages (Haskell and ML) have very strong type systems and
are more demanding about types than languages like C/C++ and Java. For
example in ML, every statement has a type and each branch of an if/else
clause needs to have the same type.
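The contrast is easy to see from the Python side: in (untyped) Python the two branches of a conditional may happily produce different types, which is exactly what an ML-style checker would reject at compile time. A small illustrative sketch:

```python
def pick(flag):
    # The two branches return different types -- fine in Python,
    # a compile-time type error in ML or Haskell.
    if flag:
        return 42             # int
    else:
        return "forty-two"    # str

print(type(pick(True)).__name__, type(pick(False)).__name__)  # int str
```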

Manuel Gutierrez Algaba

unread,
Sep 2, 2000, 3:36:43 PM9/2/00
to
On Sat, 02 , Suchandra Thapa <sst...@harper.uchicago.edu> wrote:
>>
>>If you think about it. python is rather "functional" (typeless)
>
> Functional languages are usually defined by the lack of state in
>the code. In other words, you can not or are heavily discouraged from
>modifying the values of variables. However, the most widely used
>functional languages (Haskell and ML) have very strong type systems and
>are more demanding about types than languages like c/c++ and java. For
>example in ML, every statement has a type and each branch of an if/else clause
>needs to have the same type.

For me functional language equals LISP, Common LISP. The issue
of types was from the start a matter of the size of data in memory. Then
mathematicians came in and saw it was nasty, so they founded
theories on it.

Probably, Haskell and ML have got infested with that plague.

I see functional as "flow", while OO is "matter". The duality
energy(flow)-matter rules in the physical world and in IT. Types could be
the size of quanta, which is the least interesting part of the story.

As long as a language "lets the data flow" (without blocking it in
a struct or object) the language behaves in a functional manner.
Types seem stupid attempts to give shape to a liquid, to a flow.
It's the same if the "pipe" is round or square, the liquid/flow
adapts to it, but you have bad times when joining different pipes.

---
MGA

lob...@my-deja.com

unread,
Sep 2, 2000, 6:25:30 PM9/2/00
to
In article <39B122CB...@letterror.com>,
ju...@letterror.com wrote:
[snip]

> I have no idea what "goal-directed evaluation" is
>
apologies. here's a short excerpt from

Tom Christopher's On-Line Icon Book
http://www.toolsofcomputing.com/iconprog.pdf

Goal-directed evaluation. The most difficult aspect of Icon for
programmers familiar with other languages is its goal-directed
evaluation, that is to say, its backtracking control. Most languages
use a Boolean data type for controlling the flow of execution. In most
languages, relational operators produce Boolean values which are tested
by if’s and while’s. Icon is completely different. We’ll spend a lot of
time explaining how Icon works in Chapter 3. Briefly, in Icon
expression evaluation can succeed or fail. If the expression succeeds,
it produces a value. If it fails, control backs up to an expression it
evaluated earlier to see if it will generate another value. If that
expression does give another value, control starts forwards again to
see if the later expression can succeed now.
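The backing-up described here can be loosely mimicked (this is a rough analogy, not Icon's actual semantics) with Python generators: each generator can produce several candidate values, and when a later test fails, iteration effectively "backs up" to ask an earlier generator for another one.

```python
def upto(n):
    # roughly like Icon's (1 to n): an expression with many results
    for i in range(1, n + 1):
        yield i

# Find pairs (i, j) with i*j == 12: when the test fails, control returns
# to the generators for another candidate, generate-and-test style.
pairs = [(i, j) for i in upto(6) for j in upto(6) if i * j == 12]
print(pairs)   # [(2, 6), (3, 4), (4, 3), (6, 2)]
```

Icon's goal direction is pervasive and implicit, whereas here the backtracking is confined to the comprehension; but the generate/test flavour is similar.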

Suchandra Thapa

unread,
Sep 2, 2000, 7:27:13 PM9/2/00
to
Manuel Gutierrez Algaba <th...@localhost.localdomain> wrote:
>For me functional language equals to LISP, common LISP. The issue
>about types was from start a matter of size of data in memory. Then
>mathematicians came in and they saw it was nasty, so they founded
>theories on it.

Lisp, especially Common Lisp, isn't really considered all that
functional. Even the people in comp.lang.lisp would probably agree with
that.

>Probably, Haskell and ML have got infested with that plague.

Plague? With an expressive type system you can readily guarantee that
your variables satisfy constraints that the program requires them to. So
instead of putting asserts or testing to make sure that a variable has the
proper range of values, you can express these same conditions within the
type system and not worry about it in the code. Having this stuff done
at compile time is a bonus since it means that your program does not have
to worry about type checking at run time.

>I see functional as "flow", while OO as "matter". The duality
>energy(flow)-matter rules in physical world and IT. Types could be
>the size of quanta, which is the less interesting part of the story.

Actually, the size of the quanta is probably the more interesting
part since it corresponds to a lot of the physical phenomena that we
see. For example, knowing that the electron orbitals of a molecule or
atom are discrete due to the wave properties of the electron is useful, but
knowing the size of the gaps between the orbitals is even more useful since
it allows us to identify compounds by various spectroscopic methods. In a
slightly more germane example, the bandgap in semiconductors would be
equivalent to the size of your quanta. Knowing the size of the band gap
determines how useful the semiconductor is for making chips, after all it
wouldn't be very helpful for energy on the order of kT to bump electrons
into the conduction band.

>As long as a "language let the data flow" (without blocking in
>a struct or object) the language behaves in a functional manner.
>Types seems stupid attempts to give shape to a liquid, to a flow.
>It's the same if the "pipe" is rounded or square, the liquid/flow
>adapts to it, but you have bad times when joining different pipes.

Functional languages usually imply that you write programs using
functions in behaviour similar to mathematical ones. This basically
implies that you can't use state in your programs since a mathematical
function's output is determined solely by its input. Incidentally, the
shape of the pipe is very important to the fluid. Square pipes would
induce turbulence and cause more frictional losses at the same velocity
than a round pipe. The end result would be that you would need a more powerful
pump to get the liquid through the pipe at the same velocity.
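The "mathematical function" point above can be made concrete with a minimal Python sketch (illustrative names): a pure function's output depends only on its input, while a stateful one depends on hidden mutable state.

```python
def pure_add(x, y):
    return x + y            # same inputs -> same output, always

_total = 0
def stateful_add(x):
    global _total
    _total += x             # result depends on hidden, mutated state
    return _total

print(pure_add(2, 3), pure_add(2, 3))        # 5 5
print(stateful_add(2), stateful_add(2))      # 2 4
```

Pure-functional languages push you toward the first style, which is why "no mutation" is the usual definition.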

Neel Krishnaswami

unread,
Sep 3, 2000, 9:52:53 AM9/3/00
to
Suchandra Thapa <sst...@harper.uchicago.edu> wrote:
> Neel Krishnaswami <ne...@brick.cswv.com> wrote:
> >Suchandra Thapa <sst...@harper.uchicago.edu> wrote:
> >>
> >> Actually I think clean is a pure functional oo language. It's just
> >> that being oo seems to imply that you have some state to your
> >> objects. Not sure anyone would get around this paradox.
> >
> >No, the distinguishing features of OO are that a) subtyping exists,
> >and b) the types of objects are used to do dynamic dispatch. Mutable
> >state is not necessary -- in both Dylan and Ocaml, it's common to
> >write code in an "object-functional" style, with lots of objects but
> >no mutation.
>
> I forgot about the existence of OCaml when I was writing the post
> but ML has imperative features and so is not a pure functional language.

I usually think of any of the ML/Scheme clan as functional languages,
and specify "pure" when I want to limit myself to Clean or Haskell.

> The example you gave just illustrated dispatch on types.

Specifically it illustrated dispatch based on *subtypes*; that is,
it looked at the /runtime/ type of the object to decide which method
body to call. So in a typed Python, take the function

def squarelen(lst: List) -> Integer:
    return len(lst) * len(lst)

If you pass squarelen() a SizedList, then it will call the
SizedList.__len__ method, not the List.__len__ method.

> The compiler has no problem determining which to use at compile
> time. I don't see what requires leaving the choice of len until
> runtime.

We don't know the precise runtime type of the object. In the
squarelen example, all you can tell about the parameter lst is
that it is a List, or an instance of one of its subtypes, like
SizedList.
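The same point is runnable in today's (untyped) Python, with hypothetical List/SizedList classes standing in for the example: which `__len__` body runs is decided by the runtime type of the argument, not by anything known where squarelen is defined.

```python
class List:
    def __init__(self, items):
        self.items = list(items)
    def __len__(self):
        return len(self.items)

class SizedList(List):
    def __len__(self):
        # overrides List.__len__; deliberately different, to expose dispatch
        return len(self.items) + 100

def squarelen(lst):
    return len(lst) * len(lst)   # which __len__ runs depends on runtime type

print(squarelen(List([1, 2, 3])))       # 9
print(squarelen(SizedList([1, 2, 3])))  # 10609
```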

> Incidentally, what do you mean by dynamic dispatch? ML and its
> dialect OCaml are statically typed and the compiler chooses which
> function to call at compile time. Would you consider OCaml to be OO
> then?

The OO extension to Ocaml makes sure that a method exists for every
message send, but it doesn't determine which method to call at compile
time. So no "message not understood" errors can happen, but the
specific method body is chosen based on the runtime type. This is what
I mean by dynamic dispatch. It's a form of overloading constrained by
the type.

> But getting back to OO, I'm not sure how the essentials of OO that
> you put forward mesh with a pure functional approach. In other
> words suppose I had a method bar belonging to class foo. How would
> I distinguish using foo.bar from using another function baz that
> took the same parameters and returned the same results?

I don't think I understand this question.

> It's possible to create such a baz by cutting and pasting the
> function body from bar. I don't see what using objects gain in this
> context aside from some syntactic sugar. Also what would the use be
> in declaring two objects of the same class? They wouldn't behave
> any differently.

Objects with different internal values would behave differently. It's
just that instead of modifying an object we create new ones. Eg,
imagine that every Python object were immutable, like integers and
strings.
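A small sketch of that "object-functional" style (hypothetical class, not from the thread): an immutable point whose "mutators" return new instances instead of changing state.

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def moved(self, dx, dy):
        # no mutation: return a fresh Point, leaving self untouched
        return Point(self.x + dx, self.y + dy)

p = Point(1, 2)
q = p.moved(3, 4)
print(p.x, p.y)   # 1 2  -- the original is unchanged
print(q.x, q.y)   # 4 6  -- the "modified" value is a new object
```

Two Points with different internal values behave differently under further `moved` calls, which is the sense in which stateless objects still carry behaviour.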


Neel

Neel Krishnaswami

unread,
Sep 3, 2000, 10:11:20 AM9/3/00
to
Alex Martelli <ale...@yahoo.com> wrote:
>
> "subtyping exists" is another contentious issue, of course; if I
> removed "extends" from Java, requiring all polymorphous use to
> go through "interface" and "implement" (as, e.g., Coad has suggested
> as best design practice), would I "totally lose OO-ness"?

No, because you still have subtyping. You've taken away inheritance,
but that's not the same thing (see Sather for a rather graphic
illustration of the difference).

> Are languages based on 'prototype objects' (with no classes at all)
> "not OO"? Pah...

You still have types as a semantic concept in Self and Cecil, though.
A type object is one which other objects name as a delegate; grab all
the objects that delegate to it and you have all its direct instances.

The fact that an object's type can change over its lifetime isn't a
disqualification. Note that this isn't that foreign to Python, either:
you can change an object's class by setting the __class__ field, and
you can change the inheritance graph by setting __bases__.

>>> class Foo: pass
...
>>> class Bar: pass
...
>>> x = Foo()
>>> isinstance(x, Foo)
1
>>> isinstance(x, Bar)
0
>>> x.__class__ = Bar
>>> isinstance(x, Foo)
0
>>> isinstance(x, Bar)
1
>>> Bar.__bases__ = (Foo,)
>>> isinstance(x, Foo)
1
>>> isinstance(x, Bar)
1


Neel

Alex Martelli

unread,
Sep 3, 2000, 2:52:43 PM9/3/00
to
"Neel Krishnaswami" <ne...@brick.cswv.com> wrote in message
news:slrn8r53ki...@brick.cswv.com...
[snip]
> > If by subtyping you mean the java 'implements', then, yes. But to
> > call the signature-based polymorphism of Python "subtyping" is
> > starting to stretch the meaning of the word -- yet, it's at least
> > equivalent to what interface+implements lets you do, modulo
> > the compile-time checks.
>
> Not really. If you create a class (call it Foo) with all the methods
> of an interface (call it Bar) but you don't specifically declare the
> "implements" relation, then you can't use Foo in the place of a Bar.
> The classes that implement an interface are all subtypes of the type
> the interface creates, and no other classes are subtypes of that
> interface.

Right -- in Java; in Python, OTOH, you work with signature-based
commonality (no specific declaration of 'implements'). I'm saying
that signature-based commonality lets you do _at least as much_
as subtyping-via-implements, except the compile-time checks. So,
OO does *not* require subtyping (don't start claiming that compile
time checking is a prereq of OO, or you'd end up excluding CLOS,
Smalltalk, ... :-) -- QED.


> > I say that OO == polymorphism, aka, runtime dispatching. If it's
> > done through subtyping, signatures, prototypes, etc, is quite a
> > secondary issue.
>
> Mostly, yes. I would want to distinguish parametric polymorphism (a la
> ML) from the dynamic method lookup in OO languages. (It's possible to
> combine both, of course, as in Python and other dynamically typed
> languages.)

Compile-time parametric types (as in ML, Sather, Eiffel, C++) are
indeed different from runtime dispatching (which need not imply
much in terms of 'dynamic lookups', in languages sufficiently
rigid to allow vtable-like implementation). In the C++ and other
communities, 'polymorphism' equates to dynamic dispatching,
while 'genericity' is used for compile-time parameterization.


> > Similarly, say the Cecil authors, "Cecil uses a classless
> > (prototype-based) object model" (e.g., cf
> > http://www.objs.com/x3h7/cecil.htm). And one must not confuse
> > argument-specializers, with type-declarations (the latter do not
> > affect method-lookup in Cecil).
>
> What I mean by "type" is equivalent to the Cecil "isa" keyword. This
> is just the descendant-of relation.

So, it's not what Cecil calls type (which is far closer to the classic
notion of 'type'!). And, since in the real world being a descendant
IS a rather permanent situation, I don't particularly like naming
such a transient relation as 'descendant-of'; 'delegates-to' seems
far closer to the mark.

> I disagree quite strongly. The notion of type is extremely useful even
> in prototype based languages. It allows us to formalize the notion of
> (for example) mode switching, which is an extremely elegant style of
> programming.

The notion of delegation is indeed very useful; what I'm disputing
is that it's particularly useful to confuse it with (what Cecil calls,
and is classically known as) 'type'.

Many Pythoneers dislike the type/class distinction that holds in
Python, and it may be a bother when writing a C extension, but
I think it's useful at the Python-programming level. An instance's
class is not its type...


Alex

Aahz Maruch

Sep 3, 2000, 3:53:41 PM
In article <8ou6q...@news2.newsguy.com>,

Alex Martelli <ale...@yahoo.com> wrote:
>
>Many Pythoneers dislike the type/class distinction that holds in
>Python, and it may be a bother when writing a C extension, but
>I think it's useful at the Python-programming level. An instance's
>class is not its type...

That last sentence is worthy of being preserved.

Alex Martelli

Sep 4, 2000, 12:38:13 PM
"Suchandra Thapa" <sst...@harper.uchicago.edu> wrote in message
news:slrn8r5lof....@hepcat.uchicago.edu...
[snip]

> >Objects with different internal values would behave differently. It's
> >just that instead of modifying an object we create new ones. Eg,
> >imagine that every Python object were immutable, like integers and
> >strings.
>
> But there wouldn't be any link between class objects and their methods in
> a pure functional language aside from a logical one in the programmer's mind.
> Methods in a class would have to explicitly be passed class objects. In effect
> you're essentially working with structs and functions on them.

Not so. Conceptually, you can imagine that somewhere exists a
correspondence table mapping, for each function F with arity X,
(X-tuple of types) -> specific-body-of-function. The dispatcher
(I'm imagining multidispatch for generality) roots through the
table and finds the best match between the tuple of the actual
arguments' runtime types and the tuples-of-types in the table,
and calls that specific-body.

(You'll have to have rules guaranteeing that one and only one
best-match always exists, if you want this process to always
result in some call to a specific-body; there are several ways
one could do that; or else, ambiguity in best-matching could
raise an exception, in a more dynamic/fluid, less compile-time
typesafe, language).

The fact that the methods thus dispatched are explicitly passed
the objects they work on is no problem -- that's what you have
in Python today ("foo.bar(baz)" to call, and foo's class has a
"def bar(self,baz)" to explicitly receive the 'self', i.e. foo).
Multi-dispatching basically requires this, anyway.

There is no need for objects to be mutable for this style of
programming to be quite handy & practical, particularly with
some language support for the dispatcher part (so that the
various polymorphic versions of F can be specified at suitable
different points in the program)...
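The imagined dispatcher can be sketched in a few lines of Python. This is only an illustration of the idea described above, not a real library: all the names here (`defmethod`, `call`, `dispatch_table`) are made up, and exact-match lookup stands in for the best-match search, which would need the uniqueness rules just mentioned.

```python
# Hypothetical sketch of a multidispatch table: a mapping from
# (function name, tuple of argument types) to a specific-body.
dispatch_table = {}

def defmethod(name, types, body):
    # register one specific-body for one tuple of types
    dispatch_table[(name, types)] = body

def call(name, *args):
    # exact-match lookup on the actual runtime types; a real
    # dispatcher would search for the *best* match instead
    key = (name, tuple(map(type, args)))
    try:
        body = dispatch_table[key]
    except KeyError:
        raise TypeError("no applicable method for %r" % (key,))
    return body(*args)

defmethod('add', (int, int), lambda a, b: a + b)
defmethod('add', (str, str), lambda a, b: a + " " + b)

print(call('add', 2, 3))          # dispatches on (int, int)
print(call('add', 'spam', 'eggs'))  # dispatches on (str, str)
```

Note that each specific-body is still explicitly passed the objects it works on, exactly as described in the text.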


Alex

Alex Martelli

Sep 5, 2000, 3:53:42 AM
"Manuel Gutierrez Algaba" <th...@localhost.localdomain> wrote in message
news:slrn8r8cl...@localhost.localdomain...
[snip]
> So types go against "functional".

Most functional languages have (good) typesystems -- Haskell's being
definitely worth studying, for example -- so this claim is peculiar
on the face of it (of course, Erlang shows you don't *need* [static]
types in a functional language, but in the same way Haskell shows
they most definitely don't *hurt* the functional aspects, either!).

To continue your pipes/fluids analogy: not all fluids are safe to
run in every kind of pipe. A language with types basically *lets*
you state that (some *make* you state that, but "good" typesystem
implies some measure of type-inferencing, by definition of 'good':-).

A typeless language just doesn't model that particular aspect of
safety, relying on runtime checks for it exclusively; a dynamically
typed language (more common than 'typeless'... BCPL is an example
of 'typeless', and it's hard to find many others...) may structure
the checking for you (to some extent) though it's still runtime.

So, back to the analogy: I declare (or the system infers) that
this here pipe is made of pressed cardboard (using 'type' analogy
to 'what matter is it made of'). A very good type system will infer
that it's unsafe to route a fluid through it if that fluid's "type"
is, e.g., concentrated sulphuric acid; a decent one will let me
_state_ that if it's unable to infer it; a maybe-too-rigid one may
*force* me to state what families of fluids are allowed to pass
through this pipe. A typeless system (and to some extent a fully
dynamically-typed one) is equivalent to a plumbing-modeling system
that does not model materials-issues -- which doesn't mean it's
*useless*, absolutely not: it's still very useful to model a fluid
flow system even in a more abstract way, without consideration of
materials' properties; but it would be silly to claim that being
able to model materials as well (or, even, being _forced_ to do so)
makes a fluid-flow simulation system useless or unnatural.

The analogy can be stretched -- a big risk for naive users of
good typesystems (or anal-retentive ones) is the *illusion of
safety* they may engender; by offering some guaranteed checks
about fluid-material to pipe-material compatibility, they may
delude the naive user into believing fluid-to-pipe compatibility
checks may be relaxed, tests may be skimpier, etc. No way.

Water *as a material* may be ok for a pipe whose material is
pressed-cardboard *from a purely materials-science POV*, but
compatibility still needs to be tested and checked: if the
water's temperature is 90 Celsius, or the cardboard's thickness
is 0.1 mm, big runtime trouble is still in store if that fluid
is ever allowed to flow in that pipe. The designer still needs
to design-in the appropriate checks, and run lots of tests...

Somewhat-Haskellishly:

myPipe :: (Noncorrosive a) => [a] -> [a]
myPipe [] = []
myPipe (x:xs)
  | temperature x < 90  = x : myPipe xs
  | temperature x >= 90 = error "too hot!"

Here, I could rely on an assumed 'Noncorrosive' type class to
ensure against, e.g., sulphuric acid ever being run through
my poor little pipe -- but, since in the analogy types model
materials (immutable characteristics of certain objects), the
temperature-test had to be performed explicitly (temperature
is an "accident" in the Aristotelic sense, i.e. an aspect that
changes from time to time, from case to case, while material
is here modeled as "substance", intrinsic to an object's
nature/identity).
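For comparison, here is a rough Python analogue of the Haskell sketch (the `Noncorrosive` and `Water` classes are hypothetical names for this illustration): the "material" constraint becomes a runtime isinstance check rather than a compile-time one, while the temperature "accident" remains an explicit guard in both languages.

```python
# Hypothetical Python analogue of the Haskell pipe sketch.
# Noncorrosive plays the role of the type class ("substance");
# temperature stays an explicit runtime check ("accident").

class Noncorrosive:
    pass

class Water(Noncorrosive):
    def __init__(self, temperature):
        self.temperature = temperature

def my_pipe(fluids):
    out = []
    for x in fluids:
        if not isinstance(x, Noncorrosive):
            # in Haskell this is rejected at compile time
            raise TypeError("corrosive fluid in a cardboard pipe!")
        if x.temperature >= 90:
            # the explicit guard, same as in the Haskell version
            raise ValueError("too hot!")
        out.append(x)
    return out

cool = [Water(20), Water(45)]
print(len(my_pipe(cool)))
```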

I may have come (after decades of using many different
languages, mostly rather typed and pretty anal-retentive
about it) to a personal preference for dynamic typing, but
that doesn't stop me from recognizing that being able to
diagnose SOME errors at compile-time (and thus prevent
them) also has its use...! In functional languages just
as well as in imperative or mixed ones.


Alex

Robin Becker

Sep 5, 2000, 6:12:15 AM
In article <8p0gda$d7l$1...@panix3.panix.com>, Aahz Maruch <aa...@panix.com>
writes
...
>Note that Tim said nothing about coroutines or threads above. We were
>talking strictly about generators, which are pretty easy to do -- we
>just need to agree on an implementation and interface for integrating
>them into the language.
I would still be interested in seeing how one goes from an enumerative
routine to a generator in current python. I 'claim' that this is easy
with 'suspend', but don't grok Structure and Interpretation of Computer
Programs by Abelson and Sussman too well so I find the other approaches
quite hard. Perhaps programming all those matrix algorithms in Fortran
has left me brain damaged.
--
Robin Becker

Alex Martelli

Sep 5, 2000, 7:47:32 AM
"Robin Becker" <ro...@jessikat.fsnet.co.uk> wrote in message
news:pPJxiEA$bMt5...@jessikat.demon.co.uk...

I'm not sure a generator can be made in Python by just 'wrapping' an
"enumerative routine" -- you'd need to rework the latter to add
explicit state/resumption, I think. But it seems to me that Aahz
was not talking of Python, the language, as it stands today -- he
is, after all, talking of 'agreeing on implementation and interface'
to have this _addition_ to the language. E.g., a new hypothetical
Python might have a new keyword, similar to return, that returns
a pair -- the result being returned, and an 'internal state' needed
for resumption at the Python instruction just after the newreturn;
the latter could be a callable-object (to be called without args).

That might let generators be implemented without, however, allowing
the full generality of coroutines. I dunno, I have not looked at
the current Python sources, but I imagine this is the sort of thing
they may be mooting...?
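One way the rework of an "enumerative routine" into a generator looks in plain Python, with no new keyword, is to make the loop state explicit in a class, so each call resumes where the last one left off. A small sketch (all names here are illustrative):

```python
# Enumerative routine: produces all its results at once.
def count_squares(n):
    result = []
    for i in range(n):
        result.append(i * i)
    return result

# The same computation, reworked with explicit state/resumption:
# each call to next() picks up where the previous call stopped.
class SquareGen:
    def __init__(self, n):
        self.i = 0
        self.n = n
    def next(self):
        if self.i >= self.n:
            raise IndexError  # exhausted, like running off a sequence
        value = self.i * self.i
        self.i = self.i + 1
        return value

g = SquareGen(4)
out = []
try:
    while 1:
        out.append(g.next())
except IndexError:
    pass
print(out)  # same values count_squares(4) would build eagerly
```

The hand-rolled version produces one value per call instead of the whole list at once, which is exactly the state/resumption rework described above.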


Alex

Robin Becker

Sep 5, 2000, 9:26:44 AM
In article <8p2nj...@news1.newsguy.com>, Alex Martelli
<ale...@yahoo.com> writes
....

>> I would still be interested in seeing how one goes from an enumerative
>> routine to a generator in current python. I 'claim' that this is easy
>> with 'suspend', but don't grok Structure and Interpretation of Computer
>> Programs by Abelson and Sussman too well so I find the other approaches
>> quite hard. Perhaps programming all those matrix algorithms in Fortran
>> has left me brain damaged.
>

I guess I'm not following the upcoming changes well enough. If
suspend/resume semantics are available then I agree we can do generators
easily enough, and presumably a lot of other things that are presently
quite hard.

>I'm not sure a generator can be made in Python by just 'wrapping' an
>"enumerative routine" -- you'd need to rework the latter to add
>explicit state/resumption, I think. But it seems to me that Aahz
>was not talking of Python, the language, as it stands today -- he
>is, after all, taking of 'agreeing on implementation and interface'
>to have this _addition_ to the language. E.g., a new hypothetical
>Python might have a new keyword, similar to return, that returns
>a pair -- the result being returned, and an 'internal state' needed
>for resumption at the Python instruction just after the newreturn;
>the latter could be a callable-object (to be called without args).
>
>That might let generators be implemented without, however, allowing
>the full generality of coroutines. I dunno, I have not looked at
>the current Python sources, but I imagine this is the sort of thing
>they may be mooting...?
>
>
>Alex
>
>
>

--
Robin Becker

Manuel Gutierrez Algaba

Sep 5, 2000, 12:21:05 PM
On Tue, 5 Alex Martelli <ale...@yahoo.com> wrote:
>"Manuel Gutierrez Algaba" < wrote in message

>news:slrn8r8cl...@localhost.localdomain...
> [snip]
>> So types go against "functional".
>
>Most functional languages have (good) typesystems -- Haskell's being
>definitely worth studying,

I have the doc printed, ready to be studied, but Haskell syntax
is almost as complex as Python 3000 syntax, so it's in the
waiting queue.

>To continue your pipes/fluids analogy: not all fluids are safe to
>run in every kind of pipe. A language with types basically *lets*
>you state that (some *make* you state that, but "good" typesystem
>implies some measure of type-inferencing, by definition of 'good':-).

A type is a way to prevent fluids from going through the wrong pipes.
But the question is what happens when we are preventing it all the time.
We end up building extremely limited functions, very robust...
but not so robust if you have to write several versions of
the same function to process different types.

In the end, you get a lot of particular things, of particular
redirections for determined fluids. It's better to have general
treatments for general functionality. If I focus my effort on
general functionality, what I'm doing is focusing on functionality.
If I focus my effort on types, what I'm doing is a low form
of prerequisites for every function, wasting the possibilities
some functions may provide. If I think in terms of types, it's
harder to think in terms of general functionality, since we're
focusing all the time on particular fluids.

What's more? Ah yes, the cast. If you've ever programmed
in Java, you'll realize that 50% of the time you're doing
casts (type conversions), ...


>that does not model materials-issues -- which doesn't mean it's
>*useless*, absolutely not: it's still very useful to model a fluid
>flow system even in a more abstract way, without consideration of
>materials' properties;

Abstract way === Functional way !!!!

> but it would be silly to claim that being
>able to model materials as well (or, even, being _forced_ to do so)
>makes a fluid-flow simulation system useless or innatural.
>

>Somewhat-Haskellishly:
>
>myPipe :: (Noncorrosive a) => [a] -> [a]
>myPipe [] = []
>myPipe (x:xs)
> | temperature x < 90 = x : myPipe xs
> | temperature x >= 90 = error "too hot!"
>

It seems to me that everyone speaks Haskell in c.l.p!!

>I may have come (after decades of using many different
>languages, mostly rather typed and pretty anal-retentive
>about it) to a personal preference for dynamic typing, but
>that doesn't stop me from recognizing that being able to
>diagnose SOME errors at compile-time (and thus prevent
>them) also has its use...! In functional languages just
>as well as in imperative or mixed ones.
>

Having types is like trying to run a 100 m hurdles race with
a heavy iron chain on your feet. Types are useful, but
I don't want the extreme lack of freedom.

---
MGA

Darren New

Sep 5, 2000, 1:45:24 PM
Alex Martelli wrote:
> That's the function of analysis and design: "defining the right `should`"
> for each program. Preconditions, postconditions, invariants.

Right. But not the job of a language designer. That's my point.

> You can set things up (depending on the environment) so that a
> program survives 'crashes' (ideally, with a rollback of whatever
> atomic transactions are in progress, of course); whether you do

Then it isn't a crash. It's a caught exception. But then you knew that, or
you wouldn't have put "crash" in quotes there. :-)

> I reiterate my claim that it makes no significant difference
> whether the cause of the crash (whether it's caught or not)
> lies in violating a language spec (that isn't enforced before
> runtime), or in a precondition which the language did not
> specify (e.g. because it's inherent in a certain FP unit, but
> not all FP hardware can enforce it, and the language's specs
> chose to let the language be run decently on a variety of
> hardware).

I already answered this, for the OO case. You chose to ignore it.

> > Well, I'd say that one of the basic tenets of OO programming is that the
> > only operations that change the private data of a class are the methods of
> > that class, yes?
>
> No, that's only one style of OO programming. In some OO styles, data
> never changes (O'Haskell, for example: it's both OO and pure functional,
> so all data are immutable -- let's *please* forget monads for the moment,
> as they're quite something else again).

Which meets my criteria, then. The point was that a programming error should
not corrupt unrelated data. That breaks the data encapsulation, the
association of data to objects.

> In others, encapsulation is not enforced.

But you still need (supposedly) a reference to the object to access it. It
can be viewed as the object exporting setter and getter methods by default.

> In particular, it's very common for object-persistence
> frameworks (and similar doodads) to be allowed special licence to
> persist _and depersist_ data of any object whatsoever

Which, of course, breaks all talk about OO to start with, especially if
you're going to start invoking DbC and such.

> Anyway, it's a popular style in OO to have this encapsulation, whether
> enforced or by convention: an object 'owns' some data (state) and
> that data is only accessed by going through the object.

Yes. And I expect that Alan Kay saw that as one of the fundamental
properties of OO at the time he made that quote in the early 70's.

> > Or at least that you need to operate on a reference of an
> > instance in order to change the value of the instance, which definition
> > allows for things like Python and Java and such.
>
> You need to obtain an accessor from the instance-reference (if
> encapsulation is to hold)

Right. And my point is that if you allow one routine to free memory another
routine is holding a reference to, you've lost the accessor to the
instance-reference and possibly replaced it with something entirely different
and potentially illegal according to the language specs.

> > Now, if sending a "divide" message to the integer "5" causes it to corrupt
> > the "Oracle_system_table_interface" object, I'd say you've got a bit of a
> > data hiding problem.
>
> You have a problem, maybe, but it need not be one of data-hiding. Of
> course, if anything is "corrupted", then, by definition of "corruption",
> it's a problem. But consider the unreliable-timing of (asynchronous)
> floating-point exceptions: if one gets triggered, after you have executed
> a few more (you don't know how many more...) operations, some
> of those operations (who have now left the system in an invalid
> overall state) may have been in methods of whatever other instance.
> This has little to do with data hiding...

Yes? And?

> > Yes. That semantics, however, is allowed to be throwing an exception, or
> > "returns a random number", or "exits the program", just as rand() and exit()
> > are allowed to do. What it's not allowed to do is "changes private memory of
> > other instances" or "modifies control flow in ways disallowed by the
> > language specification."
>
> Then, by your (absurd, I surmise) definition, no "object oriented
> language" can ever run effectively on an architecture whose floating
> point exceptions are asynchronous and non-deterministic -- unless
> it forces execution to terminate on any such exception (how could
> allowable control flow be specified under such circumstances?).

Nope. It might not be as efficient as you like, but you can do it. You just
have to have the language deal with handling the interrupt and doing the
right thing. It's no more difficult than any other asynchronous interrupt in
something like C. When I program in C, I don't sit there worrying about
hardware interrupts making me jump out of a for loop in the middle, because
the OS and such takes care of putting me back where I started. If your
floating point chip can generate interrupts at various times, you need to
build the libraries to handle it without scribbling over *whatever* floating
point number my code *happens* to be using at the time.

What would you suggest instead? That if I have a piece of code like

    a := b * c
    d := e * f
    i := 8
    print i

that's allowed to print 27? That doesn't seem like a very useful language.

> What about other exceptions yet -- must a language fully specify
> behaviour on floating-point underflow, overflow, etc etc, to be
> "object oriented" by your definition?

Nope. It just has to stick within the language definition. Namely, if
there's overflow, underflow, etc., it has to affect only the operands that
overflowed. If it's allowed to affect other random components of the system,
then it's not very OO.

> What if somebody (or some accident) functionally removes a
> piece of writable disk on which your program is currently paging --
> must your very hypothetical language, to gain the coveted title
> of "object-oriented", fully specify THAT, too, and NOT allow
> any recovery...? There is really no difference between such
> occurrences and other kinds of asynchronous exceptions...

Now you're just being silly. Does "x += 1" really increment x if the CPU is
on fire?

> *REAL* languages, fully including any object-oriented ones, are
> fortunately designed (most of the time) with some input from
> people who know a little about such issues. As a consequence,
> they _explicitly_ allow *UNDEFINED BEHAVIOUR* under these
> and similar kinds of circumstances: specific implementations of
> the language are allowed to do *WHATEVER IS MOST USEFUL*
> in the context of their environment when such-and-such things
> happen. This lets effective, speedy implementations exist _as
> much as feasible given hardware, OS, etc_. Even Java, who
> opts for full specification most of the time (cutting itself off
> from hardware that won't satisfy that spec), has a bit more
> flexibility than you allow -- as it is, after all, a real-world language.

As *I* allow? You're setting up all kinds of strawmen yourself. :-)

I'm only talking about operations that happen with functioning hardware
inside the allowable semantics of the language. You're the one asking
whether a program running on a flaming CPU is capable of being OO.

> No. It's about the *CONTRACT*: the SEMANTIC SPECIFICATION of
> the receiving class. Is it ALLOWED to make a copy of the object
> I'm passing to it? Is it REQUIRED to?

We'll have to agree to disagree. If the desire of the routine is "keep this
bitmap refreshed on the screen", then whether the routine is allowed to make
a copy of the bitmap would be irrelevant, I would think. (Keep a copy and
use that in future refreshes perhaps would be relevant.) In a language with
Observers and weak pointers, the semantic specification of whether we're
allowed to hold a pointer to the bitmap, or to an observer who holds a
pointer to the bitmap, or whatever, becomes irrelevant to the
implementation. With weak pointers, one could even have a contract that says
"I'll keep the bitmap refreshed until you free the bitmap."

> Get it? _Specification_ constrains _implementation possibilities_.

And vice versa, which is the part you're ignoring. It's rather pointless to
specify something that implementation possibilities disallow, is it not?

> And, specification constrains *how the so-specified class can be
> used*. *NOT* the other way around! The implementation does
> NOT constrain anything.

Yes, it does, in that wonderful real world you're talking about. Take, for
example, your FPU. The implementation of the hardware constrains the
specification of languages that can run on it efficiently. Isn't that what
all your examples are about?

Ideally, the implementation would match only the nice specification. In
reality, some specifications are too strict to be implemented, so the
implementation changes the specification.

Indeed, cryptography is to a great extent all about specifying things in a
way that makes it impossible to do other things based on implementation
constraints.

> Specification always constrainst both uses and implementation.

And vice versa. You try not to specify things you can't implement. You try
not to use things that aren't specified. I've never successfully used *any*
program that's not implemented.

> changes that aren't contractually allowed. And I strongly doubt
> Mr Kay could have failed to see this obvious point, which has
> nothing to do with 'leaking information'.

Then perhaps you'd care to offer an alternative reason why Mr Kay thought
that GC was one of only two fundamental properties of OO? Seriously, if you
answer any of this, answer this one. I only have the quote. He didn't
include his reasoning, and it took me a few weeks before I figured out *why*
I thought he thought GC was fundamental.

> Many things in life are very nice, but having or lacking them is an
> orthogonal issue to "being object oriented".

I would suspect that the variety of things that "being object oriented" can
mean has evolved considerably in the last 30 years. Everyone has their own
definition. If it bugs you that someone famous for OO language design
thought that GC was a fundamental part of OO-ness, then... like... use C++
or something. :-) I personally would prefer to say that C++ is, say, loosely
OO, whereas Smalltalk is, say, strictly OO. By which I mean that C++ has OO
concepts, but does not enforce them, while Smalltalk has OO concepts and
does enforce them.

> > If you free the reference, and later attempts to use it cause a
> > "ReferenceFreedException" (for example), then I think I'd agree. But that
> > isn't how C or C++ (for example) works.
>
> It causes *undefined behaviour*, because of performance
> considerations: see above.

Right. That makes it unsafe. Nothing wrong with that in the right
situations. It lets you violate most or all precepts of OO, just as it lets
you violate most or all precepts of structured programming.

> Does Smalltalk specify *synchronous* exceptions for every possible
> occurrence?

As far as I know, yes. I haven't looked at Smalltalk in a while, either. If
it allows asynchronous events, I'd expect it would do it in a safe and OO
way.

For example, upon catching a floating-point overflow, the interrupt invokes
the FloatingPointOverflow message on the value that is the result of the
overflow. This allows anyone using the value in a later step to know it
overflowed. Of course, if you use the value *before* the overflow interrupt
is handled, you're pretty screwed anyway, I'd think.

> > But the *contract* is based on the *implementation*. I believe that you'll
>
> ***NOOOOO***!!!! This is utterly absurd. Do you REALLY program
> that way -- you choose an implementation, then you build your specs
> around it?! *SHUDDER*.

No. But when I find that my spec is impossible to implement or too
inefficient to be useful, I change the specification and make it a bit less
aggressive. Just like you do. Note that I didn't say that "all contracts are
derived from implementations". I said "the particular aspect of the contract
for this particular hypothetical routine under consideration is a result of
a particular implementation choice." Trying to generalize it to a universal
qualifier and then saying "See? Here's a counter-example" doesn't disprove
the existence proof.

> > find *some* cases where the only reason that information is exposed in the
> > contract is because of your implementation choice.
>
> A *WELL-DESIGNED* specification will carefully leave things
> *EXPLICITLY UNDEFINED* where this is the best compromise
> between freedom for the client-code and for the implementer.

So "explicitly undefined" is not part of the contract? If it *is* part of
the contract, then you're exposing information because of the implementation
choices/limitations.

> The "immutable" tidbit here is the key: you've rigged the dice by
> specifying that there is exactly one mutation possible, 'freeing'.

I've simplified. In any case, if the bitmap is mutable, then it's allowed to
control whether (say) the refresh routine sees the changes, etc. I.e., all
the OO stuff is still there. If you've discarded the bitmap, then there's no
OO, no polymorphic dispatch on the reference to that object, etc.

> I know of no language that works that way; functional languages,
> with immutable data, invariably have no concept of 'freeing'.

Uh... "const" anyone? Or maybe strings in numerous languages? Tuples in
Python, just to get things back on topic? Or bitmaps that are explicitly
coded with only a getter and a creation routine?

> So, say that data ARE mutable, as in almost all of today's languages.

Of course *some* data are mutable.

Look, I am giving an example of where the ability to free memory explicitly
means you have expose an implementation detail at the specification level
where with GC you wouldn't need to do so. You can't go changing the example
to prove that in other situations you wouldn't need to, and draw conclusions
from that. That just isn't how logic works. I'm saying "there exists X" and
you're arguing "No it doesn't, because not all X."

> If it's compatible with OO to have to specify this when data are
> mutable, it surely doesn't suddenly become incompatible with OO
> when you decide the only mutation is 'freeing'; just as it would
> not if the only mutation was 'rotation', say. It's exactly the same
> kind of issue, after all; whether all mutations are allowed, or
> just some specific subset, is quite clearly secondary.

I disagree. Because if I can mutate the bitmap, the code will still
correctly do what I programmed it to do, even if that puts undesired results
on the screen. If I free the bitmap, the code will *not* continue to
correctly do what I told it to do, as there isn't anything I could tell it
to do that would be correct.

> > form the same basis for extensions and inheritance and such that classes
> do.
> > Not in the same way, of course.
>
> _Definitely_ "not in the same way"...

No, not. Right. OK, to clarify, by "some sort of class concept" I meant a
mechanism whereby code can be associated with more than one instance, and
get to the proper instance data automatically. Fair? Sheesh.

I.e., "OO" code where you can only have one instance that uses each method
is something I'd have to think about (and use) before I could decide whether
I'd call it OO; it clearly could have polymorphic dispatch. Of course,
since *you* are already the definitive source of knowledge on what is and is
not OO, I'd just have to ask you I guess. ;-)

Alex Martelli

Sep 5, 2000, 4:19:35 PM
"Manuel Gutierrez Algaba" <th...@localhost.localdomain> wrote in message
news:slrn8rae8...@localhost.localdomain...

> On Tue, 5 Alex Martelli <ale...@yahoo.com> wrote:
> >"Manuel Gutierrez Algaba" < wrote in message
> >news:slrn8r8cl...@localhost.localdomain...
> > [snip]
> >> So types go against "functional".
> >
> >Most functional languages have (good) typesystems -- Haskell's being
> >definitely worth studying,
>
> I have the doc printed, ready to be studied, but Haskell syntax
> is almost as complex as Python 3000 syntax, so it's in the

Complex? The *syntax* of Haskell?! Why, it even uses
indent and linebreak Pythonically...


> >To continue your pipes/fluids analogy: not all fluids are safe to
> >run in every kind of pipe. A language with types basically *lets*
> >you state that (some *make* you state that, but "good" typesystem
> >implies some measure of type-inferencing, by definition of 'good':-).
>
> A type is a way to prevent fluids to go thru wrong pipes. But,
> the question is what happens when we are preventing it all the time.
> We have building extremely limited functions , very robust,...
> but not so robust if you have to write several versions of
> the same function for processing different types .

So a language with a decent typesystem lets you do polymorphic
and 'generic' stuff, obviously. Look at a Java dialect that adds just
a *little* bit of a good type system -- Hindley/Milner, like ML or
Haskell, and a little pattern-matching; it's called 'Pizza', and
although it's no longer maintained you can still have a look
at http://www.cis.unisa.edu.au/~pizza/ -- and, I think, download
and try out their compiler (the same group currently develops
'GJ', 'Generic Java', a perhaps more modest system that's maybe
due for incorporation in some future version of Java itself).

Just one little example...:

class Tree<A> implements Set<A> {
    case Empty;
    case Branch(A elem, Tree<A> left, Tree<A> right);
}

which as you see has Java-ish syntax (with the obvious <A>
annotations for parametric types), but semantics close to:

data Tree a = Empty
            | Branch { elem :: a, left, right :: Tree a }


With just a touch of this 'genericity', you mostly stop having to
"write several versions of the same function".


Dynamic typing has its place (or I wouldn't be such a
Python fan!-), but I suggest you don't knock static typing
until you've tried a _good_ implementation (ideally, a
Hindley/Milner typesystem). Understanding the _true_
pluses and minuses of various typing styles will in any
case make you a better designer and coder even in just
one language, whatever its typesystem (or lack thereof,
if you ever find yourself faced with BCPL:-) happens to
be -- much like, say, being familiar with different structured
programming systems (if/while, etc) makes one a better
programmer even in bare machine code.


> At the end , you get lot of particular things, of particular
> redirections for determined fluids. It's better to have general
> treatments for general functionality. If I focus my effort in

It's best for everything to be as general as possible, but
not more general than that -- if you'll pardon the Einstein
paraphrase.

> general functionality what I'm doing is focusing in functionality.
> If I focus my effort in types, what I'm doing is low form
> of prerequisites for every function, wasting the possibilities
> some functions may provide. If I think in terms of types, it's
> harder to think in terms of general functionality if we're focusing
> all the time of particular fluids.

Nope: typeclasses of fluid materials -- Noncorrosive, for example.

You do have the notion of typeclasses in Python: "a sequence",
for example, or "a file-like object"; you just lack the way to
*EXPRESS* this notion simply, directly, and explicitly _in the
language itself_. So you express it in docs & comments, but
those are often not as precise as one would wish (and as an
expression in a formal language might make things): when
somebody says 'you pass a file-like object', what methods are
going to be called on it -- write? read? writelines? readlines?
close? _Good_ docs would say that -- but why not have a
way to say it in the language itself?
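One way to make that requirement explicit is a small runtime property-check; a minimal sketch, where the helper name and the chosen method set are my own assumptions, not anything standard:

```python
# Sketch: state, in code, exactly which methods a 'file-like object'
# must supply, and fail early (and clearly) when one is missing.
def check_file_like(obj, methods=('write', 'writelines')):
    missing = [m for m in methods if not hasattr(obj, m)]
    if missing:
        raise TypeError("not file-like, missing: " + ", ".join(missing))
    return obj

import io
out = check_file_like(io.StringIO())  # a StringIO supplies both methods
out.write("ok")
```

This is still only a check at runtime, of course -- precisely the distinction the rest of the thread keeps circling.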

To take a concrete example -- it's been true "since ages" in
Python that the for-statement:
    for item in foo:
        # whatever
accepts any 'sequencelike' foo -- it specifically requires foo
to have a __getitem__ that is called with an integer, from 0
upwards, and to have __getitem__ raise IndexError when
an item with a too-high index is first requested. Further,
this specific protocol also applies to built-in functions such
as min and max; and to infix-operator in (unless, in the
most recent version, foo specifically implements another
special method, which is called just for that case).
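A minimal sketch of that protocol in action (the class is invented for illustration):

```python
# Any object whose __getitem__ accepts 0, 1, 2, ... and raises
# IndexError past the end works with for, min, max and 'in' --
# no other methods are needed.
class Squares:
    def __init__(self, n):
        self.n = n
    def __getitem__(self, i):
        if not 0 <= i < self.n:
            raise IndexError(i)
        return i * i

collected = [x for x in Squares(4)]  # the for-statement drives __getitem__
```

min(Squares(4)), max(Squares(4)) and 9 in Squares(4) all work through the very same __getitem__/IndexError convention.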

These requirements, AFAIK, are _still_ nowhere to be
found in the docs that are distributed with Python -- I
hope I'm wrong regarding the latest doc release, but it
sure was not clearly documented there last time I had
to check, a few months ago. Fortunately, this newsgroup
remedied this (and keeps remedying it as other people
have the same need to know). But who will supply the
equivalent information for libraries that are nowhere as
popular or well-documented as the Python language and
its builtin libraries...?

Now THIS is 'wasting the possibilities some functions
may provide' -- because the language is not expressive
enough to supply a notation for the typesystem it is
in fact using... dynamic typing has pluses (through not
being _forced_ to express what for some reason you
don't _want_ to express, say), but the _inability_ to
express type-features is definitely not a strength...


> What's more? Ah yes, the cast. If you've ever programmed
> in Java, you'll realize that 50 % of the time you're doing
> cast (type conversions), ...

Yes, through a combination of lack of genericity and a
debatable choice regarding coercion syntax. Visual Basic,
for all of its defects, is defensible in letting the QI call
be implicit when syntactically needed, e.g.
    Dim theGunSlinger As New GunSlinger
    Dim thePainter As Painter
    ...
    Set thePainter = theGunSlinger
This has the same implications as the explicit (Painter)
cast would have in the Java equivalent (and lets you
call the two distinct functions thePainter.Draw() and
theGunSlinger.Draw() -- just as you could in Java, but
might have problems doing if you _couldn't_ express
QueryInterface in some way or other:-).


> >that does not model materials-issues -- which doesn't mean it's
> >*useless*, absolutely not: it's still very useful to model a fluid
> >flow system even in a more abstract way, without consideration of
> >materials' properties;
>
> Abstract way === Functional way !!!!

No, 'more abstract' modeling of a physical system only equals 'more
abstract', that's all. Fluid flow, in particular, is interesting and
important enough to model at many different abstraction levels;
some important results will already emerge even in highly-abstract
models assuming incompressible fluids and non-turbulent flow --
as you add more aspects to your model, such as turbulence, the
model will be progressively less abstract and quite possibly you
will find out more about the system you're modeling (at the main
price of higher computational cost for a given size of system).

It's perfectly feasible to write simulations at widely varying levels
of abstraction, in either functional or imperative languages (or
in-between ones of various stripes), with typesystems of
whatever kind.


> Having types is like trying to run a 100 m hurdles race with
> a heavy chain of iron on your feet. Types are useful but
> I don't want the extreme lack of freedom.

Having a *good* typesystem need be nothing as confining as
that; you don't seem to have much experience using such
typesystems -- I strongly suggest you try out a few before
expressing such trenchant judgments.


Alex

Manuel Gutierrez Algaba

Sep 5, 2000, 5:57:09 PM
On Tue, 5 Sep Alex Martelli <ale...@yahoo.com> wrote:
>"Manuel Gutierrez Algaba" wrote in message
>news:slrn8rae8...@localhost.localdomain...
>> On Tue, 5 Alex Martelli <ale...@yahoo.com> wrote:
>> >"Manuel Gutierrez Algaba" < wrote in message

>


>> general functionality what I'm doing is focusing in functionality.
>> If I focus my effort in types, what I'm doing is low form
>> of prerequisites for every function, wasting the possibilities
>> some functions may provide. If I think in terms of types, it's
>> harder to think in terms of general functionality if we're focusing
>> all the time of particular fluids.
>
>Nope: typeclasses of fluid materials -- Noncorrosive, for example.
>
>You do have the notion of typeclasses in Python: "a sequence",
>for example, or "a file-like object"; you just lack the way to
>*EXPRESS* this notion simply, directly, and explicitly _in the
>language itself_. So you express it in docs & comments, but
>those are often not as precise as one would wish (and as an
>expression in a formal language might make things): when
>somebody says 'you pass a file-like object', what methods are
>going to be called on it -- write? read? writelines? readlines?
>close? _Good_ docs would say that -- but why not have a
>way to say it in the language itself?

No: an object that implements the methods required to be
considered a sequence does not have type(object) == sequence.

Fortunately, the notion of objects that accomplish certain
behaviours is far more general and flexible than types.

>have the same need to know). But who will supply the
>equivalent information for libraries that are nowhere as
>popular or well-documented as the Python language and
>its builtin libraries...?
>
>Now THIS is 'wasting the possibilities some functions
>may provide' -- because the language is not expressive
>enough to supply a notation for the typesystem it is
>in fact using...

Ok, in python, apart from ints and char, we don't use types
at all; we don't need them and they're ugly. If we really
need to do some checking we can check the properties of the
object, which is much more flexible and exhaustive than a type.


>debatable choice regarding coercion syntax. Visual Basic,
>for all of its defects, is defensible in letting the QI call
>be implicit when syntactically needed, e.g.

I wouldn't defend Visual Basic even if it were defensible.

>> >*useless*, absolutely not: it's still very useful to model a fluid
>> >flow system even in a more abstract way, without consideration of
>> >materials' properties;
>>
>> Abstract way === Functional way !!!!
>
>No, 'more abstract' modeling of a physical system only equals 'more
>abstract', that's all.

This is an end point. You say no and I say yes.

>
>> Having types is like trying to run a 100 m hurdles race with
>> a heavy chain of iron on your feet. Types are useful but
>> I don't want the extreme lack of freedom.
>
>Having a *good* typesystem need be nothing as confining as
>that; you don't seem to have much experience using such
>typesystems -- I strongly suggest you try out a few before
>expressing such trenchant judgments.
>

I have programmed in C for countless years, a bit in C++ and
a bit in Java. And in Pascal. And yes, I won't like types, ever.

The two brightest languages that come to my mind ( LISP, python)
don't use types.

While three of the worst languages have types (Ada, Java and C++).

It's suspicious that Smalltalkers don't like types. Somehow,
types represent a "hackerish" (low form of Computer Science)
way of doing things right.

Sure, types are a good tool for certain things. But, sure, we
have better tools for that sort of things.

---
MGA

Suchandra Thapa

Sep 5, 2000, 9:18:03 PM
Manuel Gutierrez Algaba <th...@localhost.localdomain> wrote:
>On Tue, 5 Sep Alex Martelli <ale...@yahoo.com> wrote:
>>
>>You do have the notion of typeclasses in Python: "a sequence",
>>for example, or "a file-like object"; you just lack the way to
>>*EXPRESS* this notion simply, directly, and explicitly _in the
>>language itself_. So you express it in docs & comments, but
>>those are often not as precise as one would wish (and as an
>>expression in a formal language might make things): when
>>somebody says 'you pass a file-like object', what methods are
>>going to be called on it -- write? read? writelines? readlines?
>>close? _Good_ docs would say that -- but why not have a
>>way to say it in the language itself?
>
>No, an object that implements those methods required to be
>considered a sequence is not a type(object )= sequence
>
>Fortunately, the notion of objects that accomplish certain
>behaviours is far more general and flexible than types.

I think Alex was trying to make the point that the notion
of a type class captures the idea of specifying that a certain
object needs to have certain properties. For example, in
Haskell type classes are used to specify what types can be tested
for equality and to define how to test these types for
equality (through the use of an instance declaration).

>>Now THIS is 'wasting the possibilities some functions
>>may provide' -- because the language is not expressive
>>enough to supply a notation for the typesystem it is
>>in fact using...
>
>Ok, in python apart from ints and char , we don't use types,
>not at all, we don't need them and they're ugly. If we really
>need to do some checkings we can check the properties of the
>object, that is much flexible and exhaustive than type.

But typeclasses capture the notion of needing certain
properties for given types. With static type checking, you
gain something over a runtime check since you don't need to
keep type information and don't need to deal with runtime errors.
Even with your concept of just checking for properties, you still
need to insert code into the compiled version that deals with the case
where you don't have the proper method.

>>Having a *good* typesystem need be nothing as confining as
>>that; you don't seem to have much experience using such
>>typesystems -- I strongly suggest you try out a few before
>>expressing such trenchant judgments.
>>
>
>I have programmed in C for countless years, a bit in C++ and
>a bit in Java. And in Pascal . And yes, I won't like types, ever.
>
>The two brightest languages that come to my mind ( LISP, python)
>don't use types.

C/C++, Java and Pascal don't really have good type systems.
The compiler can't figure out types without explicit declarations.
With a good type system, your compiler can use type inference to
figure out what types to expect and just needs explicit type declarations
at a few places as hints. For example in Haskell code like,

data Color = Red | Green | Blue
    deriving Eq

permute x | x == Red   = Green
          | x == Green = Blue
          | x == Blue  = Red

asking for the type of permute returns permute :: Color -> Color. The compiler
is smart enough to figure out that x needs to belong to the color type and
that permute returns a color. This is in contrast to C/C++ where you would
need to define permute with explicit type declarations. If you compare this
to the equivalent python or lisp code, they are remarkably similar.
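For concreteness, the Python counterpart might look like this (with the colors as plain strings -- an assumption forced by the absence of a declared Color type):

```python
# No type declarations anywhere: permute accepts whatever compares
# equal to these strings, and nothing checks the argument up front.
def permute(x):
    if x == 'Red':
        return 'Green'
    elif x == 'Green':
        return 'Blue'
    elif x == 'Blue':
        return 'Red'
```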
I'm not sure what you mean by Lisp not using types. Lisp compilers
and run time systems maintain type information for all values that they
encounter. This can easily be demonstrated by the existence of the declaim
and declare statements which assert that certain variables have a given type.
In fact, Lisp inspired the creation of tagged-type architectures, where
the CPU reserves a few bits in each register to hold type information,
allowing hardware-assisted runtime type checking.


>It's suspicious that Smalltalkers don't like types. Somehow,
>types represent a "hackerish" (low form of Computer Science)
>way of doing things right.
>
>Sure, types are a good tool for certain things. But, sure, we
>have better tools for that sort of things.

What tools do we have that are better than types for the things
that types are good at? Your idea of encoding the properties of
objects and just checking them is basically a restatement of
types and type classes under different names.
For example to capture the need for objects used in for to have Sequence
attributes, you can easily express it with types and typeclasses as

class Sequence a where
    seq :: a -> [b]

so that with a definition of for similar to

for y f = let z = seq y
          in map f z

any object y is allowed as long as you declare that y supports a seq method.

Alex Martelli

Sep 6, 2000, 4:58:35 AM
"Suchandra Thapa" <sst...@harper.uchicago.edu> wrote in message
news:slrn8rb7aa....@hepcat.uchicago.edu...

> Manuel Gutierrez Algaba <th...@localhost.localdomain> wrote:
> >On Tue, 5 Sep Alex Martelli <ale...@yahoo.com> wrote:
> >>
> >>You do have the notion of typeclasses in Python: "a sequence",
> >>for example, or "a file-like object"; you just lack the way to
> >>*EXPRESS* this notion simply, directly, and explicitly _in the
> >>language itself_. So you express it in docs & comments, but
> >>those are often not as precise as one would wish (and as an
> >>expression in a formal language might make things): when
> >>somebody says 'you pass a file-like object', what methods are
> >>going to be called on it -- write? read? writelines? readlines?
> >>close? _Good_ docs would say that -- but why not have a
> >>way to say it in the language itself?
> >
> >No, an object that implements those methods required to be
> >considered a sequence is not a type(object )= sequence
> >
> >Fortunately, the notion of objects that accomplish certain
> >behaviours is far more general and flexible than types.
>
> I think Alex was trying to make the point that the notion
> of a type class captures the idea of specifying that a certain
> object needs to have certain properties. For example, in

Of course -- it's ONE of the things it does. Further (a point
I had not mentioned in the above quote), a typeclass offers one
of the best and most elegant ways to do 'mixin' kinds of jobs:
help the programmer when N methods may be defined in terms of
each other. The technique is well-known, and well worth
knowing since it also applies to Python, so, here's an example:

In Python, you could do this (relying on inheritance of
implementation, and overriding, exclusively):

class WritableFileMixin:
    def write(self, stuff):
        self.writelines((stuff,))
    def writelines(self, stuff):
        for stiff in stuff:
            self.write(stiff)

These two mutually recursive methods do not appear very
useful, do they?-) But, they *are*! The usage
convention is that a user class, that wants to implement
a writable-file-like-object, inherits from the mixin and
overrides EITHER of the two methods; automatically, then,
the other method will be defined in terms of the overridden
one. It's of course OK to override both, but then it's the
programmer's concern to ensure the semantic consistency is
kept; this might be useful basically for efficiency in
some cases (avoiding an unneeded indirectness).
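Spelled out, the convention might be used like so (ListFile is an invented example class; the mixin is repeated to keep the sketch self-contained):

```python
class WritableFileMixin:
    def write(self, stuff):
        self.writelines((stuff,))
    def writelines(self, stuff):
        for item in stuff:
            self.write(item)

class ListFile(WritableFileMixin):
    # Override EITHER of the two methods -- here, write -- and the
    # other one automatically works through the mixin's definition.
    def __init__(self):
        self.chunks = []
    def write(self, stuff):
        self.chunks.append(stuff)

f = ListFile()
f.writelines(["spam", "eggs"])  # not overridden, yet it works
```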

Deriving this while overriding neither method is a violation
of protocol that will lead to a serious runtime error (a
stack-overflow due to unbounded mutual recursion) if either
method is ever called. Pity there is no way to express this
protocol in the Python language: it must be left to docs
and/or comments *exclusively* (hmmm, I wonder if a metaclass
or extensionclass might in fact be used to optionally check
the protocol -- it would still be at runtime, of course, but,
at least, the usage error would be diagnosed much earlier and
in a possibly more direct/usable way; this bears thinking...).
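A rough sketch of that idea: checking, at class-creation time, that a derived class overrides at least one of the two methods. The metaclass name is invented, the check is simplistic (it inspects only the class's own dict), it's written in present-day metaclass syntax, and it remains a runtime check -- just a much earlier one:

```python
class ProtocolCheck(type):
    def __init__(cls, name, bases, ns):
        super(ProtocolCheck, cls).__init__(name, bases, ns)
        # The mixin itself (no bases) is exempt; every derived class
        # must override write or writelines in its own dict.
        if bases and 'write' not in ns and 'writelines' not in ns:
            raise TypeError(name + " overrides neither write nor writelines")

class WritableFileMixin(metaclass=ProtocolCheck):
    def write(self, stuff):
        self.writelines((stuff,))
    def writelines(self, stuff):
        for item in stuff:
            self.write(item)

class Okay(WritableFileMixin):
    def __init__(self):
        self.chunks = []
    def write(self, stuff):
        self.chunks.append(stuff)
```

A protocol violation is now diagnosed at the `class` statement itself, instead of as a stack overflow on the first call.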

Anyway, a typeclass would directly both express 'the type
implements these methods', AND offer implementation help
when 'deriving' a type from the typeclass.


> Haskell type classes are used to specify what types can be tested
> for equality and to define how to test these types for
> equality (through the use of an instance declaration).

I think you're one metalevel off here -- it's not that
_types_ can be tested for equality, but rather _two
objects both belonging to a type_ (which instances Eq) can
be so tested. It's unfortunately easy to verbally slip
between metalevels when discussing this (I'm sure I've
committed several such errors myself in these discussions).


> >>Now THIS is 'wasting the possibilities some functions
> >>may provide' -- because the language is not expressive
> >>enough to supply a notation for the typesystem it is
> >>in fact using...
> >
> >Ok, in python apart from ints and char , we don't use types,
> >not at all, we don't need them and they're ugly. If we really
> >need to do some checkings we can check the properties of the
> >object, that is much flexible and exhaustive than type.
>
> But typeclasses capture the notion of needing certain
> properties for given types. With static type checking, you
> gain something over a runtime check since you don't need to
> keep type information and don't need to deal with runtime errors.

Well, you still do need to deal with SOME 'runtime errors', aka
(synchronous) 'exceptions', but if the compiler can do some of
that work for you earlier, it does lessen your runtime burden
a bit, yes. However, in a Python context I don't see the
runtime-efficiency as being the paramount consideration; rather,
"expressive power" seems to me to be more important and Pythonic.

Explicit is better than implicit -- one of the Python mantras.

Being able to say *in the language* "and this here argument will
need to implement these here methods" would let the programmer
make explicit what currently must be left implicit -- I would
see it as _more Pythonic_, even if only modest code-generation
benefits ensued (Python is so nicely fluid & dynamic -- features
we *DON'T* want to lose! -- that compile-time ensuring things
is in general a very hard problem; so the compiler would have
to insert the runtime-checks -- still a bit better than having
to do them by hand, mind you:-). Something akin to a typeclass
would be a pretty nice, usable way to let these kinds of things
be explicitly expressed.

> Even with your concept of just checking for properties, you still
> need to insert code into the compiled version that deals with the case
> where you don't have the proper method.

Yes, and in practice full checking may well turn out to be too
hard -- it's not enough that 'foo' has a 'write' method, that
method must be callable with exactly one argument, for example.
In practice, a try/except around the foo.write(whatever) call
may be better (assuming you have any idea of what to do when
the call attempt fails in various ways -- a non-Pythonic but
practical mantra is "don't check for errors that you don't
know what to do about":-).
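That try/except shape, as a sketch (the function name and the set of exceptions caught here are my own choices):

```python
def try_write(foo, whatever):
    # Attempt the call instead of verifying up front that foo has
    # a write method callable with exactly one argument.
    try:
        foo.write(whatever)
    except AttributeError:  # no write method at all
        return False
    except TypeError:       # write exists, but wrong signature
        return False
    return True

class OneArgWriter:
    def __init__(self):
        self.seen = []
    def write(self, x):
        self.seen.append(x)
```

Whether returning False is an adequate reaction is exactly the "don't check for errors you don't know what to do about" question.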


> >>Having a *good* typesystem need be nothing as confining as
> >>that; you don't seem to have much experience using such
> >>typesystems -- I strongly suggest you try out a few before
> >>expressing such trenchant judgments.
> >>
> >
> >I have programmed in C for countless years, a bit in C++ and
> >a bit in Java. And in Pascal . And yes, I won't like types, ever.
> >
> >The two brightest languages that come to my mind ( LISP, python)
> >don't use types.
>
> C/C++, Java and Pascal don't really have good type systems.

Agreed (and neither does Eiffel IMHO -- too much burden being
placed on the single idea of 'class', no type-inference, &c).

> The compiler can't figure out types without explicit declarations.
> With a good type system, your compiler can use type inference to
> figure out what types to expect and just needs explicit type declarations
> at a few places as hints. For example in Haskell code like,
>
> data Color = Red | Green | Blue
>     deriving Eq
>
> permute x | x == Red   = Green
>           | x == Green = Blue
>           | x == Blue  = Red
>
> asking for the type of permute returns permute :: Color -> Color. The
> compiler is smart enough to figure out that x needs to belong to the
> color type and that permute returns a color. This is in contrast to
> C/C++ where you would need to define permute with explicit type
> declarations. If you compare this to the equivalent python or lisp
> code, they are remarkably similar.

Plus, in Haskell you *MAY* add an explicit assertion about permute's type
IF YOU WANT: it's up to you, as a programmer, to decide whether here it
is worth being explicit (for clarity, to help future readers of your
code understand what's going on, for example) or whether it's better to
just let type-inference work by itself.


Alex

Piet van Oostrum

Sep 6, 2000, 8:09:45 AM
>>>>> "Alex Martelli" <ale...@yahoo.com> (AM) writes:

AM> Complex? The *syntax* of Haskell?! Why, it even uses
AM> indentation and line breaks Pythonically...

I find the complete language definition of Haskell quite complicated (and I
learned Algol 68 from the Reports, the original and the Revised, and
several intermediate versions).
But for normal use you can stick with a large subset that is quite simple.
--
Piet van Oostrum <pi...@cs.uu.nl>
URL: http://www.cs.uu.nl/~piet [PGP]
Private email: P.van....@hccnet.nl

Alex Martelli

Sep 6, 2000, 8:07:27 AM
"Manuel Gutierrez Algaba" <th...@localhost.localdomain> wrote in message
news:slrn8rb1v...@localhost.localdomain...
[snip]

> >You do have the notion of typeclasses in Python: "a sequence",
> >for example, or "a file-like object"; you just lack the way to
[snip]

> No, an object that implements those methods required to be
> considered a sequence is not a type(object )= sequence

Not 'equals', of course! That would be as silly as
checking foo.__class__==bar when what is wanted is
isinstance(foo,bar).

But the point is, the concept is *vaguely defined*.

In certain cases, an object must supply a __len__
method to be usable as a sequence; in other cases, a
__getitem__ method sequentially callable with argument
0, 1, ... suffices (and it must raise IndexError when
called with a too-high argument). Are you sure you
known which case applies to various occasions...? It's
not clearly documented AFAIK.

I see the root cause of this problem in the fact that
there is no way *in Python itself* to express the
concept (crucially important in Python, and often
used) of 'I need this object to provide these methods
with these signatures and this semantic connection' --
i.e., a typeclass. As there is no such specific and
unambiguous way to express the exact requirements for
an object to be usable in a certain role (in a
statement, built-in function, other function, etc),
we're left with natural language -- comments, docs,
etc, which can well be out of line with what the code
really does; in many places you will see the docs
claim that, e.g., "a list" is needed here, when in
fact any 'sequence' (in either of the above senses)
will be OK. Or, one can read the C sources of the
built-in function's implementation and try to guess
which of its implementation-aspects are substance,
and which are accidents (liable to be changed at the
next minor-release:-).

I'm not claiming there's a magic-bullet solution --
a perfect way to express the "weakest precondition"
we need. I *am* claiming that a good-type-system
can be a very substantial help to a language's
expressive power, specifically for the purpose of
such explicit statement of preconditions -- *without*
any necessary connection to compiletime *enforcement*
of them, please note; that part, if and where feasible,
is just icing on the cake.


> Fortunately, the notion of objects that accomplish certain
> behaviours is far more general and flexible than types.

Definitely, if you don't include the little tidbit of
information "to which class does this instance belong"
in the concept of "type". But if you consider a
"typing-system" that better matches most people's
intuition and typical usage (e.g., takes account of
class-membership &c:-), the difference is not of any
real substance _in practice_.

Yes, you *could* make one object behave differently
from all other instances of the same class, e.g. by
setting the instance's attribute foo to a different
method...:

class Tricky:
    def foo(self):
        print "foo here!"
    def bar(self):
        print "bar here!"

t1 = Tricky()
t2 = Tricky()
t3 = Tricky()
t2.foo = t3.bar

Now, while object t2 'claims' to be the same type (and
class:-) as objects t1 and t3, its .foo method has
become very different from that of other instances of
the same class -- including a very peculiar behavior
regarding "self" (try also printing self in the
methods...:-). And you can do better with an
    import new
and suitable calls to functions in that module,
if you're truly the fearless type.

How much of your Python programming hinges on such
subtle treachery, where one instance is subverted,
without overt type (class) changes, to behave in such
a peculiar way? If more than 1%, then, I think
'Viper' might be more appropriate than 'Python' --
it's definitely "venomous":-).

More frequent is to _delegate_ -- normally in an
explicit way, but, thanks to 'Acquisition' and
other tricks made possible by extensionclasses and
metaclasses, sometimes implicitly. I'll be the
last one to pan delegation -- I mourn for Self and
Cedric, the prototype-based languages, which don't
seem to be actively developed any more.

But this doesn't mean that the notion of object to
behavior correspondence is more general than _any_
typing system -- it only means that *classes* (and
concepts even more rigid, such as Python's "type")
are not the one and only appropriate way to structure
a typing system! Who ever said they were?

Just like, e.g., double-precision floating-point
numbers are NOT the one and only appropriate way
to do computation -- there are other number systems,
and very often those other number kinds are a better
fit to one's problems.
That they're not all-comprehensive doesn't mean
it isn't QUITE useful for a programming language
to offer them, you know? I want them *as well
as* fixnums, unbounded-precision integers,
rationals, _and_ complex (Python gives me 4 out
of 5, and building rationals is not too bad, just
as building complex numbers wouldn't be if they
were not 'native' in Python).

My position on typing-systems is similar: I'm
very glad Python's typing is so dynamic and fluid,
but I wish it ALSO gave me the expressive power
of (essentially...) Haskell's typeclasses to use,
at my choice, in that _vast_ majority of cases
where they model well what I'm doing. I'm not
holding my breath, but neither will I countenance
blind "anti-type" fanaticism, particularly when it
appears to be largely based on misconceptions. Vide:

> >have the same need to know). But who will supply the
> >equivalent information for libraries that are nowhere as
> >popular or well-documented as the Python language and
> >its builtin libraries...?
> >
> >Now THIS is 'wasting the possibilities some functions
> >may provide' -- because the language is not expressive
> >enough to supply a notation for the typesystem it is
> >in fact using...
>
> Ok, in python apart from ints and char , we don't use types,

...surely you MUST be joking?! 'char' is not even a Python
type (there are only strings; a "character" is actually
a string of length one -- a peculiarity, that a string is
a sequence of things which are themselves strings, though
in practice it's a relatively innocuous oddity). And
besides int, you have long, float, complex, array, etc;
have you ever heard of the built-in function called
'coerce'? Or of the one called 'type'...?

Python's typesystem is in fact pretty rich and strong.
Types always inhere in objects, never in references to
them, which is an excellent idea. There are, however,
some lacks -- first and foremost, the lack of a way to
give *explicit* expression to the key idea "this object
adheres to this protocol" (aka interface, typeclass,
signature, abstraction, and other synonyms).

The idea is widely *USED*, but is not subject to *EXPLICIT
DECLARATIVE SPECIFICATION* in the language. So, the
specification is left to docs, comments, and executable
code (often in C, in the case of statements and builtin
or extension functionality...). You cannot easily and
smoothly assert 'this object is-a sequence' or easily
and smoothly test for the fact, as you can (both assert
and test) for, say, "this number is positive".

So, we DO use types -- and how! However, we cannot MAKE
IT EXPLICIT _in the language itself_ that we're using a
certain type, for the extremely important case where
that 'certain type' is 'anything following yonder
protocol' (interface/typeclass/signature/abstraction).
This inability to express in an explicit way such an
important concept is definitely not a *strength*...


> not at all, we don't need them and they're ugly. If we really
> need to do some checkings we can check the properties of the
> object, that is much flexible and exhaustive than type.

It's bad style in Python to ask if type(x)==type(y), yes,
because it doesn't allow for any subtyping. But checking
'the properties' in executable code is "exhaustive" just
in the sense that it *will* exhaust you. Just checking
how many arguments a callable object can be called with
is not feasible in the general case -- the metainformation
is lacking for an object of a type that's implemented by
an extension! Just about the only thing you can do is
try to call, and catch the exception if any. But what if
you need to do some state-alteration to object a BEFORE
the actual call to object b? Then you need to ensure
that the alteration is transactional and that you can
roll it back on an exception. A pretty big bother!
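The "pretty big bother" can be sketched like so (all names invented): the alteration to a is made first, and must be undone by hand if the call on b then fails:

```python
class A:
    def __init__(self):
        self.pending = []

class Receiver:
    def __init__(self, fail=False):
        self.fail = fail
    def receive(self, item):
        if self.fail:
            raise ValueError(item)

def alter_then_call(a, b, item):
    a.pending.append(item)  # state-alteration on a BEFORE calling b
    try:
        b.receive(item)
    except Exception:
        a.pending.pop()     # transactional roll-back on any failure
        raise

a = A()
alter_then_call(a, Receiver(), 1)  # succeeds; alteration kept
try:
    alter_then_call(a, Receiver(fail=True), 2)
except ValueError:
    pass                            # alteration was rolled back
```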


> >debatable choice regarding coercion syntax. Visual Basic,
> >for all of its defects, is defensible in letting the QI call
> >be implicit when syntactically needed, e.g.
>
> I wouldn't defend Visual Basic even if it were defensible.

Yeah, that's the impression I'm getting -- that you argue
based on prejudice and spite, rather than on actual facts.
If a good idea emerged in VB, then, you would hate it just
because you hate VB (or MS, or whatever), rather than base
your evaluation on how good or bad the idea itself is -- at
least, such is the impression such observations give.

The world is full of people like this (idea A is bad because
it came from Microsoft, idea B is good because it's Stallman's
[or vice versa], ...). Fortunately, this is definitely not
Python's spirit -- Python, both the core and its major
extensions, has a very good track record of "playing well
with others", and taking good ideas from various sources if
and when they can fit well with the language's gestalt.


> >> >*useless*, absolutely not: it's still very useful to model a fluid
> >> >flow system even in a more abstract way, without consideration of
> >> >materials' properties;
> >>
> >> Abstract way === Functional way !!!!
> >
> >No, 'more abstract' modeling of a physical system only equals 'more
> >abstract', that's all.
>
> This is an end point. You say not and I say yes.

But you're attaching totally arbitrary meaning to the commonly
used word "functional" (as in "functional language"), while I
use it the way it's commonly used by practitioners in the field.
Go tell any Haskell or OCAML user that they can't use their
language to model a materials-science problem, or that they're
not doing functional programming when they are?!

Feel free to keep using words to mean whatever you want them
to mean, neither more nor less, dear Humpty-Dumpty, but realize
that this is just not the way human communication *works*.


> >> Having types is liking trying to run a 100 m hurdles race with
> >> a a heavy chain of iron in your feet. types are useful but
> >> I don't want the extreme lack of freedom.
> >
> >Having a *good* typesystem need be nothing as confining as
> >that; you don't seem to have much experience using such
> >typesystems -- I strongly suggest you try out a few before
> >expressing such trenchant judgments.
>
> I have programmed in C for countless years, a bit in C++ and
> a bit in Java. And in Pascal . And yes, I won't like types, ever.

So you've never used a proper (mathematically-sound) typesystem
(ML, Haskell, even humble Pizza), yet you already know what it
will be like to use one. Must be fun, being as close-minded as
that: no need to experience the world, you can know everything
based on your current prejudices.


> The two brightest languages that come to my mind ( LISP, python)
> don't use types.

Wrong (for both cases), as is easily shown. See above regarding
Python's use of types, for example.
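[A quick illustration of the point, in today's Python 3 syntax: every object carries its type at runtime, and mismatches raise rather than silently coerce -- the typing is dynamic, not absent.]

```python
x = "abc"
print(type(x).__name__)   # str -- the type is right there at runtime

try:
    result = x + 3        # str + int: strong typing, no implicit coercion
except TypeError:
    result = "TypeError"
print(result)             # TypeError
```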

> While three of the worst languages have types (ADA, Java and C++).
>
> It's suspicious that Smalltalkers don't like types. Somewhat

The Smalltalkers I know love their classes, which are a form
of typing-system. But, yes, the typing is looser than in
Python (I prefer Python's way, personally; else, I'd be on
a Smalltalk group, not here:-).

What you appear to be so foaming-at-the-mouth about is any
form of *STATICITY* to the type-system, which is a different
issue than having a solid and complete typesystem in the
first place. As you've never experienced a good system, able
to do compile-time type-inferencing, etc, it's understandable
that you loathe the idea -- less forgivable, however, that
your mind is so closed about it.

It's like somebody who's met only 6 French people in all of
their life, and each of them was disagreeable to him, so
he has firmly fixed the idea that all French people are
horrible, that he will never like a French person. This is
called "prejudice". Forming a _working hypothesis_ on the
basis of a small and biased sample is quite understandable
(we have to work on such flimsy bases in our lives...), but
one must also stay conscious of how low is the degree of
"solidity" of the hypothesis, and be accordingly ready to
revise it -- keeping one's mind open to possible contrary
evidence. Exactly the contrary of your "I won't like types,
ever" attitude, in other words.

Go ahead -- your loss, not mine...! I'm pretty sure other
readers of this thread, whose ideas were not already fixed
by prejudice one way or another, have picked up a lot of
pointers here if they want to explore further, anyway.


Alex

Alex Martelli

Sep 6, 2000, 9:24:32 AM
to
"Piet van Oostrum" <pi...@cs.uu.nl> wrote in message
news:wzg0nd3...@sunshine.cs.uu.nl...

> >>>>> "Alex Martelli" <ale...@yahoo.com> (AM) writes:
>
> AM> Complex? The *syntax* of Haskell?! Why, it even uses
> AM> indent and linebreak Pythonically...
>
> I find the complete language definition of Haskell quite complicated (and I

Yes, the language is. But, its _syntax_...?

> learned Algol 68 from the Reports, the original and the Revised, and
> several intermediate versions).

My hat is off to you -- *those* were way above my head.

> But for normal use you can stick with a large subset that is quite simple.

And what _syntactical_ features are out of that subset...?


Alex

Manuel Gutierrez Algaba

Sep 6, 2000, 12:26:44 PM
to
On 06 Sep 2000 14:09:45 +0200, Piet van Oostrum <pi...@cs.uu.nl> wrote:
>>>>>> "Alex Martelli" <ale...@yahoo.com> (AM) writes:
>
>AM> Complex? The *syntax* of Haskell?! Why, it even uses
>AM> indent and linebreak Pythonically...
>
>I find the complete language definition of Haskell quite complicated (and I
>learned Algol 68 from the Reports, the original and the Revised, and
>several intermediate versions).
>But for normal use you can stick with a large subset that is quite simple.

Well, it's funny Piet van Oostrum is a TeXpert and a pythoneer. I'm
a TeX learner and a pythoneer.

---
MGA

Manuel Gutierrez Algaba

Sep 6, 2000, 12:32:11 PM
to
On Wed, 6 Sep 2000 Alex Martelli <ale...@yahoo.com> wrote:
>"Manuel Gutierrez Algaba" <th...@localhost.localdomain> wrote in message
>news:slrn8rb1v...@localhost.localdomain...
> [snip]
>
>Go ahead -- your loss, not mine...! I'm pretty sure other
>readers of this thread, whose ideas were not already fixed
>by prejudice one way or another, have picked up a lot of
>pointers here if they want to explore further, anyway.
>

A bit of primitiveness and wilderness is good, as Rousseau showed us.
Nice words and reasonings may have a hidden part, uncatchable by
intelligence but easily recognizable by stupid prejudices.

Many people thought Hitler was a nice guy in 1933.

Of course, I have to improve my knowledge to keep up to date in
this high-level newsgroup... but I shall never forget "my
prejudices"; they pay... in gold.

If I were fully guided by my common sense, I might have abandoned
Python and focused __all__ my effort on Java. My bet on Python
is still a bit irrational... fortunately, I follow my instincts.

---
MGA

Tim Peters

Sep 7, 2000, 5:21:17 AM
to pytho...@python.org
[Alex Martelli]

> I'm not sure a generator can be made in Python by just 'wrapping' an
> "enumerative routine" -- you'd need to rework the latter to add
> explicit state/resumption, I think. But it seems to me that Aahz
> was not talking of Python, the language, as it stands today -- he
> is, after all, talking of 'agreeing on implementation and interface'
> to have this _addition_ to the language. E.g., a new hypothetical
> Python might have a new keyword, similar to return, that returns
> a pair -- the result being returned, and an 'internal state' needed
> for resumption at the Python instruction just after the newreturn;
> the latter could be a callable-object (to be called without args).
>
> That might let generators be implemented without, however, allowing
> the full generality of coroutines. I dunno, I have not looked at
> the current Python sources, but I imagine this is the sort of thing
> they may be mooting...?

Right. I imagine a PEP will be written eventually with all the gory
details. In brief, Python already allocates its own "stack frames" off the
heap as just another kind of Python object, so the primary trick in
implementing Icon-style generators with the current VM is simply to refrain
from decrementing the frame's refcount at a return! The frame object
already contains (almost) all the state it needs to resume from where it
left off.

This is indeed not general enough to support coroutines; the essential
simplifying feature of an Icon-style generator is that it *always*
"suspends" to whoever invoked it (and so, unlike a coroutine, does not
*name* to whom it suspends); in this way it remains strictly stack-like; for
people who like buzzwords <wink>, Knuth calls Icon-style generators
"semi-coroutines"; I've sometimes, and with a bit of success, tried to call
them "resumable functions" on comp.lang.python, to take away some of the
mystery. In a sense, they do for function-local control flow what C
"static" does for function-local data: preserves it "across
calls/resumptions".

The implementation is easy to explain, so it may be a good idea <wink>.
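[The mechanism Tim describes is, in essence, what later shipped as generators in Python 2.2 via PEP 255. A minimal sketch in today's syntax, showing the "resumable function" behaviour -- the frame, and the locals in it, survive across resumptions:]

```python
def counter():
    n = 0            # like C "static": preserved across resumptions
    while True:
        n += 1
        yield n      # suspend to whoever invoked us (a semi-coroutine)

c = counter()        # builds the frame; runs no body code yet
print(next(c), next(c), next(c))  # 1 2 3
```

Each `next()` resumes the same frame where it left off, rather than starting a fresh call.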

Just van Rossum

Sep 7, 2000, 8:46:06 AM
to Tim Peters
[ Generators ]

Tim Peters wrote:
> Right. I imagine a PEP will be written eventually with all the gory
> details. In brief, Python already allocates its own "stack frames" off the
> heap as just another kind of Python object, so the primary trick in
> implementing Icon-style generators with the current VM is simply to refrain
> from decrementing the frame's refcount at a return! The frame object
> already contains (almost) all the state it needs to resume from where it
> left off.
>
> This is indeed not general enough to support coroutines; the essential
> simplifying feature of an Icon-style generator is that it *always*
> "suspends" to whoever invoked it (and so, unlike a coroutine, does not
> *name* to whom it suspends); in this way it remains strictly stack-like; for
> people who like buzzwords <wink>, Knuth calls Icon-style generators
> "semi-coroutines"; I've sometimes, and with a bit of success, tried to call
> them "resumable functions" on comp.lang.python, to take away some of the
> mystery. In a sense, they do for function-local control flow what C
> "static" does for function-local data: preserves it "across
> calls/resumptions".

How would you spell resuming a suspended function?

Just

Alex Martelli

Sep 7, 2000, 6:53:34 AM
to
"Manuel Gutierrez Algaba" <th...@localhost.localdomain> wrote in message
news:slrn8rd39...@localhost.localdomain...
[snip]

> A bit of primitiveness and wilderness is good, as Rousseau showed us.
> Nice words and reasonings may have a hidden part, uncatchable by
> intelligence but easily recognizable by stupid prejudices.
>
> Many people thought Hitler was a nice guy in 1933.

By Godwin's Law as commonly understood, this thread is now over,
and you've lost. See e.g. the Jargon File, e.g. at
http://www.ccil.org/jargon/

"""
:Godwin's Law: /prov./ [Usenet] "As a Usenet discussion grows
longer, the probability of a comparison involving Nazis or Hitler
approaches one." There is a tradition in many groups that, once
this occurs, that thread is over, and whoever mentioned the Nazis
has automatically lost whatever argument was in progress. Godwin's
Law thus practically guarantees the existence of an upper bound on
thread length in those groups.
"""

For many more details on Godwin's Law, see for example:
http://www.landfield.com/faqs/usenet/legends/godwin/


Incidentally, nobody who let reason (rather than "primitiveness
and wilderness" [?]) guide him could possibly "think Hitler
was a nice guy in 1933" -- "Mein Kampf" had been published in
1925, laying out his plans & ideas in detail. No *nice* "words
and reasonings" in it, but all the "stupid prejudices" you
might possibly like.

And Rousseau... as Bertrand Russell wrote in 1945, Rousseau was
*exactly* "the inventor of the political philosophy of the
pseudo-democratic dictatorships" -- "Hitler is the outcome of
Rousseau", as Russell continues. The parallels between "la
volonté générale" and Hitler's "Volk" are crucially important.
"Freedom" as "total surrender to the service of the State"...


Alex

Suchandra Thapa

Sep 7, 2000, 4:13:26 PM
to
Alex Martelli <ale...@yahoo.com> wrote:
>> Haskell type classes are used to specify what types can be tested
>> for equality and to define how to test these types for
>> equality (through the use of an instance declaration).
>
>I think you're one metalevel off here -- it's not that
>_types_ can be tested for equality, but rather _two
>objects both belonging to a type_ (which instances Eq) can
>be so tested. It's unfortunately easy to verbally slip
>between metalevels when discussing this (I'm sure I've
>committed several such errors myself in these discussions).

Yeah, that's what I meant. I've been sort of using types
to talk about types proper and objects of a type. Just some
laziness and bad notation on my part.

>
>
>> >>Now THIS is 'wasting the possibilities some functions
>> >>may provide' -- because the language is not expressive
>> >>enough to supply a notation for the typesystem it is
>> >>in fact using...
>> >
>> >Ok, in python apart from ints and chars, we don't use types,
>> >not at all, we don't need them and they're ugly. If we really
>> >need to do some checking we can check the properties of the
>> >object; that is much more flexible and exhaustive than types.
>>
>> But typeclasses capture the notion of needing certain
>> properties for given types. With static type checking, you
>> gain something over a runtime check since you don't need to
>> keep type information and don't need to deal with runtime errors.
>
>Well, you still do need to deal with SOME 'runtime errors', aka
>(synchronous) 'exceptions', but if the compiler can do some of
>that work for you earlier, it does lessen your runtime burden
>a bit, yes. However, in a Python context I don't see the
>runtime-efficiency as being the paramount consideration; rather,
>"expressive power" seems to me to be more important and Pythonic.

I wasn't really thinking about exceptions since I meant runtime
type error when I wrote runtime error. More laziness on my part.

Neel Krishnaswami

Sep 7, 2000, 8:33:47 PM
to
Just van Rossum <ju...@letterror.com> wrote:
> Tim Peters wrote:
> > [ Generators ]

> >
> > people who like buzzwords <wink>, Knuth calls Icon-style generators
> > "semi-coroutines"; I've sometimes, and with a bit of success, tried
> > to call them "resumable functions" on comp.lang.python, to take
> > away some of the mystery. In a sense, they do for function-local
> > control flow what C "static" does for function-local data:
> > preserves it "across calls/resumptions".
>
> How would you spell resuming a suspended function?

Here's one possible spelling -- this is a generator for the first n
squares, using the relation that n**2 = 1 + 3 + 5 + ... + (2n-1),
and "suspend" as the magic keyword.

>>> def squares(n):
...     i = 1
...     s = 1
...     while i <= n:
...         suspend s
...         i = i + 1
...         s = s + (2*i - 1)
...
>>> for s in squares(5):
...     print s
1
4
9
16
25

One nice thing about generators is that they have exactly the right
semantics to represent iterating over a sequence of values, so the
ordinary for loop can be the spelling of iterating over the values of
a generator.
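[For today's readers: the hypothetical `suspend` keyword above is essentially what Python later adopted as `yield` (PEP 255, Python 2.2). A runnable modern equivalent of the example:]

```python
def squares(n):
    # n**2 == 1 + 3 + 5 + ... + (2*n - 1)
    s = 0
    for i in range(1, n + 1):
        s += 2*i - 1
        yield s       # the spelling "suspend" became "yield"

print(list(squares(5)))  # [1, 4, 9, 16, 25]
```

And, just as Neel hoped, the ordinary for loop is exactly how you iterate over the generated values.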


Neel

Just van Rossum

Sep 8, 2000, 6:29:16 AM
to ne...@alum.mit.edu
I wrote:
> How would you spell resuming a suspended function?

Neel Krishnaswami wrote:
> Here's one possible spelling -- this is a generator for the first n
> squares, using the relation that n**2 = 1 + 3 + 5 + ... + (2n-1),
> and "suspend" as the magic keyword.
>
> >>> def squares(n):
> ...     i = 1
> ...     s = 1
> ...     while i <= n:
> ...         suspend s
> ...         i = i + 1
> ...         s = s + (2*i - 1)
> ...
> >>> for s in squares(5):
> ...     print s
> 1
> 4
> 9
> 16
> 25
>
> One nice thing about generators is that they have exactly the right
> semantics to represent iterating over a sequence of values, so the
> ordinary for loop can be the spelling of iterating over the values of
> a generator.

Looks nice, but how does it work? Does suspend really return a
sequence-like object? Or does the compiler know simply by the fact
that it contains a suspend statement that squares() should return a
sequence-like object? Could your example be rewritten like this,
and still be equivalent:

gen = squares(5)
for s in gen:
    print s

?

What exactly is 'gen' here?

Just

Manuel Gutierrez Algaba

Sep 4, 2000, 5:41:17 PM
to
On 2 Sep 2000 23:43:44 +0200, Carel Fellinger <cfel...@iae.nl> wrote:
>Manuel Gutierrez Algaba <th...@localhost.localdomain> wrote:
>> I see functional as "flow", while OO as "matter". The duality
>> energy(flow)-matter rules in physical world and IT. Types could be
>> the size of quanta, which is the less interesting part of the story.
>
>> As long as a "language let the data flow" (without blocking in
>> a struct or object) the language behaves in a functional manner.
>> Types seems stupid attempts to give shape to a liquid, to a flow.
>> It's the same if the "pipe" is rounded or square, the liquid/flow
>> adapts to it, but you have bad times when joining different pipes.
>
>To me Types don't relate to the pipe, but to the content. And nasty
>things can happen if you just mingle all kinds of liquids:) So to me
>types make sense, even in a functional language.
>

Ok, see this function:

def a(b,c):
    e(b,c,4)
    print b,c

What does this do? It merely takes "fluids" b and c,
directs them to the pipe "e" and applies the print statement to both.

So, what is "a"? It's a fork of flows. It's just a "pattern" or
"scheme" of "flow" redirections!

So types go against "functional".


---
MGA

Greg Ewing

Sep 10, 2000, 9:31:44 PM
to
Neel Krishnaswami wrote:
>
> >>> def squares(n):
> ...
> ... suspend s

Wouldn't "yield" sound better than "suspend"?
It's not s which is being suspended!

--
Greg Ewing, Computer Science Dept, University of Canterbury,
Christchurch, New Zealand
To get my email address, please visit my web page:
http://www.cosc.canterbury.ac.nz/~greg

Alex Martelli

Sep 11, 2000, 4:02:53 AM
to
"Greg Ewing" <s...@my.signature> wrote in message
news:39BC3600...@my.signature...

> Neel Krishnaswami wrote:
> >
> > >>> def squares(n):
> > ...
> > ... suspend s
>
> Wouldn't "yield" sound better than "suspend"?
> It's not s which is being suspended!

True. The "suspend" verb has a precedent in the Icon programming
language, though (and possibly others...?). "suspend_returning" is
what we MEAN (suspend THIS procedure, returning s). Maybe "yield"
is indeed a good verb for this... it's surely more concise:-).

As usual (as I remember, you're not enthusiastic about this:-) we
must take care about breaking perfectly good existing Python code
which uses yield (or suspend) as identifiers. Some languages are
designed to be able to add unlimited amounts of keywords (Perl
comes to mind: that is part of why all identifier uses have to be
"stropped" in Perl -- marked with $ or & or @ etc -- it's so that
"barewords" can be reserved for possible future keywords...); some
aren't, and Python falls in this latter category (and I wouldn't
have it otherwise, personally:-).


Alex

Christian Tismer

Sep 12, 2000, 6:54:45 AM
to Peter Schneider-Kamp, s-t...@uchicago.edu, pytho...@python.org

Peter Schneider-Kamp wrote:
>
> Suchandra Thapa wrote:
...
> > >Sometimes there are even technical drawbacks: if you use Stackless
> > >Python (coroutines), you lose JPython compatibility!
>
> It is not impossible to implement coroutines for JPython. It's just
> hard (or impossible) to implement them efficiently without changing
> the VM beneath.

But that's the point:
JPython does not have an extra VM, but emits java VM
code. That means: Having coroutines for JPython is
the same question as having coroutines in Java: Is
it possible without changing the Java VM?

I'm pretty sure it is impossible by generating Java source code,
but I'm no longer sure that the VM is incapable of doing it.
This code would just have no Java equivalent.
Yet another decompile barrier :-)

ciao - chris

--
Christian Tismer :^) <mailto:tis...@appliedbiometrics.com>
Applied Biometrics GmbH : Have a break! Take a ride on Python's
Kaunstr. 26 : *Starship* http://starship.python.net
14163 Berlin : PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF
where do you want to jump today? http://www.stackless.com

Bristol

Sep 12, 2000, 10:34:54 AM
to
Christian Tismer wrote:
>
> [...] having coroutines in Java: Is

> it possible without changing the Java VM?
>
> I'm pretty sure it is impossible by generating Java source code,
> but I'm no longer sure that the VM is incapable of doing it.
> This code would just have no Java equivalent.

Quite the suspense. I am sure I'm not the only one who would be
delighted just by the principle of it, even before the practice.
A computer-language equivalent to the famous "sealed" literature.
True wizardry ;-)

BTW, I wonder if the time has not come to transpose
Unicode from the case of language alphabets to that of
VM alphabets - e.g. "VM bytecode strings". Imagine a lib
giving access to a bunch of different VMs.

BTW^2 : does someone possess a pointer to a summary
of .NET ?... I have no heart to forage ms.com for it.

- B.

Tim Peters

Sep 12, 2000, 3:44:11 PM
to Christian Tismer, s-t...@uchicago.edu, pytho...@python.org
[Christian Tismer]

> But that's the point:
> JPython does not have an extra VM, but emits java VM
> code. That means: Having coroutines for JPython is
> the same question as having coroutines in Java: Is
> it possible without changing the Java VM?
>
> I'm proetty shure it is impossible by creating java scourcecode,
> but I'm no longer sure that the VM is incapable to do it.
> This code would just have no Java equivalent.
> Yet another decompile barrier :-)

Jcon is to Icon as JPython is to CPython. While Icon's coexpressions are
spottily implemented across "real machines" <wink>, via excruciating
platform-specific assembler to fool the C stack, Jcon simply implements them
on top of Java threads. So the JVM doesn't really enter into it for them
(no more than the PVM enters into my ancient thread-based coroutine pkg for
Python).

you-didn't-say-anything-about-being-usably-fast<wink>-ly y'rs - tim
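[The threads-instead-of-stack-trickery approach Tim alludes to can be sketched in modern Python. This is an illustrative reconstruction, not his actual package; the names `thread_generator` and `emit_squares` are invented here. The producer runs in its own thread and hands values back through a bounded queue:]

```python
import threading
import queue

_SENTINEL = object()

def thread_generator(producer, *args):
    # "Coroutines on top of threads": control alternates between producer
    # and consumer via the blocking put/get on a size-1 queue.
    q = queue.Queue(maxsize=1)

    def run():
        producer(q.put, *args)
        q.put(_SENTINEL)      # signal exhaustion

    threading.Thread(target=run, daemon=True).start()
    while True:
        item = q.get()
        if item is _SENTINEL:
            return
        yield item

def emit_squares(emit, n):
    # Producer written in plain "push" style: it calls emit()
    # instead of suspending itself.
    s = 0
    for i in range(1, n + 1):
        s += 2*i - 1
        emit(s)

print(list(thread_generator(emit_squares, 5)))  # [1, 4, 9, 16, 25]
```

As Tim hints, this works on any VM with threads -- but a full OS context switch per value is exactly why he jokes about it being "usably fast".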

Neel Krishnaswami

Sep 12, 2000, 9:00:49 PM
to
Tim Peters <tim...@email.msn.com> wrote:
>
> Jcon is to Icon as JPython is to CPython. While Icon's
> coexpressions are spottily implemented across "real machines"
> <wink>, via excruciating platform-specific assembler to fool the C
> stack, Jcon simply implements them on top of Java threads. So the
> JVM doesn't really enter into it for them (no more than the PVM
> enters into my ancient thread-based coroutine pkg for Python).
>
> you-didn't-say-anything-about-being-usably-fast<wink>-ly y'rs - tim

If I understand things correctly (which I very well may not), most
Java implementations have "green threads" by default -- ie, threads
that don't map 1-for-1 to OS threads. So I don't see any fundamental
reason why these threads should be any slower than, say, microthreads
for Stackless.

This would imply a different implementation strategy for J-Stackless
vis-a-vis C-Stackless, but so what? Am I misunderstanding something
basic?


Neel

Thomas Wouters

Sep 20, 2000, 3:00:00 AM
to pytho...@python.org
On Mon, Sep 04, 2000 at 08:58:36AM -0400, Alex wrote:

> I got the idea from Robert Heinlein during my long, now somewhat
> embarrassing, adolescent fascination with his stories. Although in the

What, embarrassing? Why in heaven's name? I still get more enjoyment out
of most of Heinlein's stories than out of any other (sci)fi writer, no matter
how good they are (and I do consider Asimov, Herbert, Crichton and Simmons
close seconds -- but not close enough.) Why should I be embarrassed? :)

--
Thomas Wouters <tho...@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


Alex

Sep 20, 2000, 3:00:00 AM
to

> What, embarrassing? Why in heaven's name? I still get more
> enjoyment out of most of Heinlein's stories than out of any other
> (sci)fi writer, no matter how good they are (and I do consider Asimov,
> Herbert, Crichton and Simmons close seconds -- but not close enough.)
> Why should I be embarrassed? :)

Well, there's no accounting for (my) tastes, I guess. Sorry.

Have you read "The Day After Tomorrow?"

Alex.

--
Speak softly but carry a big carrot.


Aahz Maruch

Sep 20, 2000, 3:00:00 AM
to
In article <2000092023...@xs4all.nl>,

Thomas Wouters <tho...@xs4all.net> wrote:
>
>What, embarrassing? Why in heaven's name? I still get more enjoyment out
>of most of Heinlein's stories than out of any other (sci)fi writer, no matter
>how good they are (and I do consider Asimov, Herbert, Crichton and Simmons
>close seconds -- but not close enough.) Why should I be embarrassed? :)

Because you call it "(sci)fi" instead of "SF", "SF/F", or "science
fiction".
--
--- Aahz (Copyright 2000 by aa...@pobox.com)

Androgynous poly kinky vanilla queer het <*> http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Member of the Groucho Marx Fan Club --Aahz
