
Range Operation pre-PEP


Roman Suzi

May 8, 2001, 3:44:20 AM
to pytho...@python.org, tho...@xs4all.net
Hello!

What follows is a half-baked proposal for a new built-in
Python operation. If anybody wants to raise this flag
again and fill the gaps, please do so.

(I have not studied the PEP HOWTO much, so I probably missed
something important.)

I hope the idea of ".." is quite simple: make a special syntactic
form for xrange (range). I am not fluent enough in C to provide a
reference implementation of this feature, so if anybody could do it...

All in all, I think the feature is most wanted/useful by/for beginners.

---------------------------

PEP: ???
Title: Range Operation
Version: $Revision: 1.0 $
Author: r...@onego.ru (Roman Suzi), derived from tho...@xs4all.net (Thomas
Wouters)'s rejected PEP 204
Status:
Type: Standards Track
Python-Version: 2.0
Created:
Post-History:

Introduction

This PEP describes the `range operation' proposal for Python 2.?.
This PEP tracks the status and ownership of this feature, slated
for introduction in Python 2.?. It contains a description of the
feature and outlines changes necessary to support the feature.
This PEP summarizes discussions held in mailing list forums.

The feature is needed in Python because it allows beginners
to learn the for-loop before learning the range/xrange functions,
and it adds clarity to programs because of its similarity
to mathematical notation.

List ranges

Ranges are sequences of numbers of a fixed stepping, often used in
for-loops. The Python for-loop is designed to iterate over a
sequence directly:

>>> l = ['a', 'b', 'c', 'd']
>>> for item in l:
...     print item
a
b
c
d

However, this solution is not always prudent. Firstly, problems
arise when altering the sequence in the body of the for-loop,
resulting in the for-loop skipping items. Secondly, it is not
possible to iterate over, say, every second element of the
sequence. And thirdly, it is sometimes necessary to process an
element based on its index, which is not readily available in the
above construct.

For these instances, and others where a range of numbers is
desired, Python provides the `range' builtin function, which
creates a list of numbers. The `range' function takes three
arguments, `start', `end' and `step'. `start' and `step' are
optional, and default to 0 and 1, respectively.

The `range' function creates a list of numbers, starting at
`start', with a step of `step', up to, but not including `end', so
that `range(10)' produces a list that has exactly 10 items, the
numbers 0 through 9.

Using the `range' function, the above example would look like
this:

>>> for i in range(len(l)):
...     print l[i]
a
b
c
d

Or, to start at the second element of `l' and process only
every second element from then on:

>>> for i in range(1, len(l), 2):
...     print l[i]
b
d

There are disadvantages with this approach:

- Clarity of notation: beginners need to remember the function name
and its difference from xrange, while other languages use a syntactic
construct for the same purpose, not a function.

- what else?

The Proposed Solution

The proposed implementation uses a new syntactic
entity to specify the range operation, as shown below:

>>> for i in 1 .. 5:
...     print i
1
2
3
4
5

Or in extended form to specify a step:

>>> for i in (1, 3) .. 5:
...     print i
1
3
5

The new operation ".." generates an xrange object
by the following rule:

    a1 .. an          is equivalent to   xrange(a1, an+1)

    (a1, a2) .. an    is equivalent to   xrange(a1, an+1, a2-a1)

or another analog of xrange that is applicable to the
type.

N.B. The ".." operation deviates from slice notation: the endpoint
is included. This is in accordance with the mathematical notation
for ranges.

There are no implicit forms of "..", that is, a1 and an must
be present.

To overload the ".." operation in user-defined classes, the
type must implement the method
__range__(self, a1, an, a2=1). This method must return either an
iterator object or a list.
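For illustration only (the __range__ hook and the class below are
hypothetical, not existing Python), a type honouring this protocol
might look roughly like:

    class Weekday:
        def __init__(self, index):
            self.index = index                  # 0 = Monday ... 6 = Sunday
        def __range__(self, a1, an, a2=1):
            # inclusive range of Weekday objects, per the rule above;
            # taking a2 as the step is an assumption of this sketch
            return [Weekday(i) for i in range(a1.index, an.index + 1, a2)]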

In the table of operator priorities, the .. operation must have lower
(or equal) priority than lambda but higher priority than a
comma-separated display.

The implementation of range literals will need backward-compatible
changes to the built-in xrange() function
to allow calling __range__ if the type is not a built-in
object. (Or is it already done?)

Otherwise, ".." is syntactical "sugar" for xrange().

Reference Implementation

".." is a binary operation which, if the first argument is a 2-tuple,
uses xrange() with 3 arguments (as shown above), and if it is
a single value or other object, evaluates xrange with two
arguments (step=1).

TODO. Anybody?
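As a stop-gap, a rough pure-Python emulation of the rule above (an
ordinary function has to stand in for the '..' syntax, so the name and
calling convention here are invented):

    def range_op(left, end):
        # left is either a1 or the tuple (a1, a2); the end is inclusive
        if type(left) is type(()):
            a1, a2 = left
            return xrange(a1, end + 1, a2 - a1)
        return xrange(left, end + 1)

    # range_op((1, 3), 7)  ->  1, 3, 5, 7      like (1, 3) .. 7
    # range_op(1, 5)       ->  1, 2, 3, 4, 5   like 1 .. 5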

Open issues

- I'd like to see the 1, 3 .. n syntax instead of (1, 3) .. n, but
that form would interfere with comma-separated lists.

- How will this couple with iterators?

???

Copyright

This document is based on PEP 204 and is likewise
placed in the public domain.


------------------------

Sincerely yours, Roman A.Suzi
--
- Petrozavodsk - Karelia - Russia - mailto:r...@onego.ru -

Thomas Wouters

May 8, 2001, 10:17:15 AM
to Roman Suzi, pytho...@python.org
On Tue, May 08, 2001 at 11:44:20AM +0400, Roman Suzi wrote:

> What is below is a half-baked proposal for new built-in
> Python operation. If anybody wants to raise this flag
> again and fill gaps, please do so.

> (I have not studied PEP howto much, so probably I missed
> something important.)

Just one thing: you should send it to Barry (ba...@wooz.org) at least if you
want it submitted as a real PEP :)

> I hope the idea of ".." is quite simple: make special syntactic
> form for xrange (range). I am not into C to reference-implement this
> feature, so if anybody could do it...

It's not too hard to do, though the syntax might require some fiddling. (the
'.' is currently eaten by the tokenizer as part of the number.) I could do
it myself, but I'm so busy right now, I think Tim has more time than I do
<0.4 wink>

> The proposed implementation uses new syntactic
> entity to specify range operation, as shown below:

> >>> for i in 1 .. 5:
> ... print i
> 1
> 2
> 3
> 4
> 5

I like, though the endpoint is debatable. Maybe Greg W. wants to do some
usage testing ? :)

> Or in extended form to specify a step:

> >>> for i in (1, 3) .. 5:
> ... print i
> 1
> 3
> 5

I don't like this. If anything, it should be the other way 'round
("1 .. (5, 3)") but even better would be something more obvious, such as

(1 .. 5) % 3

> The new operation ".." generates xrange object
> on by following rule:

I'd suggest it create an Iterator rather than an xrange object. Iterators
are new in the CVS tree, and will be in Python 2.2. Very neat things ;)
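(Just to make the suggestion concrete - a minimal sketch, with an
invented class name and the 2.2-style __iter__/next protocol, positive
steps only:)

    class InclusiveRange:
        def __init__(self, start, stop, step=1):
            self.current = start
            self.stop = stop
            self.step = step
        def __iter__(self):
            return self
        def next(self):
            # stop once the inclusive endpoint has been passed
            if self.current > self.stop:
                raise StopIteration
            value = self.current
            self.current = value + self.step
            return value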


> There are no implicit forms of "..", that is, a1 and an must
> be present.

If .. created an iterator, it could be open-ended instead.

> Open issues

How about ranges of longs ? floats ? strings ? How about mixed types ?

--
Thomas Wouters <tho...@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!

Carlos Ribeiro

May 8, 2001, 6:49:17 PM
to Thomas Wouters, Roman Suzi, pytho...@python.org
At 16:17 08/05/01 +0200, Thomas Wouters wrote:
> > >>> for i in 1 .. 5:
> > ... print i
> > 1
> > 2
> > 3
> > 4
> > 5
>
>I like, though the endpoint is debatable. Maybe Greg W. wants to do some
>usage testing ? :)

I think the syntax is clear enough to avoid discussions like
does-it-start-from-one-or-zero? Let the a..b syntax delimit the range
*including* both the minimum and maximum values. BTW it's pretty close to
the range syntax in other languages (such as Pascal, for enumerated
constants and sets).

> > Or in extended form to specify a step:
>
> > >>> for i in (1, 3) .. 5:
> > ... print i
> > 1
> > 3
> > 5
>
>I don't like this. If anything, it should be the other way 'round
>("1 .. (5, 3)") but even better would be something more obvious, such as
>
>(1 .. 5) % 3

This one is a little bit harder. Where did you get the 3 in the example
above? I could not understand the logic of '(1 .. 5) % 3'. I think that the
range operator should specify the step; something like '1 .. 5 % 2' is
clearer in my opinion. The use of '%' as a 'step' operator is highly
debatable, though. Why not use any of the characters [!@#$&] ? <wink>

(if we keep going this way '%' is going to be the most overloaded operator
in the history of programming languages :-)

> > The new operation ".." generates xrange object
> > on by following rule:
>
>I'd suggest it create an Iterator rather than an xrange object. Iterators
>are new in the CVS tree, and will be in Python 2.2. Very neat things ;)

Agreed. In fact, xrange could be internally substituted by iterators.

As for other types of range constructors: in Pascal, you can use the syntax
above to construct ranges for enumerated types or sets. The catch is that
only scalar types can be used. This makes sense in Pascal, because the same
syntax is also used to specify sets. In Python, similarly, the same syntax
could be used (in the future) to implement set libraries. OTOH, ranges
built with floats may experience problems caused by the limited precision,
so that's a good reason to avoid it. Fixed point decimals don't suffer from
the same problems, though, and are a better candidate.


Carlos Ribeiro

Greg Ewing

May 9, 2001, 1:06:56 AM
to
If you want to get this considered, you'll have to
make sure your PEP explicitly addresses all the reasons
for the rejection of PEP 204. (That might not be easy,
since the rejection notice attached to PEP 204 is
rather sketchy!)

One thing that comes to mind is the interpretation
of the endpoint. The syntax strongly suggests that the
endpoint is inclusive, as you propose. But this is
not the most useful meaning in Python. Most of the
time, range() is used in constructs such as

for i in range(len(mylist)):

which would become

for i in 0..len(mylist)-1:

The need to include the -1 is a nuisance and
potential source of error.

The alternative, making the endpoint exclusive,
would make the meaning of the .. construct
somewhat unintuitive.

--
Greg Ewing, Computer Science Dept, University of Canterbury,
Christchurch, New Zealand
To get my email address, please visit my web page:
http://www.cosc.canterbury.ac.nz/~greg

Thomas Wouters

May 9, 2001, 3:22:48 AM
to Carlos Ribeiro, Roman Suzi, pytho...@python.org
On Tue, May 08, 2001 at 07:49:17PM -0300, Carlos Ribeiro wrote:

> Agreed. In fact, xrange could be internally substituted by iterators.

No, it could not. xrange(1,10)[3] works, iter(range(1,10))[3] does not:

>>> xrange(1,10)[3]
4
>>> iter(range(1,10))[3]
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: unsubscriptable object
>>>

xrange isn't an iterator, it's a 'generator' (or a 'lazy list'). I agree
that xrange should adhere to the iteration protocol, but making it *just* an
iterator isn't enough.
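(To illustrate the distinction - a rough sketch, not anything in the CVS
tree - a 'lazy list' keeps random access via __getitem__, and the classic
for-loop/iter() machinery already works through that:)

    class LazyRange:
        # behaves like xrange(start, stop, step) for a positive step
        def __init__(self, start, stop, step=1):
            self.start, self.stop, self.step = start, stop, step
        def __len__(self):
            return max(0, (self.stop - self.start + self.step - 1) // self.step)
        def __getitem__(self, i):
            if not 0 <= i < len(self):
                raise IndexError("lazy range index out of range")
            return self.start + i * self.step

    # LazyRange(1, 10)[3] -> 4, and 'for i in LazyRange(1, 10)' also works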

Roman Suzi

May 9, 2001, 4:18:19 AM
to pytho...@python.org
On Wed, 9 May 2001, Greg Ewing wrote:

>If you want to get this considered, you'll have to
>make sure your PEP explicitly addresses all the reasons

I read them and found that almost all of them are caused
by the use of slice-like notation.

There are no more reasons to reject ".." than there are
reasons to obsolete range/xrange, because it's just
a more newbie-friendly way to write them.


>for the rejection of PEP 204. (That might not be easy,
>since the rejection notice attached to PEP 204 is
>rather sketchy!)

I understand the rejection very well:

the ":" notation is very overloaded by PEP 204
(it makes too much confusion).
IMHO, that's the main reason for Guido to reject it.


>One thing that comes to mind is the interpretation
>of the endpoint. The syntax strongly suggests that the
>endpoint is inclusive, as you propose. But this is
>not the most useful meaning in Python.

The reason behind Python's incl-excl nature is that
these are intermediate points, needed for slice operations
to be natural, as in:

a[0:0] = [1,2,3]


There is no reason to bring this into the ".." notation, because
it is different from ":" even visually, and it is more natural
to use convenient incl-incl ranges.

>Most of the
>time, range() is used in constructs such as
>
> for i in range(len(mylist)):

These must be eliminated by use of maps, apply, etc -
by functional style.

>which would become
>
> for i in 0..len(mylist)-1:
>
>The need to include the -1 is a nuisance and
>potential source of error.
>
>The alternative, making the endpoint exclusive,
>would make the meaning of the .. construct
>somewhat unintuitive.

I agree.

The fact that there will be TWO ways to write ranges doesn't
bother me, because the reason to include ".." in
Python is to allow beginners to appreciate more naturally
looking code than range(1,101).

I even think there is a need to rename ".." from "range operation"
to something else.

Sincerely yours, Roman Suzi
--
_/ Russia _/ Karelia _/ Petrozavodsk _/ r...@onego.ru _/
_/ Wednesday, May 09, 2001 _/ Powered by Linux RedHat 6.2 _/
_/ "Always remember no matter where you go, there you are." _/


Roman Suzi

May 9, 2001, 4:25:38 AM
to Thomas Wouters, pytho...@python.org
On Wed, 9 May 2001, Thomas Wouters wrote:

>On Tue, May 08, 2001 at 07:49:17PM -0300, Carlos Ribeiro wrote:
>
>> Agreed. In fact, xrange could be internally substituted by iterators.
>
>No, it could not. xrange(1,10)[3] works, iter(range(1,10))[3] does not:
>
>>>> xrange(1,10)[3]
>4
>>>> iter(range(1,10))[3]
>Traceback (most recent call last):
> File "<stdin>", line 1, in ?
>TypeError: unsubscriptable object
>>>>
>
>xrange isn't an iterator, it's a 'generator' (or a 'lazy list'). I agree
>that xrange should adhere to the iteration protocol, but making it *just* an
>iterator isn't enough.

I think ".." would be just another way to write xrange.
If xrange ever becomes an iterator - so will "..".

Or it could be some yet-to-be-implemented irange, which
can be read as "inclusive range" or "iterator-creating range"
-- but this must not stop us from proposing the ".." notation now.

This small feature will make Python's for loops much
friendlier to beginners.

I do not fear

for i in 0..len(mylist)-1

because this is an _explicit_ statement of the fact that mylist
is indexed from 0. If there are errors, they will
cause IndexErrors and not some subtle logical errors.

Alex Martelli

May 9, 2001, 4:59:46 AM
to
"Greg Ewing" <s...@my.signature> wrote in message
news:3AF8D070...@my.signature...
[snip]

> The alternative, making the endpoint exclusive,
> would make the meaning of the .. construct
> somewhat unintuitive.


for x in seq[a:b]:
    print x


versus


for i in a..b:
    print seq[i]


My own intuition would expect the same behavior
from these two loops. I would consider it VERY
unintuitive to have them behave differently!!!

Maybe this should be accepted and reinforced
with a different syntax for the range, e.g.

for i in [a:b]:
    print seq[i]

with the existing brackets-and-colons tokens
in lieu of the new proposed '..' token. Now
the fact that b is excluded, just as in s[a:b],
should perhaps be even more obvious...?


Alex

Duncan Booth

May 9, 2001, 6:22:53 AM
to
Carlos Ribeiro <crib...@mail.inet.com.br> wrote in
<mailman.989362093...@python.org>:

>>(1 .. 5) % 3
>
> This one is a little bit harder. Where did you take the 3 on the
> example above? I could not understand the logic of '(1 .. 5) % 3'. I
> think that the range operator should specify the step; something like
> '1 .. 5 % 2' is clearer in my opinion. The use of '%' as 'step'
> operator is highly debatable, though. Why not use any of the characters
> [!@#$&] ? <wink>
>

Why not add a 'step' method to iterators and sequence types?

    step(self, step, start=0)
        Returns a new iterator or sequence containing every 'step'
        value from index 'start' in the original sequence.

For iterators this would return a new iterator, for sequences it would
return a new sequence. If you want a step iterator from a sequence you just
have to do: iter(sequence).step(n)

So:

>>> range(1, 11).step(2)
[1, 3, 5, 7, 9]
>>> (1..10).step(2)
<some iterator type returning 1, 3, 5, 7, 9 successively>
>>> range(1, 11).step(2, 1)
[2, 4, 6, 8, 10]
>>> lst = range(1, 11)
>>> zip(lst.step(2), lst.step(2, 1))
[(1, 2), (3, 4), (5, 6), (7, 8), (9, 10)]
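(Built-in types can't grow new methods from pure Python, so just to pin
the idea down, a free-standing helper with the same meaning - a sketch,
the name is invented:)

    def step(seq, stride, start=0):
        # every stride-th item of seq, beginning at index start
        return [seq[i] for i in range(start, len(seq), stride)]

    # step(range(1, 11), 2)     ->  [1, 3, 5, 7, 9]
    # step(range(1, 11), 2, 1)  ->  [2, 4, 6, 8, 10]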

--
Duncan Booth dun...@rcp.co.uk
int month(char *p){return(124864/((p[0]+p[1]-p[2]&0x1f)+1)%12)["\5\x8\3"
"\6\7\xb\1\x9\xa\2\0\4"];} // Who said my code was obscure?

Alex Martelli

May 9, 2001, 6:01:16 AM
to
"Roman Suzi" <r...@onego.ru> wrote in message
news:mailman.989396953...@python.org...
...

> I do not fear
>
> for i in 0..len(mylist)-1
>
> because this is _explicit_ writing of the fact mylist
> is indexed from 0. If there will be errors, they will
> cause IndexErrors and not some subtle logical errors.

I disagree. Inclusive-lower-bound, exclusive-upper-bound
is a VERY important and useful idiom, which we should
strive to make as widespread as we possibly can: by the
very fact of being invariably used everywhere lower and
upper bounds are specified, it helps a LOT against
off-by-one errors! I first saw it explained in Koenig's
excellent "C traps and pitfalls" book, by the way.

Python does it pretty well today:

for item in seq[lower:upper]:
    print item

and

for index in range(lower,upper):
    print seq[index]

behave the same way, just as they should. If instead:

for index in lower..upper:
    print seq[index]

did behave differently, I would count that as a very
serious inconsistency in the language. I believe it
would induce many off-by-one errors in apparently
unrelated situations, such as slices, in newbies, as
well as being a nasty tripwire for experienced users.


Alex

Alex Martelli

May 9, 2001, 6:23:56 AM
to
"Roman Suzi" <r...@onego.ru> wrote in message
news:mailman.989396968...@python.org...
...

> >One thing that comes to mind is the interpretation
> >of the endpoint. The syntax strongly suggests that the
> >endpoint is inclusive, as you propose. But this is
> >not the most useful meaning in Python.
>
> The reason under Python incl-excl nature is that
> these are intermediate points, needed for slice-operations
> to be natural like in:
>
> a[0:0] = [1,2,3]

That is far from being the only reason!

> There is no reason to bring this into ".." notation, because
> its different from ":" even visuall and it is more naturally
> to use convenient incl-incl ranges.

"convenience" is probably the excuse brought for the worst
pieces of software design today, although the older one of
"optimization" is still going strong.

Upper-bound-excluded ranges are in fact more convenient, and
help avoid off-by-one errors. Having DIFFERENT behavior
between seq[a:b] and [seq[x] for x in a..b] would, moreover,
be PARTICULARLY horrid.


> >time, range() is used in constructs such as
> >
> > for i in range(len(mylist)):
>
> These must be eliminated by use of maps, apply, etc -
> by functional style.

s/must/may often/. Iterating over a range of indices
will remain an important idiom even when a newbie has
fully digested map and apply. For example, often we
need to operate on items in a sequence depending on
the item that comes immediately before or after. To
say that the approach of zipping the sequence to its
"shifted" copy "must", or even "should", eliminate the
simple and obvious approach of index-loops smacks of
very substantial hubris.

For example, let's say we need the sequence of all
characters that come right after a '!' in string s.


Zip-approach:

def afterbang1(s):
    result = []
    for previous, thisone in zip(s, s[1:]):
        if previous == '!':
            result.append(thisone)
    return result

versus index-approach:

def afterbang2(s):
    result = []
    for i in range(1,len(s)):
        if s[i-1] == '!':
            result.append(s[i])
    return result

versus keep-state approach:

def afterbang3(s):
    result = []
    previous = s[0]
    for c in s[1:]:
        if previous == '!':
            result.append(c)
        previous = c
    return result

versus "map, apply, etc" approach:

def afterbang4(s):
    return map(lambda (x, y): y,
               filter(lambda (x, y): x == '!',
                      map(None, s[:-1], s[1:])))


Personally, I find afterbang2 clearest, and
therefore best. If concision is a goal, the
first two approaches can easily be rewritten
as list comprehensions, of course:

Zip-approach with LC:

def afterbang1_LC(s):
    return [ thisone
             for previous, thisone in zip(s, s[1:])
             if previous == '!' ]

versus index-approach with LC:

def afterbang2_LC(s):
    return [ s[i]
             for i in range(1, len(s))
             if s[i-1] == '!' ]

Again, it's a subtle matter of taste between
these two, but I think the second one is better.
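(A quick sanity check that these variants agree - the sample string is
made up:)

    >>> afterbang2("x!a!!by!c")
    ['a', '!', 'b', 'c']
    >>> afterbang1("x!a!!by!c") == afterbang2_LC("x!a!!by!c")
    1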


I definitely wouldn't mind having a
for i in 1..len(s)
alternative to use in afterbang2 and 2_LC,
but I'd never consider having to use
for i in 1..len(s)+1
as even remotely approaching acceptability.


> >The alternative, making the endpoint exclusive,
> >would make the meaning of the .. construct
> >somewhat unintuitive.
>
> I agree.

I don't.


> The fact, that there will be TWO ways for ranges doesn't
> bother me, because the reason to include ".." into
> Python is allowing beginners to appreciate more naturally
> looking code than range(1,101).

Getting beginners used to inclusive-upper-end idioms
and then having them trip over exclusive-upper-end
ones elsewhere later is NOT doing them any favour.


Alex

Carlos Ribeiro

May 9, 2001, 9:14:08 AM
to Alex Martelli, pytho...@python.org
At 12:23 09/05/01 +0200, Alex Martelli wrote:
>Getting beginners used to inclusive-upper-end idioms
>and then having them trip over exclusive-upper-end
>ones elsewhere later is NOT doing them any favour.

Ok, I'm in brainstorm mode. You're warned :-) Some weird ideas are just
popping out of my mind:

1) Have both ':' and '..' as range operators. ':' excludes the last
element, '..' includes it. This is a very simple rule-of-thumb. A beginner
may be confused, but it's not something hard to learn.

2) Force the construct to be specified inside brackets or parenthesis,
pretty much like list comprehensions.

3) If (2) is approved, then the step operator becomes a natural extension
of the list comprehension syntax:

>>> [1..5 step 2]
[1,3,5]
>>> [1:5 step 2]
[1,3]

It's clear, readable and unambiguous.


Carlos Ribeiro

Alex Martelli

May 9, 2001, 9:15:54 AM
to pytho...@python.org, Carlos Ribeiro
"Carlos Ribeiro" <crib...@mail.inet.com.br> writes:
> At 12:23 09/05/01 +0200, Alex Martelli wrote:
> >Getting beginners used to inclusive-upper-end idioms
> >and then having them trip over exclusive-upper-end
> >ones elsewhere later is NOT doing them any favour.
>
> Ok, I'm in brainstorm mode. You're warned :-) Some weird ideas are just
> popping out of my mind:
>
> 1) Have both ':' and '..' as range operators. ':' excludes the last
> element, '..' includes it. This is a very simple rule-of-thumb. A beginner
> may be confused, but it's not something hard to learn.

I think it might be -- I see no graphical suggestion of the
behavior differences.

> 2) Force the construct to be specified inside brackets or parenthesis,
> pretty much like list comprehensions.

I like the idea of brackets for that.


for i in [a:b]

just looks so neat...:-).

> 3) If (2) is approved, then the step operator becomes a natural extension
> of the list comprehension syntax:
>
> >>> [1..5 step 2]
> [1,3,5]
> >>> [1:5 step 2]
> [1,3]
>
> It's clear, readable and unambiguous.

Sure does. I wonder if 'step' would have to become a keyword here
(presumably a no-go) or if it could be "fudged" the way non-keyword
"as" was added to import and from statements.


Alex

Thomas Heller

May 9, 2001, 10:12:56 AM
to
> > At 12:23 09/05/01 +0200, Alex Martelli wrote:
> > >Getting beginners used to inclusive-upper-end idioms
> > >and then having them trip over exclusive-upper-end
> > >ones elsewhere later is NOT doing them any favour.
> >
> > Ok, I'm in brainstorm mode. You're warned :-) Some weird ideas are just
> > popping out of my mind:
> >
for i in [0:10):
    print i

for i in [0:10]:
    print i

Thomas

Roman Suzi

May 9, 2001, 10:30:48 AM
to pytho...@python.org
On Wed, 9 May 2001, Carlos Ribeiro wrote:

>At 12:23 09/05/01 +0200, Alex Martelli wrote:
>>Getting beginners used to inclusive-upper-end idioms
>>and then having them trip over exclusive-upper-end
>>ones elsewhere later is NOT doing them any favour.
>
>Ok, I'm in brainstorm mode. You're warned :-) Some weird ideas are just
>popping out of my mind:
>

>1) Have both ':' and '..' as range operators. ':' excludes the last
>element, '..' includes it. This is a very simple rule-of-thumb. A beginner
>may be confused, but it's not something hard to learn.

I agree.

>
>2) Force the construct to be specified inside brackets or parenthesis,
>pretty much like list comprehensions.

This is what rejected PEP 204 is about...

>3) If (2) is approved, then the step operator becomes a natural extension
>of the list comprehension syntax:
>
> >>> [1..5 step 2]
>[1,3,5]
> >>> [1:5 step 2]
>[1,3]
>
>It's clear, readable and unambiguous.

Well, I do not pretend (1, 2) .. 5 is the only way.
But "a .. b step c" is too... verbose.

>Carlos Ribeiro

Bjorn Pettersen

May 9, 2001, 11:52:46 AM
to pytho...@python.org
> From: Thomas Heller [mailto:thomas...@ion-tof.com]

>
> > > At 12:23 09/05/01 +0200, Alex Martelli wrote:
> > > >Getting beginners used to inclusive-upper-end idioms
> > > >and then having them trip over exclusive-upper-end
> > > >ones elsewhere later is NOT doing them any favour.
> > >
> > > Ok, I'm in brainstorm mode. You're warned :-) Some weird
> ideas are just
> > > popping out of my mind:
> > >
> for i in [0:10):
> print i
>
> for i in [0:10]:
> print i
>
> Thomas

If all we want to do is get rid of

for i in range(len(seq)):
    ...

why not overload range so it can take a sequence directly:

for i in range(seq):
    ...

If we really want range literals (which I don't see as particularly useful
outside this application, although I'm happy to be proven wrong <wink>), we
can't use x:y since Guido has already nixed it, and we shouldn't use x..y
since the established meaning interferes with common usage, leading to
unappealing constructs like 0..len(seq)-1. We could certainly come up with
other syntax, e.g. x -> y meaning from x up to, but not including, y:

for i in 0 -> len(seq):
    ...

I still favor overloading range though...
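(A rough sketch of that overloading as a plain wrapper - shadowing the
real built-in here is only for illustration:)

    import __builtin__

    def range(x, *rest):
        # a single argument that has a length means: iterate over its indices
        if not rest:
            try:
                return __builtin__.range(len(x))
            except TypeError:
                pass
        return __builtin__.range(x, *rest)

    # range(['a', 'b', 'c'])  ->  [0, 1, 2]
    # range(2, 5)             ->  [2, 3, 4]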

-- bjorn

Terry Reedy

May 9, 2001, 12:01:13 PM
to

"Thomas Heller" <thomas...@ion-tof.com> wrote in message
news:9dbj8j$h83n5$1...@ID-59885.news.dfncis.de...

> > > Ok, I'm in brainstorm mode. You're warned :-) Some weird ideas are
just
> > > popping out of my mind:
> > >
> for i in [0:10):
> print i
>
> for i in [0:10]:
> print i

Since this is standard math notation for ranges, with : substituted for ,
to avoid a syntax conflict, I like it the best of all proposals in this
thread. A step value, if present, could go either in the middle or at the end.

Carlos Ribeiro

May 9, 2001, 12:05:38 PM
to Bjorn Pettersen, pytho...@python.org
At 09:52 09/05/01 -0600, Bjorn Pettersen wrote:
>If we really want range literals (which I don't see as particularly useful
>outside this application although I'm happy to be proven wrong <wink>) ...

I would really like to perform set operations in Python. Some good
libraries already provide support for this (kjBuckets comes to mind). The
a..b syntax is a nice way to specify arbitrary set constants, such as in:

if some_number in [1..3,7..9]:
    do_something()

There are several uses for constructs like the one above. I keep thinking
that '..' is clearly inclusive, so why not give it a try?
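(With today's syntax the same test only needs a small helper - a sketch,
name invented, both ends inclusive:)

    def in_ranges(n, ranges):
        # ranges is a list of (low, high) pairs
        for low, high in ranges:
            if low <= n <= high:
                return 1
        return 0

    # in_ranges(8, [(1, 3), (7, 9)])  ->  1
    # in_ranges(5, [(1, 3), (7, 9)])  ->  0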


Carlos Ribeiro

Rainer Deyke

May 9, 2001, 1:27:49 PM
to
"Bjorn Pettersen" <BPett...@NAREX.com> wrote in message
news:mailman.989423654...@python.org...

> for i in 0 -> len(seq):
> ...
>
> I still favor overloading range though...

I favor leaving Python as-is, since this entire thread is about solving a
non-problem. If you want a shorter way of writing 'range(len(x))', you can
always write your own function. If there is really a need for such a
function in '__builtin__', I would prefer that it have a new name. How
about 'indexes'?


--
Rainer Deyke (ro...@rainerdeyke.com)
Shareware computer games - http://rainerdeyke.com
"In ihren Reihen zu stehen heisst unter Feinden zu kaempfen" - Abigor


Roman Suzi

May 9, 2001, 12:55:11 PM
to pytho...@python.org

Please, recall the main reason for the new feature:

- to allow beginners to learn loops BEFORE they learn
lists and functions (which are needed now to explain
the for-loop).

Even recursion could be explained without lists in Python,
so why do for-loops need this prerequisite?

It is not too expensive to write range(a,b,c) or
something similar, but CP4E needs a cleaner way to do for loops.

If not the ".." syntax, then any other, but with EXPLICIT
mention of the including/excluding nature of the
ranges.

eg

for i from 1 to 10

for i from 1 before 11

for j from 1 to 100 step 2


-- these could be also used in list comprehensions consistently:

[i for i from 1 to 10]

*

But our discussion showed that there is probably not
much need to push ".." any more... :-(
because I see ".." as an only-for-novices feature
and we (who are discussing it) are no novices to judge...

Andrew Dalke

May 9, 2001, 2:20:56 PM
to

Carlos Ribeiro wrote:
>The a..b syntax is a nice way to specify arbitrary set
>constants, such as in:
>
>if some_number in [1..3,7..9]:
> do_something()

Something like that can already be done without introducing
new syntax, as in

>>> import types
>>> class Set:
...     def __getitem__(self, terms):
...         data = []
...         for term in terms:
...             if type(term) == types.SliceType:
...                 for x in range(term.start, term.stop):
...                     data.append(x)
...             else:
...                 data.append(term)
...         return data
...
>>>
>>> Set = Set()
>>> Set[1:3, 7:9]
[1, 2, 7, 8]
>>> Set[1, 4:10]
[1, 4, 5, 6, 7, 8, 9]
>>>

Andrew
da...@acm.org

Douglas Alan

May 9, 2001, 2:25:40 PM
to
Why don't we just nip this creeping-featurism in the bud and just say
"no" to a new range syntax.

(And if it were to exist, which it shouldn't, Alex is right that it
*has* to be a semi-closed interval. Anything else would be an
abomination.)

|>oug

Fredrik Lundh

May 9, 2001, 3:23:46 PM
to
Roman Suzi wrote:
> Please, recall the main reason for the new feature:
>
> - to allow beginners learn loops BEFORE they learn
> lists and functions (which are needed now to explain
> for-loop).
>
> Even recursion could be explained without lists in Python,
> why for-loops need this prerequisite?

done much python training lately, or are you just making
things up as you go?

(in my experience, you don't have to understand much
about functions and lists to learn how to *use* range in
simple for-in loops...)

Cheers /F


Denys Duchier

May 9, 2001, 3:51:03 PM
to
"Alex Martelli" <ale...@yahoo.com> writes:

> I like the ideas of brackets for that.
> for i in [a:b]
> just looks so neat...:-).

That was the subject of PEP 204 (Range Literals) which was rejected.
I don't know why. (the stated open issues should simply all be
answered with: "no magic!").

It seems to me that it would be rather neat to extend the list
comprehension idea to be able to create an iterator rather than a
list. I shudder at the idea of proposing syntax for that, but using
[| ... |] instead of [ ... ] might work.

Now excuse me while I run for cover... :-)

--
Dr. Denys Duchier Denys....@ps.uni-sb.de
Forschungsbereich Programmiersysteme (Programming Systems Lab)
Universitaet des Saarlandes, Geb. 45 http://www.ps.uni-sb.de/~duchier
Postfach 15 11 50 Phone: +49 681 302 5618
66041 Saarbruecken, Germany Fax: +49 681 302 5615

Walter Moreira

May 9, 2001, 3:10:39 PM
to pytho...@python.org
On Wed, May 09, 2001 at 09:52:46AM -0600, Bjorn Pettersen wrote:
> If all we want to do is get rid of
>
> for i in range(len(seq)):
> ...
>
> why not overload range so it can take a sequence directly:
>
> for i in range(seq):
> ...

Isn't the following recipe (I think it is from Alex) more Pythonic?

for i, a in Indexed(seq):
    ...

where Indexed is just:

class Indexed:
    def __init__(self, seq):
        self.seq = seq
    def __getitem__(self, i):
        return i, self.seq[i]
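(For the record, what the recipe does - just an illustrative session;
the loop stops when indexing past the end raises IndexError:)

    >>> for i, a in Indexed(['x', 'y', 'z']):
    ...     print i, a
    0 x
    1 y
    2 z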

Do we really need syntactic changes ('..') and semantic changes as well?
Don't things like '..' make the language more difficult to learn for beginners?

The reference manual gets bigger and bigger with each addition of this
kind. Other changes, like the elimination of the type/class differences, make
the manual smaller. I think these are the things to be focused on.

Regards:
Walter

--
--------------
Walter Moreira <> Centro de Matematica <> Universidad de la Republica
email: wal...@cmat.edu.uy <> HomePage: http://www.cmat.edu.uy/~walterm
+-----------------------------------------------------
/OD\_ | Contrary to popular belief, Unix is user friendly.
O o . |_o_o_) | It just happens to be very selective about who its
| friends are. -- Kyle Hearn
--+--

Fredrik Lundh

May 9, 2001, 4:43:16 PM
to
Douglas Alan wrote:
> Why don't we just nip this creeping-featurism in the bud and just say
> "no" to a new range syntax.

like we did with those pesky macros, you mean?

Cheers /F


Douglas Alan

May 9, 2001, 5:56:27 PM
to
"Fredrik Lundh" <fre...@pythonware.com> writes:

> > Why don't we just nip this creeping-featurism in the bud and just say
> > "no" to a new range syntax.

> like we did with those pesky macros, you mean?

Yup, except that I think "we" should think more carefully about the
difference between powerful abstraction mechanisms and
creeping-featurism. The former helps prevent the latter.

(Besides, I never lobbied for Python to include macros. I only stated
that it would be a better language if it had them. But sometimes
better is worse.)

|>oug

Douglas Alan

May 9, 2001, 7:32:49 PM
to
Speaking of creeping-featurism, how come we have list comprehension,
but not tuple comprehension? Seems inconsistent to me.

|>oug

Ben Hutchings

May 9, 2001, 8:27:22 PM
to
Douglas Alan <nes...@mit.edu> writes:

> Speaking of creeping-featurism, how come we have list comprehension,
> but not tuple comprehension? Seems inconsistent to me.

A list contains a variable number of homogeneous values, e.g. the
lines of a file. Lists are like arrays in other languages. A tuple
contains a fixed number of heterogeneous values where each element has
a distinct meaning e.g. (year, month, day) for a date. Tuples are
like data structures or product types in other languages, except that
their types and fields are nameless. Comprehensions work with a
variable number of homogeneous values, so they produce lists.

--
Any opinions expressed are my own and not necessarily those of Roundpoint.

Joshua Marshall

May 9, 2001, 9:00:51 PM
to
Ben Hutchings <ben.hu...@roundpoint.com> wrote:

> A list contains a variable number of homogeneous values, e.g. the
> lines of a file.

Homogeneousness isn't required, though I'd expect it's how they're
often used.

Delaney, Timothy

May 9, 2001, 8:20:00 PM
to Roman Suzi, pytho...@python.org
> ":" notation is very overused by the PEP 204.
> (makes to much confusion).
> IMHO, it's the main reason for Guido to reject it.
>
>
> >One thing that comes to mind is the interpretation
> >of the endpoint. The syntax strongly suggests that the
> >endpoint is inclusive, as you propose. But this is
> >not the most useful meaning in Python.
>
> The reason under Python incl-excl nature is that
> these are intermediate points, needed for slice-operations
> to be natural like in:
>
> a[0:0] = [1,2,3]
>
>
> There is no reason to bring this into ".." notation, because
> its different from ":" even visuall and it is more naturally
> to use convenient incl-incl ranges.

Well, there would be a simple way to make ranges that conformed to both. Use
the mathematical notation for describing inclusive and exclusive bounds
(although it would be using '..' instead of ',').

If I remember correctly ...

[0..10] - range is 0 to 10
[0..10) - range is 0 to 9
(0..10] - range is 1 to 10
(0..10) - range is 1 to 9

There - everyone is happy, and an unadorned 0..10 is an error.

This has the advantage of looking more like a sequence as well.

It also has the disadvantage that it may not be immediately obvious to
someone what the range is - '(' and '[' aren't *that* dissimilar when right
near each other.

The other consideration is how this would work with steps. Perhaps
overloading ':' here would work, since 0..10 unadorned would be invalid ...

[0..10:3] - range is 0 to 10, step 3
(10..0:-2] - range is 9 to 0, step -2
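(Purely to pin down the semantics under discussion - a throwaway helper
with invented names, positive steps only:)

    def interval(lower, upper, incl_lower=1, incl_upper=1, step=1):
        # [a..b] -> incl_lower=1, incl_upper=1; [a..b) -> incl_upper=0; etc.
        start = lower + (not incl_lower)
        stop = upper + incl_upper
        return range(start, stop, step)

    # interval(0, 10)                ->  0 through 10, like [0..10]
    # interval(0, 10, incl_upper=0)  ->  0 through 9,  like [0..10)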

Anyway, I'm sure there are lots of objections to this, but I'd like to hear
them :)

Tim Delaney

Douglas Alan

May 9, 2001, 11:33:26 PM
to
Ben Hutchings <ben.hu...@roundpoint.com> writes:

> Douglas Alan <nes...@mit.edu> writes:

> > Speaking of creeping-featurism, how come we have list comprehension,
> > but not tuple comprehension? Seems inconsistent to me.

> A list contains a variable number of homogeneous values, e.g. the
> lines of a file. Lists are like arrays in other languages. A tuple
> contains a fixed number of heterogeneous values where each element
> has a distinct meaning e.g. (year, month, day) for a date.

I think you have some other language in mind, rather than Python. In
Python, the only semantic difference between a list and a tuple is
that a tuple is immutable.

> Tuples are like data structures or product types in other languages,
> except that their types and fields are nameless. Comprehensions
> work with a variable number of homogeneous values, so they produce
> lists.

filter() on a tuple returns a tuple. The length of the tuple cannot
be known in advance. map(), on the other hand, always returns a list.
But map can take multiple sequence arguments, so it wouldn't always be
obvious to know which type of sequence to duplicate. I suppose the
same is true for comprehension. But, it seems to me that Python could
allow

(x*2 for x in t)

to make a tuple as easily as it allows

[x*2 for x in l]

to make a list.

|>oug

Greg Ewing

May 10, 2001, 12:11:38 AM
to
Douglas Alan wrote:
>
> Speaking of creeping-featurism, how come we have list comprehension,
> but not tuple comprehension?

Who says we don't?

tuple([2*i for i in [1,2,3]])

:-)

--
Greg Ewing, Computer Science Dept, University of Canterbury,
Christchurch, New Zealand
To get my email address, please visit my web page:
http://www.cosc.canterbury.ac.nz/~greg

Greg Ewing

May 10, 2001, 12:15:14 AM
to
Alex Martelli wrote:
>
> Maybe this should be accepted and reinforced
> with a different syntax for the range, e.g.
>
> for i in [a:b]:

But that's exactly what PEP 204 proposed, and
Guido rejected it.

(The PEPs are supposed to record such decisions
so that these discussions don't keep recurring.
But that only works if people read the PEPs. :-)
Maybe we need a bot that posts all the rejected
PEPs to c.l.py every month? :-) :-)

Greg Ewing

May 10, 2001, 12:37:18 AM
to
Roman Suzi wrote:
>
> Please, recall the main reason for the new feature:
>
> - to allow beginners learn loops BEFORE they learn
> lists and functions

I don't see anything wrong with learning about
lists before for-loops. Playing with lists is
much more fun than just playing with numbers
(which non-mathematically-inclined people may
find boring).

And functions don't come into it at all,
either way.

> But our discussion showed that there is probably no
> much need in pushing ".." any more... :-(
> because I see ".." as an only-for-novice feature
> and we (who discuss) are no novices to judge...

More importantly, I don't think there should be
any syntax that is there *just* for novices. A
piece of syntax needs to earn its keep much better
than that before it's worth keeping around.

Roman Suzi

May 9, 2001, 2:20:29 PM
to pytho...@python.org
On Wed, 9 May 2001, Rainer Deyke wrote:

>"Bjorn Pettersen" <BPett...@NAREX.com> wrote in message
>news:mailman.989423654...@python.org...
>> for i in 0 -> len(seq):
>> ...
>>
>> I still favor overloading range though...
>
>I favor leaving Python as-is, since this entire thread is about solving a
>non-problem.

The problem is not to make range(len(seq)) shorter, but
to make the for-loop easier to learn:

right now FUNCTIONS and LISTS are needed to understand
the for-loop. (Even recursion is "easier", with just FUNCTIONS
to know ;-)

So, I started the thread to:

1. Discuss whether the raised problem is "no problem"
2. If 1. is not true, discuss a syntactic solution.

We seem to be drifting to no. 2 without questioning no. 1.

>If you want a shorter way of writing 'range(len(x))', you can
>always write your own function. If there is really a need of such a
>function in '__builtin__', I would prefer that it have a new name. How
>about 'indexes'?

Sincerely yours, Roman Suzi

Steve Holden

May 9, 2001, 10:00:09 PM
to
"Delaney, Timothy" <tdel...@avaya.com> wrote ...
[ ... ]

> Well, there would be a simple way to make ranges that conformed to both.
> Use the mathematical notation for describing inclusive and exclusive bounds
> (although it would be using '..' instead of ',').
>
> If I remember correctly ...
>
> [0..10] - range is 0 to 10
> [0..10) - range is 0 to 9
> (0..10] - range is 1 to 10
> (0..10) - range is 1 to 9
>
Correct me if I'm wrong, but don't the concepts of open and closed intervals
belong to analysis and not discrete mathematics -- in other words, they
apply to infinite sets but not finite ones. An open interval (x,y) is the
set of all numbers strictly greater than x and strictly less than y.
Dedekind will be turning in his grave...

This proposed notation (and you aren't the first to propose it) would be
bizarrely misunderstood by newcomers. I certainly don't want to have to
explain that the same object can be described in four different ways, and
suggest when [1..9] is more appropriate than (0..10) is more appropriate
than [1..10) is more appropriate than (0..9].

So, why have four representations for the same thing?

there-should-be-one-obvious-way-to-do-it-ly y'rs - steve

Steve Holden

May 10, 2001, 1:29:04 AM
to
"Roman Suzi" <r...@onego.ru> wrote in ...

> On Wed, 9 May 2001, Rainer Deyke wrote:
>
[ ... ]

>
> right now FUNCTIONS and LISTS are needed to understand
> for loop. (Even recursion is "easier" with just FUNCTIONS
> to know ;-)
>
I would argue you could teach for loops by using existing lists without any
function usage at all, as in (for example):

for dir in sys.path:
    print dir

regards
STeve


Roman Suzi

May 10, 2001, 1:31:31 AM
to pytho...@python.org
On Wed, 9 May 2001, Fredrik Lundh wrote:

> Roman Suzi wrote:
> > Please, recall the main reason for the new feature:
> >
> > - to allow beginners learn loops BEFORE they learn

> > lists and functions (which are needed now to explain
> > for-loop).
> >
> > Even recursion could be explained without lists in Python,
> > why for-loops need this prerequisite?
>
> done much python training lately, or are you just making

Yes. I taught programming classes a year ago.
That is where I had some trouble with for-loops.

> things up as you go?
>
> (in my experience, you don't have to understand much
> about functions and lists to learn how to *use* range in
> simple for-in loops...)
>
> Cheers /F


Sincerely yours, Roman A.Suzi
--
- Petrozavodsk - Karelia - Russia - mailto:r...@onego.ru -

Delaney, Timothy

May 10, 2001, 2:07:56 AM
to pytho...@python.org
Firstly, I don't personally feel there is a need for a new notation, but it
is fun to work out what would work best were we indeed to have one ...

> > Well, there would be a simple way to make ranges that
> > conformed to both. Use
> > the mathematical notation for describing inclusive and
> > exclusive bounds
> > (although it would be using '..' instead of ',').
> >
> > If I remember correctly ...
> >
> > [0..10] - range is 0 to 10
> > [0..10) - range is 0 to 9
> > (0..10] - range is 1 to 10
> > (0..10) - range is 1 to 9
> >
> Correct me if I'm wrong, but don't the concepts of open and
> closed intervals
> belong to analysis and not discrete mathematics -- in other
> words, they
> apply to infinite sets but not finite ones. An open interval
> (x,y) is the
> set of all numbers strictly greater than x and strictly less than y.
> Dedekind will be turning in his grave...

Well, there are *no* infinite sets in computing, so it's not generally
applicable here (the largest long integer in Python for example must be
bound by the amount of memory available to the Python process).

That may well be true (it's been a few years since I've done any formal maths
work/study), however in the integer system (which is an infinite set) 9 is the
greatest integer strictly less than 10.

It is perfectly valid to define the set you are using - this notation
would only be valid for the set of integers (including long integers,
naturally).

> This proposed notation (and you aren't the first to propose
> it) would be
> bizarrely misunderstood by newcomers. I certainly don't want
> to have to
> explain that the same object can be described in four
> different ways, and
> suggest when [1..9] is more appropriate than (0..10) is more
> appropriate
> than [1..10) is more appropriate than (0..9].
>
> So, why have four representations for the same thing?

Again, I personally agree ... you may have noticed my comment "There -
everyone is happy" ... ;) OTOH, I think inclusive lower-bound, optional
inclusive upper bound i.e. [] and [) would be useful. Having optional
exclusive lower bound was just a bit of fun.

Also, remember that it is quite possible to create a range counting down ...
in some cases I could see exclusive lower bound, inclusive upper bound being
useful (except that's really ex upper, in lower, but in reverse ...).

Tim Delaney

Fredrik Lundh

May 10, 2001, 3:11:44 AM
to
Roman Suzi wrote:
>
> > done much python training lately, or are you just making
>
> Yes. I taught programming classes a year ago.
> That is where I had some trouble with for-loops.

and you don't think you can adjust the training material to cover
lists and functions briefly (from a use, don't create perspective)
before going into details about what for-in really does?

adding inconsistencies to the language to work around a problem
someone once had when explaining the current situation doesn't
sound that appealing...

Cheers /F


Alex Martelli

May 10, 2001, 4:48:04 AM
to
"Roman Suzi" <r...@onego.ru> wrote in message
news:mailman.989472800...@python.org...
...

> > > Even recursion could be explained without lists in Python,
> > > why for-loops need this prerequisite?
> >
> > done much python training lately, or are you just making
>
> Yes. I taught programming classes a year ago.
> That is where I had some trouble with for-loops.

Personally, I've found that for-loops can easily
be taught without having introduced lists yet,
because they work on ANY sequence -- and a datatype
you will surely have introduced VERY early in any
Python course is "string"! A string is a sequence.
This is of limited (though non-null) usefulness in
most Python use, but it SURE comes in handy when
teaching Python to a totally-raw beginner...:

"""
For example, let's print the vowels, only, from
the string the user has introduced. This is easy
because we can look at any string one character
at a time with the for statement.

astring = raw_input("Please enter a string: ")
print "Vowels:",
nvowels = 0
for c in astring:
    if c in 'aeiouAEIOU':
        print c,
        nvowels = nvowels + 1
print
print nvowels, "vowels in total"
if nvowels:
    print "The last vowel was", c
"""

then you can go on to explain break, for/else,
and so on, all based on this simple toy example
about looping character by character on a string.

I do think one probably needs to have introduced
raw_input, strings, integers, assignment,
print, and if, before loops can fruitfully be
presented to a raw beginner. But having some
nice special syntax for range literals would not
alter that, it seems to me. Lists, if you wish,
can still come later, anyway.


Alex

Alex Martelli

May 10, 2001, 6:32:52 AM
to
"Douglas Alan" <nes...@mit.edu> wrote in message
news:lc1ypym...@gaffa.mit.edu...
...

> (Besides, I never lobbied for Python to include macros. I only stated
> that it would be a better language if it had them. But sometimes
> better is worse.)

Didn't you write "Python should have procedural macros like Lisp."
(on 2001-04-16 14:20:05 PST -- google doesn't make it easy to give
other and more precise message identification)?

"Lobbying" no doubt connotes much more and better organized effort
than just posting netnews messages. But you didn't just state that
Python would be a better language if it had macros: you did write it
*SHOULD* have them (nor did I ever read anything from you to the
tone of "sorry, I was wrong, it SHOULDN'T have them" -- lots that
might be interpreted as attempts at backing off without actually
doing so, but never any apology or retraction).


Just to clarify by analogy and example: I opine, for example, that
Python would be a (marginally) better language if it didn't have
`expression` as a shorthand for repr(expression) -- it's one extra
(little) piece of syntactic baggage which IMHO is not "carrying
its weight". But from this opinion does not follow that Python
*shouldn't* have `expression` -- that would be quite a stronger
assertion (personally, there is exactly one construct in Python
today of which I feel I could assert Python "shouldn't have it").


Alex

Roman Suzi

May 10, 2001, 7:38:38 AM
to Fredrik Lundh, pytho...@python.org

There is no demand for syntactically supported ranges?

Roman Suzi

May 10, 2001, 7:41:21 AM
to Alex Martelli, pytho...@python.org
On Thu, 10 May 2001, Alex Martelli wrote:

> "Roman Suzi" <r...@onego.ru> wrote in message
> news:mailman.989472800...@python.org...
> ...
> > > > Even recursion could be explained without lists in Python,
> > > > why for-loops need this prerequisite?
> > >
> > > done much python training lately, or are you just making
> >
> > Yes. I taught programming classes a year ago.
> > That is where I had some trouble with for-loops.
>
> Personally, I've found that for-loops can easily
> be taught without having introduced lists yet,
> because they work on ANY sequence -- and a datatype
> you will surely have introduced VERY early in any
> Python course is "string"! A string is a sequence.
> This is of limited (though non-null) usefulness in
> most Python use, but it SURE comes in handy when
> teaching Python to a totally-raw beginner...:

Thank you Alex! This idea had not occurred to me
(about strings as sequences instead of lists).

Carlos Ribeiro

May 10, 2001, 8:11:10 AM
to Roman Suzi, Fredrik Lundh, pytho...@python.org
At 15:38 10/05/01 +0400, Roman Suzi wrote:
>There is no demand for syntactically supported ranges?

Yes, I think it would be a nice addition, and I'm sure we're not alone -
many people on the group have contributed ideas on this topic. While I
understand that there are some issues with the proposal - including or
excluding the upper limit, using braces to delimit the range, or how to
specify the step operator - I believe that the concerns are mostly in the
reaction-to-change field.

So, if we can propose a new syntax that:

- is clear for both novices and experienced Python programmers
- does not break anyone's code
- is consistent with related features (slicing and list comprehensions)

... there will be no real reason not to make it.


p.s. I was one of the guys that wrote about 'creeping featurism' a few
months ago. I don't think that it is a good idea, and many PEPs were really
weird. Now, this seems to be a very simple addition to me, because we can
meet the three criteria above. I'm happy to be proved wrong, but please,
let's talk about facts, not emotions, ok?


Carlos Ribeiro

Rainer Deyke

May 10, 2001, 11:19:26 AM
to
"Carlos Ribeiro" <crib...@mail.inet.com.br> wrote in message
news:mailman.989496615...@python.org...

> So, if we can propose a new syntax that:
>
> - is clear for both novices and experienced Python programmers
> - does not break anyone's code
> - is consistent with related features (slicing and list comprehensions)
>
> ... there will be no real reason not to make it.

Only if you don't value simplicity. If this is the case, why are you using
Python at all?


--
Rainer Deyke (ro...@rainerdeyke.com)
Shareware computer games - http://rainerdeyke.com
"In ihren Reihen zu stehen heisst unter Feinden zu kaempfen" - Abigor


Rainer Deyke

May 10, 2001, 11:19:25 AM
to
"Roman Suzi" <r...@onego.ru> wrote in message
news:mailman.989469107...@python.org...

> On Wed, 9 May 2001, Rainer Deyke wrote:
> The problem is not to make range(len(seq)) shorter, but
> to make for loop easier to learn:
>
> right now FUNCTIONS and LISTS are needed to understand
> for loop. (Even recursion is "easier" with just FUNCTIONS
> to know ;-)

Making a language more complicated makes it harder to learn, not easier.
You don't need to teach 'def' before teaching the use of built-in functions.
Training wheels do not belong in a language which is used for non-learning
purposes.

Roman Suzi

May 10, 2001, 11:05:12 AM
to Carlos Ribeiro, pytho...@python.org
On Thu, 10 May 2001, Carlos Ribeiro wrote:

>At 15:38 10/05/01 +0400, Roman Suzi wrote:
>>There is no demand for syntactically supported ranges?
>
>Yes, I think it would be a nice addition, and I'm sure we're not alone -
>many people on the group contributed with ideas on this topic. While I
>understand that there are some issues with the proposal - including or
>excluding the upper limit, using braces to delimit the range, or how to
>specify the step operator - I believe that the concerns are mostly in the
>reaction-to-change field.
>

>So, if we can propose a new syntax that:
>
>- is clear for both novices and experienced Python programmers

Then it must be asymmetrical, like "->", to remind us that the
right end is exclusive.

>- does not break anyone's code

It's up to the syntax. I hope a solution from [;.:=!@#$%^&|\-_~]{2}
will be found, not some word.

>- is consistent with related features (slicing and list comprehensions)

Yes, this will happen automatically if it is an incl-excl range
and does not interfere with [, :, ]

>... there will be no real reason not to make it.
>

>let's talk about facts, not emotions, ok?

Sincerely yours, Roman Suzi


--
_/ Russia _/ Karelia _/ Petrozavodsk _/ r...@onego.ru _/

_/ Thursday, May 10, 2001 _/ Powered by Linux RedHat 6.2 _/
_/ "COFFEE.EXE Missing - Insert Cup and Press Any Key" _/


Douglas Alan

May 10, 2001, 12:28:04 PM
to
"Alex Martelli" <ale...@yahoo.com> writes:

> "Douglas Alan" <nes...@mit.edu> wrote: in message

>> (Besides, I never lobbied for Python to include macros. I only stated


>> that it would be a better language if it had them. But sometimes
>> better is worse.)

> Didn't you write "Python should have procedural macros like Lisp."
> (on 2001-04-16 14:20:05 PST

Yes, and Joe's Diner should have higher ceilings and my brother should
be shorter.

I shouldn't think that expressing how something should be should
necessarily be interpreted as recommending that something that already
exists should be changed. On the other hand, you should probably
infer correctly what I would say about what should be included in
Python 3k.

|>oug

Ben Hutchings

May 10, 2001, 3:27:50 PM
to
Douglas Alan <nes...@mit.edu> writes:

> Ben Hutchings <ben.hu...@roundpoint.com> writes:
>
> > Douglas Alan <nes...@mit.edu> writes:
>
> > > Speaking of creeping-featurism, how come we have list comprehension,
> > > but not tuple comprehension? Seems inconsistent to me.
>
> > A list contains a variable number of homogeneous values, e.g. the
> > lines of a file. Lists are like arrays in other languages. A tuple
> > contains a fixed number of heterogeneous values where each element
> > has a distinct meaning e.g. (year, month, day) for a date.
>
> I think you have some other language in mind, rather than Python. In
> Python, the only semantic difference between a list and a tuple is
> that a tuple is immutable.

Lists also have count() and index() methods, which tuples do not.
Doesn't this suggest a difference in intended purpose to you?

> > Tuples are like data structures or product types in other languages,
> > except that their types and fields are nameless. Comprehensions
> > work with a variable number of homogeneous values, so they produce
> > lists.
>
> filter() on a tuple returns a tuple. The length of the tuple cannot
> be known in advance.

<snip>

Presumably filter() only requires its argument to be a sequence.

Ben Hutchings

unread,
May 10, 2001, 3:31:15 PM5/10/01
to
Joshua Marshall <jmar...@mathworks.com> writes:

I did not, of course, mean that the values must have the same concrete
type. What I meant was that lists are normally used in a way that
allows each position in a given list to hold a value from the same set
of types/classes; there are no special rules for particular positions
in the list.

Andrew Maizels

unread,
May 10, 2001, 5:52:57 PM5/10/01
to
Alex Martelli wrote:

> I disagree. Inclusive-lower-bound, exclusive-upper-bound
> is a VERY important and useful idiom, which we should
> strive to make as widespread as we possibly can: by the
> very fact of being invariably used everywhere lower and
> upper bounds are specified, it helps a LOT against
> off-by-one errors! I first saw it explained in Koenig's
> excellent "C traps and pitfalls" book, by the way.

I can see where consistency is important, but why does Python do the
inclusive-lower-bound, exclusive-upper-bound thing?

From my point of view, there's one correct behaviour for range(1,5), and
it's NOT [1, 2, 3, 4]!

Andrew.
--
There's only one game in town.
You can't win.
You can't break even.
You can't quit the game. -- The four laws of thermodynamics.

Douglas Alan

unread,
May 10, 2001, 6:17:36 PM5/10/01
to
Ben Hutchings <ben.hu...@roundpoint.com> writes:

> Lists also have count() and index() methods, which tuples do not.
> Doesn't this suggest a difference in intended purpose to you?

I also see that tuples support "in" and "+" and "*" and slicing and
len() and min() and max(). In light of this, it seems that the fact
that they are missing count() and index() should only be seen as an
unfortunate oversight.

>>> Tuples are like data structures or product types in other
>>> languages, except that their types and fields are nameless.
>>> Comprehensions work with a variable number of homogeneous values,
>>> so they produce lists.

> > filter() on a tuple returns a tuple. The length of the tuple cannot
> > be known in advance.

> Presumably filter() only requires its argument to be a sequence.

Yes, if its sequence argument is a tuple, then it returns a tuple. If
you were right, you should never want to run filter on a tuple, and if
you were so foolish to use filter() on a tuple, it should return a
list to show you the errors of your ways. Or actually, tuples
shouldn't be sequences at all, since you should never treat a tuple as
a sequence, rather than just as a record.

|>oug

Ben Hutchings

unread,
May 10, 2001, 7:11:52 PM5/10/01
to
Douglas Alan <nes...@mit.edu> writes:

> Ben Hutchings <ben.hu...@roundpoint.com> writes:
>
> > Lists also have count() and index() methods, which tuples do not.
> > Doesn't this suggest a difference in intended purpose to you?
>
> I also see that tuples support "in" and "+" and "*" and slicing and
> len() and min() and max().

OK, you're right.

> In light of this, it seems that the fact that they are missing
> count() and index() should only been seen as an unfortunate
> oversight.

Based on the above - yes.

I'd be happier if tuples only supported len() though - because
this large overlap in tuple and list capabilities means that it's
less obvious which is a good choice for some particular purpose.

> >>> Tuples are like data structures or product types in other
> >>> languages, except that their types and fields are nameless.
> >>> Comprehensions work with a variable number of homogeneous values,
> >>> so they produce lists.
>
> > > filter() on a tuple returns a tuple. The length of the tuple cannot
> > > be known in advance.
>
> > Presumably filter() only requires its argument to be a sequence.
>
> Yes, if its sequence argument is a tuple, then it returns a tuple. If
> you were right, you should never want to run filter on a tuple, and if
> you were so foolish to use filter() on a tuple, it should return a
> list to show you the errors of your ways.

Well I think it should do that. Oh well.

> Or actually, tuples shouldn't be sequences at all, since you should
> never treat a tuple as a sequence, rather than just as a record.

It's necessary to treat tuples generically sometimes, so they need to
be sequences.

Tim Peters

unread,
May 10, 2001, 9:23:47 PM5/10/01
to pytho...@python.org
[Ben Hutchings]

>> Lists also have count() and index() methods, which tuples do not.
>> Doesn't this suggest a difference in intended purpose to you?

[Douglas Alan]


> I also see that tuples support "in" and "+" and "*" and slicing and
> len() and min() and max().

As explained in the docs, all sequence types are supposed to support those
specific operations (along w/ a few others).

> In light of this, it seems that the fact that they are missing
> count() and index() should only be seen as an unfortunate oversight.

But those aren't part of the sequence protocol (or "interface", if you like)
defined by the docs. Sequence types may or may not choose to implement them.
Tuples choose not to, and Guido has firmly rejected at least one working
patch that sought to add those methods to tuples. Ben is channeling Guido's
intent accurately here!

>>> filter() on a tuple returns a tuple. The length of the tuple cannot
>>> be known in advance.

>> Presumably filter() only requires its argument to be a sequence.

> Yes, if its sequence argument is a tuple, then it returns a tuple.

Unfortunate but true. Strings are special-cased by filter() too. Nothing
else is -- and a maze of inconsistent special cases is un-Pythonic on the
face of it.
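For the record, here is the inconsistency at the prompt (Python 2.x):

    >>> filter(None, [0, 1, 2])
    [1, 2]
    >>> filter(None, (0, 1, 2))
    (1, 2)
    >>> filter(None, "abc")
    'abc'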

> If you were right, you should never want to run filter on a tuple,

Indeed, I never have <0.1 wink>.

> and if you were so foolish to use filter() on a tuple, it should
> return a list to show you the errors of your ways.

That would be better. The filter() implementation was contributed code, and
snuck in along with map(), reduce() and lambda. If Guido had it to do over
again, I doubt he'd accept that patch; they're his answer to the question
"what do you like least about Python?".

> Or actually, tuples shouldn't be sequences at all, since you should
> never treat a tuple as a sequence, rather than just as a record.

As above, the Language Reference manual's idea of what "a sequence" is
doesn't match yours; so while you're entitled to say tuples shouldn't be
considered to be Douglas-sequences, claiming they're Python-sequences isn't
really open to debate.

hmm-now-i'm-wondering-what-this-msg-was-about<wink>-ly y'rs - tim


Douglas Alan

unread,
May 10, 2001, 11:21:58 PM5/10/01
to
"Tim Peters" <tim...@home.com> writes:

> As explained in the docs, all sequence types are supposed to support those
> specific operations (along w/ a few others).

Well, then, if tuples are sequences, then one shouldn't say that it is
inappropriate to treat them as sequences. If it is inappropriate to
perform sequence operations on them, then they shouldn't be
sequences. If a tuple is a sequence, then all sequence operations
should be considered appropriate.

Personally, I consider a tuple to be a full-fledged sequence. It's an
immutable sequence, while a list is a mutable sequence. Mutability is
the important distinction between a list and a tuple.
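A quick illustration of that distinction at the prompt:

    >>> l = [1, 2, 3]
    >>> l[0] = 99        # fine: lists are mutable
    >>> t = (1, 2, 3)
    >>> t[0] = 99        # raises TypeError: tuples are immutable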

> > In light of this, it seems that the fact that they are missing
> > count() and index() should only been seen as an unfortunate
> > oversight.

> But those aren't part of the sequence protocol (or "interface", if
> you like) defined by the docs. Sequence types may or may not choose
> to implement them. Tuples choose not to, and Guido has firmly
> rejected at least one working patch that sought to add those methods
> to tuples. Ben is channeling Guido's intent accurately here!

And what's that intent? Ben claimed that tuples should only be used
when you know how many elements there will be, and that lists should
only be used for homogeneous data of unknown length. The more
straight forward conclusion would be that tuples should be used when
you want an immutable sequence and that lists should be used when you
want a mutable sequence.

> > Or actually, tuples shouldn't be sequences at all, since you should
> > never treat a tuple as a sequence, rather than just as a record.

> As above, the Language Reference manual's idea of what "a sequence"
> is doesn't match yours; so while you're entitled to say tuples
> shouldn't be considered to be Douglas-sequences, claiming they're
> Python-sequences isn't really open to debate.

You misunderstand.  I think that a tuple *should* be considered a
sequence.  A full-fledged one.  Not one restricted to data of length
known in advance. Not restricted to heterogeneous data. A tuple
should be considered a full-fledged immutable sequence of arbitrary
Python objects. No more, no less.

|>oug

Aahz Maruch

unread,
May 10, 2001, 11:45:08 PM5/10/01
to
In article <3AFB0DB9...@one.net.au>,

Andrew Maizels <and...@one.net.au> wrote:
>
>I can see where consistency is important, but why does Python do the
>inclusive-lower-bound, exclusive-upper-bound thing?

Because it makes loops more likely to work. E.g.:


l = [1,4,9,16]
for i in range(len(l)):
    print l[i]
--
--- Aahz <*> (Copyright 2001 by aa...@pobox.com)

Androgynous poly kinky vanilla queer het Pythonista http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

"Everyone is entitled to an *informed* opinion." --Harlan Ellison

Rainer Deyke

unread,
May 11, 2001, 12:26:25 AM5/11/01
to
"Douglas Alan" <nes...@mit.edu> wrote in message
news:lcd79gy...@gaffa.mit.edu...

> And what's that intent? Ben claimed that tuples should only be used
> when you know how many elements there will be, and that lists should
> only be used for homogeneous data of unknown length.

And what a ridiculous claim that was. Consider:

def f(*args):
    pass

'args' is a tuple. A tuple is used precisely because you *don't* know in
advance how many elements there are. Otherwise you would list the elements
individually. The contents of 'args' are almost certainly homogeneous.

And what about the case where you want to use a homogeneous sequence, length
unknown, of arbitrary Python objects as a dictionary key?
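A small sketch (names made up) of both points:

    def f(*args):
        print type(args), args      # excess arguments arrive as a tuple

    f(1, 2, 3)                      # prints: <type 'tuple'> (1, 2, 3)

    cache = {}
    key = tuple([x * x for x in range(4)])   # homogeneous, length not fixed
    cache[key] = 'result'           # a tuple works as a dict key; a list would not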

Greg Ewing

unread,
May 11, 2001, 1:32:24 AM5/11/01
to
Roman Suzi wrote:
>
> I taught programming classes a year ago.
> That is where I had some trouble with for-loops.

Maybe that's because you were trying to teach
Python for-loops as though they were Pascal for-loops,
which are designed to generate a sequence of
integers.

That's not what Python for-loops are designed for.
The purpose of a Python for-loop is to do something
for each element of a sequence, and I believe that's
how they should be presented.

It follows from this that there's no reason to
introduce for-loops until you've introduced some
kind of sequence, e.g. a list of strings.

If you want to introduce a looping structure for
some other purpose before then, don't use a for
loop, use a while loop.
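A sketch of that teaching order (made-up data):

    for name in ['ann', 'bob', 'cid']:     # for-loop over a real sequence
        print name

    i = 1                                  # pure counting: a while-loop
    while i <= 5:
        print i
        i = i + 1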

Roman Suzi

unread,
May 10, 2001, 3:07:54 PM5/10/01
to pytho...@python.org
On Thu, 10 May 2001, Rainer Deyke wrote:

>"Carlos Ribeiro" <crib...@mail.inet.com.br> wrote in message
>news:mailman.989496615...@python.org...
>> So, if we can propose a new syntax that:
>>
>> - is clear for both novices and experienced Python programmers
>> - does not break anyone's code
>> - is consistent with related features (slicing and list comprehensions)
>>
>> ... there will be no real reason not to make it.
>
>Only if you don't value simplicity. If this is the case, why are you using
>Python at all?

I do not see how syntactical ranges could spoil simplicity!

Christian Tanzer

unread,
May 11, 2001, 2:28:43 AM5/11/01
to pytho...@python.org

Roman Suzi <r...@onego.ru> wrote :

> I do not see how syntactical ranges could spoil simplicity!

That's exactly the problem <wink>.

--
Christian Tanzer tan...@swing.co.at
Glasauergasse 32 Tel: +43 1 876 62 36
A-1130 Vienna, Austria Fax: +43 1 877 66 92


Fredrik Lundh

unread,
May 11, 2001, 3:28:09 AM5/11/01
to
Douglas Alan wrote:
>
> I also see that tuples support "in" and "+" and "*" and slicing and
> len() and min() and max(). In light of this, it seems that the fact
> that they are missing count() and index() should only be seen as an
> unfortunate oversight.

more likely, it's because you don't know your python well enough.

the core sequence interface (PySequenceMethods) includes the
following methods:

__len__
__add__ (concat)
__mul__ (repeat)
__getitem__, __setitem__, __delitem__
__getslice__, __setslice__, __delslice__

like most other operations that work on sequences, min() and max()
only require you to have working __len__ and __getitem__ methods.

none of the methods provided by list objects are part of the core
sequence protocol.
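a minimal sketch (made-up class) of just that core protocol in action:

    class Squares:
        def __init__(self, n):
            self.n = n
        def __len__(self):
            return self.n
        def __getitem__(self, i):
            if i >= self.n:
                raise IndexError
            return i * i

    s = Squares(5)
    print len(s), min(s), max(s)    # prints: 5 0 16 -- no list methods needed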

> Yes, if its sequence argument is a tuple, then it returns a tuple

which probably is an unfortunate oversight, since it returns lists
for all other sequences:

>>> from UserList import UserList
>>> L = UserList((1, 2, 3, 4))
>>> filter(None, L)
[1, 2, 3, 4]
>>> type(filter(None, L))
<type 'list'>

> If you were right, you should never want to run filter on a tuple, and if
> you were so foolish to use filter() on a tuple, it should return a
> list to show you the errors of your ways. Or actually, tuples
> shouldn't be sequences at all, since you should never treat a tuple as
> a sequence, rather than just as a record.

since ben is right, I'm glad you're not in charge of python's design.

Cheers /F


Fredrik Lundh

unread,
May 11, 2001, 3:28:09 AM5/11/01
to
Rainer Deyke wrote:

> And what a ridiculous claim that was.

you're attacking douglas's misrepresentation of ben's original claim
with an argument that has nothing whatsoever to do with what ben
was talking about. that's pretty ridiculous too, if you're asking me.

> The contents of 'args' are almost certainly homogeneous.

erm. since when are the arguments to a function almost certainly
homogenous?

ben's right. tuples are records, lists are containers. anyone who
has written (or studied) real-life python programs knows that.

Cheers /F


Douglas Alan

unread,
May 11, 2001, 4:39:32 AM5/11/01
to
"Fredrik Lundh" <fre...@pythonware.com> writes:

> Rainer Deyke wrote:

> > And what a ridiculous claim that was.

> you're attacking douglas's misrepresentation of ben's original claim
> with an argument that has nothing whatsoever to do with what ben was
> talking about. that's pretty ridiculous too, if you're asking me.

In what way did I misrepresent Ben's original claim? I did not.

>> The contents of 'args' are almost certainly homogeneous.

> erm. since when are the arguments to a function almost certainly
> homogenous?

Excess arguments are typically (but not always) homogenous.

> ben's right. tuples are records, lists are containers. anyone who
> has written (or studied) real-life python programs knows that.

That's quite the claim. Are you saying that my Python programs are
not "real-life" Python programs?

I refer you to the Python reference manual which says quite plainly
that a tuple is an immutable sequence of arbitrary Python objects. It
says nothing about them being "records" or not suitable as
"containers".

|>oug

Andrew Maizels

unread,
May 11, 2001, 4:46:24 AM5/11/01
to
Aahz Maruch wrote:
>
> In article <3AFB0DB9...@one.net.au>,
> Andrew Maizels <and...@one.net.au> wrote:
> >
> >I can see where consistency is important, but why does Python do the
> >inclusive-lower-bound, exclusive-upper-bound thing?
>
> Because it makes loops more likely to work. E.g.:
>
> l = [1,4,9,16]
> for i in range(len(l)):
> print l[i]

OK, next question: why does Python start indexes at zero? Your example
would work perfectly well if the range returned [1, 2, 3, 4] and the
list was indexed starting with 1. Basically, range(4) has to produce a
list of four items, we just differ on what those items should be.

I'm not just being difficult; I'm trying to design my own language, and
this is one of the things I have different to Python. If I've missed
something where the Python way is superior, then I might want to change
my mind.

The way I have things at the moment, in Pixy (my language), array
indexes default to start at 1, but can be declared to any range (like
Pascal). Strings are indexed starting with 1 as well. Is there a good
reason not to do this?

Fredrik Lundh

unread,
May 11, 2001, 5:19:01 AM5/11/01
to
douglas alan wrote:

> In what way did I misrepresent Ben's original claim? I did not.

you wrote:

> Ben claimed that tuples should only be used when you know
> how many elements there will be, and that lists should only be
> used for homogeneous data of unknown length.

ben didn't use "should only". you made that up.

to get closer to what he really said (original post plus clarification),
you can change "should only" to "should", in the IETF sense [1].

> > ben's right. tuples are records, lists are containers. anyone who
> > has written (or studied) real-life python programs knows that.
>
> That's quite the claim. Are you saying that my Python programs are
> not "real-life" Python programs?

no, I'm saying that anyone who has written or studied real-life
python programs knows that tuples don't work well as containers,
and lists don't work well if you try to use them as records. that's
a fact, not an opinion.

as for your programs -- well, I haven't seen a single line of code
from you. if you spent more time writing and sharing code, and
less time trolling (etc)

Cheers /F

1) http://www.ietf.org/rfc/rfc2119.txt

"SHOULD. This word, or the adjective "RECOMMENDED", mean that
there may exist valid reasons in particular circumstances to ignore a
particular item, but the full implications must be understood and
carefully weighed before choosing a different course."

(my english dictionary doesn't disagree with the IETF, of course)

(btw, is it just me, or did someone just hack IETF.org? who's
Steve Nash?)


Douglas Alan

unread,
May 11, 2001, 5:40:50 AM5/11/01
to
"Fredrik Lundh" <fre...@pythonware.com> writes:

>> In what way did I misrepresent Ben's original claim? I did not.

> you wrote:

>> Ben claimed that tuples should only be used when you know how many
>> elements there will be, and that lists should only be used for
>> homogeneous data of unknown length.

> ben didn't use "should only". you made that up.

I did not. It is clear from the context that this is what he
intended. This is exactly what he said and I maintain that my
paraphrasing above is accurate:

    A list contains a variable number of homogeneous values, e.g. the
    lines of a file.  Lists are like arrays in other languages.  A
    tuple contains a fixed number of heterogeneous values where each
    element has a distinct meaning e.g. (year, month, day) for a
    date.  Tuples are like data structures or product types in other
    languages, except that their types and fields are nameless.
    Comprehensions work with a variable number of homogeneous values,
    so they produce lists.

>>> ben's right. tuples are records, lists are containers. anyone
>>> who has written (or studied) real-life python programs knows that.

> > That's quite the claim. Are you saying that my Python programs are
> > not "real-life" Python programs?

> no, I'm saying that anyone who has written or studied real-life
> python programs knows that tuples don't work well as containers, and
> lists don't work well if you try to use them as records. that's a
> fact, not an opinion.

A fact, eh? And would you be so kind as to explain this fact. Why
don't tuples work well as immutable containers and why don't lists
work well as mutable records? And if there were such a thing as tuple
comprehension, what terrible things would result?

> as for your programs -- well, I haven't seen a single line of code
> from you. if you spent more time writing and sharing code, and less
> time trolling (etc)

When have I ever "trolled"? I have only expressed my opinion, which
is reasonably well-informed, and, I think, worth considering.

|>oug

Fredrik Lundh

unread,
May 11, 2001, 6:51:54 AM5/11/01
to
Douglas Alan keeps on babbling:

> >> In what way did I misrepresent Ben's original claim? I did not.
>
> > you wrote:
>
> >> Ben claimed that tuples should only be used when you know how many
> >> elements there will be, and that lists should only be used for
> >> homogeneous data of unknown length.
>
> > ben didn't use "should only". you made that up.
>
> I did not. It is clear from the context that this is what he
> intended.

did you read his followup message? you're saying that "lists
are normally used" is the same thing as "lists should only be
used"? that's weird.

(but alright, his followup was in response to someone who
basically agreed with him, so maybe you didn't bother to read
further down that thread...)

maybe we should ask him: Ben, did you really mean that Python
won't let you abuse lists and tuples? or were you just describing
a well-known best practice?

> > no, I'm saying that anyone who has written or studied real-life
> > python programs knows that tuples don't work well as containers, and
> > lists don't work well if you try to use them as records. that's a
> > fact, not an opinion.
>
> A fact, eh? And would you be so kind as to explain this fact.

if you're so experienced in Python as you say you are, maybe you
can prove me wrong, by posting some code where you use lists and
tuples the "wrong" way, and show that it won't work better if done
the other way around.

(where "better" means better from a software engineering perspective:
performance, memory usage, code size, readability, support for different
python versions, scalability, maintainability, etc).

> When have I ever "trolled"? I have only expressed my opinion, which
> is reasonably well-informed, and, I think, worth considering.

good for you.

Cheers /F


Roman Suzi

unread,
May 11, 2001, 6:04:02 AM5/11/01
to pytho...@python.org
On 11 May 2001, Douglas Alan wrote:

Hello!

It seems our discussion is getting anti-productive. As I have not heard
strong arguments for the proposal of syntactically supported ranges, I
understand that the feature is not desirable and it's better to leave
things where they are, because nobody (including me) has any idea of a
good enough syntax.

> "Fredrik Lundh" <fre...@pythonware.com> writes:
> >> In what way did I misrepresent Ben's original claim? I did not.

> > as for your programs -- well, I haven't seen a single line of code
> > from you. if you spent more time writing and sharing code, and less
> > time trolling (etc)
>
> When have I ever "trolled"? I have only expressed my opinion, which
> is reasonably well-informed, and, I think, worth considering.

Sincerely yours, Roman A.Suzi

Alex Martelli

unread,
May 11, 2001, 9:25:45 AM5/11/01
to
"Andrew Maizels" <and...@one.net.au> wrote in message
news:3AFB0DB9...@one.net.au...

> Alex Martelli wrote:
>
> > I disagree. Inclusive-lower-bound, exclusive-upper-bound
> > is a VERY important and useful idiom, which we should
...

> I can see where consistency is important, but why does Python do the
> inclusive-lower-bound, exclusive-upper-bound thing?

The inclusive-lower, exclusive-upper idiom, also called
the "half-open range" idiom, is intrinsically simpler
than closed-range alternatives. Which is why it has
"taken over" in so many languages and frameworks: its
simplicity has it all over the _apparent_ (surface!)
"convenience" of closed-range.

As an example of where this idiom has "taken over": in
C, it is specified that, for an array x of N elements,
while there is no element x[N] it IS correct to _take
the address_ of x[N]. This must be allowed by any
conformant implementation, and is included specifically
for the purpose of letting the programmer easily specify
an exclusive upper bound and use half-open ranges.

The C++ standard library takes this further, of course,
since just about _every_ standard algorithm consistently
takes as arguments inclusive-start and exclusive-end
iterators. But it IS just an extension of what Koenig
explains in his book "C Traps and Pitfalls", about how
always using half-open ranges helps avoid off-by-one
errors that otherwise tend to plague programs.


> From my point of view, there's one correct behaviour for range(1,5), and
> it's NOT [1, 2, 3, 4]!

It is a common phenomenon for human beings to consider
"correct", "natural", or "right", something that is not
optimal from the viewpoint of underlying mathematical
reality. Fortunately, human beings are much more
flexible than maths tend to be:-). My favourite example
is how, for millennia, the concept of "zero" as a number
was hotly rejected by the keenest thinkers -- Parmenides
and Zeno made a particular career of this ("Non-being
is not, and cannot be!"), but it was a widespread idea.
As a result, no decent way to represent numbers and do
operations with them was available -- addition was high
mathematics, only wizards could multiply large numbers.

Then, some Indian thinker (maybe under Buddhist influence,
given the likely timeframe) came up with this weird "zero"
idea -- the Arabs took it westwards -- Italian merchants
took it from the Arabs, added double-entry bookkeeping --
and modern arithmetic and accounting were born. A few
centuries later, we typically accept "zero" as a pretty
natural concept (traces of "number" as meaning "more than
zero" remain, but they're more or less like fossil traces,
in natural language, in how we number our years, &c).

This is not completely an aside... the ease of denoting
EMPTY ranges is part of what makes a half-open interval
simpler and handier as a general concept. range(x,x) is
a nice way to denote empty ranges, more than range(x,x-1)
would be if range was closed rather than half-open.

More generally, range(x,y) has max(y-x, 0) items -- a
VERY nice property, also easily expressed as y-x if y>=x.
If range was closed, the number of items would be y-x+1,
and that '+1' is the spoiler... cause of deucedly many
off-by-one errors.

Similarly, range(x,y)+range(y,z) = range(x,z), ANOTHER
very nice property. Having such simple axioms means the
behavior of 'range' can easily be symbolically expanded,
and therefore also mentally understood, in more complex
cases. Programs are easier to understand and prove
correct, by having fewer '+1' and '-1' strewn around,
when half-open ranges are the usage norm.
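Both properties are easy to check at the prompt (1 meaning "true" here):

    >>> range(2, 7) + range(7, 11) == range(2, 11)
    1
    >>> len(range(3, 3)), len(range(3, 9))
    (0, 6)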


One approach to help a programmer get used to half
open ranges: with half-open ranges, we only need ONE
basic concept, that of _FIRST_ index of some sequence.
When we say we process range(x,y), we say that x is
the first-index of what we process, and y is the
first-index of what we do NOT process. With closed
ranges, we would need TWO basic concepts, that of
first AND that of _LAST_. One basic concept is easier
to handle (mentally, symbolically, or in whatever way)
than two of them.

Say you want "the first N elements starting from the
x-th one". Isn't range(x,x+N) an easier way to
express this than range(x,x+N-1)? Or say that L
is the length of the sequence and you want the _last_
N elements -- range(L-N,L) is, again, an easy and
natural way to get their indices, isn't it?

Say we get a list of strings that are to be catenated
into one big string: as well as catenating them, we
also want to record the places in the big string where
the strings can be located, so we can still recover
the original small-strings within the big one. A very
natural approach:

def catWithTracking(manystrings):
    start_at = [0]
    tot_len_so_far = 0
    for astring in manystrings:
        tot_len_so_far += len(astring)
        start_at.append(tot_len_so_far)
    return ''.join(manystrings), start_at

Nice, right? Not a +1 nor a -1 anywhere, and just
one list of indices being computed and returned. OK,
now, how do we get one small string back from the
big one and the list of indices?

def recoverOne(i, bigstring, indices):
    return bigstring[indices[i]:indices[i+1]]

this is the slice-approach of course -- but then it
IS nice that slice and range() behave just the same
way, isn't it? Both half-open ranges... of course.
If we need the INDICES inside the big string of the
i-th original small string, it will be just as we
do for slicing it: range(indices[i],indices[i+1]).
No irregularity, everything smooth as pie.
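Putting the two together (using the definitions above):

    >>> big, starts = catWithTracking(['spam', 'and', 'eggs'])
    >>> big, starts
    ('spamandeggs', [0, 4, 7, 11])
    >>> recoverOne(1, big, starts)
    'and'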

Now, of course, we can make even more arguments for
half-open ranges in _slicing_ specifically. But
they _are_ just "more of the same" wrt the general
arguments for half-open ranges as a general idiom.


Try designing your own libraries so that, whenever
a start and a stop index are specified, the half
open range approach is used... see if both the use
and implementation thereof doesn't profit...


Alex

Mikael Olofsson

unread,
May 11, 2001, 10:34:47 AM5/11/01
to
Andrew,

On 11-May-2001 Andrew Maizels wrote:
> OK, next question: why does Python start indexes at zero? Your example
> would work perfectly well if the range returned [1, 2, 3, 4] and the
> list was indexed starting with 1. Basically, range(4) has to produce a
> list of four items, we just differ on what those items should be.
>
> I'm not just being difficult; I'm trying to design my own language, and
> this is one of the things I have different to Python. If I've missed
> something where the Python way is superior, then I might want to change
> my mind.
>
> The way I have things at the moment, in Pixy (my language), array
> indexes default to start at 1, but can be declared to any range (like
> Pascal). Strings are indexed starting with 1 as well. Is there a good
> reason not to do this?

What about

range(a,b)+range(b,c)==range(a,c)

for reasonable values of the integers a, b, and c?

/Mikael

-----------------------------------------------------------------------
E-Mail: Mikael Olofsson <mik...@isy.liu.se>
WWW: http://www.dtr.isy.liu.se/dtr/staff/mikael
Phone: +46 - (0)13 - 28 1343
Telefax: +46 - (0)13 - 28 1339
Date: 11-May-2001
Time: 16:30:54

/"\
\ / ASCII Ribbon Campaign
X Against HTML Mail
/ \

This message was sent by XF-Mail.
-----------------------------------------------------------------------

Roman Suzi

unread,
May 11, 2001, 10:55:10 AM5/11/01
to Andrew Maizels, pytho...@python.org
On Fri, 11 May 2001, Andrew Maizels wrote:

> Aahz Maruch wrote:
> >
> > In article <3AFB0DB9...@one.net.au>,
> > Andrew Maizels <and...@one.net.au> wrote:
> > >
> > >I can see where consistency is important, but why does Python do the
> > >inclusive-lower-bound, exclusive-upper-bound thing?
> >
> > Because it makes loops more likely to work. E.g.:
> >
> > l = [1,4,9,16]
> > for i in range(len(l)):
> > print l[i]
>
> OK, next question: why does Python start indexes at zero? Your example
> would work perfectly well if the range returned [1, 2, 3, 4] and the
> list was indexed starting with 1. Basically, range(4) has to produce a
> list of four items, we just differ on what those items should be.
>
> I'm not just being difficult; I'm trying to design my own language, and
> this is one of the things I have different to Python. If I've missed
> something where the Python way is superior, then I might want to change
> my mind.
>
> The way I have things at the moment, in Pixy (my language), array
> indexes default to start at 1, but can be declared to any range (like
> Pascal). Strings are indexed starting with 1 as well. Is there a good
> reason not to do this?

That is because numbering starts at point 0 and chars are BETWEEN
points - this way inserts are consistent, because you can specify
any insertion range:

 A B C D E F
0 1 2 3 4 5 6

- so, you can insert something from 1 to 3 ("BC"),
or from 1 to 1 (""):

>>> a = list("ABCDEF")
>>> a[1:3] = list("QQQ")
>>> print a
['A', 'Q', 'Q', 'Q', 'D', 'E', 'F']
>>> a = list("ABCDEF")
>>> a[1:1] = list("QQQ")
>>> print a
['A', 'Q', 'Q', 'Q', 'B', 'C', 'D', 'E', 'F']
>>>

So, this notation is quite convenient, isn't it?

As an exercise, try to do the same with "convenient"
notation:

A B C D E F
1 2 3 4 5 6

or

A B C D E F
0 1 2 3 4 5

> Andrew.

Rainer Deyke

unread,
May 11, 2001, 12:27:02 PM5/11/01
to
"Fredrik Lundh" <fre...@pythonware.com> wrote in message
news:dwMK6.9774$sk3.2...@newsb.telia.net...

> Rainer Deyke wrote:
> > The contents of 'args' are almost certainly homogeneous.
>
> erm. since when are the arguments to a function almost certainly
> homogenous?

I can't think of a single function in the standard library which takes an
unbounded number of heterogenous arguments. I can think of several which
take an unbounded number of homogeneous arguments. 'min' and 'max' for
example.

Fredrik Lundh

unread,
May 11, 2001, 1:07:51 PM5/11/01
to
Rainer Deyke wrote:

> > erm. since when are the arguments to a function almost certainly
> > homogenous?
>
> I can't think of a single function in the standard library which takes an
> unbounded number of heterogenous arguments. I can think of several which
> take an unbounded number of homogeneous arguments. 'min' and 'max' for
> example.

a quick grep through the 2.0 standard library reveals about
a hundred uses of the *args syntax. a quick look at those
didn't bring up a single function that uses *args to read an
unbounded number of heterogenous arguments.

(I'm sure you can find one or two if you look hard enough,
but that doesn't make it "almost certain" that *args implies
a homogenous sequence of arguments ;-)

Cheers /F


Grant Edwards

unread,
May 11, 2001, 2:05:51 PM5/11/01
to
In article <3AFB0DB9...@one.net.au>, Andrew Maizels wrote:

>I can see where consistency is important, but why does Python do the
>inclusive-lower-bound, exclusive-upper-bound thing?

one reason is so that for 0 <= n < len(a),

a[:n]+a[n:] == a

That property makes processing sections of lists much simpler.

--
Grant Edwards grante Yow! I want to dress you
at up as TALLULAH BANKHEAD and
visi.com cover you with VASELINE and
WHEAT THINS...

Tim Peters

unread,
May 11, 2001, 2:54:38 PM5/11/01
to pytho...@python.org
[Rainer Deyke]

> I can't think of a single function in the standard library which
> takes an unbounded number of heterogenous arguments. I can think
> of several which take an unbounded number of homogeneous arguments.
> 'min' and 'max' for example.

?

>>> import sys
>>> min(1, 1.0, [1], "one", {1 :1}, sys.stdin)
1
>>>

I suppose those are homogeneous arguments in the sense that they're all
objects, but if that's the degenerate sense we're using then I don't know
what heterogeneous could mean.


Tim Peters

unread,
May 11, 2001, 2:49:39 PM5/11/01
to pytho...@python.org
[Fredrik Lundh]
> ...

> like most other operations that work on sequences, min() and max()
> only require you to have working __len__ and __getitem__ methods.

You won't need either in 2.2: it will be enough that the min()/max()
argument be iterable. For an extreme example, max(sys.stdin) (btw, this
stuff already works in current CVS Python).

[Douglas Alan, on filter()]


>> Yes, if its sequence argument is a tuple, then it returns a tuple

[back to /F]


> which probably is an unfortunate oversight, since it returns lists
> for all other sequences:

Not quite. It also special-cases the snot out of strings:

>>> filter(lambda ch: ch in 'aeiou', "It also special-cases the "
... "snot out of strings.")
'aoeiaaeeoouoi'
>>>

Oops: make that 8-bit strings. Pass filter() a Unicode string instead, and
then it returns a list -- oh ya, *that's* Pythonic <wink>.

The tuple and 8-bit string special-casing in filter() aren't oversights,
they're deliberate warts in the code. Although I'd agree to call the
existence of these warts an oversight in the patch review process ...

stuck-with-it-now-ly y'rs - tim


Tim Peters

unread,
May 11, 2001, 3:46:09 PM5/11/01
to pytho...@python.org
[Andrew Maizels]

> OK, next question: why does Python start indexes at zero?

Like C (and many other languages), Python views indices as *offsets* from the
start of the sequence being indexed. The element at the start of a sequence
is clearly at offset 0, etc. Note that since Python is very keen to make
writing extension modules in C pleasant, it's quite a practical benefit that
they have the same view of this.

> Your example would work perfectly well if the range returned
> [1, 2, 3, 4] and the list was indexed starting with 1. Basically,
> range(4) has to produce a list of four items, we just differ on
> what those items should be.

But sequences *are* indexed starting at 0 in Python, so having range(4)
produce [1, 2, 3, 4] *in Python* would be, well, stupid. The decisions
aren't independent.

> I'm not just being difficult; I'm trying to design my own language,
> and this is one of the things I have different to Python. If I've
> missed something where the Python way is superior, then I might want
> to change my mind.

You can make it work either way, although (as above) there's reason to favor
0-based indexing if ease of talking between Pixy and C is interesting to you.
Icon is a good example of a language with the same basic "indices point
*between* elements" (== "indices are offsets") approach, but where indices
start at 1. In two respects this can be nicer:

1. The last element of a non-empty Icon list (or string) is (using
Python spelling) list[len(list)]. In Python, at the start, the
only way to spell it was list[len(list)-1]. That created its own
breed of "off by 1" errors. But Python later grew meaning for
negative list indices too, and since then list[-1] is the best
way to get at the last list element.

2. Spelling "the point just beyond the end of the sequence" is
easier in Icon: in Python that's index len(list), in Icon it's
index 0 (or *its* breed of off-by-1 temptation, len(list)+1).
That is, indices in the 0-based Python look like:

         x[0]  x[1]  x[2]
        0     1     2     3      positive flavor
       -3    -2    -1     3      negative flavor

but in the 1-based Icon they're:

         x[1]  x[2]  x[3]
        1     2     3     4      positive flavor
       -3    -2    -1     0      negative flavor

If only 1's-complement integer arithmetic had caught on, Python
could get rid of the "3 wart" in the lower-right corner by using
-0 instead <wink>.
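To make points 1 and 2 concrete at the prompt:

    >>> x = ['a', 'b', 'c']
    >>> x[len(x) - 1], x[-1]       # two spellings of "the last element"
    ('c', 'c')
    >>> x[0:len(x)] == x           # "just beyond the end" is index len(x)
    1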

> The way I have things at the moment, in Pixy (my language), array
> indexes default to start at 1, but can be declared to any range (like
> Pascal).

Or Perl or Fortran77 or any number of other languages. The flexibility
creates its own problems, though; for example, how can I write a general
routine in Pixy to iterate over the elements of a passed-in array? In Python
(or C, or any number of other languages), I can always start indexing at 0.
In Pascal you have to clutter the argument list by passing the array bounds
as well as the array. In Perl, the index base is a magical global vrbl and
applies to *all* arrays, and then routines written *assuming* a particular
base (not coincidentally, usually the author's favorite base <wink>) can work
or fail depending on whether somebody else fiddled the global's value. In
Ada there are inquiry functions to *ask* an array what its declared bounds
were; that allows writing general code without relying on globals or
cluttering argument lists, but general code is wordy due to all the
inquiries, and array objects have to allocate space to store the bounds info.

> Strings are indexed starting with 1 as well.

I should hope so.

> Is there a good reason not to do this?

If you can't think of at least three "good reasons" to do this *and* not to
do this, learn some more languages. There are almost no pure wins or pure
losses in language design.

see-abc-for-why-a-newbie-friendly-language-is-a-bad-idea-and-c++-
for-why-it's-a-good-one<wink>-ly y'rs - tim


Terry Reedy

unread,
May 11, 2001, 4:42:11 PM5/11/01
to
> I can't think of a single function in the standard library which takes an
> unbounded number of heterogenous arguments. I can think of several which
> take an unbounded number of homogeneous arguments. 'min' and 'max' for
> example.

Only if you define away heterogeneity by redefining 'homogeneous' as
meaning
'sensibly included together for processing by this particular function'.

>>> class num:
...     # simple minded example standing in for anything comparable to builtin types
...     def __init__(self, val):
...         self.value = val
...     def __cmp__(self, other):
...         return cmp(self.value, other)
...
>>> min(1, 2L, 3.3, num(4))
1

For many (most?) languages, the above sequence of arguments is considered
heterogenous and is illegal as a standalone list or array.

Douglas Alan

unread,
May 11, 2001, 5:07:43 PM5/11/01
to
"Fredrik Lundh" <fre...@pythonware.com> writes:

> Douglas Alan keeps on babbling:

Perhaps you should treat people who disagree with you with more
respect.

> if you're so experienced in Python as you say you are, maybe you can
> prove me wrong, by posting some code where you use lists and tuples
> the "wrong" way, and show that it won't work better if done the
> other way around.

I don't have to post any code -- I can just refer you to the way Python
itself works. Why do excess parameters get put into a tuple rather
than a list? Excess parameters are not of a fixed length, and they
are more typically homogeneous than heterogeneous. The tuple
containing the excess parameters is acting as a "container" and not a
"record".

|>oug

Douglas Alan

unread,
May 11, 2001, 5:16:13 PM5/11/01
to
"Tim Peters" <tim...@home.com> writes:

> >>> import sys
> >>> min(1, 1.0, [1], "one", {1 :1}, sys.stdin)
> 1
> >>>

> I suppose those are homogeneous arguments in the sense that they're
> all objects, but if that's the degenerate sense we're using then I
> don't know what heterogeneous could mean.

"Homogeneous" means that all the elements respond to a common
protocol that you will be using on them. In this case the protocol is
the "min" protocol. The function call

    sum(3, 4.5, 8L)

would be homogeneous as numbers that can be added together.

|>oug

Andrew Maizels

unread,
May 11, 2001, 5:47:30 PM5/11/01
to
Tim Peters wrote:
>
> [Andrew Maizels]
> > OK, next question: why does Python start indexes at zero?
>
> Like C (and many other languages), Python views indices as *offsets* from the
> start of the sequence being indexed. The element at the start of a sequence
> is clearly at offset 0, etc. Note that since Python is very keen to make
> writing extension modules in C pleasant, it's quite a practical benefit that
> they have the same view of this.

And C gets it from the hardware, which is perfectly understandable.
(Though C uses pointers for arrays and strings, which is hideous.)

Making it easy for implementors of C modules is not my primary goal in
Pixy, but it's worth considering. Of course, the run-time interpreter
(Pixy is compiled to a byte-code) is written in C, so I need to look
after myself as well.



> > Your example would work perfectly well if the range returned
> > [1, 2, 3, 4] and the list was indexed starting with 1. Basically,
> > range(4) has to produce a list of four items, we just differ on
> > what those items should be.
>
> But sequences *are* indexed starting at 0 in Python, so having range(4)
> produce [1, 2, 3, 4] *in Python* would be, well, stupid. The decisions
> aren't independent.

Sure, agreed.



> > I'm not just being difficult; I'm trying to design my own language,
> > and this is one of the things I have different to Python. If I've
> > missed something where the Python way is superior, then I might want
> > to change my mind.
>
> You can make it work either way, although (as above) there's reason to favor
> 0-based indexing if ease of talking between Pixy and C is interesting to you.
> Icon is a good example of a language with the same basic "indices point
> *between* elements" (== "indices are offsets") approach, but where indices
> start at 1. In two respects this can be nicer:
>
> 1. The last element of a non-empty Icon list (or string) is (using
> Python spelling) list[len(list)]. In Python, at the start, the
> only way to spell it was list[len(list)-1]. That created its own
> breed of "off by 1" errors. But Python later grew meaning for
> negative list indices too, and since then list[-1] is the best
> way to get at the last list element.

Right. However you do it, you must be consistent. I like the negative
indexes in Python too, but I'm not sure if (or how) I'll add them to
Pixy.

But list[len(list)] still works if you have the index point to the
element rather than the gap - if you count from 1.

> 2. Spelling "the point just beyond the end of the sequence" is
> easier in Icon: in Python that's index len(list), in Icon it's
> index 0 (or *its* breed of off-by-1 temptation, len(list)+1).
> That is, indices in the 0-based Python look like:
>
> x[0] x[1] x[3]
> 0 1 2 3 positive flavor
> -3 -2 -1 3 negative flavor
>
> but in the 1-based Icon they're:
>
> x[1] x[2] x[3]
> 1 2 3 4 positive flavor
> -3 -2 -1 0 negative flavor
>
> If only 1's-complement integer arithmetic had caught on, Python
> could get rid of the "3 wart" in the lower-right corner by using
> -0 instead <wink>.


Well, you could always use floating point; the IEEE standard supports
-0. (Ewww!)

This is a more telling example of the advantage of pointing between
elements: you have a consistent notation for "before the first element"
and "after the last element", which Pixy (as currently designed) doesn't
have. I'll have to take another look at Icon, it's been years since
I've played with it.



> > The way I have things at the moment, in Pixy (my language), array
> > indexes default to start at 1, but can be declared to any range (like
> > Pascal).
>
> Or Perl or Fortran77 or any number of other languages. The flexibility
> creates its own problems, though; for example, how can I write a general
> routine in Pixy to iterate over the elements of a passed-in array?

In Pixy you can easily determine the bounds of an array (and the number
of dimensions). Any sequence type, indeed any collection, has bounds
and size properties, which may be static or dynamic.

> In Python
> (or C, or any number of other languages), I can always start indexing at 0.
> In Pascal you have to clutter the argument list by passing the array bounds
> as well as the array. In Perl, the index base is a magical global vrbl and
> applies to *all* arrays, and then routines written *assuming* a particular
> base (not coincidentally, usually the author's favorite base <wink>) can work
> or fail depending on whether somebody else fiddled the global's value.

Hmm. I think I'll try to avoid that :) ("Magic globals considered
harmful".)

> In
> Ada there are inquiry functions to *ask* an array what its declared bounds
> were; that allows writing general code without relying on globals or
> cluttering argument lists, but general code is wordy due to all the
> inquiries, and array objects have to allocate space to store the bounds info.

Space isn't really an issue unless you have zillions of tiny arrays;
wordiness is (or can be) a problem. I'm trying not to create a new
COBOL (though Pixy's problem domain is similar). In Pixy you can
iterate with "for x in y" just like Python (well, you can't at the
moment, because the compiler can't cope with anything much more
complicated than a := (x + y) / z, but that's just an implementation
detail).

> > Strings are indexed starting with 1 as well.
>
> I should hope so.
>
> > Is there a good reason not to do this?
>
> If you can't think of at least three "good reasons" to do this *and* not to
> do this, learn some more languages.

Well, I can think of lots of reasons either way, but none of them are
obvious killers, so it boils down to the taste of the language designer
(me!) Since I've never designed a general-purpose language before (I've
done a couple of macro languages) I want to make sure that I don't miss
something and get bitten by it later. I've programmed in Basic, Pascal,
Logo, Fortran, Prolog, Perl, Python, Postscript, C, various assemblers,
various shells, the Progress 4GL (a proprietary language, but a good
one), and dabbled in others (Icon, Modula-2, Forth). And I've probably
forgotten something.

The point about indexing *between* elements is a good one, and something
I'll have to think about.

I like Python (a lot!) but it's not ideal for my particular problem
domain. The Pixy compiler is written in Python, by the way. The idea
is to rewrite it in Pixy once it becomes powerful enough. (It's a
pretty standard recursive-descent parser; I've never met a
compiler-building tool I liked.)

> There are almost no pure wins or pure
> losses in language design.

You're so right about no pure wins or pure losses; you set off with a
clean sheet of paper and almost immediately you're up to your neck in a
sea of tradeoffs. Which is why no perfect language exists, which is why
language designers are still around and as busy as ever.

For example, I like Python's use of new-line as a statement separator,
but in Pixy you can embed relational database queries directly into your
code, and given the target application domain (business applications
like accounting, customer service etc), I expect they'll be one of the
commonest constructs in Pixy programs. Unfortunately, queries have a
tendency to be quite lengthy, and trying to squash them into one line or
having line-contination markers all over the place both seem like bad
ideas. Which means that having a statement separator (or terminator) is
the better option, so I have to choose a character for it, which means
that character is either not available for other uses or has to have
some disambiguation mechanism to allow the compiler to work out what you
mean. Oh, and I don't like semi-colons. Don't know why; they just
irritate me. (In code that is; I'm perfectly happy with them in
English).

And there's a constant fight between compact and expressive notation and
legibility: I don't expect the users of Pixy to be language experts, and
I've often seen that a mid-level programmer can work through a
three-hundred line implementation of an algorithm, but is completely
thrown by a thirty-line implementation - the ideas are just too densely
packed for them to take in.



> see-abc-for-why-a-newbie-friendly-language-is-a-bad-idea-and-c++-
> for-why-it's-a-good-one<wink>-ly y'rs - tim

To be successful, a language has to be accessible to newbies and useful
to experts. Python does pretty well; I've looked at C++ and shuddered,
though I'm comfortable enough in C. I looked at Java and laughed - a
language where printing "Hello world!" involves the sequence "public
static void" is some sort of joke, though I'm not sure what.

Ben Hutchings

unread,
May 11, 2001, 5:53:15 PM5/11/01
to
"Fredrik Lundh" <fre...@pythonware.com> writes:
<snip>
> maybe we should ask him: Ben, did you really mean that Python
> won't let you abuse lists and tuples? or were you just describing
> a well-known best practice?
<snip>

I certainly didn't mean that Python won't let you 'abuse' them (I
wouldn't go so far as to say 'abuse', anyway). I only meant that I
believed there were differences in intended usage that went beyond
mutability.

--
Any opinions expressed are my own and not necessarily those of Roundpoint.

Aahz Maruch

unread,
May 11, 2001, 6:18:27 PM5/11/01
to
In article <3AFC5DF2...@one.net.au>,

Andrew Maizels <and...@one.net.au> wrote:
>
>For example, I like Python's use of new-line as a statement separator,
>but in Pixy you can embed relational database queries directly into your
>code, and given the target application domain (business applications
>like accounting, customer service etc), I expect they'll be one of the
>commonest constructs in Pixy programs. Unfortunately, queries have a
>tendency to be quite lengthy, and trying to squash them into one line or
>having line-continuation markers all over the place both seem like bad
>ideas.

That's why Python has triple quotes.
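A minimal sketch (the query text is made up):

    query = """
        SELECT customer, SUM(amount)
          FROM invoices
         WHERE paid = 0
         GROUP BY customer
    """
    print query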
--
--- Aahz <*> (Copyright 2001 by aa...@pobox.com)

Androgynous poly kinky vanilla queer het Pythonista http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

"Everyone is entitled to an *informed* opinion." --Harlan Ellison

Fredrik Lundh

unread,
May 11, 2001, 6:45:49 PM5/11/01
to
Fredrik Lundh mistyped:

> a quick grep through the 2.0 standard library reveals about
> a hundred uses of the *args syntax. a quick look at those
> didn't bring up a single function that uses *args to read an
> unbounded number of heterogenous arguments.

character eating nanovirii? I'm pretty sure I wrote:

"unbounded number of homogenous arguments."

(most uses appear to be methods that pass most or all of
their arguments on to another method. no homogenous-
ness there...)

Over and out /F


Alex Martelli

unread,
May 11, 2001, 6:09:29 PM5/11/01
to
"Grant Edwards" <gra...@visi.com> wrote in message
news:3SVK6.301$Dd5.2...@ruti.visi.com...

> In article <3AFB0DB9...@one.net.au>, Andrew Maizels wrote:
>
> >I can see where consistency is important, but why does Python do the
> >inclusive-lower-bound, exclusive-upper-bound thing?
>
> one reason is so that for 0 <= n < len(a),
>
> a[:n]+a[n:] == a
>
> That property makes processing sections of lists much simpler.

Yes, but, is the constraint on n necessary? It seems to me both
by reasoning and by experiment that this nice and useful property
holds for ALL n, e.g.:

>>> a="hello world"
>>> L=len(a)
>>> for i in range(-L-1,2*L+1):
...     if a[:i]+a[i:] != a: print i
...
>>>

Many cases will be 'degenerate' by having one slice empty and
the other one covering all the sequence, but still the property
holds even then. Or am I missing something?


Alex

Alex Martelli

unread,
May 11, 2001, 6:17:48 PM5/11/01
to
"Tim Peters" <tim...@home.com> wrote in message
news:mailman.989607330...@python.org...

Isn't it (Python 2.1 at least) "homogeneous in the sense they're all
COMPARABLE"...? min/max USED TO work on all objects, at least
all built-in ones I think, but no more...:

D:\Python21>python
Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
>>> min(1,'a',2.0,3j)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: cannot compare complex numbers using <, <=, >, >=
>>>

vs the old behavior

D:\Python20>python
Python 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
Alternative ReadLine -- Copyright 2001, Chris Gonnerman
>>> min(1,'a',2.0,3j)
3j
>>>


Python doesn't formally define interfaces/protocols ("yet", he
adds hopefully:-), but "informally" it has them -- here, it seems
to me that the objects min/max accept need to "implement
OrderedComparable" (be adaptable to protocol "compare by
< &c") in the typical informal Python sense of interfaces and
protocols/'implements' and 'adaptable to'...


Alex

Douglas Alan

unread,
May 11, 2001, 7:03:55 PM5/11/01
to
"Alex Martelli" <ale...@yahoo.com> writes:

> "Tim Peters" <tim...@home.com> wrote in message

> > I suppose those are homogeneous arguments in the sense that they're all
> > objects, but if that's the degenerate sense we're using then I don't know
> > what heterogeneous could mean.

> Isn't it (Python 2.1 at least) "homogeneous in the sense they're all
> COMPARABLE"...? min/max USED TO work on all objects, at least all
> built-in ones I think, but no more...:

See, Alex and I can completely agree on some topics.

|>oug

Rainer Deyke

unread,
May 11, 2001, 7:47:49 PM5/11/01
to
"Tim Peters" <tim...@home.com> wrote in message
news:mailman.989607330...@python.org...

I thought we had already agreed that in this context "homogeneous" does not
refer to concrete type but to usage. These arguments to 'min' are all
homogeneous because they are treated identically; you can even change their
order and still get the same result.

Alex Martelli

unread,
May 11, 2001, 7:15:24 PM5/11/01
to
"Andrew Maizels" <and...@one.net.au> wrote in message
news:3AFBA6E0...@one.net.au...
...

> OK, next question: why does Python start indexes at zero? Your example
> would work perfectly well if the range returned [1, 2, 3, 4] and the
> list was indexed starting with 1. Basically, range(4) has to produce a
> list of four items, we just differ on what those items should be.

If indexes started at 1, then maybe so should ranges. However,
having read the followups to this message, I think there are still
advantages of simplicity and regularity in having arrays (even if
one calls them lists:-) indexed from 0. My master thesis, lo that
many years ago, included a large program in Fortran IV (1-based
index only) and had to do a lot of +1/-1 twiddling because of
that. I didn't understand why at the time (having not yet met
Koenig's book "C traps & pitfalls", which introduced me to the
many advantages of half-open ranges -- array indexing being a
case of that), but now I think I do.

Suppose my 1-dimensional array/list needs at some point to
be 'seen' as composed of several adjacent subarrays, each of
length N -- just for example. OK, what's M, the index in the
array of element I of subarray K?

If everything starts from 0:
    M = I + K*N
nice and simple.  If everything starts from 1:
    M = 1 + (I-1) + (K-1)*N
      = I + (K-1)*N
darn -- an unavoidable '-1'... 1-based indexing just isn't as
arithmetically nice as 0-based when you start having to
compute your indices.

And the reverse, too -- given M and N,
K, I = divmod(M,N)
with 0-based indexing throughout -- nice, isn't it...? What
about *one*-based indexing throughout...? Hmmm, looks
like we'll have to do the -1 dance on M first, then the +1
one on both of the subresults...:
K, I = divmod(M-1, N)
K += 1
I += 1
Doesn't it start to look as if indices WANT to be zero-based,
and coercing them to 1-based is simply a pretty artificial
choice requiring many +1's and -1's strewn around...?
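
A tiny sketch of that round trip, with arbitrary example values for N
and the indices:

>>> N = 4                  # length of each subarray (example value)
>>> K, I = 2, 3            # subarray 2, element 3, both 0-based
>>> M = I + K*N            # index into the flat array
>>> M
11
>>> divmod(M, N)           # and straight back to (K, I)
(2, 3)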

OK, what about the "main diagonal" of the intrinsic 2D
array embedded in my 1D one -- first element of first
subarray, second element of second subarray, etc. Can
I get the indices of that easily and naturally? What's the
index M in the big array of the I-th element of this main
diagonal? Well, when 0-based indexing is being used
throughout, M=I*(N+1) seems right. When 1-based...:
M = 1 + (I-1)*(N+1)
Again we have to do a -1 on I to move it from 1-based
to 0-based for the computation, and +1 on the result
to move the natural 0-based one to 1-based. Just take
care to NOT do more or fewer -1's and +1's than needed
or a bug may emerge...:-).


OK, forget subarrays. Say we just have two very long
arrays A and B. We need to consider them starting
from indices IA and IB, and obtain a result by summing
corresponding elements -- the first element in our
result array is A[IA]+B[IB], and so on. Simple, right?

OK, so, what's the I-th element of our result?
C[I] = A[IA+I] + B[IB+I]
when all indices are 0-based. If 1-based, though:
C[I] = A[IA+I-1] + B[IB+I-1]
darn -- once again the -1 emerges! Once again we
have to just about "translate" arithmetically-funny
1-based indices to the "natural" 0-based ones, and
so the -1 (or +1, depending).
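
Again as a quick sketch with made-up values, the all-0-based version
needs no translation at all:

>>> A = [10, 20, 30, 40, 50]
>>> B = [1, 2, 3, 4, 5]
>>> IA, IB = 2, 1          # arbitrary starting offsets
>>> [A[IA+i] + B[IB+i] for i in range(3)]
[32, 43, 54]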


You will no doubt find some counterexamples too, but
in general I think you'll notice that anytime two indices
need to be added, or other kinds of arithmetic on
indices are required, 0-based indices tend to behave
better, 1-based ones need some 'translation' (-1, +1)
far more often than the reverse case.


Alex

Fredrik Lundh

unread,
May 11, 2001, 8:47:36 PM5/11/01
to
In the dumbest subthread in quite a while, someone wrote:

> Perhaps you should treat people who disagree with you with more
> respect.

when you deserve it.

> > if you're so experienced in Python as you say you are, maybe you can
> > prove me wrong, by posting some code where you use lists and tuples
> > the "wrong" way, and show that it won't work better if done the
> > other way around.
>
> I don't have to post any code

didn't expect you to. putting some weight behind your
words isn't exactly your style.

> I can just refer you to way Python itself works. Why do excess
> parameters get put into a tuple rather than a list?

yeah, why?

is it because tuples are immutable (which was your main
argument until you decided to tilt at another windmill)? if
so, how come python's using *mutable* dictionaries for
keyword args?

or is it because excess arguments are more important than
ordinary argument lists, and thus have influenced the whole
design? do you really believe that?

or is it because tuples are more efficient when the number
of arguments is known at construction time (i.e when the
function is called)? if so, you just proved my point.

or maybe it's just an historical implementation artifact, and
has nothing to do with the intended use of tuples vs. lists.

> Excess parameters are not of a fixed length, and they are more
> typically homogeneous than heterogeneous

oh, you really do believe that.


Courageous

unread,
May 11, 2001, 9:10:28 PM5/11/01
to

>> Perhaps you should treat people who disagree with you with more respect.

>when you deserve it.

I've always thought that newsgroup comp.lang.python was very welcoming and
friendly. I left comp.lang.lisp because of just _one_ rather big asshole; the whole
experience was a real downer.

While I'm not accusing anyone of being an asshole here, could we maybe try
extra special hard to keep c.l.p a friendly place to be?

C//


Andrew Dalke

unread,
May 11, 2001, 9:17:05 PM5/11/01
to
Rainer Deyke wrote:
>These arguments to 'min' are all
>homogeneous because they are treated identically; you can even change their
>order and still get the same result.

That's not necessarily true, although I can't think of any
counterexamples which are also useful. Here's a made-up one.

>>> class BadCmp:
...     def __init__(self): self.count = 0
...     def __cmp__(self, other):
...         self.count = self.count + 1
...         return (self.count % 3) - 1
...
>>> min([BadCmp(), 0, 1])
1
>>> min([BadCmp(), 1, 0])
0
>>>

Andrew
da...@acm.org

Andrew Maizels

unread,
May 11, 2001, 10:40:13 PM5/11/01
to
Aahz Maruch wrote:
>
> In article <3AFC5DF2...@one.net.au>,
> Andrew Maizels <and...@one.net.au> wrote:
> >
> >For example, I like Python's use of new-line as a statement separator,
> >but in Pixy you can embed relational database queries directly into your
> >code, and given the target application domain (business applications
> >like accounting, customer service etc), I expect they'll be one of the
> >commonest constructs in Pixy programs. Unfortunately, queries have a
> >tendency to be quite lengthy, and trying to squash them into one line or
> >having line-contination markers all over the place both seem like bad
> >ideas.
>
> That's why Python has triple quotes.

Sorry, I wasn't quite clear: these queries are part of the language, not
embedded SQL.

So, if I have an accounts database with customer records containing
name, address, account balance etc, and item records containing date,
due date, amount, open amount, customer number etc, and I wanted to send
a letter to every customer with a balance of more than $50 over 60 days,
I could write something like:

open database accounts for read.
import send_polite_email, send_polite_letter from send_letter.

var c, i are buffer.
var over60 is decimal.

for c in customer
    where c.balance > 50:
    -- can safely ignore customers with balance <= $50

    over60 := 0.

    for i in item
        where i.custnum = c.custnum and
              i.duedate < today - 60 and
              i.open <> 0:
        over60 += i.open.
    end.

    if over60 > 50 then do:
        if c.email <> ""
        then send_polite_email(c.email,c.name,c.balance,over60).
        else send_polite_letter(c.address,c.name,c.balance,over60).
    end.
end.


Pixy's syntax is still in flux, but that will give you the idea.

Douglas Alan

unread,
May 12, 2001, 1:06:41 AM5/12/01
to
"Fredrik Lundh" <fre...@pythonware.com> writes:

> In the dumbest subthread in quite a while, someone wrote:

Indeed, but I'm not the one making it dumb.

I made a *very* simple statement: tuples *are* full-fledged immutable
sequences in Python, and therefore it is fair and often useful to
treat them as full-fledged immutable sequences.

You chose to attack and insult me for this very straight-forward and
informed opinion. That's pretty dumb, if you ask me.

> > Perhaps you should treat people who disagree with you with more
> > respect.

> when you deserve it.

And why don't I deserve it, Fredrik?

> > > if you're so experienced in Python as you say you are, maybe you can
> > > prove me wrong, by posting some code where you use lists and tuples
> > > the "wrong" way, and show that it won't work better if done the
> > > other way around.

> > I don't have to post any code

> didn't expect you to. putting some weight behind your words isn't
> exactly your style.

Why should I? No doubt you would just ridicule me and tell me how it
is "obvious" that your approach is better and what a dumbshit I must
be, because the superiority of your approach is a "fact". It's better
to let Python speak for itself, since it belies your claims.

Face it, a tuple *is* a full-fledged sequence. Therefore, it is just
not a "fact" that one should not use it as a full-fledged sequence and
only as a "record". Sometimes immutable sequences are useful, as the
Python implementation proves.

> > I can just refer you to way Python itself works. Why do excess
> > parameters get put into a tuple rather than a list?

> yeah, why?

> is it because tuples are immutable

In part.

> (which was your main argument until you decided to tilt at another
> windmill)?

What windmill is that?

> if so, how come python's using *mutable* dictionaries for keyword
> args?

Because, in part, there are no immutable dictionaries in Python.

> or is it because excess arguments are more important than ordinary
> argument lists, and thus have influenced the whole design?

I have no idea what you are talking about. Who ever said that they
are "more" important?

> do you really believe that?

I have no idea what you are talking about, so I can't say whether I
believe it or not.

> or is it because tuples are more efficient when the number of
> arguments is known at construction time (i.e when the function is
> called)? if so, you just proved my point.

The number of elements for an immutable object must *always* be known
at construction time. Otherwise it couldn't be immutable, now could
it? But the determination of the size occurs at run-time. A "record"
is something for which the length is typically known at compile time
(though in a dynamic language, such statements must always be taken
with a grain of salt).

Time efficiency of construction is indeed *one* of the benefits of
immutability. It's far from the only one. An immutable object will
also occupy less space. Its semantics are less complicated and more
predictable. Immutable objects are more amenable to certain
optimization techniques.

Another situation where you might want an immutable object is where
you want an object to reveal part of the internal state of its
representation for efficiency reasons, but it would be very bad if the
client modified it. Using immutable objects in this case can help
reduce bugs and increase efficiency. This is why dictionaries only
accept immutable objects as keys. The same thing can be true if it
would be very bad if a called procedure were to modify the object.
Both of these situations are good reasons that strings are immutable.
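
A quick sketch of the dictionary-key point (the exact wording of the
error message varies between versions):

>>> d = {}
>>> d[(1, 2, 3)] = 'fine'      # a tuple is immutable, so it can be a key
>>> d[[1, 2, 3]] = 'boom'      # a list is mutable, so it cannot
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: unhashable type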

Or consider this case: I want to use a sequence type to represent
polynomials of a single variable. Such polynomials can be represented
as a sequence of numbers. Consequently the sequence is homogeneous --
all numbers. Each polynomial can be of a different length and they
are arbitrarily long. Once a polynomial is created, it is never
changed. Are you going to tell me, Fredrik, that such a polynomial is
better represented as a list than as a tuple? What if I need to use
polynomials as dictionary keys?
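
Something along these lines, say -- a made-up helper, with coefficients
stored lowest power first:

>>> def poly_eval(coeffs, x):
...     result = 0
...     for i in range(len(coeffs)-1, -1, -1):   # Horner's rule
...         result = result*x + coeffs[i]
...     return result
...
>>> p = (1, 0, 3)                  # the polynomial 1 + 3*x**2
>>> poly_eval(p, 2)
13
>>> cache = {p: poly_eval(p, 2)}   # and it works fine as a dictionary key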

> or maybe it's just an historical implementation artifact, and has
> nothing to do with the intended use of tuples vs. lists.

Are you saying that this use of tuples in Python is a "wart on the
language". Be careful, Fredrik, you might be accused of being a
"troll".

> > Excess parameters are not of a fixed length, and they are more
> > typically homogeneous than heterogeneous

> oh, you really do believe that.

Of course I do. Take this hypothetical function, for instance:

lock("file1", "file2", "file3")

This either gets a lock on all the files or none of them. This is
a pretty typical use of excess arguments. Or just look at os.execl(),
os.execlp(), etc.
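
A sketch of how such a hypothetical lock() sees its arguments -- the
excess parameters arrive as a tuple:

>>> def lock(*files):
...     print type(files), files
...
>>> lock("file1", "file2", "file3")
<type 'tuple'> ('file1', 'file2', 'file3')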

|>oug

Stephen Hansen

unread,
May 12, 2001, 1:10:28 AM5/12/01
to

"Andrew Maizels" <and...@one.net.au> wrote in message
news:3AFCA28D...@one.net.au...

> open database accounts for read.
> import send_polite_email, send_polite_letter from send_letter.
>
> var c, i are buffer.
> var over60 is decimal.
>
> for c in customer
> where c.balance > 50:
> -- can safely ignore customers with balance <= $50
>
> over60 := 0.
>
> for i in item
> where i.custnum = c.custnum and
> i.duedate < today - 60 and
> i.open <> 0:
> over60 += i.open.
> end.
>
> if over60 > 50 then do:
> if c.email <> ""
> then send_polite_email(c.email,c.name,c.balance,over60).
> else send_polite_letter(c.address,c.name,c.balance,over60).
> end.
> end.
>
>
> Pixy's syntax is still in flux, but that will give you the idea.

I realize you said that you do not like semi-colons, but it seems to me
that your use of a period as a statement separator has its own problems.
Consider the following:

print 1.

In Python, that is printing a floating point number. In Pixy, that is
printing an integer. Now, it'd be all fine and dandy to just make it a
language requirement that all floating points *have* a number after the
decimal point, but it eats at the clarity and starts looking uncool:

print 1.0.

Just an example. :)

--Stephen
(replace 'NOSPAM' with 'seraph' to respond in email)


Fredrik Lundh

unread,
May 12, 2001, 5:14:44 AM5/12/01
to
Courageous wrote:
> While I'm not accusing anyone of being an asshole here, could we maybe try
> extra special hard to keep c.l.p a friendly place to be?

c.l.py is also known for the high technical quality of the replies.

anyone should be able to trust what he reads on this newsgroup
(unless something is obviously a joke, of course), and also be able
to post correct advice without being misquoted and ridiculed.

if thinking that makes me an asshole, so be it.

Cheers /F


Douglas Alan

unread,
May 12, 2001, 6:39:44 AM5/12/01
to
"Fredrik Lundh" <fre...@pythonware.com> writes:

> Courageous wrote:

> > While I'm not accusing anyone of being an asshole here, could we
> > maybe try extra special hard to keep c.l.p a friendly place to be?

> c.l.py is also known for the high technical quality of the replies.

> anyone should be able to trust what he reads on this newsgroup
> (unless something is obviously a joke, of course), and also be able
> to post correct advice without being misquoted and ridiculed.

Then why do you misquote and ridicule me when I post correct advice?

> if thinking that makes me an asshole, so be it.

I think being hypocritical is what would do it.

|>oug

Courageous

unread,
May 12, 2001, 7:21:16 AM5/12/01
to

>> While I'm not accusing anyone of being an asshole here, could we maybe try
>> extra special hard to keep c.l.p a friendly place to be?
>
>c.l.py is also known for the high technical quality of the replies.
>anyone should be able to trust what he reads on this newsgroup
>(unless something is obviously a joke, of course), and also be able
>to post correct advice without being misquoted and ridiculed.

One should be able to post _incorrect_ advice without being ridiculed.

This isn't a comment about this particular thread; it's only a comment
which became necessary _because_ of the thread. I surely hope you
see the difference. I'm not pointing any fingers at any people; to the
contrary, the _both_ of you seem like intelligent, amiable folks.

What I _am_ saying is that this thread has degenerated into something
counter-productive, and perhaps between the two of you, you can
figure out how to keep things productive.

I _don't_ want this place to become c.l.l. That place is filled with jerks.


C//
