"These operators ≤ ≥ ≠ should be added to the language having the
following meaning:
<= >= !=
this should improve readability (and make the language more accessible to
beginners).
This should be an evolution similar to the digraphs and trigraphs
of the C and C++ languages."
How do people on this group feel about this suggestion?
The symbols above are not even in latin-1; you need UTF-8.
(There are not many useful symbols in latin-1. Maybe one could use ×
for cartesian products...)
And while they are more readable, they are not easier to type (at
least with most current editors).
Is this idea absurd or will one day our children think that restricting
to 7-bit ascii was absurd?
Are there similar attempts in other languages? I can only think of APL,
but that was a long time ago.
Once you open your mind to using non-ascii symbols, I'm sure one can
find a bunch of useful applications. Variable names could be allowed to
be non-ascii, as in XML. Think class names in Arabic... Or you could
use Greek letters if you run out of one-letter variable names, just as
mathematicians do. Would this be desirable or rather a horror scenario?
Opinions?
-- Christoph
I can't find "≤, ≥, or ≠" on my keyboard.
James
> I can't find "≤, ≥, or ≠" on my keyboard.
Get a better keyboard? or OS?
On OS X,
≤ is Alt-,
≥ is Alt-.
≠ is Alt-=
Fewer keystrokes than <= or >= or !=.
--
Robert Kern
rober...@gmail.com
"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
-- Richard Harter
>> I can't find "?, ?, or ?" on my keyboard.
>
> Get a better keyboard? or OS?
>
> On OS X,
>
> ? is Alt-,
> ? is Alt-.
> ? is Alt-=
>
> Fewer keystrokes than <= or >= or !=.
Sure, but I can't find OS X listed as a prerequisite for using Python. So,
while I don't give a damn if those symbols are going to be supported by Python,
I don't think the plain ASCII version should be deprecated. There are too many
situations where it's still useful (coding across old terminals and whatnot).
--
Giovanni Bajo
> On the page http://wiki.python.org/moin/Python3%2e0Suggestions
> I noticed an interesting suggestion:
>
> "These operators ≤ ≥ ≠ should be added to the language having the
> following meaning:
>
> <= >= !=
>
> this should improve readability (and make the language more accessible to
> beginners).
>
> This should be an evolution similar to the digraphs and trigraphs
> of the C and C++ languages."
>
> How do people on this group feel about this suggestion?
>
> The symbols above are not even latin-1, you need utf-8.
>
> (There are not many useful symbols in latin-1. Maybe one could use ×
> for cartesian products...)
Or for multiplication :-)
> And while they are more readable, they are not easier to type (at
> least with most current editors).
>
> Is this idea absurd or will one day our children think that restricting
> to 7-bit ascii was absurd?
>
> Are there similar attempts in other languages? I can only think of APL,
> but that was a long time ago.
My earliest programming was on (classic) Macintosh, which supported a
number of special characters including ≤ ≥ ≠ with the obvious
meanings. They were easy to enter too: the Mac keyboard had (has?) an
option key, and holding the option key down while typing a character would
enter a special character. E.g. option-s gave Greek sigma, option-p gave
pi, option-less-than gave ≤, and so forth. Much easier than trying to
memorize character codes.
I greatly miss the Mac's ease of entering special characters, and I miss
the ability to use proper mathematical symbols for (e.g.) pi, not equal,
and so forth.
> Once you open your mind to using non-ascii symbols, I'm sure one can
> find a bunch of useful applications. Variable names could be allowed to
> be non-ascii, as in XML. Think class names in Arabic... Or you could
> use Greek letters if you run out of one-letter variable names, just as
> mathematicians do. Would this be desirable or rather a horror scenario?
> Opinions?
I think the use of digraphs like != for not equal is a poor substitute for
a real not-equal symbol. I think the reliance on 7-bit ASCII is horrible
and primitive, but without easier, more intuitive ways of entering
non-ASCII characters, and better support for displaying non-ASCII
characters in the console, I can't see this suggestion going anywhere.
--
Steven.
One of the issues in Python is cross-platform portability. Limiting the
range of symbols to lower ASCII, with ASCII as the specified code table,
is a good deal here. I think that Unicode is not yet available
everywhere, and as long as that is the case it makes little sense to go
for it in Python.
Claudio
Both... this idea will only become non-absurd when unicode becomes
as prevalent as ascii, i.e. unicode keyboards, universal support under
almost every application, and so on. Even if you can easily type it on
your macintosh, good luck using it while using said macintosh to ssh or
telnet to a remote server and trying to type unicode...
> "These operators ≤ ≥ ≠ should be added to the language having the
> following meaning:
>
> <= >= !=
>
> this should improve readability (and make the language more accessible to
> beginners).
>
I assume most Python beginners know some other programming language and
are familiar with >= and friends. Those learning Python as their
first programming language will benefit from knowing >= when they
learn a new language.
Unicode is not yet supported everywhere, so some editors/terminals might
display the suggested one-char operators as something else, effectively
"guess what operator I was thinking".
Fortran 90 allowed >, >= instead of .GT., .GE. of Fortran 77. But F90
uses ! as the comment symbol and therefore needs /= instead of != for
inequality. I guess just because they wanted to. However, it is one more
needless detail to remember. Same with the suggested operators.
Posting code to newsgroups might get harder too. :-)
[James Stroud wrote:]
>>>>I can't find "?, ?, or ?" on my keyboard.
>
> Posting code to newsgroups might get harder too. :-)
His post made it through fine. Your newsreader messed it up.
I think we should limit the discussion to allowing non-ascii symbols
as *alternatives* to (combinations of) ascii chars. Nobody should be
forced to use them, since not all editors, OSs and keyboards support them.
Think about moving from ASCII to LATIN-1 or UTF-8 as similar to moving
from ISO 646 to ASCII (http://en.wikipedia.org/wiki/C_trigraph).
I think it is a legitimate question as UTF-8 becomes more and more
widely supported.
Editors could provide means to easily enter these symbols once
programming languages start supporting them: Automatic expansion of
ascii combinations, Alt-Combinations (like in OS-X) or popup menus with
all supported symbols.
-- Christoph
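[Editor's note: the editor-side expansion described above could equally be done as a
source filter. Below is a minimal, hypothetical sketch (no such feature exists in
Python) that rewrites the three proposed symbols into the ASCII digraphs Python
already understands. A real tool would have to be syntax-aware so it doesn't rewrite
string literals or comments; this naive version rewrites them too.]

```python
# Hypothetical sketch: map the proposed Unicode operators onto the
# ASCII digraphs Python already understands.  A real implementation
# would parse the source so string literals and comments are left
# untouched; this naive version rewrites them as well.
DIGRAPHS = {
    "\u2264": "<=",   # ≤
    "\u2265": ">=",   # ≥
    "\u2260": "!=",   # ≠
}

def to_ascii(source):
    """Rewrite Unicode comparison operators as ASCII digraphs."""
    for symbol, digraph in DIGRAPHS.items():
        source = source.replace(symbol, digraph)
    return source

print(to_ascii("if a \u2264 b and b \u2260 c: pass"))
# if a <= b and b != c: pass
```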
This will eventually happen in some form. The problem is that we are
still in the infancy of computing. We are using stones and chisels to
express logic. We are currently faced with text characters with which
to express intent. There will come a time when we are able to represent
a program in another form that is readily portable to many platforms.
In the meantime (probably 50 years or so), it would be advantageous to
use a universal character set for coding programs. To that end, the
input to the Python interpreter should be ISO-10646 or a subset such as
Unicode. If the # -*- coding: ? -*- line specifies something other than
ucs-4, then a preprocessor should convert it to ucs-4. When it is
desirable to avoid the overhead of the preprocessor, developers will
find a way to save source code in ucs-4 encoding.
The problem with using Unicode in utf-8 and utf-16 forms is that the
code will forever need to be written and forever execute additional
processing to handle the MBCS and MSCS (Multiple-Short Character Set)
situation.
Ok. Maybe computing is past infancy. But most development environments
are not much past toddler stage.
[~]$ ssh rk...@192.168.1.66
rk...@192.168.1.66's password:
Linux rkernx2 2.6.12-9-amd64-generic #1 Mon Oct 10 13:27:39 BST 2005 x86_64
GNU/Linux
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
Last login: Mon Jan 9 12:40:28 2006 from 192.168.1.141
[~]$ cat > utf-8.txt
x + y ≥ z
[~]$ cat utf-8.txt
x + y ≥ z
Luck isn't involved.
The point is that it is just *not* the same. The suggested operators are
universal symbols (unicode). Nobody would use ≠ as a comment sign. No
need to remember was it .NE. or -ne or <> or != or /= ...
There is also this old dispute of using "=" for both the assignment
operator and equality and how it can confuse newcomers and cause errors.
A consequent use of unicode could solve this problem:
a ← b # Assignment (now "a = b" in Python, a := b in Pascal)
a = b # Equality (now "a == b" in Python, a = b in Pascal)
a ≡ b # Identity (now "a is b" in Python, @a = @b in Pascal)
a ≈ b # Approximately equal (may be interesting for floats)
(I know this goes one step further as it is incompatible to the existing
use of the = sign in Python).
Another aspect: Supporting such symbols would also be in accord with
Python's trait of being "executable pseudo code."
-- Christoph
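[Editor's note: of the comparisons proposed above, the last two have close analogues
in Python as it stands. Identity is already spelled "a is b", and the ≈ test for
floats is exactly what later Python versions (3.5+) provide as math.isclose. A small
sketch:]

```python
import math

a, b = 0.1 + 0.2, 0.3

# a ≡ b (identity) is already spelled "a is b" in Python.
# a ≈ b (approximate equality) has no operator, but math.isclose
# (Python 3.5+) performs exactly this tolerance-based comparison.
print(a == b)               # False: exact float equality fails here
print(math.isclose(a, b))   # True: approximately equal
```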
[...]
>
>Fortran 90 allowed >, >= instead of .GT., .GE. of Fortran 77. But F90
>uses ! as comment symbol and therefore need /= instead of != for
>inequality. I guess just because they wanted. However, it is one more
>needless detail to remember. Same with the suggested operators.
C uses ! as a unary logical "not" operator, so != for "not equal" just
seems to follow, um, logically.
Pascal used <>, which intuitively (to me, anyway ;-) read "less than
or greater than," i.e., "not equal." Perl programmers might see a
spaceship.
Modula-2 used # for "not equal." I guess that wouldn't work well in
Python...
Regards,
-=Dave
--
Change is inevitable, progress is not.
[...]
>Once you open your mind for using non-ascii symbols, I'm sure one can
>find a bunch of useful applications. Variable names could be allowed to
>be non-ascii, as in XML. Think class names in Arabian... Or you could
>use Greek letters if you run out of one-letter variable names, just as
>Mathematicians do. Would this be desirable or rather a horror scenario?
The latter, IMHO. Especially variable names. Consider i vs. ì vs. í
vs. î vs. ï vs. ...
I'm not exactly sure what happened - I can see the three characters
just fine in your (Robert's) and the original (Christoph's) post. In
Giovanni's post, they're rendered as question marks.
My point still stands: _somewhere_ along the way the rendering got messed
up for _some_ people - something that wouldn't have happened with the
<=, >= and != digraphs.
(FWIW, my newsreader is Thunderbird 1.0.6.)
> a ← b # Assignment (now "a = b" in Python, a := b in Pascal)
^-- this one seems to me to be still open for further proposals and
discussion. There is no symbol coming to my mind, but I would be glad if
it expressed that 'a' becomes a reference to the Python object
currently referred to by the identifier 'b' (maybe some kind of <-> ?).
> a = b # Equality (now "a == b" in Python, a = b in Pascal)
> a ≡ b # Identity (now "a is b" in Python, @a = @b in Pascal)
> a ≈ b # Approximately equal (may be interesting for floats)
^-- these three seem to me to be obvious and don't need to be
discussed further (only implemented when the time for such things comes).
Claudio
Yes, but Python is already a bit handicapped concerning posting code
anyway because of its significant whitespace. Also, I believe once
Python will support this, the editors will allow converting "digraphs"
<=, >= and != to symbols back and forth, just as all editors learned to
convert tabs to spaces back and forth... And newsreaders and mailers are
also improving. Some years ago, I used to write all German Umlauts as
digraphs because you could never be sure how they arrived. Nowadays, I'm
using Umlauts as something very normal.
-- Christoph
> > My point still stands: _somewhere_ along the way the rendering got messed
> > up for _some_ people - something that wouldn't have happened with the
> > <=, >= and != digraphs.
>
> Yes, but Python is already a bit handicapped concerning posting code
> anyway because of its significant whitespace. Also, I believe once
> Python will support this, the editors will allow converting "digraphs"
> <=, >= and != to symbols back and forth
umm. if you have an editor that can convert things back and forth, you
don't really need language support for "digraphs"...
</F>
-- Christoph
With unicode, you have a lot of possibilities to express this:
a ← b # a = b
a ⇐ b # a = copy(b)
a ⇚ b # a = deepcopy(b)
-- Christoph
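[Editor's note: the three arrow strengths proposed above map directly onto
operations Python already distinguishes via the copy module; a sketch:]

```python
import copy

b = [[1, 2], [3, 4]]

a1 = b                  # a ← b : plain rebinding, same object
a2 = copy.copy(b)       # a ⇐ b : shallow copy, inner lists still shared
a3 = copy.deepcopy(b)   # a ⇚ b : deep copy, nothing shared

print(a1 is b)                    # True
print(a2 is b, a2[0] is b[0])     # False True
print(a3 is b, a3[0] is b[0])     # False False
```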
Consequently, C should have used !> for <= and !< for >= ...
-- Christoph
There could be conventions discouraging you from using ambiguous symbols.
Even today, you wouldn't use a lowercase "l" or an "O" because they can be
confused with the digits 1 or 0. But you're right that this problem would
become much greater with unicode chars. This kind of pitfall has already
been overlooked with the introduction of international domain names, which
are exploitable for phishing attacks...
-- Christoph
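[Editor's note: the i-vs-ì pitfall discussed here can at least be detected
mechanically. A sketch using the standard-library unicodedata module to flag
non-ASCII characters hiding in an identifier; the function name is made up for
illustration:]

```python
import unicodedata

def lookalikes(name):
    """Return (char, Unicode name) for every non-ASCII char in an identifier."""
    return [(ch, unicodedata.name(ch)) for ch in name if ord(ch) > 127]

# The accented i from Dave's example stands out immediately:
print(lookalikes("long_var\u00ecable_name"))
# [('ì', 'LATIN SMALL LETTER I WITH GRAVE')]
print(lookalikes("plain_name"))
# []
```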
a ← b # a = b
starts to be obvious to me, as it covers also some of the specifics of
Python.
Nice idea.
Claudio
It would just be very impractical to convert back and forth every time
you want to run a program. Python also supports tabs AND spaces though
you can easily convert things.
But indeed, in 100 years or so ;-) if people get accustomed to using
these symbols and input will be easy, digraph support could become
optional and then phase out... Just as now happens with C trigraphs.
-- Christoph
Well, actually, no.
"Less (than) or equal" is <=. "Greater (than) or equal" is >=. "Not
equal" is !=.
If you want to write code for the IOCCC, you could use !(a>b) instead
of a<=b...
> The latter, IMHO. Especially variable names. Consider i vs. ì vs. í
> vs. î vs. ï vs. ...
Agreed, but that's the programmer's fault for choosing stupid variable
names. (One character names are almost always a bad idea. Names which can
be easily misread are always a bad idea.) Consider how easy it is to
shoot yourself in the foot with plain ASCII:
l1 = 0
l2 = 4
...
pages of code
...
assert 11 + l2 = 4
--
Steven.
>On Tue, 24 Jan 2006 10:38:56 -0600, Dave Hansen wrote:
>
>> The latter, IMHO. Especially variable names. Consider i vs. ì vs. í
>> vs. î vs. ï vs. ...
>
>Agreed, but that's the programmer's fault for choosing stupid variable
>names. (One character names are almost always a bad idea. Names which can
>be easily misread are always a bad idea.) Consider how easy it is to
I wasn't necessarily expecting single-character names. Indeed, the
difference between i and ì is easier to see than the difference
between, say, long_variable_name and long_varìable_name. For me,
anyway.
>shoot yourself in the foot with plain ASCII:
>
>
>l1 = 0
>l2 = 4
>...
>pages of code
>...
>assert 11 + l2 = 4
You've shot yourself twice, there. Python would tell you about the
second error, though.
> On Wed, 25 Jan 2006 08:26:16 +1100 in comp.lang.python, Steven
> D'Aprano <st...@REMOVETHIScyber.com.au> wrote:
>
>>On Tue, 24 Jan 2006 10:38:56 -0600, Dave Hansen wrote:
>>
>>> The latter, IMHO. Especially variable names. Consider i vs. ì vs. í
>>> vs. î vs. ï vs. ...
>>
>>Agreed, but that's the programmer's fault for choosing stupid variable
>>names. (One character names are almost always a bad idea. Names which can
>>be easily misread are always a bad idea.) Consider how easy it is to
>
> I wasn't necessarily expecting single-character names. Indeed, the
> difference between i and ì is easier to see than the difference
> between, say, long_variable_name and long_varìable_name. For me,
> anyway.
Sure. But that's no worse than pxfoobrtnamer and pxfoobtrnamer.
I'm not saying that adding more characters to the mix won't increase the
opportunity to pick bad names. But this isn't a new problem, it is an old
problem.
>>shoot yourself in the foot with plain ASCII:
>>
>>
>>l1 = 0
>>l2 = 4
>>...
>>pages of code
>>...
>>assert 11 + l2 = 4
>
> You've shot yourself twice, there.
Deliberately so. The question is, in real code without the assert, should
the result of the addition be 4, 12, 15 or 23?
--
Steven.
Please talk to my boss. Tell him I want a Quad G5 with about 2 GB of RAM.
I'll buy the keyboard myself, no problemo.
> On OS X,
>
> ≤ is Alt-,
> ≥ is Alt-.
> ≠ is Alt-=
>
> Fewer keystrokes than <= or >= or !=.
>
James
Alternatively, you can simply learn to use the tools in front of you.
http://www.cl.cam.ac.uk/~mgk25/unicode.html#input
>On the page http://wiki.python.org/moin/Python3%2e0Suggestions
>I noticed an interesting suggestion:
>
>"These operators ≤ ≥ ≠ should be added to the language having the
>following meaning:
>
> <= >= !=
>
>this should improve readability (and make the language more accessible to
>beginners).
>
>This should be an evolution similar to the digraphs and trigraphs
>of the C and C++ languages."
>
>How do people on this group feel about this suggestion?
>
>The symbols above are not even latin-1, you need utf-8.
>
Maybe we need a Python unisource type which is abstract like unicode,
and through encoding can be rendered various ways. Of course it would have
internal representation in some encoding, probably utf-16le, but glyphs
for operators and such would be normalized, and then could be rendered
as multi-glyphs or special characters however desired. This means that
unisource would not just be an encoding resulting from decoding just
a character encoding like latin-1, but would be a result of decoding
source in a Python-syntax-sensitive way, differentiating between <=
as a relational operator vs '<=' in a string literal or comment etc.
>(There are not many useful symbols in latin-1. Maybe one could use ×
>for cartesian products...)
>
>And while they are more readable, they are not easier to type (at
>least with most current editors).
>
>Is this idea absurd or will one day our children think that restricting
>to 7-bit ascii was absurd?
I think it's important to have readable ascii representations available for
programming elements at least.
>
>Are there similar attempts in other languages? I can only think of APL,
>but that was a long time ago.
>
>Once you open your mind to using non-ascii symbols, I'm sure one can
>find a bunch of useful applications. Variable names could be allowed to
>be non-ascii, as in XML. Think class names in Arabic... Or you could
>use Greek letters if you run out of one-letter variable names, just as
>mathematicians do. Would this be desirable or rather a horror scenario?
>Opinions?
I think there are pros and cons. What if the "href" in HTML could be spelled in
any characters? I.e., some things are part of a standard encoding and representation
system. Some of python is like that. "True" should not be spelled "Vrai" or "Sant," except
in localized messages, IMO, unless perhaps there is a unisource type that normalizes
these things too, and can render in localized formats. ... I guess China is a
pretty big market, so I wonder what they will do.
Someone has to get really excited about it, and have the expertise or willingness
to slog their way to expertise, and the persistence to get something done. And all
that in the face of the fact that much of the problem will be engineering consensus,
not engineering technical solutions. So are you excited? Good luck ;-)
Probably the best anyone with any excitement to spare could do is ask Martin
what he could use help with, if anything. He'd probably not like muddying any
existing clear visions and plans with impractical ramblings though ;-)
Regards,
Bengt Richter
For quantitative data, anyway, or things which can be ordered consistently.
It's unclear to me how well this concept maps to other sorts of data.
Complex numbers, for example.
I think "not equal", at least the way our brains handle it in general,
is not equivalent to "less than or greater than".
That is, I think the concept "not equal" is less than or greater than
the concept "less than or greater than". <wink>
-Peter
> I think "not equal", at least the way our brains handle it in general,
> is not equivalent to "less than or greater than".
>
> That is, I think the concept "not equal" is less than or greater than
> the concept "less than or greater than". <wink>
For objects that don't have total ordering, "not equal" != is not the
same as "less than or greater than" <>.
The two obvious examples are complex numbers, where C1 != C2 can be
evaluated, but C1 <> C2 is not defined, and NaNs, where NaN != NaN is
always true but NaN <> NaN is undefined.
--
Steven.
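[Editor's note: Steven's distinction can be checked directly in modern Python,
where inequality is defined for complex numbers but ordering comparisons raise
TypeError, and NaN compares unequal even to itself:]

```python
c1, c2 = 1 + 2j, 3 + 4j
print(c1 != c2)          # True: "not equal" is defined for complex

try:
    c1 < c2              # "less than" is not
except TypeError:
    print("complex numbers have no ordering")

nan = float("nan")
print(nan != nan)                # True: NaN is unequal even to itself
print(nan < nan or nan > nan)    # False: NaN is unordered
```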
In principle, and in the long run, I am definitely for it.
Pragmatically, though, there are still a lot of places
where it would cause me pain. For example, it exposes
problems even in reading this thread in my mail client
(which is ironic, considering that it manages to correctly
render Russian and Japanese spam messages. Grrr.).
OTOH, there will *always* be backwards systems, so you
can't wait forever to move to using newer features.
> The symbols above are not even latin-1, you need utf-8.
> And while they are more readable, they are not easier to
> type (at least with most current editors).
They're not that bad. I manage to get kana and kanji working
correctly when I really need them.
> Are there similar attempts in other languages? I can only
> think of APL, but that was a long time ago.
I'm pretty sure that there are. The idea of adding UTF8 for
use in identifiers and stuff has been around for awhile for
Python. I'm pretty sure you can do this already in Java,
can't you? (I think I read this somewhere, but I don't
think it gets used much).
> Once you open your mind to using non-ascii symbols, I'm
> sure one can find a bunch of useful applications.
> Variable names could be allowed to be non-ascii, as in
> XML. Think class names in Arabic... Or you could use
> Greek letters if you run out of one-letter variable names,
> just as mathematicians do. Would this be desirable or
> rather a horror scenario? Opinions?
Greek letters would be a real relief in writing scientific
software. There's something deeply annoying about variables
named THETA, theta, and Theta. Or "w" meaning "omega".
People coming from other programming backgrounds may object
that these uses are less informative. But in the sciences,
some of these symbols have as much recognizability as "+" or
"$" do to other people. Reading math notation from a
scientists, I can be pretty darned certain that "c" is "the
speed of light" or that "epsilon" is a small, allowable
variation in a variable. And so on. It's true that there are
occasionable problems when problem domains merge, but that's
true of words, too.
It would also reduce the difficulty of going back and forth
between the paper describing the math, and the program
using it.
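[Editor's note: this wish was later partly granted. Python 3 adopted Unicode
identifiers (PEP 3131), so Greek-letter names work today, though the Unicode
operators discussed in this thread were never added. A sketch:]

```python
import math

# Legal in Python 3 thanks to PEP 3131 (non-ASCII identifiers):
θ = math.pi / 4          # an angle, in radians
ω = 2 * math.pi * 50     # angular frequency for 50 Hz, in rad/s
print(round(math.sin(θ), 6))   # 0.707107
```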
One thing that I also think would be good is to open up the
operator set for Python. Right now you can overload the
existing operators, but you can't easily define new ones.
And even if you do, you are very limited in what you can
use, and understandability suffers.
But unicode provides codeblocks for operators that
mathematicians use for special operators ("circle-times"
etc). That would both reduce confusion for people bothered
by weird choices of overloading "*" and "+" and allow people
who need these features the ability to use them.
It's also relevant that scientists in China and Saudi Arabia
probably use a roman "c" for the speed of light, or a "mu"
to represent a mass, so it's likely more understandable
internationally than using, say "lightspeed" and "mass".
OTOH, using identifiers in many different languages would
have the opposite effect. Right now, English is accepted as
a lingua franca for programming (and I admit that as a
native speaker of English, I benefit from that), but if it
became common practice to use lots of different languages,
cooperation might suffer.
But then, that's probably why English still dominates with
Java. I suspect that just means people wouldn't use it as
much. And I've certainly dealt with source code commented
in Spanish or German. It didn't kill me.
So, I'd say that in the long run:
1) Yes it will be adopted
2) The math and greek-letter type symbols will be the big
win
3) Localized variable names will be useful to some people,
but not widely popular, especially for cooperative free
software projects (of course, in the Far East, for example,
han character names might become very popular as they span
several languages). But I bet it will remain underused so
long as English remains the most popular international trade
language.
In the meantime, though, I predict many luddites will
scream "But it doesn't work on my vintage VT-220 terminal!"
(And I may even be one of them).
Cheers,
Terry
--
Terry Hancock (han...@AnansiSpaceworks.com)
Anansi Spaceworks http://www.AnansiSpaceworks.com
I just asked myself how Chinese programmers feel about this. I don't
know Chinese, but probably they could write a whole program using only
one-character names for variables, and it would still be readable (at
least to Chinese readers)... Would this be used, or would they rather
prefer to write in English on account of compatibility issues (technical
and human readability in international projects) or because typing these
chars is more cumbersome than ascii chars? Any Chinese here?
-- Christoph
Yeah, I'm pretty sure we're talking about the future here.
:-)
> That depends. Middle-aged and older people probably have
> very rare experience of typing han characters. But with
> the popularity of computers, the development of excellent
> input packages, and most importantly the online chats that
> many teenagers are hooked on, the next several generations
> can type han characters easily and comfortably.
That's interesting. I think many people in the West tend to
imagine han/kanji characters as archaisms that will
disappear (because to most Westerners they seem impossibly
complex to learn and use, "not suited for the modern
world"). I used to think this was likely, although I always
thought the characters were beautiful, so it would be a
shame.
After taking a couple of semesters of Japanese, though, I've
come to appreciate why they are preferred. Getting rid of
them would be like convincing English people to kunvurt to
pur fonetik spelin'.
Which isn't happening either, I can assure you. ;-)
> One thing that is lacking in other languages is the "phrase
> input" ---- almost every han input package provides this
> customizable feature. With all these combined, many
> youngsters can type as fast as they talk. I believe many of
> them input han characters much faster than they input
> English.
I guess this is like Canna/SKK server for typing Japanese.
I've never tried to localize my desktop to Japanese (and I
don't think I want to -- I can't read it all that well!),
but I've used kanji input in Yudit and a kanji-enabled
terminal.
I'm not sure I understand how this works, but surely if
Python can provide readline support in the interactive
shell, it ought to be able to handle "phrase input"/"kanji
input." Come to think of it, you probably can do this by
running the interpreter in a kanji terminal -- but Python
just doesn't know what to do with the characters yet.
> The "side effect" of this technology advance might be that
> in the future the
> simplified chinese characters might deprecate, 'cos
> there's no need to simplify
> any more.
Heh. I must say the traditional characters are easier for
*me* to read. But that's probably because the Japanese kanji
are based on them, and that's what I learned. I never could
get the hang of "grass hand" or the "cursive" Chinese han
character style.
I would like to point out also, that as long as Chinese
programmers don't go "hog wild" and use obscure characters,
I suspect that I would have much better luck reading their
programs with han characters, than with, say, the Chinese
phonetic names! Possibly even better than what they thought
were the correct English words, if their English isn't that
good.
> One thing that I also think would be good is to open up the
> operator set for Python. Right now you can overload the
> existing operators, but you can't easily define new ones.
> And even if you do, you are very limited in what you can
> use, and understandability suffers.
One of the issues that would need to be dealt with in allowing new
operators to be defined is how to work out precedence rules for the new
operators. Right now you can redefine the meaning of addition and
multiplication, but you can't change the order of operations. (Witness
%, and that it must have the same precedence in both multiplication and
string replacement.)
If you allow (semi)arbitrary characters to be used as operators, some
scheme must be chosen for assigning a place in the precedence hierarchy.
Speaking maybe only for myself:
I don't like implicit rules, so I also don't like any precedence
hierarchy being in effect; for safety reasons I always write even
8+6*2 (==20) as 8+(6*2) to be sure everything goes the way I expect.
Claudio
But for people who often use mathematical formulas this looks pretty
weird. If it weren't a programming language, you wouldn't even write an
asterisk, but either a mid dot or nothing. The latter is possible
because, contrary to programming languages, you usually use one-letter
names in formulas, so it is clear that ab means a*b and does not
designate a variable with the name "ab". x**2+y**2+(2*pi*r) looks way
uglier than x²+y²+2πr (another application for greek letters). Maybe
providing a "formula" or "math style" mode would sometimes be helpful.
Or maybe not, because other conventions of mathematical formulas (long
fraction strokes, using subscript indices and superscript exponents
etc.) couldn't be solved so easily anyway. You would need editors with
the ability to display and input "formula sections" in Python programs
differently. Python would become something like "executable TeX" rather
than "executable pseudo code"...
-- Christoph
Maybe you would like the unambiguousness of
(+ 8 (* 6 2))
or
6 2 * 8 +
?
Hm, ... ISTM you could have a concept of all objects as potential operator
objects as now, but instead of selecting methods of the objects according
to special symbols like + - * etc, allow method selection by rules applied
to a sequence of objects for selecting methods. E.g., say
a, X, b, Y, c
is a sequence of objects (happening to be contained in a tuple expression here).
Now let's define seqeval such that
seqeval((a, X, b, Y, c))
looks at the objects to see if they have certain methods, and then calls some of
those methods with some of the other objects as arguments, and applies rules of
precedence and association to do something useful, producing a final result.
I'm just thinking out loud here, but what I'm getting at is being able to write
8+6*2
as
seqeval((8, PLUS, 6, TIMES, 2))
with the appropriate definitions of seqeval and PLUS and TIMES. This is with a view
to having seqeval as a builtin that does standard processing, and then having
a language change to make white-space-separated expressions like
8 PLUS 6 TIMES 2
be syntactic sugar for an implicit
seqeval((8, PLUS, 6, TIMES, 2))
where PLUS and TIMES may be arbitrary user-defined objects suitable for seqeval.
I'm thinking out loud, so I anticipate syntactic ambiguities in expressions and the need to
use parens etc., but this would in effect let us define arbitrarily named operators.
Precedence might be established by looking for PLUS.__precedence__. But as usual,
parens would control precedence dominantly. E.g.,
(8 PLUS 6) TIMES 2
would be sugar for
seqeval((seqeval(8, PLUS, 6), TIMES, 2)
IOW, we have an object sequence expression analogous to a tuple expression without commas.
I guess generator expressions might sometimes be a problem to disambiguate;
we'll see how bad that gets ;-)
One way to detect operator objects would be to test callable(obj), which would allow
for functions and types and bound methods etc. Now there needs to be a way of
handling UNARY_PLUS vs PLUS functionality (obviously the name bindings are just mnemonic
and aren't seen by seqeval unless they're part of the operator object). ...
A sketch:
>>> def seqeval(objseq):
... """evaluate an object sequence. rules tbd."""
... args=[]
... ops=[]
... for obj in objseq:
... if callable(obj):
... if ops[-1:] and obj.__precedence__<= ops[-1].__precedence__:
... args[-2:] = [ops.pop()(*args[-2:])]
... ops.append(obj)
... continue
... elif isinstance(obj, tuple):
... obj = seqeval(obj)
... while len(args)==0 and ops: # unary
... obj = ops.pop()(obj)
... args.append(obj)
... while ops:
... args[-2:] = [ops.pop()(*args[-2:])]
... return args[-1]
...
>>> def PLUS(x, y=None):
... print 'PLUS(%s, %s)'%(x,y)
... if y is None: return x
... else: return x+y
...
>>> PLUS.__precedence__ = 1
>>>
>>> def MINUS(x, y=None):
... print 'MINUS(%s, %s)'%(x,y)
... if y is None: return -x
... else: return x-y
...
>>> MINUS.__precedence__ = 1
>>>
>>> def TIMES(x, y):
... print 'TIMES(%s, %s)'%(x,y)
... return x*y
...
>>> TIMES.__precedence__ = 2
>>>
>>> seqeval((8, PLUS, 6, TIMES, 2))
TIMES(6, 2)
PLUS(8, 12)
20
>>> seqeval(((8, PLUS, 6), TIMES, 2))
PLUS(8, 6)
TIMES(14, 2)
28
>>> seqeval(((8, PLUS, 6), TIMES, (MINUS, 2)))
PLUS(8, 6)
MINUS(2, None)
TIMES(14, -2)
-28
>>> seqeval((MINUS, (8, PLUS, 6), TIMES, (MINUS, 2)))
PLUS(8, 6)
MINUS(14, None)
MINUS(2, None)
TIMES(-14, -2)
28
>>> list(seqeval((i, TIMES, j, PLUS, k)) for i in (2,3) for j in (10,100) for k in (5,7))
TIMES(2, 10)
PLUS(20, 5)
TIMES(2, 10)
PLUS(20, 7)
TIMES(2, 100)
PLUS(200, 5)
TIMES(2, 100)
PLUS(200, 7)
TIMES(3, 10)
PLUS(30, 5)
TIMES(3, 10)
PLUS(30, 7)
TIMES(3, 100)
PLUS(300, 5)
TIMES(3, 100)
PLUS(300, 7)
[25, 27, 205, 207, 35, 37, 305, 307]
Regards,
Bengt Richter
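[Editor's note: for anyone trying Bengt's sketch on a current interpreter, here is a minimal Python 3 port. It assumes only the conventions from the session above (operators are callables carrying a `__precedence__` attribute, nested tuples act as parentheses); the trace prints inside the operators are dropped for brevity, but the result values match the session.]

```python
# Python 3 port of the seqeval sketch above. Same reduction rules:
# reduce when an incoming operator binds no tighter than the pending one,
# treat nested tuples as parenthesized subexpressions, and apply
# operators seen before any operand as unary prefixes.
def seqeval(objseq):
    """Evaluate an object sequence. Rules as in the sketch above."""
    args = []
    ops = []
    for obj in objseq:
        if callable(obj):
            # reduce while the incoming operator binds no tighter
            if ops and obj.__precedence__ <= ops[-1].__precedence__:
                args[-2:] = [ops.pop()(*args[-2:])]
            ops.append(obj)
            continue
        elif isinstance(obj, tuple):
            obj = seqeval(obj)  # nested tuple acts like parentheses
        while not args and ops:  # operators before any operand are unary
            obj = ops.pop()(obj)
        args.append(obj)
    while ops:  # reduce remaining operators, last pushed first
        args[-2:] = [ops.pop()(*args[-2:])]
    return args[-1]

def PLUS(x, y=None):
    return x if y is None else x + y
PLUS.__precedence__ = 1

def MINUS(x, y=None):
    return -x if y is None else x - y
MINUS.__precedence__ = 1

def TIMES(x, y):
    return x * y
TIMES.__precedence__ = 2

print(seqeval((8, PLUS, 6, TIMES, 2)))                    # 20
print(seqeval(((8, PLUS, 6), TIMES, 2)))                  # 28
print(seqeval((MINUS, (8, PLUS, 6), TIMES, (MINUS, 2))))  # 28
```

As in the original session, precedence comes purely from the `__precedence__` attribute, so arbitrarily named operator objects can be defined the same way.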
Claudio
> After taking a couple of semesters of Japanese, though, I've
> come to appreciate why they are preferred. Getting rid of
> them would be like convincing English people to kunvurt to
> pur fonetik spelin'.
>
> Which isn't happening either, I can assure you. ;-)
The Germans just had a spelling reform. Norway had a major
language reform in the mid 19th century to get rid of the old
Danish influences (and still have two completely different ways
of spelling everything). You never know what will happen. You
are also embracing the metric system, inch by inch... ;)
Actually, it seems that recent habit of sending text messages
via mobile phones is the prime driver for reformed spelling
these days.
> I'm not sure I understand how this works, but surely if
> Python can provide readline support in the interactive
> shell, it ought to be able to handle "phrase input"/"kanji
> input." Come to think of it, you probably can do this by
> running the interpreter in a kanji terminal -- but Python
> just doesn't know what to do with the characters yet.
I'm sure the same principles could be used to make a very fast and less
misspelling-prone editing environment, though. That could actually be a
reason to step away from vi or Emacs (but I assume it would soon work in
Emacs too...)
> I would like to point out also, that as long as Chinese
> programmers don't go "hog wild" and use obscure characters,
> I suspect that I would have much better luck reading their
> programs with han characters, than with, say, the Chinese
> phonetic names! Possibly even better than what they thought
> were the correct English words, if their English isn't that
> good.
You certainly have a point there. Even when I don't work in an
English speaking environment as I do now, I try to write all
comments and variable names etc in English. You never know when
you need to show a code snippet to people who don't read Swedish.
Also, ASCII lacks three of our letters, and a properly translated name
is often better than one written with the wrong letters.
On the other hand, if the target users describe their problem
domain with e.g. a Swedish terminology, translating all terms
will take time and increase confusion. Also, there are plenty
of programmers who don't write English so well...
On Fri, 27 Jan 2006 11:05:15 +0100 in comp.lang.python, Magnus Lycka
<ly...@carmen.se> wrote:
>Terry Hancock wrote:
>> That's interesting. I think many people in the West tend to
>> imagine han/kanji characters as archaisms that will
>> disappear (because to most Westerners they seem impossibly
>> complex to learn and use, "not suited for the modern
>> world").
>I don't know about "the West". Isn't it more typical for the
>US that people believe that "everybody really wants to be like
>us". Here in Sweden, *we* obviously want to be like you, even
>if we don't admit it openly, but we don't suffer from the
>misconception that this applies to all of the world. ;)
1) Actually, we don't think "everyone wants to be like us." More like
"anyone who doesn't want to be like us is weird."
2) This extends to our own fellow citizens.
[...]
>Maybe you would like the unambiguousness of
> (+ 8 (* 6 2))
>or
> 6 2 * 8 +
>?
Well, I do like Lisp and Forth, but would prefer Python to remain
Python.
Though it's hard to fit Python into 1k on an 8-bit microcontroller...
Simplified Chinese exists because of the call for language modernization
decades ago. That involved turning almost the entire culture upside
down --- nowadays people in China can't even read most documents
written just 70~80 years ago. Imagine the damage to the 'historical
sense' of modern Chinese! The "anti-simplification" force was thus
unimaginably strong. In fact, not only was the original simplification
plan never completed (it only proceeded to the 1st stage; the 2nd stage
was put off), there have lately been calls for a reversal -- back to the
traditional forms. Obviously, language reform is not trivial; for Asian
countries especially, it is probably not as easy as for Western countries.
China is still a centralized, authoritarian country, and even that
government was unable to push this through. If anyone even dreamed of
language reform in democratic Taiwan, I bet the proposal wouldn't pass
the first step in the congress.
> Actually, it seems that recent habit of sending text messages
> via mobile phones is the prime driver for reformed spelling
> these days.
Well, to solve the problem you can either (1) reform the spelling of a
language to meet the limitations of mobile phones, or (2) improve the
input devices on mobile phones so that they can handle the language of
your choice. For most Asian languages, (1) is certainly out of the
question.
> > I'm not sure I understand how this works, but surely if
> > Python can provide readline support in the interactive
> > shell, it ought to be able to handle "phrase input"/"kanji
> > input." Come to think of it, you probably can do this by
> > running the interpreter in a kanji terminal -- but Python
> > just doesn't know what to do with the characters yet.
> I'm sure the same principles could be used to make a very fast
> and less misspelling prone editing environment though. That
> could actually be a reason to step away from vi or Emacs (but
> I assume it would soon work in Emacs too...)
True. Actually, Google, Answers.com and some other desktop applications
already use an 'auto-complete' feature. It might seem impressive to most
Western users but, where I come from (Taiwan), this 'phrase input', along
with showing candidates in the order a specific user most frequently
uses them, has been around for about 20 years.
>> I would like to point out also, that as long as Chinese
>> programmers don't go "hog wild" and use obscure characters,
>> I suspect that I would have much better luck reading their
>> programs with han characters, than with, say, the Chinese
>> phonetic names! Possibly even better than what they thought
>> were the correct English words, if their English isn't that
>> good.
> You certainly have a point there. Even when I don't work in an
> English speaking environment as I do now, I try to write all
> comments and variable names etc in English. You never know when
> you need to show a code snippet to people who don't read Swedish.
> Also, ASCII lacks three of our letters and properly translated
> is often better than written with the wrong letters.
If someday a programming language can be input in some form like Big5,
I believe its intended audience will ONLY be people who use Big5. That
means the chance of showing such code to users of other languages is
practically nil. But think about this: there are still a great many
people who don't know English at all. Without such a 'Big5-specific'
programming tool, their chance of learning to program is cut off
completely.
--
~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~
Runsun Pan, PhD
pytho...@gmail.com
Nat'l Center for Macromolecular Imaging