The idea of changing the precedence and meaning of ^ from
XOR to exponentiation is a bit provocative. I've been
programming so long that I often forget that ^ means
exponentiation to most people.
A quick search reveals that the current Python symbol for
exponentiation (**) comes from Fortran, which used that
symbol because the ^ character did not exist yet. The use of
^ for bitwise XOR was introduced by the C language. Even the
search results pointed out that this is confusing for
newcomers.
If I did what you propose, everyone who has programmed in C
would mix the two up from time to time. But those people are
better equipped to deal with the problem than people who
enter programming through symbolic algebra. Replacing the
XOR ^ with a POW ^ could make a lot of sense, even if I did
not really consider it at first.
Another, slightly wilder idea I've had: what if ordinary
numbers written by users were bignums, and numbers such as
1.2 were fractions by default? This could slow some programs
down considerably, but the way I see it, a dynamic language
is usually an excellent starting point, and from there it
might be better to have tools that optimize the dynamically
typed programs into something else.
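For comparison, Python's int is already a bignum, and the
standard fractions module shows roughly how exact 1.2-style
literals would behave:

```python
from fractions import Fraction

# Python integers are already arbitrary-precision bignums:
big = 2 ** 100              # no overflow, exact result

# 1.2-style literals are binary floats today, so they are inexact:
inexact = (0.1 + 0.2 == 0.3)     # False

# If such literals parsed as fractions, arithmetic would stay exact:
a = Fraction("1.2")              # stored as 6/5
exact = (Fraction("0.1") + Fraction("0.2") == Fraction("0.3"))  # True
```

This is only an illustration of the semantics; the
performance cost the paragraph mentions comes from every
intermediate result carrying a full numerator/denominator
pair.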
I did follow Richard's advice and tried to list and/or
identify the points of friction.
To get started, I made myself a model of what typical
operator overloading looks like in Python at the moment:
To resolve a + b, Python first calls a.__add__(b) for the
left-side object. If it returns NotImplemented, Python tries
b.__radd__(a). If that also returns NotImplemented, the
operation fails with a TypeError. (There is one extra rule:
if type(b) is a proper subclass of type(a) and overrides
__radd__, the reflected method is tried first.)
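The steps above can be sketched in plain Python. This is a
simplified model, ignoring CPython's C-level fast paths and
the fact that the real interpreter skips __radd__ when both
operands have the same type:

```python
def resolve_add(a, b):
    """Simplified model of how Python resolves a + b."""
    ta, tb = type(a), type(b)
    # Subclass rule: if tb is a proper subclass of ta and overrides
    # __radd__, the right operand gets the first shot.
    if (tb is not ta and issubclass(tb, ta)
            and getattr(tb, "__radd__", None)
                is not getattr(ta, "__radd__", None)):
        order = [(b, "__radd__", a), (a, "__add__", b)]
    else:
        order = [(a, "__add__", b), (b, "__radd__", a)]
    for obj, name, other in order:
        method = getattr(type(obj), name, None)
        if method is not None:
            result = method(obj, other)
            if result is not NotImplemented:
                return result
    raise TypeError("unsupported operand types for +")
```

For example, resolve_add(1, 2.0) first asks int.__add__,
which returns NotImplemented for a float argument, and then
falls through to float.__radd__.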
Operator overloading in Python looks better than I
remembered. That's probably because many guides and memos
fail to state the important part of how Python actually
resolves the call, and they do not even include the
isinstance check that identifies the other side of the
expression.
In my language I have implemented '+' as a multimethod, with
another multimethod for coercion. I was happy with that for
a while, but then I realised it may make some things worse:
the multimethod resolution ends up being more complex rather
than simpler, and not that many problems appear to come from
the dispatch mechanism after all.
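A minimal Python sketch of that idea, with illustrative
table names rather than my actual implementation:

```python
# '+' as a multimethod: dispatch on the pair of argument types,
# with a separate coercion table consulted when no direct method fits.
add_table = {}     # (type, type) -> implementation of +
coerce_table = {}  # (type, type) -> function lifting both args

def defadd(ta, tb, fn):
    add_table[(ta, tb)] = fn

def defcoerce(ta, tb, fn):
    coerce_table[(ta, tb)] = fn

def plus(a, b):
    key = (type(a), type(b))
    if key in add_table:
        return add_table[key](a, b)
    if key in coerce_table:
        a, b = coerce_table[key](a, b)
        return plus(a, b)
    raise TypeError("no method for +")

# Example definitions: ints and floats add directly,
# mixed int/float coerces both sides to float first.
defadd(int, int, lambda a, b: a + b)
defadd(float, float, lambda a, b: a + b)
defcoerce(int, float, lambda a, b: (float(a), b))
defcoerce(float, int, lambda a, b: (a, float(b)))
```

The extra complexity mentioned above shows up here as two
tables that both have to be kept consistent for every pair
of types.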
Another thing that seemed important, though maybe not such a
crucial flaw, is that Python's approach is dominated by the
left side's __add__. If the operation is handled by __add__,
then __radd__ is never consulted, and this can cause
conflicts between different libraries that extend the
arithmetic.
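A contrived example with hypothetical library classes shows
the left-side bias:

```python
class LibAQuantity:
    """From 'library A': a greedy __add__ that accepts anything
    instead of returning NotImplemented for unknown types."""
    def __init__(self, v):
        self.v = v
    def __add__(self, other):
        return LibAQuantity(self.v + getattr(other, "v", other))

class LibBQuantity:
    """From 'library B': would prefer to handle mixed additions
    itself via __radd__, but never gets the chance."""
    def __init__(self, v):
        self.v = v
    def __radd__(self, other):
        return LibBQuantity(getattr(other, "v", other) + self.v)

# LibAQuantity.__add__ succeeds, so LibBQuantity.__radd__ is
# never consulted, even if library B had a better answer:
result = LibAQuantity(1) + LibBQuantity(2)
print(type(result).__name__)  # LibAQuantity
```

The polite convention is for __add__ to return
NotImplemented for types it does not recognize, but nothing
forces a library to follow it.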
In my opinion the real problem is, and will likely always
be, getting multiple libraries to interoperate. I've noticed
that statically typed languages seem to handle this better
than dynamically typed ones, but in the end the ways you
successfully cope with the challenge turn out to be the
same.
If you have M different kinds of things, then in the worst
case you need M*M different implementations for arithmetic.
Within one project you are likely to cope well, but in the
presence of multiple systems it becomes harder.
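One classic way to tame the M*M blow-up is a linear coercion
tower: each operand is lifted to the higher of the two
levels, so you only need one addition per level plus the
lifts. Here is a sketch using Python's standard numeric
types as the tower (my choice for illustration):

```python
from fractions import Fraction

# Tower from "smallest" to "largest" numeric system.
TOWER = [int, Fraction, float, complex]

def add_via_tower(a, b):
    """Promote both operands to the higher tower level, add there."""
    level = max(TOWER.index(type(a)), TOWER.index(type(b)))
    target = TOWER[level]
    return target(a) + target(b)
```

With a tower, M types need only M - 1 lifting rules instead
of a full M*M table, at the cost of forcing every system
into one linear ordering, which is exactly what independent
libraries cannot agree on.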
I think these ideas boil down to this: when you create new
behavior for arithmetic, you are creating a new numerical
system that extends the base types you have. This new
"number system" describes how the new values behave with the
existing values.
Languages like Haskell seem to acknowledge that people
create new numerical systems when they overload operators.
Integer literals are implicitly wrapped with (fromInteger n),
and you are supposed to define that conversion function when
you define the + operation for your type. Different
numerical systems defined this way do not interact with each
other in Haskell.
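A rough Python analogue of the pattern (the names are mine,
not Haskell's):

```python
class Mod7:
    """Arithmetic modulo 7 as a tiny self-contained 'number system'."""
    def __init__(self, n):
        self.n = n % 7

    @classmethod
    def from_integer(cls, n):
        # Plays the role of Haskell's fromInteger: the system's
        # author decides how plain integer literals enter it.
        return cls(n)

    def __add__(self, other):
        if isinstance(other, Mod7):
            return Mod7(self.n + other.n)
        if isinstance(other, int):
            # An integer literal is lifted into the system first.
            return self + Mod7.from_integer(other)
        return NotImplemented

    __radd__ = __add__

    def __eq__(self, other):
        return isinstance(other, Mod7) and self.n == other.n
```

In Haskell the lifting happens automatically at every
literal, which is what keeps separately defined systems from
ever meeting each other.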
I've been studying subtyping in order to optimize
dynamically typed programs directly from their source code
and get them to translate and run in C- or GLSL-like
environments alongside the dynamic portions of the programs.
I think this interacts with the way you overload arithmetic.
Another potential starting point I've been thinking about
relates to how numerical libraries compute efficiently.
An obvious thing that a programming language implementation
could provide would be tight memory layouts, or even
in-memory relational tables that you can fill with numerical
data. The tight memory layouts would appear to matter the
most for efficient computing.
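Python's standard array module is a small existing example
of such a tight layout for flat numeric data:

```python
import array

# A Python list of floats stores pointers to individually boxed
# float objects; array.array stores raw 8-byte doubles contiguously,
# the way a C array would.
boxed = [float(i) for i in range(1000)]
tight = array.array("d", range(1000))

# Each element occupies exactly itemsize bytes in one buffer.
print(tight.itemsize)   # 8
print(tight[3])         # 3.0
```

This is the layout that makes bulk numeric code fast: one
contiguous buffer that the hardware can stream through,
instead of a pointer chase per element.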
Numerical data itself seems to always come in bundles of
numbers: you've either got quaternions or matrix data in
single- or double-precision floating point.
I've been thinking that when you have quaternions or
matrices, it looks like neither of them actually needs to
take precedence, and the number bundle ends up staying an
array. Perhaps those bundles could be treated as sums of
term*constant pairs, where the terms part is a parametric
type?
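A sketch of how I read that idea, with an illustrative
Bundle type where the term set is the parameter:

```python
class Bundle:
    """A value as a sum of term*constant pairs; the tuple of
    terms is the parametric part. A quaternion and a 2-vector
    are then the same structure over different term sets."""
    def __init__(self, terms, coeffs):
        assert len(terms) == len(coeffs)
        self.terms = terms            # e.g. ("1", "i", "j", "k")
        self.coeffs = list(coeffs)    # one constant per term

    def __add__(self, other):
        # Addition is termwise and only defined over the same terms.
        if isinstance(other, Bundle) and other.terms == self.terms:
            return Bundle(self.terms,
                          [a + b for a, b in zip(self.coeffs,
                                                 other.coeffs)])
        return NotImplemented

QUAT = ("1", "i", "j", "k")
q1 = Bundle(QUAT, [1.0, 0.0, 0.0, 0.0])
q2 = Bundle(QUAT, [0.0, 1.0, 0.0, 0.0])
```

Under this reading the storage stays a flat array of
constants, and only multiplication would need to know what
the terms mean.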
Lisp is often mentioned as being a family of various
languages such as Common Lisp, Scheme, and Racket. If you
could provide specific examples, it would help, but I will
try to find them out myself as well.