+1
When you've been "spoiled" by Fortran and NumPy, it is hard to believe
one can write code without concise multidimensional array slicing.
cheers,
sebastien.
--
I'd also find slices that bit more friendly if Go supported negative indices for accessing elements relative to the end of the slice. This makes a lot of code in Ruby and Icon considerably more readable than the equivalent in C.
Also being able to specify that a for...range operation should operate in reverse would be nice as well although I appreciate there are difficulties in the semantics of that particular operation.
And +1 for the Array Operators - their presence in Go would make another chunk of logic in golightly redundant :)
Ellie
Eleanor McHugh
Games With Brains
http://feyeleanor.tel
----
raise ArgumentError unless @reality.responds_to? :reason
> Another powerful feature that I have become accustomed to is flexible
> array slicing. A 4-D array, X, could be sliced like this X[:,3,2:4,10]
> to create a 2-D array where the : indicates inclusion of all indices
> along that dimension.
Have you considered writing a method which takes a formatted string, such as:
X.Slice(":,3,2:4,10")
This can do exactly the same slicing as done in Python, and can be
made pretty efficient as well.
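A minimal sketch of the parsing half of such a method - the Range type,
package name, and spec grammar here are illustrative assumptions, not an
existing API:

    package slicespec

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // Range describes the selection along one dimension; All marks a
    // bare ":", i.e. every index along that dimension.
    type Range struct {
        Lo, Hi int
        All    bool
    }

    // ParseSpec turns a spec like ":,3,2:4,10" into one Range per
    // dimension; a bare index n is treated as the range n:n+1.
    // (Open-ended ranges like "2:" are left as an exercise.)
    func ParseSpec(spec string) ([]Range, error) {
        parts := strings.Split(spec, ",")
        ranges := make([]Range, len(parts))
        for i, p := range parts {
            switch {
            case p == ":":
                ranges[i] = Range{All: true}
            case strings.Contains(p, ":"):
                lohi := strings.SplitN(p, ":", 2)
                lo, err1 := strconv.Atoi(lohi[0])
                hi, err2 := strconv.Atoi(lohi[1])
                if err1 != nil || err2 != nil {
                    return nil, fmt.Errorf("bad range %q", p)
                }
                ranges[i] = Range{Lo: lo, Hi: hi}
            default:
                n, err := strconv.Atoi(p)
                if err != nil {
                    return nil, fmt.Errorf("bad index %q", p)
                }
                ranges[i] = Range{Lo: n, Hi: n + 1}
            }
        }
        return ranges, nil
    }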
You can probably do something interesting along these lines to solve
the first problem as well, but it would require a bit more thinking to
get the right interface.
> The last thing that it would be difficult to live without is a REPL. I
> see that an interpreter is in the works which is exciting!
There's a chance hsandbox could help you meanwhile. It was built
as a workaround for the lack of a good REPL for Go.
Interestingly, I ended up using it for Python as well. Twisted, for
instance, is nicer to hack with using hsandbox than the interpreter
due to the asynchronous/callback-based style.
--
Gustavo Niemeyer
http://niemeyer.net
http://niemeyer.net/blog
http://niemeyer.net/twitter
Being clean sometimes is a matter of perspective. I'm willing to bet
that the vast majority of Python developers have no idea about what
X[:,3,2:4,10] does, for instance.
> Thinking about this issue makes me think that operator overloading (or
> specialized built-in operators) is a small convenience for most
> programming but makes an enormous difference for writing code based on
> equations.
I can understand and agree with this. One thing to ponder while
debating this, though, is that it holds for pretty much any
domain-specific language built on top of another base language. Being
able to twist the syntax of a language to adapt to specific needs of a
problem domain will always help with readability for that specific
problem domain, but it doesn't necessarily help in other problem
domains, and makes the language more complex overall.
> Thanks for the pointer and all the feedback!
You're very welcome. I'm looking forward to seeing some of those
areas explored in Go. Richer math support would be fantastic.
>I'd also find slices that bit more friendly if Go supported negative
>indices for accessing elements relative to the end of the slice. This
>makes a lot of code in Ruby and Icon considerably more readable than the
>equivalent in C.
That's hard to do efficiently. It requires a run-time check for whether
the index is negative -- not a big deal in an interpreted language, which
has lots of interpreter overhead anyway; but in Go, the compiler would
have to insert a test and branch before each slice operation (except
those with constant indices), bloating the binaries and seriously slowing
down execution. Besides, whether
a[3:len(a)-2]
is less readable than
a[3:-2]
can be disputed; the second is certainly shorter, but is a bit on the
cryptic side.
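Concretely, for a[i:j] with non-constant indices the compiler would have
to generate something like the following (a sketch of the extra checks,
not actual compiler output; sliceNeg is just an illustrative name):

    // sliceNeg shows the checks the compiler would need to emit for
    // a[i:j] if negative indices were allowed.
    func sliceNeg(a []int, i, j int) []int {
        if i < 0 {
            i += len(a) // extra test and branch on every slice operation
        }
        if j < 0 {
            j += len(a)
        }
        if i < 0 || j < i || j > len(a) {
            panic("slice bounds out of range")
        }
        return a[i:j]
    }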
--
Norman Yarvin http://yarchive.net
Our problem (well, mine at least) with negative indices meaning
"backwards from the end" is safety, not efficiency. A lexically clear
x[-1] is one thing, but x[a] is another, and x[a+b+c] another still.
It's terrifying that expression-driven code like this can sometimes
change semantics automatically from "index from the beginning" to
"index from the end" because of the dynamic value of the expression.
Dynamic semantic changes like that are not what Go is about. I have
been bitten by precisely this bug too often in Python code to consider
enabling it in Go.
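A minimal sketch of that failure mode, imagining Python-style semantics
in Go:

    package main

    func main() {
        a := []int{10, 20, 30, 40}
        i := 1 - 2 // a buggy computation yields -1 instead of a valid index
        // With Python-style semantics a[i] would silently return 40, the
        // last element; in Go today it panics with "index out of range",
        // which is exactly the crash-over-corruption behavior favored here.
        _ = a[i]
    }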
I prefer my program to crash rather than continue incorrectly when it
develops a negative index. (A truly horrible case: a+b+c overflows;
see sort.go:66 to see a related defense.)
Convenience and safety often conflict; we favor safety.
-rob
We could always bound the problem by mandating that both slice indices have the same sign, which would allow overflows to be identified either way except in cases where the calculation of both slice bounds overflows and the results are still on the same side of zero - which arguably is evidence that the algorithm being used is incorrect.
However that's by the by. My main point is that there are legitimate cases where slice access should semantically be relative to the tail (just as there are cases where ranging over a slice should be performed in reverse), and that expressing it naturally in this form makes a whole class of errors go away; a language such as Go, which encourages safety, would therefore demonstrably be safer if it included first-class support for this. Negative indices are the method that exists in other languages, which is why I mentioned them, but I'm sure there are other ways of handling this - such as an 'anchor' or 'reverse' operator/modifier which could be combined with the usual index and range operators.
Indeed the latter would have the advantage that it's a property that could be checked at compile-time, although at the cost of introducing more complexity to the core language.
And a[len(a) - 3] is very clear.
--
André Moraes
http://andredevchannel.blogspot.com/
When a is an expression, repeating it may be inconvenient
or incorrect. Even a := declaration can inconveniently interrupt
the flow -- at the least, it makes the whole thing a non-expression.
Chris
--
Chris "allusive" Dollin
In short, the number of keystrokes saved by simpler indexing from the
end of the array isn't even proportional to the length of this
discussion.
-rob
I understand your worry about x[a] having undecidably different
semantics depending on whether a < 0 or a >= 0; I have similar doubts.
However, where the index is clearly positive or negative, I would say
that having a negative index count backwards, as in Python, is an
algorithmic issue, not a matter of saving keystrokes. Personally I use
this a lot in Python code.
--
Russel.
=============================================================================
Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel...@ekiga.net
41 Buckmaster Road m: +44 7770 465 077 xmpp: rus...@russel.org.uk
London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Algorithmic issue? It's a cheap trick. Someone noticed that negative
values are always erroneous as indices and invented a meaning for
them. People got used to the convenience (they always do), but Go is
not Python and is not likely to acquire overloaded semantics for a
basic type.
By the way, a "negative index count backwards" in Go too: it's just
that it counts backwards from zero, which is the traditional origin
for integers.
-rob
One final point. Although the Go language doesn't allow negative
array indices, unlike in most languages they do have a natural (if
unimplemented) meaning. It might be useful to allow a slice to be
resliced to earlier locations in the array, by analogy with it being
lengthened to later positions. That is, it might make sense to permit
a = a[-1:]
to add an element at the beginning of the array, just as
a = a[:len(a)+1]
now adds an element at the end (assuming space is available). This
idea is consistent with the existing ideas of reslicing and would add
functionality without syntax or major semantic change.
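For concreteness, the existing forward reslice next to the proposed
backward one (the latter shown as a comment, since it is not legal Go):

    buf := make([]int, 8)
    a := buf[2:5]    // len(a) == 3, cap(a) == 6
    a = a[:len(a)+1] // legal today: the slice now covers buf[2:6]
    // a = a[-1:]    // the proposal: would grow backward to cover buf[1:6]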
This possibility (it's only a possibility) doubly argues against -1
meaning "from the end": not only does it give a plausible meaning in
the current language for a negative index, but in the second case,
already common in Go code, +1 already means something "from the
end" - and it's positive.
I am not saying the language will change to permit earlier indexing
like this - the current implementations would require rethinking to
do so - but merely pointing out that a negative index has a useful
semantic interpretation in Go (as does a positive one off the end),
unlike in many languages, and that it is a mistake both to generalize
from those languages into Go's model and to preclude a clean, more
natural extension of the existing model.
-rob
I happily use negative indexing in Python, usually only with constants
though, but don't think it makes sense in Go where the code on the
page does what it says, no more, no less. The above proposal feels very
natural to Go and quite useful, however.
-rob
This is only true if array index operations are specified in terms of signed integers. With unsigned integers there are no negative indices and the only array bound that needs checking is the upper limit, making them the more "natural" way of representing an index.
Personally it bugs me a lot that Go uses signed integers, but I guess the typing saved by not having to cast between signed and unsigned integers in certain cases (which may or may not be the more common need) is an optimisation...
> People got used to the convenience (they always do), but Go is
> not Python and is not likely to acquire overloaded semantics for a
> basic type.
In which case how about introducing a second basic type which is the ReverseSlice - it acts just like a slice only its anchor point for indices is the end of the occupied slice (which may itself be an offset into the allocated array) and can be explicitly converted to a normal Slice which has the runtime cost of recalculating the bounding points.
This way there's no additional funky operators required, indices still remain positive allowing the current bounds-checking approach, and the type system can throw panics as and when required on bounds violations. As an added bonus one could always reverse iterate a Slice using range by doing an explicit cast to a ReverseSlice and having its semantics defined appropriately.
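A rough library-level sketch of the idea - the names are illustrative,
a real version would presumably need language support, and it is fixed
to int elements for brevity:

    // ReverseSlice anchors its indices at the tail: At(0) is the last
    // occupied element, At(1) the one before it, and so on.
    type ReverseSlice struct {
        s []int
    }

    // At indexes backwards from the end; the ordinary (positive) bounds
    // check still applies, so out-of-range access panics as usual.
    func (r ReverseSlice) At(i int) int {
        return r.s[len(r.s)-1-i]
    }

    // Slice converts back to an ordinary forward slice.
    func (r ReverseSlice) Slice() []int {
        return r.s
    }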
Being able to write:
a[] = b[] + c[]
where a, b and c are arrays (or slices), to tell the compiler (and the
human reading back that code) that one wants to apply the sum
element-wise would be really convenient (and safe too, I suppose, if
the compiler can insert the size checks).
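Until something like that exists, the sum has to be spelled out by hand;
a minimal version with the size check made explicit (AddSlices is a
hypothetical helper, not a standard function):

    // AddSlices is the spelled-out form of the hypothetical
    // a[] = b[] + c[]: an element-wise sum with the size check the
    // compiler would otherwise insert.
    func AddSlices(a, b, c []float64) {
        if len(a) != len(b) || len(a) != len(c) {
            panic("AddSlices: length mismatch")
        }
        for i := range a {
            a[i] = b[i] + c[i]
        }
    }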
cheers,
sebastien.
>Do you have any thoughts on primitive operations (and order of
>operations) on arrays? (+,-,*,/,==,&&,||, etc. with behavior based on
>the underlying type)
Operations on arrays are a bit of a tar baby: it's easy to start,
thinking they would be simple, then get sucked into more and more
complexity. Addition and subtraction are simple enough, both for the
case of array+array and array+scalar. When it comes to multiplication of
two arrays, there are a variety of operations meriting that name
(element-by-element multiplication of two arrays; dot products; matrix
multiplications...).
The complexity doesn't end once one has decided on some meaning; there is
also the matter of an efficient implementation. For one thing, with
scalars, temporary storage for evaluating an expression like
a=b+c*d
is just a matter of processor registers; with arrays, either memory has
to be allocated for intermediate results (here, c*d), or the compiler has
to figure out, as a matter of optimization, that it doesn't need to
allocate memory. Also, once one provides array operations, one gets into
the situation where people start using those operations because of their
brevity, even when they are a very inefficient way to solve a problem.
Then one gets into trying to make the compiler recognize what the
programmer is really trying to do and implement it efficiently, which
leads to situations where code runs fast on one compiler (which
implements such an optimization), but is horribly slow on another
compiler (which doesn't).
There are probably good solutions for all these issues, but finding them
is a major design effort, not something one can just quickly crank out.
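To make the temporary-storage point concrete (Mul and Add here are
hypothetical element-wise helpers that return freshly allocated slices):

    // Operator-style evaluation, as a compiler would naively translate it:
    //     tmp := Mul(c, d) // allocates an intermediate slice for c*d
    //     a := Add(b, tmp) // second allocation and a second pass
    //
    // The fused hand-written loop: one pass, no intermediate storage
    // (all four slices assumed to have equal length).
    func fused(a, b, c, d []float64) {
        for i := range a {
            a[i] = b[i] + c[i]*d[i]
        }
    }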
On Mon, 3 Jan 2011 17:20:45 -0500, Norman Yarvin <yar...@yarchive.net> wrote:
> is just a matter of processor registers; with arrays, either memory has
> to be allocated for intermediate results (here, c*d), or the compiler has
> to figure out, as a matter of optimization, that it doesn't need to
> allocate memory.
[...]
> Also, once one provides array operations, one gets into
> the situation where people start using those operations because of their
> brevity, even when they are a very inefficient way to solve a problem.
out of curiosity, do you have an example in mind for such a (counter)
use-case?
cheers,
sebastien.
--
#########################################
# Dr. Sebastien Binet
# Laboratoire de l'Accelerateur Lineaire
# Universite Paris-Sud XI
# Batiment 200
# 91898 Orsay
#########################################
>On Mon, 3 Jan 2011 17:20:45 -0500, Norman Yarvin <yar...@yarchive.net> wrote:
>
>> Also, once one provides array operations, one gets into
>> the situation where people start using those operations because of their
>> brevity, even when they are a very inefficient way to solve a problem.
>
>out of curiosity, do you have an example in mind for such a (counter)
>use-case?
In lots of math books, there are formulas involving adding an identity
matrix to another matrix. If writing such code in Go today, one writes a
loop that goes down the diagonal, adding one to each element. If one can
use an array operation, one is likely to write it the same way it's
written in the books; a straightforward implementation then involves
generating an identity matrix, and adding each element of it to the other
matrix. That's O(N^2) work rather than O(N) work.
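In Go today the O(N) version is the explicit diagonal loop (AddIdentity
is an illustrative name):

    // AddIdentity adds the identity matrix to m in place: O(N) work,
    // versus O(N^2) for materializing an identity matrix and adding it
    // element by element.
    func AddIdentity(m [][]float64) {
        for i := range m {
            m[i][i] += 1
        }
    }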
It's not too hard to detect and optimize this one particular case; it's
just that there are a lot of such cases. Fortran has gone down this
road, and reports are that:
"More than half of the code in the front-end portion of Sun's
Fortran 95 compiler is devoted to analysis and optimization of
array expressions."
"I am unaware of any effective algorithms for optimizing a
general array expression. A superoptimizer approach might work,
but I'm guessing that most people would like their optimized
compilations to finish before the heat death of the universe.
Therefore, optimizing array expressions is, like much of compiler
optimization technology, a matter of recognizing special cases
and generating good code for them. The problem is that the
number of interesting special cases is greater than any compiler
can handle."
(Both of those are quotes from Bob Corbett, a developer of Sun Fortran,
in articles posted to the Usenet group comp.lang.fortran.)
>On Jan 3, 5:20 pm, Norman Yarvin <yar...@yarchive.net> wrote:
>
>> Operations on arrays are a bit of a tar baby: it's easy to start,
>> thinking they would be simple, then get sucked into more and more
>> complexity. Addition and subtraction are simple enough, both for the
>> case of array+array and array+scalar. When it comes to multiplication of
>> two arrays, there are a variety of operations meriting that name
>> (element-by-element multiplication of two arrays; dot products; matrix
>> multiplications...).
>>
>
>I recognize that there are many types of multiplication (and division
>for that matter), but I think the simplest type (element-by-element
>and broadcasting) would go a very long way.
I disagree; I don't find a loop containing
a[i] = b[i] + c[i]*d[i]
to be much worse to write, or to read, than the single statement
a = b + c*d.
The real economy, and improvement in clarity, with array operations is
when you can write a complicated operation like matrix multiplication
with a single * sign. The above line (if it were to imply matrix
multiplication) is a lot easier to read than the code
... allocate the matrix TMP ...
matmul(c,d,tmp)
matadd(b,tmp,a)
which in turn is easier than writing out the three nested loops for
matrix multiplication. But making * mean matrix multiplication gets into
complicated territory again.
Right now, Go doesn't even have some of the basic element-by-element
array operations that one would want in order to write code such as a
general-purpose matrix multiplication routine. Slices, rather than
arrays, are generally used in Go for the arguments to general-purpose
routines; and while Go has multidimensional arrays, it has no
multidimensional slices. Better to get the language to walk before
trying to get it to fly.
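For illustration, roughly what such a routine looks like in Go today,
flattening each matrix into a row-major slice because the language lacks
multidimensional slices (the flat layout and the MatMul name are
assumptions of this sketch, not a standard):

    // MatMul computes and returns c = a*b, where a is n×m, b is m×p,
    // and all three matrices are stored row-major in flat slices.
    func MatMul(a, b []float64, n, m, p int) []float64 {
        c := make([]float64, n*p)
        for i := 0; i < n; i++ {
            for k := 0; k < m; k++ {
                aik := a[i*m+k]
                for j := 0; j < p; j++ {
                    c[i*p+j] += aik * b[k*p+j]
                }
            }
        }
        return c
    }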