How suitable will Go be for scientific computing?

Franz

Jan 14, 2010, 9:03:53 AM
to golang-nuts
While Fortran grows like a cancer (without dying, though), the
scientific community still lacks a language which is
- small and easy to learn (scientists are not all
_computer scientists_);
- portable;
- fast for number crunching and accurate (IEEE floating-point
standard);
- concurrent (no need for OpenMP/MPI/CUDA etc.).
ANSI C (plus libraries) is the most suitable of a host of unsuitable
languages. C++ is too big and it takes a lifetime to master. Java
lives in a virtual machine, therefore it is slow.
Some scripting languages (e.g. Python+NumPy) are very good, but only
when neither speed nor concurrency is crucial.

From a cursory look it seems to me that Go might evolve into the
perfect high performance computing language for scientific use (in
addition to other uses, of course).
Am I wrong?
Is there some feature (or lack of it) that discourages making a time
investment in learning and deploying Go for scientific computing,
especially for large projects?
Is Go's concurrency model going to be inherently limited to shared-
memory machines, or can it be expanded to other types of parallel
architectures (e.g. clusters)?

Pat Notz

Jan 14, 2010, 9:55:17 AM
to Franz, golang-nuts
We still need MPI. We're currently treating multiple cores as
separate CPUs when launching MPI jobs and MPI is actually very
efficient about that (it uses shared memory when possible). However,
that really increases pressure on CPU cache and as we get to manycore
machines we'll have to move to a hybrid model with MPI for distributed
machines and threaded concurrency on each socket. (FLOPs are free.)
Goroutines seem incompatible with a distributed model. A hybrid
Go+MPI would be really interesting.

We will need portable bindings for C and Fortran. There's too much
money tied up in all those well-tested and well-understood libraries.
Getting them rewritten will take time, and that will really only
happen if people first see Go as a viable language. Binding to
existing C/Fortran libs is the most expedient (only?) way to make that
happen. It doesn't have to be fancy -- most APIs are just passing
ints, floats and doubles (values, references and arrays).
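
As a concrete illustration of how thin such a binding can be, here is a
minimal sketch in today's cgo, wrapping hypot from libm; real BLAS/GSL
wrappers follow the same convert-at-the-boundary pattern (the package
name is invented for the example):

package cbind

// #cgo LDFLAGS: -lm
// #include <math.h>
import "C"

// Hypot wraps the C library's hypot(3): float64 values are converted
// to C.double on the way in and back to float64 on the way out.
func Hypot(x, y float64) float64 {
    return float64(C.hypot(C.double(x), C.double(y)))
}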

CUDA/OpenCL could eventually be a layer below Go -- where goroutines
are launched on GPUs. (I'm dreaming here.)

OpenMP was always very limiting. It's useful in deep compute kernels
but not as a general concurrency paradigm.

Your reference to Python+Numpy is interesting... I really feel like Go
has potential to compete there. Go is almost as easy to code in as
Python and as the standard library grows and tools are built I could
actually see Go becoming easier than Python. (Python's main advantage
is that you don't need a build tool.) Go's concurrency is much more
natural than Python's, and since Go is a statically typed language we
should eventually see powerful IDEs (like the CDT for Eclipse) and
static analysis tools.

~ Pat

Ian Lance Taylor

Jan 14, 2010, 12:00:55 PM
to Franz, golang-nuts
Franz <francesco...@unisalento.it> writes:

> Is Go's concurrency model going to be inherently limited to shared-
> memory machines, or can it be expanded to other types of parallel
> architectures (e.g. clusters)?

The goroutine model is limited to shared-memory machines by design.
As ordinary processors get more and more cores, it's not appropriate
to use the same concurrency model on a single machine as on multiple
machines.

Go can support different concurrency models for multiple machines, but
the language provides no direct support for that.

Ian

Ryanne Dolan

Jan 14, 2010, 2:21:56 PM
to Franz, golang-nuts

For me, the big thing keeping Go from being my numerical language of choice is the absence of operator overloading.  I abhor overloading in general, and I understand why user-defined operators are purposefully impossible in Go.  Unfortunately, that makes manipulating complex numbers and matrices way more complicated than it should be.

In general I believe that operators should be restricted to built-in types so that operator behavior is always well-defined and obvious.  The only potential solution I can think of is to augment Go with native support for matrices and complex numbers and provide operators to manipulate them.  Then I'd use Go nearly exclusively.

I don't think native matrices are too far-fetched either.  We already have built-in maps (thank you!) and strings. I don't understand at all why + can mean append but not element-wise addition...

I was actually thinking of embedding a small interpreter into my programs so I can have infix expressions with custom operators.

result := infix.Eval(" A * B ")

...after registering what *, A, and B are.

Ryanne
- from my phone -

Ryanne Dolan

Jan 14, 2010, 2:24:14 PM
to Franz, golang-nuts

Shoot sorry, wish my phone had spell check.  Now the whole world knows I can't spell...

- from my phone -

andrey mirtchovski

Jan 14, 2010, 2:55:40 PM
to golang-nuts
ever since I read Russ's blog post on gofmt I've been wondering if (if
you squint at it the right way) gofmt can't be used for the creation
of DSLs translated to and compiled with Go's tools. seeing how well it
dealt with the transition from semicolons to non-terminated lines, it
ought to be possible to get it to do operator overloading for you.

whether that's a good idea on the other hand i can not judge... :)

Jon Harrop

Jan 15, 2010, 12:09:02 PM
to golan...@googlegroups.com
On Thursday 14 January 2010 14:03:53 Franz wrote:
> While Fortran grows like a cancer (without dying, though), the
> scientific community still lacks a language which is
> - small and easy to learn (scientists are not all
> _computer scientists_);
> - portable;
> - fast for number crunching and accurate (IEEE floating-point
> standard);
> - concurrent (no need for OpenMP/MPI/CUDA etc.).
> ANSI C (plus libraries) is the most suitable of a host of unsuitable
> languages. C++ is too big and it takes a lifetime to master. Java
> lives in a virtual machine, therefore it is slow.
> Some scripting languages (e.g. Python+NumPy) are very good, but only
> when neither speed nor concurrency is crucial.

I am trying to address this exact problem, beginning with the creation of a
new VM:

http://www.ffconsultancy.com/ocaml/hlvm/

and I'm already getting impressive results:

http://flyingfrogblog.blogspot.com/2010/01/hlvm-on-ray-tracer-language-comparison.html

> From a cursory look it seems to me that Go might evolve into the
> perfect high performance computing language for scientific use (in
> addition to other uses, of course).
> Am I wrong?

There are several missing features, such as operator overloading, and
there is the general verbosity of the Go language. There are also some
design decisions, e.g. if Go ever gets generics it looks like they will
be extremely inefficient (like Java's).

> Is there some feature (or lack of it) that discourages making a time
> investment in learning and deploying Go for scientific computing,
> especially for large projects?

Lack of a decent garbage collector.

Right now, C# is probably your best bet for scientific computing from a
technical perspective if you consider Mono to satisfy the portability
requirement. If you're willing to sacrifice portability then F# is far
better.

--
Dr Jon Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?e

Jonathan Amsterdam

Jan 15, 2010, 4:45:23 PM
to golang-nuts
> if Go ever gets generics it looks like they will be extremely
> inefficient (like Java's).

What is your evidence for this claim?

Johann Höchtl

Jan 15, 2010, 4:58:27 PM
to golang-nuts

On Jan 14, 3:03 pm, Franz <francesco.papare...@unisalento.it> wrote:
> While Fortran grows like a cancer (without dying, though), the
> scientific community still lacks a language which is
> - small and easy to learn (scientists are not all
> _computer scientists_);
> - portable;
> - fast for number crunching and accurate (IEEE floating-point
> standard);

I miss arbitrary-precision integers most. I would rather not have to
worry about whether an integer may overflow and possibly spoil a
computation. Linking against a bignum library without syntactic
language support would be verbose. bignumAdd(a,b)? brrrr!

Hint: "integer" is free to take ....

Jon Harrop

Jan 15, 2010, 9:19:56 PM
to golan...@googlegroups.com

The proposed implementations of generics for Go follow the existing
implementation used by Java, which has well known (awful) performance
characteristics due to boxing. Moreover, Go's comparatively-inefficient
garbage collectors exacerbate the problem.

For example, functions generic over types of number will cause all numbers to
be boxed and that results in catastrophic performance degradation. You've
basically recovered the performance characteristics of a dynamic language.

Ian Lance Taylor

Jan 15, 2010, 8:42:55 PM
to golan...@googlegroups.com
Jon Harrop <j...@ffconsultancy.com> writes:

> The proposed implementations of generics for Go follow the existing
> implementation used by Java, which has well known (awful) performance
> characteristics due to boxing. Moreover, Go's comparatively-inefficient
> garbage collectors exacerbate the problem.

To be clear for onlookers, the core Go team has not yet made any
proposals regarding generics. Jon is referring to ideas which have
been discussed on the list. These ideas are helpful in developing our
plans, but they do not in themselves indicate what will be done for
Go, if anything.

Ian

NoiseEHC

Jan 16, 2010, 8:46:42 AM
to Jon Harrop, golan...@googlegroups.com

> The proposed implementations of generics for Go follow the existing
> implementation used by Java, which has well known (awful) performance
> characteristics due to boxing. Moreover, Go's comparatively-inefficient
> garbage collectors exacerbate the problem.
>
>

Jonathan asked about evidence, if I am not mistaken. What you say here is
only speculation since, as far as I know, no proposal ever touched on
implementation, only syntax and semantics. My proposal (the macro
rewrite) even left the choice of implementation to the developer of
the generic library.

> For example, functions generic over types of number will cause all numbers to
> be boxed and that results in catastrophic performance degradation. You've
> basically recovered the performance characteristics of a dynamic language.
>

What you describe is what is possible now, without any generics
support. You can create generic algorithms and collections in Go by
using interface{}, which is almost equivalent to boxing.
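
For onlookers, a minimal sketch of that interface{} style (illustrative
names): every element is boxed into an interface value and unboxed again
with a type assertion, which is where the overhead comes from:

// sumInts adds up a "generic" slice: each int is boxed into an
// interface{} value and must be unboxed with a type assertion.
func sumInts(xs []interface{}) int {
    total := 0
    for _, x := range xs {
        total += x.(int) // unbox; panics if the element isn't an int
    }
    return total
}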

Franz

Jan 16, 2010, 1:53:55 PM
to golang-nuts
Thanks to all for the kind replies.
A few clarifications are probably in order.

The kind of "programmer" that I have in mind is not somebody who
spends all his/her time coding, nor has an extensive computer science
background, yet must produce highly performant code.
As a typical application developed by this kind of person, take
a big weather/climate/ocean model. Believe it or not, just about all
models used by the IPCC (Intergovernmental Panel on Climate Change) are
written in Fortran.
Full-time software engineers usually don't belong to the core
development teams of these programs: the community tends to distrust
those who don't have a strong background in math or physics.
Don't get me wrong: I'm not talking about mindless bozos, but about
very skilled and intelligent scientists, who have very limited time
to practice the art of programming (and have probably never read Peter
Norvig's essay http://norvig.com/21-days.html ).
In this kind of setting C has a very hard life: they will try it out,
get burned badly (e.g. because they didn't metabolize the difference
between an array and a pointer, and wrote some Fortran-style code with
arrays that fill up the stack), and then declare to the world that
Fortran is way better.
Of course C++ has all the subtleties of C, and then some. It's the
worst possible language for this kind of user. Anything that takes
more than a month to learn and more than a year to master will not
make inroads in these communities.

Go seems a much cleaner language than C, while still holding the
promise of good performance.

I agree with several observations:

- bindings to C and Fortran. Yet I thought that Go on gcc could
already call C code. Would it be difficult to write a wrapper for, say,
the GNU Scientific Library, or for FFTW?

- arbitrary-precision integers may be important.

- a native complex type, modeled after the C99 specs?

- a standard library implementing vectors and matrices would also be
nice.

- would a standard library implementing MPI-like message passing be
possible? I've seen there is one for remote procedure calls... (see
the sketch below)
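
A rough sketch of what message passing over that existing RPC support
can look like, using today's net/rpc package (the Dot service and the
address are invented for the example):

package main

import (
    "fmt"
    "log"
    "net"
    "net/rpc"
)

type Args struct{ A, B []float64 }

// Dot is a hypothetical service that computes a dot product remotely.
type Dot struct{}

func (Dot) Compute(args Args, out *float64) error {
    var s float64
    for i := range args.A {
        s += args.A[i] * args.B[i]
    }
    *out = s
    return nil
}

func main() {
    // "Rank 0" side: serve the computation over TCP.
    rpc.Register(Dot{})
    ln, err := net.Listen("tcp", "127.0.0.1:4321")
    if err != nil {
        log.Fatal(err)
    }
    go rpc.Accept(ln)

    // "Rank 1" side: connect and invoke the remote computation.
    client, err := rpc.Dial("tcp", "127.0.0.1:4321")
    if err != nil {
        log.Fatal(err)
    }
    var dot float64
    args := Args{A: []float64{1, 2}, B: []float64{3, 4}}
    if err := client.Call("Dot.Compute", args, &dot); err != nil {
        log.Fatal(err)
    }
    fmt.Println(dot) // prints 11
}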

The main observation that I disagree with is the stress on operator
overloading.
The idea of expressing computations as operations on high-level things
such as vectors, matrices and tensors (as opposed to element-wise
manipulation) is very appealing in principle, but it doesn't work, for
two reasons.
- there aren't that many operators to overload, but there are way too
many operations that one may use: e.g. in the C++ ray-tracing code
mentioned by J. Harrop, * becomes a scalar-by-vector multiplication,
and the dot product is a function dot(). Vector normalization is not
a unary operator but, again, a function. The code doesn't need
exterior products, dyadic products, etc., but if it did, I bet they
would again be functions.
- manipulating vectors and matrices as whole things, instead of
element-wise, is inefficient. A trivial example: let
a,b,c be vectors of length N. Take the dyadic product of a and b and
multiply the result by c. That's again a vector of length N. If you
hand-code this element-by-element, you need no extra storage (see the
sketch after this list). Tell me there is a compiler smart enough to
avoid creating an intermediate N by N matrix, if you go the
overloading way. Another example is sparse matrices: most
finite-difference schemes may be expressed elegantly as operations on
sparse matrices. Show me a sparse matrix implementation which is as
fast as element-by-element coding for, say, a Poisson solver.
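
To make the dyadic-product example concrete, a minimal Go sketch of the
element-by-element version: since ((a ⊗ b) c)[i] = a[i] * (b · c), one
dot product suffices and no N-by-N temporary is ever formed:

// outerTimesVec computes (a ⊗ b) c element-by-element using the
// identity ((a ⊗ b) c)[i] = a[i] * (b · c): one dot product and one
// scaling pass, with no N-by-N temporary matrix.
func outerTimesVec(a, b, c []float64) []float64 {
    var dot float64
    for j := range b {
        dot += b[j] * c[j]
    }
    r := make([]float64, len(a))
    for i := range a {
        r[i] = a[i] * dot
    }
    return r
}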

A matrix-oriented language has, of course, its own niche, as the large
popularity of Matlab shows. For example, in many cases the post-
processing of the output of those big climate models I was talking
about is made with Matlab or similar tools.
For those inclined to day-dreaming, we might evoke a math-go pre-
processor that can turn a matrix-oriented superset of Go into plain Go
code. The biggest advantage of this proposal comes from the native
support of UTF-8: there will be no shortage of symbols to denote, in a
concise and expressive way, just about any mathematical operation.
But that's just a dream.

Ryanne Dolan

Jan 16, 2010, 2:16:04 PM
to Franz, golang-nuts

Franz,

I don't understand why you assert that operator overloading would imply inefficient code.  C++ doesn't disallow per-element manipulation... it just makes it possible to write more readable code in the cases where per-element manipulation would be tedious.

In languages supporting operator overloading, operators are just functions.  It doesn't make any sense to say that operators are less efficient than functions.  Furthermore, it isn't impossible to write sparse matrix operators or operators that work on mixes of sparse/dense matrices without losing efficiency.

One of the reasons operator overloading sorta kinda works in C++ but wouldn't in Go is the (non)existence of exceptions.  Code such as

A / B

...can throw an exception in C++, whereas Go code must look like:

C, err = mat.Div(A, B)
if err...

...which gets tedious quickly.  And it isn't enough to say "check inputs not errors" because some checks might be just as complicated as the intended operation.  Is the matrix singular? etc.

The problem in Go gets really bad when you want to write expressions.  Try A*B+C in Go when each operation might return an error.  (I realize mult and add might not have error checks in practice, but you get the point.) You end up using lots of intermediate variables and your code becomes a mess.
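
Spelled out, assuming a hypothetical mat package whose operations
return errors, even the two-operation expression needs a temporary and
explicit checks:

// mulAdd computes A*B + C with a hypothetical mat package: one
// intermediate variable and an error check per operation.
func mulAdd(A, B, C *mat.Matrix) (*mat.Matrix, error) {
    t, err := mat.Mul(A, B)
    if err != nil {
        return nil, err
    }
    return mat.Add(t, C)
}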

That's one reason an embedded interpreter makes some sense to me:

res, err := infix.Eval("A * B + C")

is readable and doesn't need exceptions.

- from my phone -

Gabor

Jan 16, 2010, 4:40:45 PM
to golang-nuts
Just another question related to Go's suitability for scientific
computing:

Can we expect the exponentiation operator anytime soon?

Jon Harrop

Jan 16, 2010, 7:12:40 PM
to golan...@googlegroups.com

I think the kinds of scientists you are describing are a dying breed that are
now only found in pockets of academia and surround major legacy applications.
In industry, most scientists are already using modern alternatives not just
because it is prohibitively difficult to solve many interesting problems
using Fortran but also because Fortran is no longer competitively performant
on multicores and cannot compete with GPUs.

> Go seems a much cleaner language than C, while still holding the
> promise of good performances.

There is some potential there outside Windows but I do not see Go ever
catching up with where F# already is today, for example.

> The main observation that I disagree with is the stress on operator
> overloading.
> The idea of expressing computations as operations on high-level things
> such as vectors, matrices and tensors (as opposed to element-wise
> manipulation) is very appealing in principle, but it doesn't work, for
> two reasons.
> - there aren't that many operators to overload, but there are way too
> many operations that one may use: e.g. in the C++ ray-tracing code
> mentioned by J. Harrop, * becomes a scalar-by-vector multiplication,
> and the dot product is a function dot(). Vector normalization is not
> a unary operator but, again, a function. The code doesn't need
> exterior products, dyadic products, etc., but if it did, I bet they
> would again be functions.

Probably, yes.

> - manipulating vectors and matrices as whole things, instead of
> element-wise, is inefficient. A trivial example: let
> a,b,c be vectors of length N. Take the dyadic product of a and b and
> multiply the result by c. That's again a vector of length N. If you
> hand-code this element-by-element, you need no extra storage. Tell me
> there is a compiler smart enough to avoid creating an intermediate N
> by N matrix, if you go the overloading way.

Sure. I was doing that with template metaprogramming in C++ about 10 years
ago.

Provided your matrices are large enough, you can just accumulate a
vector-matrix expression and apply optimizations to eliminate temporaries
before evaluating it. That is a lot easier in a language designed for
metaprogramming, i.e. not Fortran or Go.

> Another example is sparse
> matrices: most finite-difference schemes may be expressed elegantly
> as operations on sparse matrices. Show me a sparse matrix
> implementation which is as fast as element-by-element coding for, say,
> a Poisson solver.
>
> A matrix-oriented language has, of course, its own niche, as the large
> popularity of Matlab shows. For example, in many cases the post-
> processing of the output of those big climate models I was talking
> about is made with Matlab or similar tools.
> For those inclined to day-dreaming, we might evoke a math-go pre-
> processor that can turn a matrix-oriented superset of Go into plain Go
> code. The biggest advantage of this proposal comes from the native
> support of UTF-8: there will be no shortage of symbols to denote, in a
> concise and expressive way, just about any mathematical operation.
> But that's just a dream.

F# already allows unicode source including operators. :-)

Ian Lance Taylor

Jan 16, 2010, 6:36:03 PM
to Gabor, golang-nuts
Gabor <g...@szfki.hu> writes:

> Can we expect the exponentiation operator anytime soon?

I don't think that is very likely. The operators in Go are
implemented as single machine instructions on many modern processors.
Exponentiation is not. I think math.Pow seems appropriate for a
language like Go.

Ian

Franz

Jan 16, 2010, 8:16:48 PM
to golang-nuts
> In languages supporting operator overloading, operators are just functions.
> It doesn't make any sense to say that operators are less efficient than
> functions.

I didn't say that. I said that
1) functions (and operator overloading) are often less efficient than
element-by-element coding.
2) overloading might not buy you more readability over functions,
because there aren't that many operators to overload.

> Furthermore, it isn't impossible to write sparse matrix
> operators or operators that work on mixes of sparse/dense matrices without
> losing efficiency.

Wonderful! Do you actually know a piece of code that does just that
and that I can study to learn the trick?


> C, err = mat.Div(A, B)
> if err...
>
> ...which gets tedious quickly.  

That's a good point. To me it seems one more reason to stick with
element-by-element manipulations, at least in Go and similar
languages.


Franz

Jan 16, 2010, 9:32:31 PM
to golang-nuts
> In industry, most scientists are already using modern alternatives not just
> because it is prohibitively difficult to solve many interesting problems
> using Fortran but also because Fortran is no longer competitively performant
> on multicores and cannot compete with GPUs.

I wish you were right, but you might be overoptimistic. For example:
10 out of 17 benchmarks of the SPEC CFP2006 use Fortran code (and only
4 use C++): http://www.spec.org/cpu2006/CFP2006/.

> I think the kinds of scientists you are describing are a dying breed that are
> now only found in pockets of academia and surround major legacy applications.

The trend is in the opposite direction: as computing resources become
faster and cheaper, more and more scientists and technicians end up
doing some sort of programming, at various levels, from those who
occasionally write a script in an application-specific language
(e.g. Matlab) scaling up to implementors and maintainers of enormous
code bases.
A real-life example might be enlightening: read the answer to the 1st
question about HIM (Hallberg isopycnal model, a leading ocean model):
http://data1.gfdl.noaa.gov/~arl/pubrel/l/him/doc/HIM_faq.html You are
allowed to weep afterwards.
(and no, Princeton geophysicists are not dying...)

> There is some potential there outside Windows but I do not see Go ever
> catching up with where F# already is today, for example.

The future of Windows as a platform for supercomputing doesn't look
bright to me: http://www.top500.org/stats/list/34/os
Believe me, the next time Nature features a simulation of a
supernova explosion, it won't be made on Windows. And your weekly
weather forecast isn't computed on it, either, although The Weather
Channel might be using it for post-processing.

> > - manipulations of vectors and matrices as whole things, instead of
> > element-wise manipulations is inefficient. A trivial example: let
> > a,b,c be vectors of length N. Take the dyadic product of a and b and
> > multiply the result by c. That's again a vector of length N. If you
> > hand-code this element-by-element, you need no extra storage. Tell me
> > there is a compiler smart enough to avoid creating an intermediate N
> > by N matrix, if you go the overloading way.
>
> Sure. I was doing that with template metaprogramming in C++ about 10 years
> ago.
>
> Provided your matrices are large enough, you can just accumulate a
> vector-matrix expression and apply optimizations to eliminate temporaries
> before evaluating it. That is a lot easier in a language designed for
> metaprogramming, i.e. not Fortran or Go.

I'd love to see an example. If what you say doesn't boil down to
rewriting the element-by-element calculation somewhere else in the
source code, then I'll take my time to master C++ and use it in my
projects.
Another interesting test problem is computing the Mandelbrot set.
The low-level way to do it is to loop z:=z^2+c until |z|>2 or N
iterations are done, for some large fixed N.
This inner loop is bracketed by two outer loops that explore a matrix
of values for the constant c. This is verbose, messy, but speedy,
because for many values of c the inner loop runs for much less than N
cycles.
The matrix-based way is: define a complex-valued matrix C whose
elements are all the c values that you want to plot. Then iterate
exactly N times Z:=Z*Z+C, where * now means 'element-by-element matrix
multiplication'. The (approximate) Mandelbrot set is represented by
the elements of Z having modulus less than 2. This is elegant and very
readable, and there is only one loop (the other two are implicit in
the matrix operations), but all elements of Z iterate N times, even if
their modulus has already exceeded 2. Big waste.

Do your metaprogramming techniques allow you to keep the elegance of
the second approach and the speed of the first? Any example code?
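
For reference, the low-level inner loop sketched in today's Go (which
has since gained a native complex128 type); the early bail-out is
exactly what the whole-matrix formulation gives up:

package mandel

import "math/cmplx"

// escapeTime iterates z := z*z + c, returning as soon as |z| > 2, so
// most points cost far fewer than maxIter iterations.
func escapeTime(c complex128, maxIter int) int {
    var z complex128
    for i := 0; i < maxIter; i++ {
        if cmplx.Abs(z) > 2 {
            return i
        }
        z = z*z + c
    }
    return maxIter
}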

Norman Yarvin

Jan 17, 2010, 12:13:20 AM
to Ian Lance Taylor, Gabor, golang-nuts

Probably by far the most common case in numerical programming is taking a
small integer power of a floating-point number, with the power most often
being two. That's a matter of relatively few machine instructions.

It might not seem like a big deal to write, for example,

pressure*pressure

instead of

pressure^2

but if one is writing hundreds of lines of math, the former can get quite
tiresome.
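
For reference, the two spellings in Go (a trivial sketch using the
standard math package):

package pow

import "math"

// powers: a single multiply for small integer powers, math.Pow for
// general exponents.
func powers(pressure float64) (p2, p37 float64) {
    p2 = pressure * pressure      // the common case: just multiply
    p37 = math.Pow(pressure, 3.7) // general exponentiation
    return p2, p37
}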

Bob Cunningham

Jan 17, 2010, 4:32:05 AM
to Franz, golang-nuts
On 01/16/2010 06:32 PM, Franz wrote:
>> In industry, most scientists are already using modern alternatives not just
>> because it is prohibitively difficult to solve many interesting problems
>> using Fortran but also because Fortran is no longer competitively performant
>> on multicores and cannot compete with GPUs.
>>
> I wish you were right, but you might be overoptimistic. For example:
> 10 out of 17 benchmarks of the SPEC CFP2006 use Fortran code (and only
> 4 use C++): http://www.spec.org/cpu2006/CFP2006/.
>

I've been "the" programmer on several projects staffed with (non-CS)
PhDs. There is no way I'd let any of them near a compiler for a
low-level language like C, or a complex language like C++ or Java.

The scientists I've known who were competent in any general-purpose
language beyond FORTRAN most often used Python, and that was primarily
due to the awesome SciPy/NumPy package. Perl does have a significant
presence in biological computing, primarily due to its powerful
pattern-matching capabilities.

The scientists I've known generally don't WANT to write programs! They
see programming as a vile but necessary evil forced upon them in order
to obtain access to automated calculation. FORTRAN is, to them, the
simplest and fastest way to minimize the pain. If they had to leave
FORTRAN behind, they'd prefer to go more toward Python, and further away
from C.

Go is a great language testbed, but in industrial contexts it does not
(yet) seem on track to replace either C at the low-end (embedded
real-time and memory-constrained environments), or C++/Java at the
high-end (monster projects), or FORTRAN (scientific) or Ada
(high-integrity systems). That may change as Go evolves, but I'm not
betting on it. The above domains benefit greatly from "rich" (big
syntax) languages and/or complex (slow) compilers: Go's stated mission
is to avoid both of those areas.

One direction of evolution I'd like to see Go attempt would be to expand
the notion of channels and goroutines to work across meshes and
networks, replacing or hiding packages such as MPI and PVM. That would
be a HUGE draw to much of the scientific community.

Rather than trying to make the Go syntax more suitable for scientific
work, I'd rather see channels and goroutines added to Python and FORTRAN
to permit PhDs to leverage their current programming experience to make
better use of multiple cores (and eventually distributed processors /
clouds) with minimal effort, and without needing to use or become a
"professional" programmer.

>> I think the kinds of scientists you are describing are a dying breed that are
>> now only found in pockets of academia and surround major legacy applications.
>>
> The trend is in the opposite direction: as computing resources become
> faster and cheaper, more and more scientists and technicians end up
> doing some sort of programming, at various levels, from those who
> occasionally write a script in an application-specific language
> (e.g. Matlab) scaling up to implementors and maintainers of enormous
> code bases.
> A real-life example might be enlightening: read the answer to the 1st
> question about HIM (Hallberg isopycnal model, a leading ocean model):
> http://data1.gfdl.noaa.gov/~arl/pubrel/l/him/doc/HIM_faq.html You are
> allowed to weep afterwards.
> (and no, Princeton geophysicists are not dying...)

Great comments and replies!

In the "big problem" scientific context, making it easy and efficient to
distribute computation and data across multiple cores and distributed
processors may matter far more than access to a native Complex type or
generic Vectors.

In other words, the "elephant" problem is not ease of coding or
efficient use of any single execution unit: The problem is making
effective use of all available cores and processors. Serial code
efficiency matters far less compared to making use of the exponentially
growing number of cores (1, 2, 4, 8, ...) and processors (clouds).

The Go crew is absolutely correct to omit complex code optimizers from
their fast (non-GCC) compilers. That would not be the best use of such
an amazingly talented team!

Go has made a huge step in the direction of simplified multicore
processing (channels + goroutines), and the rest of its features pale in
comparison (fast compile, interfaces, multi-value return, type
inference, etc.).

What makes Go so exciting to me is that it is the ONLY language I can
imagine being used to implement the examples in "Multicore Programming
for Dummies".

I'd like to eventually see Go used in "Cloud Programming for Dummies".

But I don't envision Go ever being used in the book "Programming For
Scientists".


-BobC

Franz

Jan 17, 2010, 5:12:54 AM
to golang-nuts
> Probably by far the most common case in numerical programming is taking a
> small integer power of a floating-point number, with the power most often
> being two.  That's a matter of relatively few machine instructions.
>
> It might not seem like a big deal to write, for example,
>
>         pressure*pressure
>
> instead of
>
>         pressure^2
>
> but if one is writing hundreds of lines of math, the former can get quite
> tiresome.

This leads me to ask about inline functions. Assume you define the
function

func pow3(x float64) float64 { return x * x * x }

Is this always going to incur call overhead, or will the Go compiler
inline it? I guess there will never be an explicit 'inline' keyword in
Go...

Gabor

Jan 17, 2010, 5:22:53 AM
to golang-nuts
On Jan 17, 10:32 am, Bob Cunningham <FlyM...@gmail.com> wrote:
...

I agree with most of what you say about Go's importance and potential
in multicore programming.

On the other hand I do not agree with your general claim about
scientists' attitudes towards modern programming languages/techniques.
Most of them do not have the luxury of working with a professional
computer scientist, so quite naturally they do the job themselves.
Many of them use languages like C/C++ or techniques like MPI/OpenMP/
CUDA correctly, and depending on how Google's Go language evolves,
they would use it too. At the birth of a language, it would be unwise
to consider the whole scientific community as illiterate in modern
programming techniques and incapable of adjusting to new variants. The
features required by this community can be carefully selected (e.g.
complex numbers), kept to a bare minimum and, therefore, would not blow
up the original language design.

Johann Höchtl

Jan 17, 2010, 6:40:40 AM
to golang-nuts

On Jan 17, 10:32 am, Bob Cunningham <FlyM...@gmail.com> wrote:

> The Go crew is absolutely correct to omit complex code optimizers from
> their fast (non-GCC) compilers.  That would not be the best use of such
> an amazingly talented team!
>

Fully agreed!

> Go has made a huge step in the direction of simplified multicore
> processing (channels + goroutines), and the rest of its features pale in
> comparison (fast compile, interfaces, multi-value return, type
> inference, etc.).
>
> What makes Go so exciting to me is that it is the ONLY language I can
> imagine being used to implement the examples in "Multicore Programming
> for Dummies".
>
> I'd like to eventually see Go used in "Cloud Programming for Dummies".
>

I know this is a Go mailing list and, while not strictly advocating ...
for scientists, who have probably internalized mathematical notation
and single assignment, Erlang would be the language that would first
come to my mind, being more high-level than Go. But both languages
fit their niche (which in the case of Go might soon be not so small
anymore).

> But I don't envision Go ever being used in the book "Programming For
> Scientists".
>

Uhm, I do. It's from Go ogle, you know, and for some this might smell
like a million dollar baby ...

> -BobC

Johann Höchtl

Jan 17, 2010, 6:49:47 AM
to golang-nuts

On Jan 16, 2:42 am, Ian Lance Taylor <i...@google.com> wrote:

> To be clear for onlookers, the core Go team has not yet made any
> proposals regarding generics.  Jon is referring to ideas which have
> been discussed on the list.  These ideas are helpful in developing our
> plans, but they do not in themselves indicate what will be done for
> Go, if anything.
>

And I for myself never missed generics in any language I used, and
found them much overused in C++. While they have their uses (the STL
is a wonderful example), language newbies tend to use every fancy
feature without thinking about suitability and later serviceability.

In that respect, sad but true: Worse is better.

And as the discussion somewhat touches on "types of scientists", I very
much liked this post from Reddit (Reddit search is down at the moment,
so I cannot post the link):

"An algorithm in Haskell is worth a paper, an algorithm in Python is a
blog post"

> Ian

Bob Cunningham

Jan 17, 2010, 6:58:41 AM
to Gabor, golang-nuts

I never said scientists don't program: I said the ones I know don't WANT
to program.

The scientists I know believe that having a scientist write programs is
like having a pastry chef grind his own flour. Sure, it can be done,
but it's probably not the best use of the chef's time or talent. But
when there is no alternative, they always step up to meet the need.

Nor did I say scientists CAN'T program. Most of the ones I know are
true wizards in FORTRAN. But their expertise typically was only in
FORTRAN, no matter how many other languages they knew.

I should have mentioned that all the scientists I've worked with have
been in the private sector, outside of academia and government. They
were focused on solving tough problems on a tight schedule, often with a
very tight budget.

That made them extremely expensive programmers! Even if they did like
to write code, the project would proceed faster if they spent their time
where their value was highest.

I don't know anything about how other scientists feel about
programming. I suspect there are many scientists with the luxury of
time who would love to try a new scientific programming language.

I suspect most would prefer to see the languages they already know
become better.


-BobC

Jon Harrop

Jan 17, 2010, 10:07:38 AM
to golan...@googlegroups.com
On Sunday 17 January 2010 02:32:31 Franz wrote:
> > In industry, most scientists are already using modern alternatives not
> > just because it is prohibitively difficult to solve many interesting
> > problems using Fortran but also because Fortran is no longer
> > competitively performant on multicores and cannot compete with GPUs.
>
> I wish you were right, but you might be overoptimistic. For example:
> 10 out of 17 benchmarks of the SPEC CFP2006 use Fortran code (and only
> 4 use C++): http://www.spec.org/cpu2006/CFP2006/.
>
> > I think the kinds of scientists you are describing are a dying breed that
> > are now only found in pockets of academia and surround major legacy
> > applications.
>
> The trend is in the opposite direction: as computing resources become
> faster and cheaper, more and more scientists and technicians end up
> doing some sort of programming, at various levels, from those who
> occasionally write a script in an application-specific language
> (e.g. Matlab) scaling up to implementors and maintainers of enormous
> code bases.
> A real-life example might be enlightening: read the answer to the 1st
> question about HIM (Hallberg isopycnal model, a leading ocean model):
> http://data1.gfdl.noaa.gov/~arl/pubrel/l/him/doc/HIM_faq.html You are
> allowed to weep afterwards.
> (and no, Princeton geophysicists are not dying...)

You're talking about monolithic legacy applications. That is no more
representative of scientific computing than COBOL running on mainframes in
banks is representative of software in industry.

> > There is some potential there outside Windows but I do not see Go ever
> > catching up with where F# already is today, for example.
>
> The future of Windows as a platform for supercomputing doesn't look
> bright to me: http://www.top500.org/stats/list/34/os

Supercomputing is a tiny and shrinking part of scientific computing.

> Believe me, the next time Nature features a simulation of a
> supernova explosion, it won't be made on Windows. And your weekly
> weather forecast isn't computed on it, either, although The Weather
> Channel might be using it for post-processing.

The vast majority of scientific computing is now done on multicore desktops
where, of course, Windows has the lion's share of the market. The vast
majority of the computed content presented in Nature will have been done on a
desktop machine running Windows.

> > Sure. I was doing that with template metaprogramming in C++ about 10 years
> > ago.
> >
> > Provided your matrices are large enough, you can just accumulate a
> > vector-matrix expression and apply optimizations to eliminate temporaries
> > before evaluating it. That is a lot easier in a language designed for
> > metaprogramming, i.e. not Fortran or Go.
>
> I'd love to see an example. If what you say doesn't boil down to
> rewriting the element-by-element calculation somewhere else in the
> source code, then I'll take my time to master C++ and use it in my
> projects.

C++ has been obsolete for years. Don't start learning it now!

> Another interesting test problem is computing the Mandelbrot set.
> The low-level way to do it is to loop z:=z^2+c until |z|>2 or N
> iterations are done, for some large fixed N.
> This inner loop is bracketed by two outer loops that explore a matrix
> of values for the constant c. This is verbose, messy, but speedy,
> because for many values of c the inner loop runs for much less than N
> cycles.
> The matrix-based way is: define a complex-valued matrix C whose
> elements are all the c values that you want to plot. Then iterate
> exactly N times Z:=Z*Z+C, where * now means 'element-by-element matrix
> multiplication'. The (approximate) Mandelbrot set is represented by
> the elements of Z having modulus less than 2. This is elegant and very
> readable, and there is only one loop (the other two are implicit in
> the matrix operations), but all elements of Z iterate N times, even if
> their modulus has already exceeded 2. Big waste.
>
> Do your metaprogramming techniques allow you to keep the elegance of
> the second approach and the speed of the first?

Yes.

> Any example code?

The simplest example is traditional deforestation in a functional language. To
take your Mandelbrot example, this matrix version:

let f z = z * z + c
nest n (map f) matrix

Can be deforested to:

map (nest n f) matrix

You've just exchanged the combinators to switch the loops, avoiding the
creation of a temporary.
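
Rendered as a Go sketch with invented helpers: nest composes a function
with itself, so mapping the nested function makes one pass over the
data instead of n passes with a temporary per pass:

// nest returns f composed with itself n times:
// nest(3, f)(z) == f(f(f(z))).
func nest(n int, f func(complex128) complex128) func(complex128) complex128 {
    return func(z complex128) complex128 {
        for i := 0; i < n; i++ {
            z = f(z)
        }
        return z
    }
}

// mapInPlace applies g to every element of zs.
func mapInPlace(g func(complex128) complex128, zs []complex128) {
    for i, z := range zs {
        zs[i] = g(z)
    }
}

// Fused form: mapInPlace(nest(n, f), zs) -- one pass, no temporaries --
// instead of calling mapInPlace(f, zs) n times.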

Things like matrix operators and slices are just special cases of combinators
in a functional language. Once you've got that functional capability you can
easily express the solutions to such problems and optimize them. Ultimately,
you can create a compiler for a domain specific language that takes your
matrix operations and compiles them down to efficient machine code.

I used the same technique to rewrite a vector-based wavelet transform written
in Python into a function-based representation in OCaml that was many times
faster, because it compiles down to element-wise operations without you having
to code a single loop:

http://www.mail-archive.com/python-list%40python.org/msg121602.html

Here's the code:

let rec d4 s1 d1 d2 x =
  let c1 = 1.7320508075688772 in
  let c2 = 0.4330127018922193 in
  let c3 = -0.066987298107780702 in
  let c4 = 0.51763809020504137 in
  let c5 = 1.9318516525781364 in
  let n = Array.length x in
  let odd i = x.(2*i) and even i = x.(2*i + 1) in
  let d1 i = odd i -. c1 *. even i in
  let f = function -1 -> n/2 - 1 | i -> i in
  let s1 i = even i +. c2 *. d1 i +. c3 *. d1 (f(i-1)) in
  let d2 i = d1 i +. s1 (f(i-1)) in
  let even = Array.init (n/2) (fun i -> c4 *. s1 i) in
  if n > 2 then d4 s1 d1 d2 even else s1, d1, d2

Mathematica has also helped to make this functional style of programming more
popular in scientific circles.

Jon Harrop

Jan 17, 2010, 12:30:32 PM
to golan...@googlegroups.com
On Sunday 17 January 2010 09:32:05 Bob Cunningham wrote:
> Rather than trying to make the Go syntax more suitable for scientific
> work, I'd rather see channels and goroutines added to Python and FORTRAN
> to permit PhDs to leverage their current programming experience to make
> better use of multiple cores (and eventually distributed processors /
> clouds) with minimal effort, and without needing to use or become a
> "professional" programmer.

If you want to make efficient use of multicores, you'll almost certainly want
Cilk-style wait-free work-stealing task queues. They're in .NET 4, for
example.

> In the "big problem" scientific context, making it easy and efficient to
> distribute computation and data across multiple cores and distributed
> processors may matter far more than access to a native Complex type or
> generic Vectors.

If you get basics like complex numbers wrong, as Java did, then your
performance is so badly degraded that your supercomputer will run as fast as
a desktop.

For example, hash tables over value types are 32x slower in Java than C# due
to fundamental deficiencies in the JVM. Even if you assume linear scaling,
Java running on Azul's 256-core supercomputer would only be competitive with
C# running on my 8 core desktop. In practice, the JVM's inefficiencies will
rapidly max out the memory bandwidth for the whole machine and you won't get
anywhere near linear scaling.

> In other words, the "elephant" problem is not ease of coding or
> efficient use of any single execution unit: The problem is making
> effective use of all available cores and processors. Serial code
> efficiency matters far less compared to making use of the exponentially
> growing number of cores (1, 2, 4, 8, ...) and processors (clouds).

I disagree. Amdahl's law places a limit on the performance gain from an
infinite number of cores that, for many applications, is not substantially
larger than the performance improvements offered by serial optimizations.
Moreover, that limit is imposed in practice not only by serial segments of
computation but by global resource limits such as memory bandwidth.

> Go has made a huge step in the direction of simplified multicore
> processing (channels + goroutines),

Compared to F# on .NET 4, Go is cumbersome and hugely inefficient for
multicore programming.

> What makes Go so exciting to me is that it is the ONLY language I can
> imagine being used to implement the examples in "Multicore Programming
> for Dummies".

You're joking.

Ian Lance Taylor

Jan 18, 2010, 10:17:48 PM
to Franz, golang-nuts
Franz <francesco...@unisalento.it> writes:

> This leads me to ask about inline functions. Assume you define the
> function
>
> func pow3(x float64) float64 { return x * x * x }
>
> Is this always going to incur call overhead, or will the Go compiler
> inline it? I guess there will never be an explicit 'inline' keyword in
> Go...

At the moment the 6g/8g compilers will not inline functions. The
gccgo compiler will inline calls within a package (at appropriate
optimization levels), but will not inline calls across packages.
Both compilers may change over time.

Ian

The Parrot's Theorem

Mar 19, 2010, 6:27:43 AM
to golang-nuts
I am one of the people who program because of scientific needs, and
probably I am not even passable as a professional programmer.

Can we have a serious comparison benchmark of some computationally and
memory-intensive operation run in F#, Go (and possibly a couple of
other languages, ideally R) on a multicore (like 16 cores) machine
with at least 64 GB of RAM, just to see what we are talking about?

For example, a large nonlinear constrained convex optimisation
problem, or some GCM-type model, or some climatic simulation or fluid
mechanics simulation or some serious numerical simulation? Something
you normally need to run for a week, so if the performance is that
amazing, we should see a substantial improvement.

In our research groups we tend to use either high-level languages like
R, Matlab/Octave, or Python; C and CUDA/OpenCL for optimised routines;
or Java and Ruby for generic stuff. Traditionally it is either C or
Python or Java for stuff that goes on distributed machines, and Java,
Python or Ruby when the code can make use of object-oriented features.

It would help greatly to be able to spread computational load on
several cores and on several machines in a very simple, transparent
and seamless way, without making us learn OpenMP/MPI.

I am surprised that F#, C# and .NET do all these things and we have
never used them... could you please elaborate, Jon? All our clusters
are actually using *nix because we can get better performance and
memory utilisation.
Also, I am surprised you say that there is no equivalent for the .NET
platform... I would have thought the JVM wasn't very different, and
allowed for many languages that we keep using in scientific
computation, like Python or Ruby. The other problem with the .NET
platform is the fact that you cannot port it anywhere, really.

The other very important aspect of Go is that it is completely open
source, which means it can be optimised progressively.

To me, Go looks promising because it allows managing multiple cores in
a nearly seamless way, but it would be nice to have mechanisms to
manage:
(1) distributed computing (spreading on more than one machine)
(1) high-performance numerical types, like multidimensional matrices /
multi-dimensional arrays (not nested arrays!), that allow for
integration with R and Octave/GDL
(1) in-line functions
(1) Good documentation
(1) Good documentary
(1) Good documentation
(2) GPUs
(forgive me the joke)

I personally very much like the idea of the garbage collector, because
it takes away the hassle of managing memory. The documentation says
the current GC is a temporary one and a new high-performance one is
under development.

Regards

David Roundy

Mar 19, 2010, 11:39:49 AM
to The Parrot's Theorem, golang-nuts
On Fri, Mar 19, 2010 at 3:27 AM, The Parrot's Theorem <ct...@york.ac.uk> wrote:
> It would help greatly to be able to spread computational load on
> several cores and on several machines in a very simple, transparent
> and seamless way, without making us learn OpenMP/MPI.

MPI isn't particularly hard, and is probably easier to program with
than anything you'd write using Go, unless you're writing something
very atypical (e.g. computations involving extra nodes
joining in and leaving dynamically). On the other hand, MPI isn't a
particularly good fit if you aren't actually running on multiple
nodes, and parallelization with pthreads is *definitely* harder than
parallelization in Go.

I haven't written or run any scientific code in Go, but I have written
heavy-duty scientific code, and I've written code in Go. My guess
would be that for any sort of linear-algebra-dominated workload, Go
will be as fast as C (with appropriate calls to LAPACK). In any case,
it'll be way faster than Python. The sticking point for me is
actually just operator overloading, which is likely to keep me from
leaving C++ in the short term: scientific code is *so* much more
readable, and consequently debuggable, when you can use + to represent
addition, as in a+b rather than a.Plus(b) or the like.

> To me, Go looks promising because it allows managing multiple cores in
> a nearly seamless way, but it would be nice to have mechanisms to
> manage:
> (1) distributed computing (spreading on more than one machine)

This would be interesting, but doesn't seem like a good thing to be
written into the core language. It's also unclear to me whether it'd
be better to have decent libmpi bindings or something more flexible.

The trouble with putting this into the core language is that the core
language shouldn't have any way to run something on another machine,
which should require some sort of authentication.

> (1) high-performance numerical types, like multidimensional matrices /
> multi-dimensional arrays (not nested arrays!), that allow for
> integration with R and Octave/GDL

Indeed, I also long for multi-dimensional arrays. It's really sad
that if you want to avoid manually computing indices in a compiled
language, you *still* need to use Fortran!

> (1) in-line functions

This is just an optimization, and I feel confident it will come in time.

> I personally very much like the idea of the garbage collector, because
> it takes away the hassle of managing memory. The documentation says
> the current GC is a temporary one and a new high-performance one is
> under development.

I agree about garbage collection. It's completely worth the overhead,
and in inner loops it should cost nothing, since all allocation (in
well-written inner loops) is on the stack.
--
David Roundy

befelemepeseveze

Mar 19, 2010, 12:11:43 PM
to golang-nuts
On 19 Mar, 16:39, David Roundy <roun...@physics.oregonstate.edu>
wrote:

> Indeed, I also long for multi-dimensional arrays.  It's really sad
> that if you want to avoid manually computing indices in a compiled
> language, you *still* need to use Fortran!

Maybe I misunderstood something:
http://golang.org/doc/go_spec.html#Array_types

-bflm

David Roundy

Mar 19, 2010, 12:47:48 PM
to befelemepeseveze, golang-nuts

I was thinking of multidimensional arrays with a size determined at
runtime. It's nice to have fixed-size multidimensional arrays, but
that doesn't help for much of my work. Basically, they're only useful
for 3x3 tensors and rotation matrices (among the matrices that I use).
Oh, and 4x4 symmetry operators. But those don't show up in any (of my)
performance-critical code anyhow, so the overhead of nested arrays
wouldn't greatly matter for them.
--
David Roundy

David Roundy

Mar 19, 2010, 12:49:44 PM
to befelemepeseveze, golang-nuts

Of course, we could go back to Fortran 77-style code (before Fortran
had dynamic allocation) and just recompile the program for each problem
we want to solve. Since Go is fast to compile, maybe that wouldn't be
so bad... we could just run the Go code through the cpp preprocessor or
something. But it doesn't feel like a step forwards.
--
David Roundy

befelemepeseveze

Mar 19, 2010, 1:45:53 PM
to golang-nuts
On 19 Mar, 17:47, David Roundy <roun...@physics.oregonstate.edu>
wrote:

> I was thinking of multidimensional arrays with a size determined at
> runtime.  

Maybe I'm still confused:
http://golang.org/doc/go_spec.html#Slice_types

Quoted from that link:
"Like arrays, slices are always one-dimensional but may be composed to
construct higher-dimensional objects. With arrays of arrays, the inner
arrays are, by construction, always the same length; however with
slices of slices (or arrays of slices), the lengths may vary
dynamically. Moreover, the inner slices must be allocated individually
(with make). "

-bflm

David Roundy

Mar 19, 2010, 4:24:02 PM
to befelemepeseveze, golang-nuts
On Fri, Mar 19, 2010 at 10:45 AM, befelemepeseveze
<befeleme...@gmail.com> wrote:
> Quoted from that link:
> "Like arrays, slices are always one-dimensional but may be composed to
> construct higher-dimensional objects. With arrays of arrays, the inner
> arrays are, by construction, always the same length; however with
> slices of slices (or arrays of slices), the lengths may vary
> dynamically. Moreover, the inner slices must be allocated individually
> (with make). "

As it says, there's no support for multidimensional slices, only for
multidimensional arrays or nested slices, and the latter means you get
poor performance. We have that in C also (pointers to pointers), but
no one who cares about performance uses it, since you have an extra
level of indirection. On top of that, you still need to allocate one
large array/slice for all your data, or it won't be in a contiguous
chunk of memory, and you'll get even worse performance.
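
The usual workaround is a small wrapper over one contiguous slice with
hand-computed row-major indices, sketched here:

// Matrix keeps its elements in a single contiguous, row-major slice,
// avoiding the per-row pointer chase of a [][]float64.
type Matrix struct {
    rows, cols int
    data       []float64 // len(data) == rows*cols
}

func NewMatrix(rows, cols int) *Matrix {
    return &Matrix{rows: rows, cols: cols, data: make([]float64, rows*cols)}
}

func (m *Matrix) At(i, j int) float64     { return m.data[i*m.cols+j] }
func (m *Matrix) Set(i, j int, v float64) { m.data[i*m.cols+j] = v }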

I don't have much of a leg to stand on criticizing Go for lacking
multidimensional slices (or dynamically sizable multidimensional
arrays), since no other compiled language has them either (except
Fortran), but it still seems a shame. It's almost as if language
developers no longer consider HPC a priority... which probably also
explains the demise of the ** operator, which is a pain to replace
efficiently and readably. My C++ inline replacement uses an extra
multiply beyond the optimal.
--
David Roundy

befelemepeseveze

Mar 19, 2010, 5:38:16 PM