Maybe by translating Go code to C shader code. That would not be the best
solution, but it could be practical for the parts of the code that really
benefit from running on a GPU. And even in that case, sending data back
and forth could require some hard programming.
--
André Moraes
http://andredevchannel.blogspot.com/
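[To make the data-movement point above concrete, here is a minimal sketch in
Go. The gpuUpload/runKernel/gpuDownload calls are hypothetical stand-ins, not
a real API: the point is only that before any kernel can run, Go-side data has
to be flattened into a contiguous buffer the device understands, and results
copied back afterwards.]

package main

import "fmt"

// Particle is an example Go-side structure.
type Particle struct {
	X, Y, Z float32
}

// flatten marshals particles into the flat, contiguous layout a GPU
// buffer expects. A slice of plain structs is easy; any pointers, maps,
// or interfaces in the data would have to be converted by hand like this,
// which is part of the "hard programming" mentioned above.
func flatten(ps []Particle) []float32 {
	buf := make([]float32, 0, len(ps)*3)
	for _, p := range ps {
		buf = append(buf, p.X, p.Y, p.Z)
	}
	return buf
}

func main() {
	ps := []Particle{{1, 2, 3}, {4, 5, 6}}
	buf := flatten(ps)
	// gpuUpload(buf); runKernel(); gpuDownload(buf)  // hypothetical calls
	fmt.Println(buf)
}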
For a pretty elegant idea that might apply to both Go and newer GPUs,
see CellFS:
http://dl.acm.org/citation.cfm?id=1587392
which I note has been ported to the 386!
ron
> There is a trend in hardware to go to more and more cores. It would seem
> that the programming technology will go through a paradigm shift quite soon.
> To be able to keep abreast of the development (and the "new" direction of
> Moore's law), the computer languages may have to be adapted. For example,
> the report The GPU Computing Revolution
> <https://ktn.innovateuk.org/web/mathsktn/articles/-/blogs/the-gpu-computing-revolution>
> predicts that the number of cores will grow by an order of magnitude every
> 5 years.
>
> To some extent, Go supports this trend, e.g. using goroutines and channels.
> But how good will Go be? Will you eventually be required to use specialized
> APIs, like CUDA or OpenCL?
No. Perhaps somebody will write some sort of GPU library, but I can't
imagine that such a library would ever be required to write Go programs
which can run on multiple cores.
> Or is there a risk (chance?) that Go will be replaced by something better
> adapted to massive parallel computing?
Well, yes, of course there is a chance that that could happen.
Ian