R is implemented in C and FORTRAN, with R itself layered on top of that. SAS is in C (and some Go here and there), with the SAS language on top of that. Mathematica is implemented in C/C++ with the "Wolfram Language" on top of that. PARI/GP is implemented in C plus some GP-language code. Macsyma, Maple, Octave, Python, ... follow this pattern too:
2 [interactive exploration environment with scripting]: many and various, including R, SAS, MMA, GP, Macsyma, Axiom, Maple, Python, ...
0 [ultra-performant kernels in C/Assembler/..]: GMP, LAPACK, BLAS, ATLAS, ...
You say data science is an application domain where Level 2 features make sense, where they facilitate understanding by providing an interactive environment. The evidence supports you, though note that none of your examples (nor any in my expanded set) actually do much work at that level: Level 2 is where "convolve a with b" is specified, but the actual doing happens lower, at Levels 0 and 1, where Go-like compiled software in C, C++, or FORTRAN does the heavy lifting. (I make this point only to clarify what some bloggers seem not to understand when they write "my Python giant-matrix solver is just as fast as C/C++/Go, so I don't see why people claim C/C++/Go is faster" or "I don't see the advantage in compiled languages.")
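To make the two-level split concrete, here is a minimal sketch in Go of what the substrate side might look like. The function name and the "convolve a with b" dispatch are hypothetical, not taken from any real system: the point is only that the high-level environment states the operation while a compiled kernel like this one runs the loops.

```go
package main

import "fmt"

// convolve is the Level-0/1 kernel: a direct discrete convolution,
// the kind of tight compiled loop that runs when the top-level
// language merely says "convolve a with b".
func convolve(a, b []float64) []float64 {
	out := make([]float64, len(a)+len(b)-1)
	for i, x := range a {
		for j, y := range b {
			out[i+j] += x * y
		}
	}
	return out
}

func main() {
	// A Level-2 environment would parse "convolve a with b" and
	// hand the operands to the kernel; here we call it directly.
	fmt.Println(convolve([]float64{1, 2, 3}, []float64{1, 1}))
	// → [1 3 5 3]
}
```

The interactive layer contributes notation and exploration; all of the arithmetic lives in the compiled substrate.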
If Go has a place in interactive, interpretive data science, it seems to me that it would be as the substrate language (Levels 0 and 1). Go certainly has a place in statistics, applied mathematics, and other realms related to data science if you want to include apps that do work and act on results: control systems, analysis tools, and so on. But to create an interactive "play" space I'd (again, just me) be inclined to follow the PARI/GP model, with a Go kind of PARI and a domain-friendly GP.
The high-level language (Mathematica, Maple, GP, SAS, ...) in the existing systems often seems weak to me, designed not as a first-class programming language but grown as an endless accretion of fixes and patches that enable scripting. I feel this especially in the way local variables are declared, which often feels brutish and awkward, but it extends to many subtleties. It is natural that things tend this way: the developers were focused on the core and just needed "a way" to bind it all together. The successful projects span decades and unanticipated new application domains, so they have accumulated the most duct tape.
Another virtue of this two-level scheme is that the top language can be "faulty" in ways that are comfortable for its domain. For example, think how many scalar variables you see in C/C++/FORTRAN/Go: "i := 3" accounts for the bulk of variables. But in R there are (at least when I last looked) no scalar variables at all(!); you get by with vectors of length 1. This would not do in general, but for R it may be perfect. The two-level design, of which PARI/GP is one of the best implementations, makes this kind of field-of-use tailoring work fine in practice. That's important: it matches the language's exposed concepts to the problem domain.
I don't see any of this as a weakness or strength of Go, or as something to address with a Go REPL, because it's not Go that you'd want a REPL for, but rather something that knows about data, or Diophantine equations, or moon rocks, or whatever the domain may be, in its natural forms of notation.