Jason,
On Thu Oct 30, 2025 at 20:40 CET, Jason E. Aten wrote:
> To me, fundamentally, automatically "go getting"
> something is a massive security hole if that
> process can execute arbitrary repo source code on my machine, right?
>
> Given that CGO_LDFLAGS_ALLOW has to be used to even get some
> C projects to work, and given how paranoid (justifiably) Google and the
> Go
> team are about avoiding security holes, I doubt having a Rust
> dependency build automatically is a good idea; since that rust build
> script can do just about anything.
>
> There is some interest in sandboxing cargo execution, I hear rumor,
> but of course most sandboxes are not water tight. A 6 year old effort:
>
> https://github.com/rust-secure-code/cargo-sandbox
>
> If the tradeoff really is that I have to run make, once, in my Rust
> dependency,
> that seems worth it.
>
> I built a little demo of that to show myself that it is possible.
>
> https://github.com/glycerine/grusty
>
> Seems like a pretty small hurdle (running make/a Makefile/manually
> executing the rust build script once), and pretty big
> benefit for more security and less chance of being hacked, no?
well, building the "FFI'ed" Rust code and linking against it isn't rocket science.
you did it with a Makefile, I went the 'go generate' way for my daktilo package:
https://codeberg.org/sbinet/daktilo/src/tag/v0.4.0/internal/capi/capi.go#L5
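for the record, the 'go generate' way boils down to something like the following (a sketch, not the actual daktilo code: the cargo invocation and the crate/file names are illustrative, only the linker flags mirror the ones you'll see in the link error further down):
```
// Package capi is the Cgo bridge to the Rust FFI shim.
//
// running 'go generate' once, before 'go build', (re)builds the Rust
// static library that the #cgo LDFLAGS below link against.
package capi

// the exact command is up to you: cargo, plain rustc, a Makefile, ...
//go:generate cargo build --release --manifest-path rust/Cargo.toml
//go:generate cp rust/target/release/libtypst_cffi.a libtypst_cffi_linux_amd64.a

// #cgo LDFLAGS: -L${SRCDIR} -ltypst_cffi_linux_amd64 -lm -lpthread
import "C"
```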
and, as long as it is in your own package, within your own module, using a Makefile, configure+make+make-install, go generate, cargo or anything else, really, it boils down to the same thing:
barring any self-inflicted foot-gun, you are pretty safe.
the devil is in the 3rd-party dependencies you may pull in, which may themselves do anything they want with - in this instance - the Rust build script.
so it falls upon you, the main developer of that module or crate, to perform your due diligence and check/audit the dependencies you rely on (and their dependencies, etc...)
so, yes, as I said in my first mail, using cargo is "opening a can of worms".
but let's table the cargo discussion for a moment.
the main issue is when wrapping some Rust code:
- one creates an FFI library around the original Rust code (you need to
actually write a bit of Rust to do that)
- one creates a Cgo package to call that FFI'ed Rust code
- one usually writes a bit of Go to present that Cgo interface in a more
Go-ish way (a sketch of that Cgo layer is right below)
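schematically, the Cgo layer looks like this (dak_render and dak_free are made-up names, not daktilo's actual API):
```
// internal/capi/capi.go: the Cgo layer that talks to the Rust FFI shim.
// (dak_render and dak_free are made-up names.)
package capi

// #include <stdlib.h>
//
// /* functions exported by the Rust shim with #[no_mangle] extern "C" */
// char* dak_render(const char* src);
// void  dak_free(char* s);
import "C"

import "unsafe"

// Render hands a Go string to the Rust shim and copies the result back
// into Go-managed memory before releasing the C allocations.
func Render(src string) string {
	csrc := C.CString(src)
	defer C.free(unsafe.Pointer(csrc))

	cout := C.dak_render(csrc)
	defer C.dak_free(cout)

	return C.GoString(cout)
}
```
the top-level package is then plain Go: it imports internal/capi and re-exposes Render with Go types and proper error handling.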
at that point, you have a little module which you can hack on and use, and you can apply all sorts of things to actually make all this compile: Makefile, Justfile, go generate, etc... you name it.
but then, you publish that module on pkg.go.dev.
how are consumers of this module expected to import it?
```
$> mkdir use-daktilo && cd use-daktilo && go mod init use-daktilo
$> go get codeberg.org/sbinet/daktilo@v0.4.0
go: downloading codeberg.org/sbinet/daktilo v0.4.0
go: added codeberg.org/sbinet/daktilo v0.4.0
$> cat > main.go
package main
import _ "codeberg.org/sbinet/daktilo"
func main() {}
^C
$> go build -v
runtime/cgo
codeberg.org/sbinet/daktilo/internal/capi
codeberg.org/sbinet/daktilo
use-daktilo
# use-daktilo
/home/binet/sdk/go/pkg/tool/linux_amd64/link: running gcc failed: exit status 1
/usr/bin/gcc -m64 -Wl,--build-id=0xc5d0a9d8ca953bd503aee3541a6c98c10c7f0342 -o $WORK/b001/exe/a.out -Wl,--export-dynamic-symbol=_cgo_panic -Wl,--export-dynamic-symbol=_cgo_topofstack -Wl,--export-dynamic-symbol=crosscall2 -Wl,--compress-debug-sections=zlib /tmp/go-link-2151555988/go.o /tmp/go-link-2151555988/000000.o /tmp/go-link-2151555988/000001.o /tmp/go-link-2151555988/000002.o /tmp/go-link-2151555988/000003.o /tmp/go-link-2151555988/000004.o /tmp/go-link-2151555988/000005.o /tmp/go-link-2151555988/000006.o /tmp/go-link-2151555988/000007.o /tmp/go-link-2151555988/000008.o /tmp/go-link-2151555988/000009.o /tmp/go-link-2151555988/000010.o /tmp/go-link-2151555988/000011.o /tmp/go-link-2151555988/000012.o /tmp/go-link-2151555988/000013.o /tmp/go-link-2151555988/000014.o /tmp/go-link-2151555988/000015.o /tmp/go-link-2151555988/000016.o -O2 -g -L/home/binet/tmp/go/pkg/mod/codeberg.org/sbinet/daktilo@v0.4.0/internal/capi -ltypst_cffi_linux_amd64 -lm -O2 -g -lpthread -no-pie
/usr/bin/ld: cannot find -ltypst_cffi_linux_amd64: No such file or directory
```
you'd need to git clone daktilo somewhere, build it locally, and then go-mod-replace it in the "use-daktilo" module to point at that local build.
not super user-friendly.
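ie, after cloning and building daktilo locally (say, under ../daktilo), the consumer's go.mod would need something along these lines:
```
module use-daktilo

go 1.25

require codeberg.org/sbinet/daktilo v0.4.0

// point the build at the local, already-built checkout
// instead of the (binary-less) module cache copy.
replace codeberg.org/sbinet/daktilo => ../daktilo
```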
AFAICT, there are 2 ways to make this work:
a) bundle the libcffi.{so,a} binary within the wrapping module (ie: daktilo/internal/capi in my example), possibly one binary for every combination of $GOOS_$GOARCH one wants to support (sketched just after this list).
that's what Dominik did for his wgpu bindings:
https://github.com/dominikh/go-wgpu/blob/master/wgpu_linux_arm64.go
https://github.com/dominikh/wgpu-linux-arm64
b) bundle the thin Rust shim with the wrapping module and make building it
(to produce the libcffi.{so,a} binary) part of compiling that module.
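concretely, on the Go side, option a) boils down to one build-constrained file per supported platform, each committed next to its static library (all names made up here):
```
// internal/capi/capi_linux_amd64.go
//
// the pre-built libtypst_cffi_linux_amd64.a is committed next to this
// file; every other supported GOOS/GOARCH pair gets its own file and
// its own binary, selected at build time via the file-name suffix.
package capi

// #cgo LDFLAGS: -L${SRCDIR} -ltypst_cffi_linux_amd64 -lm -lpthread
import "C"
```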
Option a) is easier on downstream users: they don't need any extra development toolchain.
But you end up committing possibly many, possibly large binaries to your VCS.
The library daktilo wraps clocks in at ~150Mb.
Multiply this by {linux,darwin,windows}×{amd64,arm64}×{daktilo versions} - that's already ~900Mb per released version - and the VCS repository is going to get massive pretty quickly.
(probably why Dominik went with one repository = one binary)
And proxy.golang.org, sumdb.golang.org and cmd/go do not have support for Git-LFS.
Option b) is easier on the repository, but one needs an extra toolchain and some support from cmd/go to make it fly.
(and one would probably have to limit it to invoking 'rustc' or 'gccrs' (when available) instead of 'cargo' for building the shim)
But perhaps the expected way is simply to compile+install the shim FFI library somewhere, make it available to Cgo via pkg-config, and be done with it.
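ie, something like this on the Cgo side, assuming the Rust shim ships a (hypothetical) daktilo-cffi.pc file that pkg-config can find:
```
// Package capi links against a system-installed build of the Rust FFI
// shim, located through pkg-config instead of bundling or rebuilding it.
package capi

// #cgo pkg-config: daktilo-cffi
import "C"
```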
-s