Bindings to / wrapping Rust code


Sebastien Binet

Oct 29, 2025, 5:14:36 AM
to golang-nuts
hi there,

Right now, wrapping C, C++ and Fortran code is somewhat easy: just drop some .c, .cxx or .f files into your Go module and the go command will pick them up and compile them as part of the final binary.
This allowed "go-get"-ability for, e.g., mattn/go-sqlite3.

It doesn't seem like the same thing is possible for Rust code.

Right now, the perhaps "easiest" thing is to ship the compiled Rust code (.a, .so or .syso) with your Go project, but then you get all the complications of shipping possibly large binaries in your VCS (`cmd/go` and proxy/sumdb support for Git-LFS is a bit sketchy).

Am I the only one who wants to wrap Rust code?
Are there enough people who'd like to do just that to warrant some kind of support from the Go ecosystem?
Also, if we were to add support for Rust, do we use rustc as the compilation vehicle, or Cargo? (the latter might open a rather big can of worms with its ability to execute arbitrary code via the 'build.rs' file, but... Cargo is now *the way* to build Rust code, AFAICT)

WDYT?

cheers,
-s

Jason E. Aten

Oct 30, 2025, 3:31:56 AM
to golang-nuts
Hi Sebastien,

Given that Cargo is so much less secure than Go (no
equivalent of a module proxy), it might
be safer to let the Rust project be the "lead" or "top level"
project, and have the Rust build script:

https://doc.rust-lang.org/stable/cargo/reference/build-scripts.html

pull in any Go dependencies.

Otherwise, automatic Rust invocation from a Go package install
would seem to introduce a lot of security issues for
Go packages, right?

At least that's how it appears at the moment... but I've only been learning Rust
for a week, so I have a pretty limited perspective.

Jason

Sebastien Binet

Oct 30, 2025, 10:28:41 AM
to Jason E. Aten, golang-nuts
hi Jason,

On Thu Oct 30, 2025 at 08:31 CET, Jason E. Aten wrote:
> Hi Sebastien,
>
> Given that Cargo is so much less secure than Go (no
> equivalent of a module proxy), it might
> be safer to let the Rust project be the "lead" or "top level"
> project, and have the Rust build script; these
>
> https://doc.rust-lang.org/stable/cargo/reference/build-scripts.html
>
> pull in any Go dependencies.

I considered this, mainly to have something like PyO3 but for Go (taking care of the minutiae of exchanging slices and stuff, generating header files for Cgo).
but cbindgen does most of that already.

however, using Rust's infrastructure as the driver won't let the resulting Go package be easily "go get"-able or easily import-able from client packages.
(unless I've missed something)

and that's the main issue/goal of this email plea.

Perhaps the title of this discussion should be rewritten as:
"how to wrap large libraries via Cgo and make these packages play nice with the pure-Go ecosystem?"

-s

Jason E. Aten

Oct 30, 2025, 3:40:21 PM
to golang-nuts
To me, fundamentally, automatically "go getting"
something is a massive security hole if that
process can execute arbitrary repo source code on my machine, right?

Given that CGO_LDFLAGS_ALLOW has to be used to even get some
C projects to work, and given how paranoid (justifiably) Google and the Go
team are about avoiding security holes, I doubt having a Rust
dependency build automatically is a good idea, since that Rust build
script can do just about anything.

There is some interest in sandboxing cargo execution, I hear rumor,
but of course most sandboxes are not watertight. A six-year-old effort:
https://github.com/rust-secure-code/cargo-sandbox

If the tradeoff really is that I have to run make, once, in my Rust dependency,
that seems worth it.

I built a little demo of that to show myself that it is possible:
https://github.com/glycerine/grusty

Seems like a pretty small hurdle (running make/a Makefile/manually
executing the rust build script once), and pretty big
benefit for more security and less chance of being hacked, no?


Jason E. Aten

Oct 30, 2025, 10:57:45 PM
to golang-nuts
By the by, Sebastien, I notice Google's own Rust Cloud API
client uses build.rs scripts, so it may be worth researching --

a) are they making a security mistake?
b) is there something that mitigates this?
c) why would they expect their customers to adopt a poor security posture to
access Google cloud, if it is risky behavior?
d) maybe my initial read that it seems dangerous is off base?

https://github.com/googleapis/google-cloud-rust

```
~/go/src/github.com/googleapis $ find . -name build.rs
./google-cloud-rust/src/gax-internal/grpc-server/build.rs
./google-cloud-rust/src/gax-internal/build.rs
./google-cloud-rust/src/auth/build.rs
```


Sebastien Binet

Oct 31, 2025, 5:12:30 AM
to Jason E. Aten, golang-nuts
Jason,

On Thu Oct 30, 2025 at 20:40 CET, Jason E. Aten wrote:
> To me, fundamentally, automatically "go getting"
> something is massive security hole if that
> process can execute arbitrary repo source code on my machine, right?
>
> Given that CGO_LDFLAGS_ALLOW has to be used to even get some
> C projects to work, and given how paranoid (justifiably) Google and the
> Go
> team are about avoiding security holes, I doubt having a Rust
> dependency build automatically is a good idea; since that rust build
> script can do just about anything.
>
> There is some interest in sandboxing cargo execution, I hear rumor,
> but of course most sandboxes are not water tight. A 6 year old effort:
> https://github.com/rust-secure-code/cargo-sandbox
>
> If the tradeoff really is that I have to run make, once, in my Rust
> dependency,
> that seems worth it.
>
> I built a little demo of that to show to myself that is possible.
> https://github.com/glycerine/grusty
>
> Seems like a pretty small hurdle (running make/a Makefile/manually
> executing the rust build script once), and pretty big
> benefit for more security and less chance of being hacked, no?

well, building the "FFI'ed" Rust code and linking against it isn't rocket science.
you did it with a Makefile, I went the 'go generate' way for my daktilo package:

https://codeberg.org/sbinet/daktilo/src/tag/v0.4.0/internal/capi/capi.go#L5

and, as long as it is in your own package, within your own module, using a Makefile, configure+make+make-install, go generate, cargo or anything else, really, it boils down to the same thing:
barring any self-inflicted foot-gun, you are pretty safe.

the devil is in the third-party dependencies that you may rely upon, which may themselves do anything they want with - in this instance - the Rust build script.
so it falls upon you, the main developer of that module or crate, to perform your due diligence and check/audit the dependencies you rely on (and their dependencies, etc...)

so, yes, as I said in my first mail, using cargo is "opening a can of worms".

but let's table the cargo discussion for a moment.
the main issue is when wrapping some Rust code:
- one creates an FFI library around the original Rust code (you need to actually write a bit of Rust to do that)
- one creates a Cgo package to call that FFI'ed Rust code
- one usually writes a bit of Go to present that Cgo interface in a more Go-ish way.
at that point, you have a little module which you can hack and use, and you can apply all sorts of things to actually make all this compile: Makefile, Justfile, go generate, etc... you name it.

but then, you publish that module on pkg.go.dev.

how are consumers of this module expected to import it?

```
$> mkdir use-daktilo && cd use-daktilo && go mod init use-daktilo
$> go get codeberg.org/sbinet/dak...@v0.4.0
go: downloading codeberg.org/sbinet/daktilo v0.4.0
go: added codeberg.org/sbinet/daktilo v0.4.0

$> cat > main.go
package main

import _ "codeberg.org/sbinet/daktilo"

func main() {}
^C

$> go build -v
runtime/cgo
codeberg.org/sbinet/daktilo/internal/capi
codeberg.org/sbinet/daktilo
faktilo
# faktilo
/home/binet/sdk/go/pkg/tool/linux_amd64/link: running gcc failed: exit status 1
/usr/bin/gcc -m64 -Wl,--build-id=0xc5d0a9d8ca953bd503aee3541a6c98c10c7f0342 -o $WORK/b001/exe/a.out -Wl,--export-dynamic-symbol=_cgo_panic -Wl,--export-dynamic-symbol=_cgo_topofstack -Wl,--export-dynamic-symbol=crosscall2 -Wl,--compress-debug-sections=zlib /tmp/go-link-2151555988/go.o /tmp/go-link-2151555988/000000.o /tmp/go-link-2151555988/000001.o /tmp/go-link-2151555988/000002.o /tmp/go-link-2151555988/000003.o /tmp/go-link-2151555988/000004.o /tmp/go-link-2151555988/000005.o /tmp/go-link-2151555988/000006.o /tmp/go-link-2151555988/000007.o /tmp/go-link-2151555988/000008.o /tmp/go-link-2151555988/000009.o /tmp/go-link-2151555988/000010.o /tmp/go-link-2151555988/000011.o /tmp/go-link-2151555988/000012.o /tmp/go-link-2151555988/000013.o /tmp/go-link-2151555988/000014.o /tmp/go-link-2151555988/000015.o /tmp/go-link-2151555988/000016.o -O2 -g -L/home/binet/tmp/go/pkg/mod/codeberg.org/sbinet/dak...@v0.4.0/internal/capi -ltypst_cffi_linux_amd64 -lm -O2 -g -lpthread -no-pie
/usr/bin/ld: cannot find -ltypst_cffi_linux_amd64: No such file or directory
```

you'd need to git clone daktilo somewhere, build it locally, and then go-mod-replace it in the "use-daktilo" module to point at the local build of daktilo.

not super user-friendly.

AFAICT, there are 2 ways to make this work:
a) bundle the libcffi.{so,a} binary within the wrapping module (i.e. daktilo/internal/capi in my example), possibly one binary for every combination of $GOOS_$GOARCH one wants to support.
that's what Dominik did for his wgpu bindings:
https://github.com/dominikh/go-wgpu/blob/master/wgpu_linux_arm64.go
https://github.com/dominikh/wgpu-linux-arm64

b) bundle the thin Rust-shim with the wrapping module and make it part of
the compilation of the wrapping module (to produce the libcffi.{so,a} binary).

Option a) is easier on downstream users: they don't need any extra development toolchain.
But you bundle possibly many, possibly large binaries with your VCS.
The library daktilo wraps clocks in at ~150 MB.
Multiply this by {linux,darwin,windows}×{amd64,arm64}×{daktilo versions} and the VCS repository is going to get massive pretty quickly.
(probably why Dominik went with one repository = one binary)
And proxy.golang.org, sumdb.golang.org and cmd/go do not have support for Git-LFS.
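for what it's worth, the per-platform bundling of option a) boils down to one build-constrained cgo file per $GOOS_$GOARCH pair, roughly like this (a build-configuration fragment, not runnable without the bundled binary; file and library names are hypothetical):

```go
//go:build linux && arm64

package capi

// #cgo LDFLAGS: -L${SRCDIR} -ldak_cffi_linux_arm64
import "C"
```

cmd/go then selects the right file (and thus the right bundled library) purely from the build constraints, with no extra toolchain on the user's machine.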

Option b) is easier on the repository, but one needs an extra toolchain and some support from cmd/go to make it fly.
(and probably limit to invoking 'rustc' or 'gccrs' (when available) instead of 'cargo', for building the shim)

But perhaps, the expected way is to rely on compiling+installing the shim FFI library somewhere, making it available to Cgo via pkg-config and be done with it.
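that pkg-config route would look roughly like this: install the compiled shim and its header somewhere, drop a .pc file on $PKG_CONFIG_PATH, and point Cgo at it (a sketch; all names and paths are hypothetical):

```
# dak_cffi.pc, installed somewhere on $PKG_CONFIG_PATH:
prefix=/usr/local
libdir=${prefix}/lib
includedir=${prefix}/include

Name: dak_cffi
Description: C FFI shim around the wrapped Rust library
Version: 0.4.0
Libs: -L${libdir} -ldak_cffi
Cflags: -I${includedir}
```

the wrapping package would then declare `// #cgo pkg-config: dak_cffi` instead of hard-coding LDFLAGS, and downstream `go build` works as long as the shim is installed.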

-s

Jason E. Aten

Oct 31, 2025, 5:32:46 PM
to golang-nuts
I enjoy the phrase "expected way", as it feels like an inside joke -- I'm
not sure anyone ever envisioned any (sane) way to
build multi-language projects. Maybe this is what Bazel is attempting,
but I've never looked at it...

The Nix package manager's user experience 
handily beats git for worst-of-all-time, but
it seems to me to be theoretically on the right track. 

I conjecture that the sane way to approach
many-language/language-agnostic builds is to have the
inputs (and maybe outputs) of a dependency tree captured
(as content-addressable stores) by cryptographic hashes,
and then to construct the dependency Merkle tree. Once
you have all dependencies properly fetched, you could, in object-oriented
style, tell each package to "build itself" under a depth-first search, so
leaves (dependencies) are built first. Obviously this would work for
Go, which insists on a tree and not a graph of dependencies.
Hopefully C code could be isolated sufficiently into its own
strongly connected component...