tl;dr: skip this unless you're interested in Plan 9.
We've got a semi-automated setup for Akaros, and since Akaros uses a
lot of Plan 9 code (network stack, name space, mnt device, utilities,
etc.) and we will have some of the same issues w.r.t. things like GCE,
I thought I'd mention what we're doing.
I currently run my tests in a docker instance with the standard docker
ubuntu image and kvm on my chromebook, so the experience I'm having
may be applicable to GCE.
To run our tests, we crossbuild, fire up a go9p server (we use
github.com/rminnich/go9p), and boot Akaros in qemu, optionally with
kvm. [FWIW, we're passing almost everything at this point.]
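Concretely, the host side looks something like the sketch below. The
paths, the port, the go9p server flags, and the cross-build target
names are all assumptions for illustration, not our exact invocation:

```sh
# Cross-build the Go tree for the guest (target names are illustrative).
cd $GOROOT/src && GOOS=akaros GOARCH=amd64 ./make.bash

# Serve the Go tree over 9p from the host; 'ufs' is go9p's example
# file server, and the flags here are assumptions.
ufs -addr=:5640 -root=$GOROOT &

# Boot the guest kernel under qemu, with kvm if it's available, and
# forward a host port to the guest's listen1 port.
qemu-system-x86_64 -enable-kvm -nographic \
    -kernel akaros-kernel \
    -net nic -net user,hostfwd=tcp::5555-:5555
```

With user-mode networking like this, the guest reaches the host's 9p
server at 10.0.2.2 and the host reaches the guest's listen1 through
the forwarded port.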
We have a script running on the Akaros side that ipconfig's the
network stack, runs srv, and does a mount to make the Go tree
available, then fires up a listen1 in another script. The listen1 ties
incoming calls to a shell (ash in our case; it would be rc in Plan 9),
which in turn lets us use a simple shell script on the Linux side that
kicks off Go tests in the Akaros instance. That script would run just
fine on Plan 9 with very minor changes; there's nothing special about
it.
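The two guest-side scripts amount to something like this sketch; the
addresses, port numbers, srv name, and mount point are hypothetical:

```sh
# Bring up the network stack.
ip/ipconfig

# Attach to the host's 9p server and mount the Go tree.
srv tcp!10.0.2.2!5640 gofs
mount -a /srv/gofs /go

# Tie incoming calls to a shell so the host can drive the guest.
listen1 -t tcp!*!5555 /bin/ash -i
```

On Plan 9 proper the last line would hand connections to rc instead of
ash, and the rest is unchanged.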
At that point, on the linux side, we can type
go test
or
go test whatever
and it looks like a local go test but it runs on the guest (Akaros).
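The Linux-side driver can be as small as the sketch below; the guest
address, port, and Go tree path are assumptions, not our actual
values:

```sh
#!/bin/sh
# Hypothetical values; substitute your guest's address and listen1 port.
GUEST=${GUEST:-10.0.2.15}
PORT=${PORT:-5555}

# Build the command line that the guest's shell will run.
guest_cmd() {
    printf 'cd /go/src/%s && go test\n' "$1"
}

# In real use, pipe the command to the guest and read the results back:
#   guest_cmd net/http | nc "$GUEST" "$PORT"
guest_cmd net/http
```

The output from the guest's shell comes back over the same connection,
so the whole round trip is one pipeline.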
It takes a little longer of course. And, trust me, it has worked well
enough to expose all our bugs, races included. We just got past the
one related to TLS server sockets closing in unexpected ways.
Given that all this was built with standard Plan 9 tools, name spaces,
and network stack, I suspect it could be made to work without undue
effort on GCE. The key is that you don't need to boot Plan 9 as a GCE
instance; you fire up a Docker container in GCE and run Plan 9 as a
guest in that. The Plan 9 instance is controlled from the Linux side.
We use netcat to issue commands and get text back from the listen1. The
current setup is more plumbing than you want, I'm sure, but it shows
what's possible with a little work. It may be about the level of
effort it took for Windows.
Our setup won't help much if you're hankering to test building under
Plan 9, but the approach of firing up Plan 9 in an Ubuntu Docker
instance is a useful model. You could then control hg commands and the
build sequence from a Go program running on the Linux side, again via
the socket to listen1.
I doubt this makes much sense to non-plan9 folks, but I hope it makes
sense to someone :-)
Short form: Plan 9 is doable, and you don't need GCE to support Plan 9
images directly; an indirect setup works. Nested virtualization would
make it perform better, but I don't know whether GCE supports that.
Given the number of Plan 9 people who have been using qemu to run Plan
9 for about 10 years now, I suspect that qemu is 'good enough' for
this purpose.
ron