> On 6 Oct 2017, at 01:02, Palmer Dabbelt <pal...@sifive.com> wrote:
> I've been handling the RISC-V toolchain testing infrastructure for the past few
> years, and while it's been somewhat functional during that time period it's
> never really been up to par. In that time, RISC-V has matured enough that we
> really need to start doing this right: we have upstream releases, commercial
> implementations, and shipping products, so we really need some proper
> continuous integration.
> Here's what I have in mind for a system that would be useful to me:
> * Running the tests present in riscv-gnu-toolchain on commits to
> riscv-binutils-gdb, riscv-gcc, riscv-glibc, and riscv-newlib is a bare
> minimum. It would be great if the system could also monitor the master/trunk
> branches of our upstream ports for regressions as well.
You can use GitHub webhooks to trigger jobs via an API (e.g. Jenkins).
Alternatively, you can schedule builds and only continue building if there are
changes compared to the previous build.
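A minimal sketch of the scheduled/poll variant, assuming a Jenkins instance
with remote build triggers enabled; the repo, branch, job URL, token, and
state file below are all placeholders:

#!/usr/bin/env python3
# Poll-based trigger: only kick off a Jenkins build when the remote
# branch has actually moved since the last build we started.
# jenkins.example.org, riscv-gcc-test, and the state file are
# placeholders -- adjust to the real setup.

import subprocess
import requests

REPO = "https://github.com/riscv/riscv-gcc.git"
BRANCH = "riscv-next"
STATE_FILE = "/var/lib/ci/last-built-sha"
JENKINS_JOB = "https://jenkins.example.org/job/riscv-gcc-test/build"

def remote_head(repo, branch):
    # `git ls-remote` gives us the tip commit without cloning anything.
    out = subprocess.check_output(["git", "ls-remote", repo, branch], text=True)
    return out.split()[0]

def main():
    head = remote_head(REPO, BRANCH)
    try:
        last = open(STATE_FILE).read().strip()
    except FileNotFoundError:
        last = ""
    if head == last:
        return  # nothing new, skip this round
    # Jenkins' remote build trigger: POST to <job>/build with an API token.
    requests.post(JENKINS_JOB, auth=("ci-bot", "API_TOKEN")).raise_for_status()
    with open(STATE_FILE, "w") as f:
        f.write(head)

if __name__ == "__main__":
    main()

(The webhook variant is the same POST, just fired by GitHub on push instead
of by cron.)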
Do you also want to do native builds? We could use OE/buildroot/etc. to make a
minimal image which could be used for building (toolchain, kernel, glibc, etc.)
natively. We would also assemble a new image natively; the next run then takes
the last stable image and uses it as the starting point.
This basically tests the integration of kernel/glibc/toolchain/some packages,
and that we can still self-host it (kinda).
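Very roughly, the loop I have in mind, where boot-and-run.sh is a hypothetical
harness that boots the image (QEMU + ssh, real board, whatever) and runs a
command in it:

#!/usr/bin/env python3
# Sketch of the "self-hosting" loop: take the last stable image, run the
# native toolchain build inside it, and promote the freshly built image
# only if everything passed. Paths and boot-and-run.sh are placeholders.

import shutil
import subprocess

STABLE = "/srv/images/riscv-stable.img"
CANDIDATE = "/srv/images/riscv-candidate.img"

def run_in_image(image, command):
    # Placeholder harness: boot `image`, run `command` inside it, and
    # propagate the exit status (raises on failure via check=True).
    subprocess.run(["./boot-and-run.sh", image, command], check=True)

def main():
    # Start from the last known-good image so one bad build can't wedge us.
    shutil.copyfile(STABLE, CANDIDATE)
    run_in_image(CANDIDATE, "build-toolchain && build-kernel && run-tests")
    # Everything passed: the candidate becomes the new stable image.
    shutil.copyfile(CANDIDATE, STABLE)

if __name__ == "__main__":
    main()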
> * The ability to test across versions: for example, whenever there is a commit
> to GCC's trunk I'd like to test it against riscv-binutils-2.29, riscv-next in
> riscv-binutils-gdb, and master in sourceware's binutils.
You could have the repository and a commit hash/tag/branch as parameters for
the build. Basically, if you provide your own repository + hash, it replaces
the defaults.
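In Jenkins terms that's a parameterized job plus its buildWithParameters
endpoint; a sketch, where the Jenkins URL, job name, and the REPO_URL/REV
parameter names are made up for illustration:

#!/usr/bin/env python3
# Trigger a parameterized Jenkins job: the caller supplies a repository
# and a commit hash/tag/branch, and those override the job's defaults.

import requests

JENKINS = "https://jenkins.example.org"

def trigger(job, repo=None, rev=None):
    # Parameterized jobs are triggered via <job>/buildWithParameters;
    # any parameter we omit falls back to the job's configured default.
    params = {}
    if repo:
        params["REPO_URL"] = repo
    if rev:
        params["REV"] = rev
    r = requests.post(f"{JENKINS}/job/{job}/buildWithParameters",
                      params=params, auth=("ci-bot", "API_TOKEN"))
    r.raise_for_status()

# e.g. test against a specific binutils tag instead of the default:
trigger("toolchain-matrix",
        repo="https://github.com/riscv/riscv-binutils-gdb.git",
        rev="riscv-binutils-2.29")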
> * For now this can all run against QEMU, but eventually it would be great if we
> could run it against hardware as well. We'll want to include our Linux port
> eventually as well, but user-mode emulation is fine to start.
Is QEMU the best option here? Or better, shouldn't we use a few compatible
emulators to shake out issues? E.g., we use multiple architectures/silicons to
find unexpected issues in our software.
> * Some way of notifying me that tests have failed. It'd be great if
> Travis-level github integration is possible, but I'm OK with something that
> builds a staging branch every night. There's already a delay between our
> repos and the upstream repos, so waiting a night isn't a big deal.
A mailing list + archive would be better here for the long term. We could also
store artifacts (builds + logs) on some other server. E.g., I used to produce a
Fedora stage 3 image + kernel + changelog (based on git) every few hours on a
public server.
We use the GitHub API + Jenkins for this kind of integration and beyond; e.g.,
a bot sends comments on pull requests with the testing results and handles all
ACKs.
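The PR-comment part is just a POST to the GitHub issues API once a run
finishes; a sketch, with the repo, PR number, token, and log URL as
placeholders:

#!/usr/bin/env python3
# Post a test result back to a pull request via the GitHub API.

import requests

API = "https://api.github.com"
TOKEN = "GITHUB_TOKEN"  # placeholder: a bot account's access token

def report(repo, pr_number, passed, log_url):
    status = "passed" if passed else "FAILED"
    body = f"Testing {status}. Full logs: {log_url}"
    # PR comments go through the issues endpoint.
    r = requests.post(f"{API}/repos/{repo}/issues/{pr_number}/comments",
                      json={"body": body},
                      headers={"Authorization": f"token {TOKEN}"})
    r.raise_for_status()

# placeholder PR number and log URL, just to show the shape of a call:
report("riscv/riscv-gnu-toolchain", 123, False,
       "https://jenkins.example.org/job/riscv-gcc-test/42/console")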
> * It would be ideal if it was possible to bring this inside SiFive: see my
> "riscv-buildbot-infra" stuff from before our move to Travis, where you could
> run multiple instances of the buildbot, for an example. I can understand if
> that isn't a reasonable request, though.
Jenkins is a master/slave system. You could have multiple servers (i.e.
slaves): local, remote, cloud, batch systems. For example, we currently have
probably more than 1,000 cores in use by our single Jenkins master for builds,
deployment, QA, backups, etc. We run a couple of thousand jobs a day via
Jenkins.
There is one caveat: you need Java running on the slave. The same applies to
GoCD. Thus, for plugging in a SiFive riscv64 dev board we would need at least
Java via the Zero port, or to go indirectly via some gateway which has serial
access to the board.
> I'm not that worried about the latency of this system, but it does need to
> point to individual commits that are broken. For example: it's OK if the tests
> are run nightly, but if there's a failure it'll need to automatically bisect to
> the first relevant commit in order to be useful. IIRC, many CI systems have
> this mechanism so it shouldn't be too hard. We've been using Travis for a
> while and before that we had a buildbot running at Berkeley. I'm OK with any
> CI system as long as it works. Since this project requires a long-term
> commitment, if you volunteer you should feel free to do this however is easiest
> for you to maintain in the long term.
Jenkins is like a Swiss army knife: highly flexible, with a huge number of
plugins. It does scale to large and complex (even too complex) setups. You can
also put things like Mesos below Jenkins and do all the builds within
containers. It's not perfect, but it does work quite well.
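On the auto-bisect requirement above: git bisect run already does the heavy
lifting, so the nightly job only needs a thin wrapper that it falls back to on
a failure. A sketch, where the test script, checkout path, and good/bad refs
are placeholders:

#!/usr/bin/env python3
# Find the first bad commit between the last known-good build and the
# failing tip by driving `git bisect run`.

import subprocess

def bisect(repo_dir, good, bad, test_script):
    def git(*args):
        subprocess.run(["git", "-C", repo_dir, *args], check=True)
    git("bisect", "start", bad, good)
    # `git bisect run` reruns the script at each step; exit 0 = good,
    # 1-127 (except 125) = bad, 125 = skip this commit.
    git("bisect", "run", test_script)
    # After the run, git leaves the first bad commit at refs/bisect/bad.
    first_bad = subprocess.check_output(
        ["git", "-C", repo_dir, "rev-parse", "refs/bisect/bad"],
        text=True).strip()
    git("bisect", "reset")
    return first_bad

print(bisect("/srv/ci/riscv-gcc", "last-good-tag", "origin/trunk",
             "./run-toolchain-tests.sh"))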
> Since this is all based on commercially available hardware for now there
> shouldn't be a major cost implication to running the tests, but if that's a
> problem we can sort something out -- I'm really just looking for someone to
> help out with the time and expertise that I don't have so this very important
> project gets run correctly.
I wouldn't mind investing some hours into this, I guess.