Is Anyone Interested in Running a RISC-V Continuous Integration System?


Palmer Dabbelt

Oct 5, 2017, 7:02:47 PM
to sw-...@groups.riscv.org
I've been handling the RISC-V toolchain testing infrastructure for the past
few years, and while it's been somewhat functional during that time it's
never really been up to par. RISC-V has matured enough that we need to start
doing this right: we have upstream releases, commercial implementations, and
shipping products, so we need proper continuous integration.

Here's what I have in mind for a system that would be useful to me:

* Running the tests present in riscv-gnu-toolchain on commits to
riscv-binutils-gdb, riscv-gcc, riscv-glibc, and riscv-newlib is the bare
minimum. It would be great if the system could also monitor the master/trunk
branches of our upstream ports for regressions.
* The ability to test across versions: for example, whenever there is a commit
to GCC's trunk I'd like to test it against riscv-binutils-2.29, riscv-next in
riscv-binutils-gdb, and master in sourceware's binutils (see the sketch after
this list).
* For now this can all run against QEMU, but eventually it would be great if we
could run it against hardware as well. We'll want to include our Linux port
eventually as well, but user-mode emulation is fine to start.
* Some way of notifying me that tests have failed. It'd be great if
Travis-level github integration is possible, but I'm OK with something that
builds a staging branch every night. There's already a delay between
github.com/riscv and the upstream repos, so waiting a night isn't a big deal.
* It would be ideal if it was possible to bring this inside SiFive: see my
"riscv-buildbot-infra" stuff from before our move to Travis, where you could
run multiple instances of the buildbot, for an example. I can understand if
that isn't a reasonable request, though.
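
To make the cross-version point concrete, the matrix could just be kept as
plain data that the CI expands into jobs. A rough sketch in Python (only the
branch and tag names come from the list above; the repository pairings and the
riscv-gcc branch name are illustrative):

    # Sketch of the cross-version test matrix described above. Only the
    # branch/tag names come from this email; everything else is illustrative.
    from itertools import product

    GCC_SOURCES = [
        ("riscv-gcc", "riscv-next"),   # RISC-V development branch (assumed name)
        ("gcc", "trunk"),              # upstream GCC trunk
    ]

    BINUTILS_SOURCES = [
        ("riscv-binutils-gdb", "riscv-binutils-2.29"),
        ("riscv-binutils-gdb", "riscv-next"),
        ("binutils-gdb", "master"),    # sourceware's binutils
    ]

    def build_matrix():
        """Expand every GCC source against every binutils source."""
        for (gcc_repo, gcc_ref), (bu_repo, bu_ref) in product(GCC_SOURCES,
                                                              BINUTILS_SOURCES):
            yield {"gcc": {"repo": gcc_repo, "ref": gcc_ref},
                   "binutils": {"repo": bu_repo, "ref": bu_ref}}

    if __name__ == "__main__":
        for job in build_matrix():
            print(job)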

I'm not that worried about the latency of this system, but it does need to
point to individual commits that are broken. For example: it's OK if the tests
are run nightly, but if there's a failure it'll need to automatically bisect to
the first relevant commit in order to be useful. IIRC, many CI systems have
this mechanism so it shouldn't be too hard. We've been using Travis for a
while and before that we had a buildbot running at Berkeley. I'm OK with any
CI system as long as it works. Since this project requires a long-term
commitment, if you volunteer you should feel free to do this in whatever way
is easiest for you to maintain in the long term.
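
For illustration, most of the bisection legwork can be handed off to git
bisect run. A rough sketch, where test.sh is a stand-in for whatever script
builds the toolchain and runs the riscv-gnu-toolchain tests (nothing here is
an existing script):

    # Sketch: when the nightly run fails, bisect between the last known-good
    # commit and the failing head. test.sh is hypothetical; it should exit 0
    # when the tests pass and non-zero when they fail.
    import subprocess

    def bisect_first_bad(repo_dir, good_rev, bad_rev, test_script="./test.sh"):
        def git(*args, check=True):
            return subprocess.run(["git", "-C", repo_dir, *args],
                                  check=check, capture_output=True, text=True)

        git("bisect", "start", bad_rev, good_rev)
        try:
            # `git bisect run` repeatedly checks out candidate commits and runs
            # the script (exit 0 = good, 1-127 except 125 = bad, 125 = skip).
            git("bisect", "run", test_script, check=False)
            first_bad = git("rev-parse", "refs/bisect/bad").stdout.strip()
        finally:
            git("bisect", "reset")
        return first_bad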

Since this is all based on commercially available hardware for now, there
shouldn't be a major cost implication to running the tests, but if that's a
problem we can sort something out -- I'm really just looking for someone to
help out with the time and expertise that I don't have, so that this very
important project gets run correctly.

Thanks in advance to anyone who signs up!

Khem Raj

Oct 5, 2017, 8:43:14 PM
to Palmer Dabbelt, RISC-V SW Dev
I think GitHub has integration with buildbot as well as with private
Jenkins instances, and can use something like OAuth for external
credential management. The GitHub repos should be set up for automatic
mirroring with their respective upstreams, so all triggers can use
GitHub. Maybe it's just a matter of syncing the existing repositories
and ensuring that any branch namespace conflicts are resolved. There is
also go.cd, which I have not used but have heard good reviews of from
folks who have.

John Leidel

Oct 5, 2017, 8:55:45 PM
to Khem Raj, Palmer Dabbelt, RISC-V SW Dev
LLVM's buildbot system is a solid model.  There is a central buildbot master, several central build machines and a number of donated (remote) resources from various orgs.  



Palmer Dabbelt

Oct 5, 2017, 9:05:56 PM
to john....@gmail.com, raj....@gmail.com, sw-...@groups.riscv.org
I agree. glibc is the same way, and we'll probably donate a builder there when
we're upstream. Machine time isn't a problem, human time is -- we need someone
to set up and maintain the CI server. If cost is an issue, we'll figure out a
way to make it not be a monetary burden on whomever is maintaining the CI
setup.

Palmer Dabbelt

Oct 5, 2017, 9:05:57 PM
to raj....@gmail.com, sw-...@groups.riscv.org
Ah, cool, I'd never heard of "go.cd" before. It certainly seems to have a nice
web interface, but I'll leave the decision up to the person maintaining the
system :).

Liviu Ionescu

Oct 6, 2017, 5:15:05 AM
to Khem Raj, Palmer Dabbelt, RISC-V SW Dev


> On 6 Oct 2017, at 03:42, Khem Raj <raj....@gmail.com> wrote:
>
>> It'd be great if
>> Travis-level github integration is possible, but I'm OK with something that
>> builds a staging branch every night.

I think that a method to test individual commits (pull requests) and block harmful merges is a very nice feature to have on all GitHub repos.

If some tests are too long, perhaps a multi-tier approach should be considered, with build tests performed immediately and automatically (using Travis, for example) and long functional tests performed upon request.


Except for running tests on physical boards, I think Travis can do most of the work.

Although not directly supported, with some small scripts it is possible to trigger builds in one project from changes in another (https://hiddentao.com/archives/2016/08/29/triggering-travis-ci-build-from-another-projects-build/). However, passing the commit id so that the second build updates the submodules to a specific commit might be tricky (but not impossible).
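
A sketch of the kind of small script that could do this, using the Travis API v3 requests endpoint as in the linked post, with the triggering commit passed through an environment variable (the repo slug, variable name, and config-override details are assumptions that would need checking against the downstream .travis.yml):

    # Sketch: trigger a downstream Travis build and pass along the upstream
    # commit, so the downstream .travis.yml can pin its submodules to
    # $UPSTREAM_COMMIT. Repo slug and variable name are made up.
    import json
    import urllib.request

    def trigger_downstream(commit_sha, api_token,
                           repo_slug="riscv%2Friscv-gnu-toolchain",
                           branch="master"):
        body = {"request": {
            "branch": branch,
            "config": {"env": {"global": [f"UPSTREAM_COMMIT={commit_sha}"]}},
        }}
        req = urllib.request.Request(
            f"https://api.travis-ci.org/repo/{repo_slug}/requests",
            data=json.dumps(body).encode(),
            headers={"Content-Type": "application/json",
                     "Travis-API-Version": "3",
                     "Authorization": f"token {api_token}"},
            method="POST")
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)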

It is also possible to group jobs in stages, and chain them so that later stages run only if earlier ones succeed (https://docs.travis-ci.com/user/build-stages).


And Travis is not the only service available. For example, I have portable applications that also need to run on Windows (what a pain!), which I test with AppVeyor (https://github.com/xpack/xmake-js/blob/master/.appveyor.yml).


regards,

Liviu




Bruce Hoult

Oct 6, 2017, 6:22:59 AM
to Liviu Ionescu, Khem Raj, Palmer Dabbelt, RISC-V SW Dev
On Fri, Oct 6, 2017 at 12:15 PM, Liviu Ionescu <i...@livius.net> wrote:


> On 6 Oct 2017, at 03:42, Khem Raj <raj....@gmail.com> wrote:
>
>> It'd be great if
>>  Travis-level github integration is possible, but I'm OK with something that
>>  builds a staging branch every night.

> I think that a method to test individual commits (pull requests) and block harmful merges is a very nice feature to have on all GitHub repos.

Absolutely!
 
> if some tests are too long, perhaps a multi-tier approach should be considered, with build tests performed immediately and automatically (using Travis, for example) and long functional tests performed upon request.

It takes quite a long time to fully build the toolchain. You'd have to run a pretty huge amount of testing before you'd notice that the total time was significantly longer than just building the tools.
 
> except running tests on physical boards, I think Travis can do most of the work.

If the boards are running Linux and on a network then it shouldn't be much harder to run tests on them than on an emulator.

However we don't have such boards now.

Hopefully we'll have SiFive's quad 1.5 GHz boards in a few months. That will be great! But I think qemu on a decent i7 or Xeon will be faster per core, with the added potential of using machines with a lot more cores. rv8-jit may even be twice as fast as qemu. It will be some time before we see comparable RISC-V hardware.

Definitely tests should be run regularly on real hardware, not only emulators, but for every-commit testing I think emulators will be the best option for a few years at least.

David Abdurachmanov

Oct 6, 2017, 9:28:25 AM
to Palmer Dabbelt, sw-...@groups.riscv.org


> On 6 Oct 2017, at 01:02, Palmer Dabbelt <pal...@sifive.com> wrote:
>
> I've been handling the RISC-V toolchain testing infrastructure for the past few
> years, and while it's been somewhat functional during that time period it's
> never really been up to par. In that time, RISC-V has matured enough that we
> really need to start doing this right: we have upstream releases, commercial
> implementations, and shipping products so we really need some proper continuous
> integration.
>
> Here's what I have in mind for a system that would be useful to me:
>
> * Running the tests present in riscv-gnu-toolchain on commits to
> riscv-binutils-gdb, riscv-gcc, riscv-glibc, and riscv-newlib is a bare
> minimum. It would be great if the system can also monitor the master/trunk
> branches of our upstream ports for regressions as well.

Hi,

You can use GitHub webhooks to trigger jobs via API (e.g. Jenkins). Alternatively,
you can schedule builds and only continue building if there are changes compared
to the previous build.
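
As a sketch of the webhook route (ignoring that Jenkins' GitHub plugin can
consume the webhook directly), something like this could turn a push event into
a parameterized Jenkins build; the Jenkins URL, job name, and parameter names
are placeholders:

    # Sketch: turn a GitHub "push" webhook payload into a parameterized Jenkins
    # build via the standard buildWithParameters endpoint. Job and parameter
    # names are placeholders.
    import base64
    import urllib.parse
    import urllib.request

    JENKINS_URL = "https://jenkins.example.org"   # placeholder
    JOB = "riscv-toolchain-test"                  # placeholder job name

    def handle_push_event(payload, user, api_token):
        """payload is the parsed JSON body of a GitHub push webhook."""
        params = {
            "REPO": payload["repository"]["clone_url"],  # repo being pushed to
            "COMMIT": payload["after"],                  # new head SHA
            "REF": payload["ref"],                       # e.g. refs/heads/master
        }
        url = (f"{JENKINS_URL}/job/{JOB}/buildWithParameters?"
               + urllib.parse.urlencode(params))
        req = urllib.request.Request(url, data=b"", method="POST")
        auth = base64.b64encode(f"{user}:{api_token}".encode()).decode()
        req.add_header("Authorization", f"Basic {auth}")
        with urllib.request.urlopen(req) as resp:
            return resp.status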

Do you also want to make native builds? We could use OE/buildroot/etc to make a
minimal image which could be used for building (toolchain, kernel, glibc, etc)
natively. We would also assemble a new image natively; next time we take the
last stable image and use it.

This basically tests integration of kernel/glibc/toolchain/some packages and that
we can still self-host it (kinda).

> * The ability to test across versions: for example, whenever there is a commit
> to GCC's trunk I'd like to test it against riscv-binutils-2.29, riscv-next in
> riscv-binutils-gdb, and master in sourceware's binutils.

You could have repositories and commit hash/tag/branch as parameters for the build.
Basically, if you provide your repository + hash, it would replace any defaults.

> * For now this can all run against QEMU, but eventually it would be great if we
> could run it against hardware as well. We'll want to include our Linux port
> eventually as well, but user-mode emulation is fine to start.

Is QEMU the best option here? Or better, shouldn't we use a few compatible emulators
here to shake out the issues?

E.g. we use multiple architectures/silicons to find unexpected issues in our software
stack.

> * Some way of notifying me that tests have failed. It'd be great if
> Travis-level github integration is possible, but I'm OK with something that
> builds a staging branch every night. There's already a way between
> github.com/riscv and the upstream repos, so waiting a night isn't a big deal.

A mailing list + archive would be better here for the long term. We could also
store artifacts (builds + logs) on some other server. E.g., I used to produce a
Fedora stage 3 image + kernel + changelog (based on git) every several hours
on a public server.

We use the GitHub API + Jenkins for such integration and beyond, e.g., a bot
sends comments on pull requests with testing results, and handles all ACKs.

> * It would be ideal if it was possible to bring this inside SiFive: see my
> "riscv-buildbot-infra" stuff from before our move to Travis, where you could
> run multiple instances of the buildbot, for an example. I can understand if
> that isn't a reasonable request, though.

Jenkins is a master/slave system. You could have multiple servers (i.e. slaves):
local, remote, cloud, batch systems. For example, we currently have probably more
than 1,000 cores in use by our single Jenkins master for builds, deployment, QA,
backups, etc. We do a couple of thousand jobs a day via Jenkins.

There is one caveat: you need Java running on a slave. The same applies to GoCD.
Thus, for plugging in a SiFive riscv64 dev board we would need at least Java with
Project Zero, or go indirectly via some gateway that has serial access to the
board.

>
> I'm not that worried about the latency of this system, but it does need to
> point to individual commits that are broken. For example: it's OK if the tests
> are run nightly, but if there's a failure it'll need to automatically bisect to
> the first relevant commit in order to be useful. IIRC, many CI systems have
> this mechanism so it shouldn't be too hard. We've been using Travis for a
> while and before that we had a buildbot running at Berkeley. I'm OK with any
> CI system as long as it works. Since this project requires a long-term
> commitment, if you volunteer you should feel free to do this however is easiest
> for you to maintain in the long term.

Jenkins is like a Swiss army knife: highly flexible, with a huge number of
plugins. It does scale to large and complex (even too complex) setups. You can
also put things like Mesos below Jenkins and do all the builds within containers.

It's not perfect, but it does work quite well.

>
> Since this is all based on commercially available hardware for now there
> shouldn't be a major cost implication to running the tests, but if that's a
> problem we can sort something out -- I'm really just looking for someone to
> help out with the time and expertise that I don't have so this very important
> project gets run correctly.

I wouldn't mind investing some hours into this, I guess.

david

Palmer Dabbelt

Oct 6, 2017, 11:55:05 AM
to david.abd...@gmail.com, michae...@mac.com, sw-...@groups.riscv.org
On Fri, 06 Oct 2017 06:28:20 PDT (-0700), david.abd...@gmail.com wrote:
>
>
>> On 6 Oct 2017, at 01:02, Palmer Dabbelt <pal...@sifive.com> wrote:
>>
>> I've been handling the RISC-V toolchain testing infrastructure for the past few
>> years, and while it's been somewhat functional during that time period it's
>> never really been up to par. In that time, RISC-V has matured enough that we
>> really need to start doing this right: we have upstream releases, commercial
>> implementations, and shipping products so we really need some proper continuous
>> integration.
>>
>> Here's what I have in mind for a system that would be useful to me:
>>
>> * Running the tests present in riscv-gnu-toolchain on commits to
>> riscv-binutils-gdb, riscv-gcc, riscv-glibc, and riscv-newlib is a bare
>> minimum. It would be great if the system can also monitor the master/trunk
>> branches of our upstream ports for regressions as well.
>
> Hi,
>
> You can use GitHub webhooks to trigger jobs via API (e.g. Jenkins). Alternatively,
> you can scheduled builds and only continue building if you have changes compared to
> previous build.
>
> Do you also want to make native builds? We could use OE/buildroot/etc to make
> minimal image which could be used for building (toolchain, kernel, glibc, etc)
> natively. We also assemble natively a new image. Next time we take the last stable
> image and use it.
>
> This basically tests integration of kernel/glibc/toolchain/some packages and that
> we can still self-host it (kinda).

Eventually I'd like to get to a system where we:

* Cross-compile some base software.
* Natively build the rest of a distribution.
* Run all the tests for all the packages.
* Run some set of benchmarks.

That's going to be difficult and expensive, so I think it's out of the scope of
something I could ask someone to do for free.

>> * The ability to test across versions: for example, whenever there is a commit
>> to GCC's trunk I'd like to test it against riscv-binutils-2.29, riscv-next in
>> riscv-binutils-gdb, and master in sourceware's binutils.
>
> You could have repositories and commit hash/tag/branch as parameters for the build.
> Basically if you provide your repository + hash it would replace any defaults.
>
>> * For now this can all run against QEMU, but eventually it would be great if we
>> could run it against hardware as well. We'll want to include our Linux port
>> eventually as well, but user-mode emulation is fine to start.
>
> Is QEMU the best option here? Or better, shouldn't we use a few compatible emulators
> here to shake out the issues?
>
> E.g. we use multiple architectures/silicons to find unexpected issues in our software
> stack.

Again, that'd be great, but at a certain point it starts to become a big burden.

Great! Michael Clark also said he might have some time to help, so I've added
him to this thread.

Khem Raj

Oct 6, 2017, 12:50:02 PM
to Palmer Dabbelt, david.abd...@gmail.com, Michael Clark, RISC-V SW Dev
The Yocto project has a test framework (ptest) which automates all of this
well at the system level, and it is currently automated with buildbot, so
eventually it could be used for riscv as well.
See https://autobuilder.yocto.io/tgrid

We can take one step at a time. Initially, maybe:

* set up a buildbot instance
* hook it to github
* automate toolchain builds

Michael Clark

Oct 6, 2017, 6:13:56 PM
to Bruce Hoult, Palmer Dabbelt, Liviu Ionescu, Khem Raj, RISC-V SW Dev, John Leidel

> On 6/10/2017, at 11:22 PM, Bruce Hoult <br...@hoult.org> wrote:
>
> On Fri, Oct 6, 2017 at 12:15 PM, Liviu Ionescu <i...@livius.net> wrote:
>
>
> > On 6 Oct 2017, at 03:42, Khem Raj <raj....@gmail.com> wrote:
> >
> >> It'd be great if
> >> Travis-level github integration is possible, but I'm OK with something that
> >> builds a staging branch every night.
>
> I think that a method to test individual commits (pull requests) and block harmful merges is a very nice feature to have on all GitHub repos.
>
> Absolutely!

Agree.

> if some tests are too long, perhaps a multi-tier approach should be considered, with build tests performed immediately and automatically (using Travis, for example) and long functional tests performed upon request.

I have some notes I'm writing up about this which I will share at some point. I definitely think asynchronous testing on a per-commit basis is the way to go (using the testing matrix of downstream and upstream versions mentioned in Palmer's initial email), i.e. bisecting is effectively done in advance. I'm also quite interested in some codegen regression reporting, e.g. increased dynamic instruction count or larger generated code size. Tracking down these sorts of regressions is tricky as they don't necessarily break functional tests.
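
On the codegen side even something crude would help, e.g. diffing the section sizes reported by the binutils `size` tool against a stored baseline (a sketch only; the tool name, threshold, and baseline format are placeholders):

    # Sketch: flag code-size regressions that functional tests won't catch, by
    # comparing text-section sizes of freshly built binaries against a baseline.
    import subprocess

    def text_size(binary, size_tool="riscv64-unknown-elf-size"):
        # Berkeley-format `size` output: a header line, then
        #   text  data  bss  dec  hex  filename
        out = subprocess.run([size_tool, binary], check=True,
                             capture_output=True, text=True).stdout
        return int(out.splitlines()[1].split()[0])

    def code_size_regressions(binaries, baseline, threshold=0.02):
        """baseline maps binary path -> previous text size; returns offenders."""
        offenders = {}
        for path in binaries:
            new = text_size(path)
            old = baseline.get(path)
            if old and new > old * (1 + threshold):
                offenders[path] = (old, new)
        return offenders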

We should have procedures where we can curate and squash changes in staging/feature branches before merging into an incoming branch that is managed by automation (which for gcc will require both a maintenance branch for gcc-7 and a development branch for gcc-8). The CI automation could advance the ref on the automation-managed branch one commit at a time, only after testing said commit, such that bisecting is effectively done in advance and breaking changes are never applied to the automation-managed branch. I.e. a developer would need to revert a breaking commit and apply a fix on the incoming branch before the pipeline will start accepting changes again. Again, there may be more than one incoming branch if one wants to backport maintenance fixes to gcc-7. Of course, if the system is flexible it can be instantiated with different branches as inputs, e.g. gcc-8-master and gcc-7-maintenance along with the binutils versions.
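
As a rough sketch of that gatekeeper loop (branch names and the run_tests hook are placeholders, not an existing implementation):

    # Sketch: test each pending commit on the incoming branch in order, and
    # fast-forward the automation-managed branch only when it passes, so the
    # managed branch never contains a breaking commit and bisection is
    # effectively done in advance. Branch names are placeholders.
    import subprocess

    def git(repo, *args):
        return subprocess.run(["git", "-C", repo, *args],
                              check=True, capture_output=True, text=True)

    def advance_managed_branch(repo, run_tests,
                               managed="riscv-ci-tested", incoming="riscv-next"):
        """run_tests(commit) -> bool stands in for 'build + run the suite'."""
        # Oldest-first list of commits on `incoming` not yet on `managed`.
        pending = git(repo, "rev-list", "--reverse",
                      f"{managed}..{incoming}").stdout.split()
        for commit in pending:
            if not run_tests(commit):
                # Stop: a developer must revert/fix on the incoming branch
                # before the pipeline accepts further changes.
                return commit
            # Fast-forward the managed branch to the commit that just passed.
            git(repo, "update-ref", f"refs/heads/{managed}", commit)
        return None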

We essentially have two sources of changes, and we probably want origin branches on the repos that are continuously mirrored with upstream; however, the process of periodically rebasing the integration branches that accept the incoming changes (riscv-next) would still be semi-manual. That said, I do think automation could make it easier to keep changes more frequently rebased against current upstream, making the job of committing changes upstream a lot easier. The exact workflow needs some thought as to what will work best for the main developers…

> It takes quite a long time to fully build the toolchain. You'd have to run a pretty huge amount of testing before you'd notice that the total time was significantly longer than just building the tools.
>
> except running tests on physical boards, I think Travis can do most of the work.

Travis is possibly okay, although we don't want to reduce choices for the person that has to implement this. I'm bringing John in from another branch of this thread as I'm interested in the quite mature LLVM buildbot infrastructure, along with its IRC integration (or potentially Slack or other chat systems). There has been some discussion about buildbot on the gcc mailing list recently. There seems to be active work to create buildbot pipelines for GCC that could be leveraged for riscv, along with Palmer's previous work, if the buildbot path were taken:

- https://github.com/LinkiTools/gcc-buildbot

A lot of the work with CI integration will inevitably be things like parsing the test result format from torture, so that the CI is aware of test results and can lock out regressions (within tolerances, as I understand the exact test counts can change due to timeouts), and driving the GitHub APIs to automate merging changes that pass tests. CI systems tend to have standard formats for test results (massaged into JUnit XML, for example) so that the integration can go beyond testing for compile failures to checking that changes don't regress any tests. GitHub also has a well-documented API, such that most of the tasks that can be done by end-users can be done by the CI automation.
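
As a sketch of the result-parsing piece, the DejaGnu .sum files the torture runs produce can be summarised and compared against a baseline with a small tolerance for flaky timeouts (the tolerance and the choice of which counters matter are arbitrary here):

    # Sketch: summarise a DejaGnu .sum file (as produced by the gcc/binutils
    # test suites) and flag regressions against a stored baseline, with a small
    # tolerance for results that flap due to timeouts.
    import collections
    import re

    RESULT_RE = re.compile(
        r"^(PASS|FAIL|XPASS|XFAIL|UNRESOLVED|UNSUPPORTED|UNTESTED): ")

    def summarize(sum_path):
        counts = collections.Counter()
        with open(sum_path) as f:
            for line in f:
                m = RESULT_RE.match(line)
                if m:
                    counts[m.group(1)] += 1
        return counts

    def regressed(baseline, current, tolerance=2):
        """True if FAIL/UNRESOLVED counts grew by more than `tolerance`."""
        return any(current[k] > baseline[k] + tolerance
                   for k in ("FAIL", "UNRESOLVED"))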

> If the boards are running Linux and on a network then it shouldn’t be much harder to run tests on them than on an emulator.

It would be possible; however, it may be limited to tests that fit the constraints of the boards (memory, etc.). I'd most likely consider adding spike too, given it is the golden reference.

> However we don't have such boards now.
>
> Hopefully we'll have SiFive's quad 1.5 GHz boards in a few months. That will be great! But I think qemu on a decent i7 or Xeon will be faster per core, plus having the potential of using machines with a lot more cores. rv8-jit is even maybe twice as fast as qemu. It will be some time before we see riscv hardware comparable.

Yep. Having Linux-capable hardware would be ideal; however, riscv-qemu makes the most sense at the moment. Having rv8-jit pass gcc torture is a nice goal; however, the research direction for rv8 at present is to push towards ~1X native performance. The syscalls can be added later, but the performance advantage needs to be credible to make a riscv JIT useful as an open standards-based execution environment on commodity x86 hardware, versus being yet another riscv emulator. It's likely that the compile time and expect scripts dominate build and test, so qemu, and possibly more importantly spike, would be fine. An interesting side project perhaps ;-). If self-hosting had ~1X performance compared to x86 then it would be a no-brainer.

> Definitely tests should be run regularly on real hardware, not only emulators, but for every-commit testing I think emulators will be the best option for a few years at least.

I personally think running the tests on spike in addition to qemu would be a better initial direction if we want to get riscv-tools in better shape, given spike is the golden reference. Getting torture to run with both spike and qemu would be beneficial as it would help with cross-checking riscv-isa-sim and riscv-qemu, another testing dimension. We could even run riscv-tests and possibly even libc-test, given the toolchains are built against newlib, glibc or musl, i.e. also testing the riscv-linux ABI. The incremental cost of adding additional tests would be minimal if the CI and build environment has the whole riscv-tools suite set up within it.

- http://wiki.musl-libc.org/wiki/Libc-Test

The reality is one has to inch forward with incremental improvements to the current system versus trying to introduce any radical changes. Asynchronous per-commit torture tests (with result parsing and regression tolerances for timeouts) would be number one on the list.

Michael.

Arun Thomas

Oct 31, 2017, 10:28:24 AM
to Michael Clark, Bruce Hoult, Palmer Dabbelt, Liviu Ionescu, Khem Raj, RISC-V SW Dev, John Leidel
Thanks to everyone who has participated in the CI discussion so far. Our immediate needs are 1) volunteers who can bring up and maintain an initial CI system and 2) network-accessible hardware to run cross-builds, QEMU/spike instances, etc. The exact details of the CI system can be worked out over time.

Does anyone have time and/or hardware to devote to the RISC-V CI cause? Perhaps RISC-V Foundation member companies would be able to help?

Thanks,
Arun

Edmond Cote

Nov 1, 2017, 12:35:31 PM
to Arun Thomas, Michael Clark, Bruce Hoult, Palmer Dabbelt, Liviu Ionescu, Khem Raj, RISC-V SW Dev, John Leidel
Yes. I can help with engineering time and potentially obtain some free (or highly discounted) rack space (through my incubator).  Don't have hardware resources to offer right now.


--
You received this message because you are subscribed to the Google Groups "RISC-V SW Dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sw-dev+un...@groups.riscv.org.
To post to this group, send email to sw-...@groups.riscv.org.
Visit this group at https://groups.google.com/a/groups.riscv.org/group/sw-dev/.

John Leidel

Nov 1, 2017, 2:13:59 PM
to Edmond Cote, Arun Thomas, Michael Clark, Bruce Hoult, Palmer Dabbelt, Liviu Ionescu, Khem Raj, RISC-V SW Dev
We (TCL) would be willing to purchase and help bring up the initial CI master if that's the route we want to go (per Michael's notes on buildbot).  We also have some spare cycles in our regression environment that can be used as build slaves.  


Palmer Dabbelt

Nov 1, 2017, 2:14:28 PM
to edm...@edmondcote.com, Arun Thomas, Michael Clark, br...@hoult.org, i...@livius.net, Khem Raj, sw-...@groups.riscv.org, john....@gmail.com
Great, I think time is the big thing right now. What we really need is someone
who can sign up to be the lead on the CI system, which would mean maintaining
the system: things like ensuring it's running the tests, that when the build
goes red it's because of an actual bug (i.e., not a system misconfiguration),
and making sure the emails are getting to the right people, etc.

If you're willing to sign up, then we can get you access to the hardware you
need.

Richard Herveille

Nov 1, 2017, 3:02:37 PM
to Palmer Dabbelt, edm...@edmondcote.com, Arun Thomas, Michael Clark, br...@hoult.org, i...@livius.net, Khem Raj, sw-...@groups.riscv.org, john....@gmail.com
Maybe talk to Oleg Nenashev; he works on open hardware CI using Jenkins.
See his presentations at OrConf.

Cheers,
Richard


Sent from my iPad

Edmond Cote

Dec 1, 2017, 5:00:06 PM
to RISC-V SW Dev, edm...@edmondcote.com, arun....@gmail.com, michae...@mac.com, br...@hoult.org, i...@livius.net, raj....@gmail.com, john....@gmail.com, pal...@sifive.com
Palmer, others.  Want me to lead this?

I already have a bunch of this running on my private Google Cloud.

Can someone in the Foundation approach Google and ask for free cloud credits?

Slide

Dec 1, 2017, 8:31:27 PM
to Edmond Cote, RISC-V SW Dev, arun....@gmail.com, michae...@mac.com, br...@hoult.org, i...@livius.net, raj....@gmail.com, john....@gmail.com, pal...@sifive.com
I have some significant experience using Jenkins (I am a contributor to core and several plugins). I'd love to help out in any way I can.

Paulo Matos

Dec 19, 2017, 5:28:18 AM
to RISC-V SW Dev, pal...@sifive.com

Hello,

Apologies if this comes in duplicate or triplicate, but I have had issues posting to the list from my email client (doing this through the Google Groups interface to make sure it works).

As I have mentioned and discussed privately with Palmer, I am developing the GCC Buildbot at https://gcc-buildbot.linki.tools and I am happy to contribute a riscv buildbot, as the two would share some Buildbot Python code and therefore it wouldn't be a massive effort to support both code bases.

I could contribute the machine and time to run a buildbot master; however, I don't have machines available to run workers. I am using the GCC Compile Farm (https://gcc.gnu.org/wiki/CompileFarm) for GCC workers, but these machines are really slow and something else would be needed for RISCV.

I have enough Python buildbot code implemented that running the gcc suite, with notifications, etc would be straightforward. All the other riscv tools would have to be integrated in due time.

Having a second person helping maintain the infrastructure would be great.

This week is pretty chaotic over here, but I will try to get a proof of concept done by the end of the week.
Since we already have all the scripts in riscv-gnu-toolchain and riscv-tools to cross-compile etc., I thought the best approach for testing would be to have the buildbot clone riscv-tools, set the submodule git hashes to the ones we need to test, and run the scripts directly instead of replicating the commands in Python through the buildbot. This is where I would start.
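
Roughly, the kind of buildbot factory that corresponds to this (the repository URL, submodule paths, and build command below are placeholders, not the actual riscv-buildbot configuration):

    # Sketch of a buildbot factory along the lines described above: clone
    # riscv-tools, pin selected submodules to the commits under test, then run
    # the existing build script. Paths, hashes, and the command are placeholders.
    from buildbot.plugins import steps, util

    def make_factory(submodule_pins):
        """submodule_pins: e.g. {"riscv-gnu-toolchain/riscv-gcc": "<sha>"}"""
        f = util.BuildFactory()
        f.addStep(steps.Git(repourl="https://github.com/riscv/riscv-tools.git",
                            mode="full", method="fresh", submodules=True))
        for path, sha in submodule_pins.items():
            f.addStep(steps.ShellCommand(
                name="pin " + path,
                command=["git", "-C", path, "checkout", sha],
                haltOnFailure=True))
        f.addStep(steps.ShellCommand(name="build toolchain",
                                     command=["./build.sh"],
                                     haltOnFailure=True,
                                     timeout=4 * 3600))
        return f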

I also noticed some of you will be at FOSDEM. I am based in Nuernberg, so not that far away. I can surely meet you there and discuss any details on this if you're interested.

Kind regards,

Paulo Matos

Paulo Matos

Dec 20, 2017, 2:02:05 PM
to RISC-V SW Dev, pal...@sifive.com
Hello,

As I mentioned in my previous post, I have taken some time to set up a proof-of-concept of how this could start.
Take a look at:
https://riscv-buildbot.linki.tools

A complete toolchain build is at:
https://riscv-buildbot.linki.tools/#/builders/1/builds/6

Code is at:
https://github.com/LinkiTools/riscv-buildbot

Currently it doesn't do much. There's a nightly build setup that can be forced, which builds the riscv tools for riscv32ima on an x86_64 machine running Fedora 26. This grabs the latest tools and builds them. No tests are run. There are many directions we can take from here. Was it something like this you guys had in mind?

There are a few questions that come up straight away:
* There are many, many possibilities with all the available riscv tools. What combination of tools do you want to test, and which branches?
For example, do we want to trigger a build for each riscv-gcc/trunk commit with all the other tools at HEAD (or at the riscv-tools submodule commit)? Do we want to track other branches?
For riscv-binutils do we want to trigger a build for each commit? Do we want riscv-gcc trunk at HEAD?

Which combination matrix do we want to test?

* For each of these combinations, when do we want to test them? For each commit? Nightly with bisect in case of regressions? Whenever there's a commit and resources are available (i.e. commits do not create new build jobs that wait until a resource is available, but instead accumulate)?

* Once the tests complete who do we want to notify in case of regressions?

* Do we want to test gcc/gdb/binutils upstream instead of the versions in riscv-tools?

* Which metrics per build do we want to track? time to compile? time to test? per-test result? whole testsuite results?

* Which benchmarks would we like to compile?

Needless to say, the more of this we want done, the more resources we will need.
I will be slow to reply in the next 2 weeks so please bear with me. Enjoy the holiday season.

Kind regards,

Paulo Matos

Palmer Dabbelt

Dec 21, 2017, 2:45:34 PM
to Paulo Matos, slide...@gmail.com, edm...@edmondcote.com, sw-...@groups.riscv.org
On Tue, 19 Dec 2017 02:28:17 PST (-0800), sw-...@groups.riscv.org wrote:
>
> Hello,
>
> Apologies if this come in duplicate or triplicate but I have had issues
> posting to the list from my email client (doing this through the google
> groups interface to make sure it works).
>
> As I have mentioned and discussed privately with Palmer, I am developing
> the GCC Buildbot at https://gcc-buildbot.linki.tools and I am happy to
> contribute a riscv buildbot as these two would share some Buildbot Python
> code and therefore it wouldn't be a massive effort to support both code
> bases.
>
> I could contribute with the machine and time to run a buildbot master
> however I don't have available machines to run workers. I am using the
> Compiler Farm (https://gcc.gnu.org/wiki/CompileFarm) for GCC workers
> however these machines are really slow and something else would be needed
> for RISCV.

We should be able to get you access to whatever compute resource you need.

> I have enough Python buildbot code implemented that running the gcc suite,
> with notifications, etc would be straightforward. All the other riscv tools
> would have to be integrated in due time.
>
>
> Having a second person helping maintain the infrastructure would be great.

I've added Alex and Edmond, who I think might also have some time to help out.

>
> This week is pretty chaotic over here but I will try to get a proof of concept done until the end of the week.
> Since we already have all the scripts in riscv-gnu-toolchain and riscv-tools to cross-compile etc, I though the best would be for testing to have the buildbot clone riscv-tools, set the submodules git hashes to the ones we need to test and run the scripts directly instead of replicating the commands in python through the buildbot. This would be where I would start.
>
> I also noticed some of you will be in FOSDEM. I am based in Nuernberg, so not that far away. I can surely meet you there and discuss any details on this if you're interested.

Sounds good, let's talk then!

Edmond Cote

Dec 26, 2017, 10:15:00 AM
to Palmer Dabbelt, Paulo Matos, slide...@gmail.com, sw-...@groups.riscv.org
FYI: I will be out part of this week and next.  I'll plan to schedule a meeting 2nd week of Jan to begin gathering requirements.

Paulo Matos

Jan 29, 2018, 4:18:54 AM
to sw-...@groups.riscv.org
All,

I will be at FOSDEM from Friday evening to Monday morning. I wonder if
in the meantime folks interested in RISC-V CI would be interested in an
informal discussion of what's needed/wanted and how to proceed.

Kind regards,

Paulo Matos

--
Paulo Matos

david.abd...@gmail.com

Feb 2, 2018, 4:36:35 AM
to Paulo Matos, sw-...@groups.riscv.org
Hi,

There will be a RISC-V BoF session after Palmer's talk on Saturday.

See https://fosdem.org/2018/schedule/event/riscv_bof/ for the details.

david


Paulo Matos

Feb 2, 2018, 6:10:49 AM
to david.abd...@gmail.com, sw-...@groups.riscv.org
Interesting. I had missed that BoF announcement. I will bring the issue up there.

Paulo Matos
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.