Use AppVeyor to test on Windows

Ondřej Čertík

Oct 27, 2015, 11:25:17 AM
to hash...@googlegroups.com
Hi,

We currently use Travis to do some basic testing of
Hashdist+Hashstack. It's not meant to be comprehensive, since we are
limited by the 50 min cutoff, but it is enough to check quite a few
things, in particular at least:

* that Hashdist actually runs and builds a few packages
* that Hashstack itself works for the packages that build quickly enough
* that a few of our simple but important stacks work, like SciPy, Python
2.7 and 3.4, ...

Travis can also test on Mac OS X. We should enable that and use it.
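A hypothetical sketch of what enabling a Mac OS X build in .travis.yml might look like (the os matrix is the real Travis mechanism; the bootstrap and test scripts named below are placeholders, not our actual config):

```yaml
# Hypothetical multi-OS Travis matrix; script names are placeholders.
language: generic
os:
  - linux
  - osx
install:
  - ./bootstrap_hashdist.sh    # placeholder: fetch Hashdist + Hashstack
script:
  - ./run_quick_stack_tests.sh # placeholder: build a few fast packages
```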

Finally, we should use AppVeyor to test on Windows. We use it with
SymEngine (https://github.com/symengine/symengine,
https://github.com/symengine/symengine.py); you can browse some PRs
there or look at our setup. It has proven to be very robust. One can
use MSVC, MinGW, or anything else that we need. The cutoff is 30 min,
which is plenty for SymEngine. For Hashdist+Hashstack, it should still
be enough to build at least a few things.
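As a rough sketch (not SymEngine's actual config; the environment variable and build command are assumptions for illustration), an appveyor.yml testing both compilers might look like:

```yaml
# Hypothetical appveyor.yml fragment; the matrix values and the
# build command are illustrative placeholders only.
environment:
  matrix:
    - BUILD_ENV: msvc
    - BUILD_ENV: mingw
build_script:
  - cmd: python bin\hit build test_stack.yaml  # placeholder stack file
```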

After we get our binaries working, we can install, say, the SciPy
stack as binaries, and then test packages that depend on, say, NumPy.

We'll still have to run the "build all" tests that take many hours to
finish; there is no substitute for those. But with some clever
solutions, we can test quite a lot using Travis and AppVeyor on
Windows, Linux, and Mac, and ensure that we always have a working core
on all three platforms, so we should do that.

Ondrej

Volker Braun

Oct 27, 2015, 7:42:41 PM
to hashdist
Speaking of TravisCI, the test results page has a hint/warning "This job ran on our legacy infrastructure. Please read our docs on how to upgrade" (e.g. at https://travis-ci.org/hashdist/hashstack/jobs/77095580). The linked page says that if you upgrade it'll run a bit faster. Has anybody looked into that before?

Ondřej Čertík

Oct 28, 2015, 11:32:41 AM
to hash...@googlegroups.com
Hi Volker,

On Tue, Oct 27, 2015 at 5:42 PM, Volker Braun <vbrau...@gmail.com> wrote:
> Speaking of TravisCI, the test results page has some hint/warning "This job
> ran on our legacy infrastructure. Please read our docs on how to upgrade"
> (e.g. at https://travis-ci.org/hashdist/hashstack/jobs/77095580). The linked
> page says that if you upgrade it'll run a bit faster. Has anybody looked
> into that before?

I did. In https://github.com/symengine/symengine/blob/master/.travis.yml
you can see that we just set "sudo: false", and then you can only
install pre-approved packages using apt-get. It's a simple process to
get more packages approved if we need to (it takes about a week); the
only requirement is that the package can't set the setuid bit. You
also can't use sudo in the build script. Otherwise it works just like
before, and it is indeed faster.
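For reference, a minimal sketch of the container-based setup being described (the "sudo: false" key and the apt addon are the real Travis mechanism; the particular package names below are just examples of the kind of whitelisted packages one might request, not our actual list):

```yaml
# Sketch of a sudo-less (container-based) Travis config.
# Packages must come from Travis's pre-approved apt whitelist.
sudo: false
addons:
  apt:
    packages:
      - gfortran     # example whitelisted package
      - libblas-dev  # example whitelisted package
```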

Ondrej

Chris Kees

Oct 28, 2015, 6:44:13 PM
to hash...@googlegroups.com
I've been meaning to discuss this issue a bit more. I've been using the support for remote build and source caches for several months now in the context of CI on Travis and Shippable for Proteus. The relevant branches are here for Hashdist: https://github.com/hashdist/hashdist/pull/314 and here for a little relocatability hack for Hashstack packages: https://github.com/hashdist/hashstack/tree/add_location.

The Proteus stack can take over an hour just to build (not including Proteus or its tests), but using this approach, the Travis build and tests for Proteus take only about 15 minutes, because most or all of the packages in the dependency stack are binaries, and it doesn't require getting any packages approved. As Ondrej pointed out, the catch is building the binaries. The "manual" approach for updating the dependency binaries works like this:

1) Run the docker image for the Travis or Shippable environment, and go into a bash shell as the travis or shippable user.
2) Run 'hit build stack_name.yaml; hit push remote_name' (the time depends on how many new packages you're building since the last time you did it).

That could be automated using buildbot so that new binaries would be available within a few hours of pushing changes to Hashstack.

Finally, you have to modify the .hashdist/config.yaml in the CI environment before trying to build the stack. I posted a .hashdist/config.yaml file that points to the URLs for remote_name in step 2). So my .travis.yml or shippable.yml looks something like the following to pull the config.yaml before building:

before_install:
- tar xzf hashdist_travis.tgz 
- mv .hashdist $HOME
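
For readers who haven't seen one, a hypothetical .hashdist/config.yaml fragment along these lines might look like the following. I don't have the exact schema from PR #314 in front of me, so the key names and the URL are illustrative assumptions only, not the real file:

```yaml
# Hypothetical .hashdist/config.yaml sketch: a local build store plus
# a local source cache backed by a remote mirror. Keys and the URL
# are placeholders, not the actual schema from the PR.
build_stores:
  - dir: ./bld
source_caches:
  - dir: ./src
  - url: https://example.org/hashdist/source-cache  # placeholder URL
```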

Chris



Aron Ahmadia

Oct 28, 2015, 9:16:00 PM
to hash...@googlegroups.com
This is really cool, Chris.

Chris Kees

Oct 29, 2015, 10:27:03 AM
to hash...@googlegroups.com
Thanks, Aron. I meant to add a warning that the add_location hack for Hashstack will trigger a lot of rebuilds because it modifies base_package.yaml. -Chris
