SciUnit: getting operational


Stephen Larson

Aug 15, 2014, 2:14:44 PM8/15/14
to openworm-discuss, Rick Gerkin, Michael Currie, Tom Portegys
Hi all,

  We just wrapped up a great journal club with Rick Gerkin on SciUnit.  Rick makes a great case for applying the software engineering technique of unit tests to computational models and has a special focus on computational neuroscience models.

   Here are some relevant issues from GitHub on this:
   How do we proceed to get operational with this?  I know that the movement_validation project is probably the most obvious first consumer of something like this.  It would be an awesome project for someone experienced in Python to take up the work of implementing SciUnit tests that reuse the movement_validation library to implement capabilities.

    Looking through the issues list over there, I see:
    Which may be the best place to begin.  Thoughts?

Thanks,
  Stephen


Project coordinator
"Talk is cheap, show me the code" - L. Torvalds

Stephen Larson

Aug 15, 2014, 2:48:18 PM8/15/14
to openworm-discuss, Rick Gerkin, Michael Currie, Tom Portegys
I also wanted to add some of the key links from the presentation here:


Thanks,
  Stephen





John Ballentine

Aug 15, 2014, 3:10:59 PM8/15/14
to openworm...@googlegroups.com, rge...@gmail.com, mcu...@gmail.com, port...@gmail.com, ste...@openworm.org
This seems like a very interesting problem.

Do you think it would be beneficial right now to be able to have a vision-based system that could recognize observational video data of brain activity and translate it into a format that is easier to validate?

Stephen Larson

Aug 15, 2014, 4:32:11 PM8/15/14
to John Ballentine, openworm-discuss, Rick Gerkin, Michael Currie, Tom Portegys

On Fri, Aug 15, 2014 at 12:10 PM, John Ballentine <johnball...@gmail.com> wrote:
Do you think it would be beneficial right now to be able to have a vision-based system that could recognize observational video data of brain activity and translate it into a format that is easier to validate?

Yes!  In fact, combining that with behavior is probably the makings of a critical data collection platform.  But I wouldn't underestimate its difficulty -- it is definitely something that doesn't exist yet and has many challenges associated with it.

Best,

Michael Currie

Aug 20, 2014, 7:28:38 PM8/20/14
to openworm...@googlegroups.com, johnball...@gmail.com, rge...@gmail.com, mcu...@gmail.com, port...@gmail.com, ste...@openworm.org
Thanks for getting the ball rolling on this Stephen.

I actually intended for Movement Validation GitHub Issue 71 to pertain to the testing of the movement_validation repository itself.  I was hoping we'd get some formal unit tests on the repo to ensure, for instance, that:
  • the generated features of a sample video continue to match the features generated by the Schafer Lab's Matlab version of the same code (in the SegWorm repo);
  • features are generated at all;
  • various corner cases are handled elegantly;
  • etc.
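As a rough illustration of what such verification tests could look like with Python's built-in unittest framework (the function name and reference values below are hypothetical stand-ins, not the real movement_validation API or SegWorm output):

```python
import unittest

# Hypothetical stand-ins: `generate_features` and SEGWORM_REFERENCE are
# illustrative only, not the real movement_validation API or SegWorm output.
def generate_features(video_frames):
    """Placeholder for the real feature-computation pipeline."""
    return {"midbody_speed": 0.35, "length": 1.02}

SEGWORM_REFERENCE = {"midbody_speed": 0.35, "length": 1.02}

class TestFeatureGeneration(unittest.TestCase):
    def test_features_match_segworm_reference(self):
        # Verification: Python output should match the Matlab (SegWorm) output.
        features = generate_features(video_frames=[])
        for name, expected in SEGWORM_REFERENCE.items():
            self.assertAlmostEqual(features[name], expected, places=6)

    def test_features_are_generated_at_all(self):
        # Verification: the pipeline should produce a non-empty feature set.
        self.assertTrue(generate_features(video_frames=[]))

if __name__ == "__main__":
    unittest.main()
```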
SciUnit would be a framework we'd apply throughout the repository, both to enhance its capabilities and to take advantage of the best practices and common terminology SciUnit deploys.  We'd also get the dashboard functionality of SciDash, saving us from fully implementing the custom dashboard GUI the Schafer Lab built.

I've just added another issue, issue 75, to movement_validation as a starting point.  It asks that we modify the features sub-package so its classes inherit from the SciUnit base classes.

Sincerely,

Michael

Stephen Larson

Aug 23, 2014, 12:21:28 AM8/23/14
to Michael Currie, openworm-discuss, John Ballentine, Rick Gerkin, Tom Portegys
Brilliant.  Rick, what do you think?




Rick Gerkin

Aug 29, 2014, 10:25:25 AM8/29/14
to Stephen Larson, Michael Currie, openworm-discuss, John Ballentine, Tom Portegys
I just got back from the INCF meeting in the Netherlands, where I had the opportunity to speak (along with Stephen) about open collaboration in computational neuroscience, and to see all the pieces of the puzzle coming together that will help make this work.  I think some of the complementary technologies that will help model validation against data become achievable across scales and platforms are starting to reach maturity.  This will also help make the work that we start here (with behavioral testing) relevant to future worm tests (e.g. neuron tests), and to parts of models that we haven't yet imagined.  

Michael mentioned testing of the movement_validation repository itself.  We've called this sort of testing (e.g. are features being generated from the data, do they match across implementations, etc.) "verification testing", whereas tests of the scientific validity of the model (e.g. do the features in the data match the features in the model) we've called "validation testing".  While in principle this verification testing could be done with SciUnit, in the past I've chosen not to do this sort of thing with SciUnit, simply because there are other reasonable tools for the job, or because someone else had already implemented it.  For example, neuron models in Open Source Brain already have verification tests (written by Padraig Gleeson) that help ensure that a given simulation output (e.g. spike times) stays identical as simulators change.  So for that project I am focusing on validation tests, which check whether that simulator output is consistent with experimental data.

So if you also want to implement the worm movement verification tests in SciUnit, I'd be OK with that; it might be a good test case before moving on to validation tests of the sim against those features.  But if you have something else in mind for the verification tests, or want to use tests you've already begun developing for the verification, that would be fine, too.  
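To make the validation side concrete, here is a minimal sketch of the capability/model/test pattern that SciUnit formalizes, written in plain Python so it runs without the sciunit package installed.  All class names and numbers are illustrative, not the real SciUnit or OpenWorm API:

```python
# Sketch of SciUnit's capability/model/test pattern, in plain Python.
# Names and numbers are illustrative only.

class ProducesMidbodySpeed:
    """Capability: any model claiming it must implement get_midbody_speed()."""
    def get_midbody_speed(self):
        raise NotImplementedError

class ConstantSpeedModel(ProducesMidbodySpeed):
    """A toy model that reports a fixed midbody speed."""
    def __init__(self, speed):
        self.speed = speed
    def get_midbody_speed(self):
        return self.speed

class MidbodySpeedTest:
    """Validation test: does the model's prediction match the observed data?"""
    def __init__(self, observation):
        self.observation = observation  # e.g. {'mean': 0.30, 'std': 0.05}
    def judge(self, model):
        prediction = model.get_midbody_speed()
        # Score the model as a Z-score against the observed distribution.
        return (prediction - self.observation['mean']) / self.observation['std']

if __name__ == "__main__":
    test = MidbodySpeedTest({'mean': 0.30, 'std': 0.05})
    print(test.judge(ConstantSpeedModel(0.35)))  # a Z-score near 1
```

The point of the pattern is that the test only talks to the model through the capability, so any model implementing get_midbody_speed() can be judged by the same test.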

Thoughts?  

Stephen Larson

Sep 1, 2014, 5:22:04 AM9/1/14
to Rick Gerkin, Michael Currie, openworm-discuss, John Ballentine, Tom Portegys
Question for you Rick -- do you have any example SciUnit tests currently running via Travis CI on a public GitHub repo?  That might be a good place to start having a look.

Thanks,
  Stephen




Rick Gerkin

Sep 2, 2014, 3:44:20 AM9/2/14
to Stephen Larson, Michael Currie, openworm-discuss, John Ballentine, Tom Portegys
A combination of the /rgerkin/sciunit and /rgerkin/neuronunit forks will get you a bunch of working Open Source Brain tests, but that also requires a working installation of NEURON and some other things (not recommended).  The same two forks plus /incf/QSNMC will get you the first part of the 2009 Quantitative Single-Neuron Modeling Competition tests, but I don't think that would be very illustrative.  I think to make this painless for all I should work on an educational Travis CI build for one of those two, which I will do this week.  Stand by.

Rick Gerkin

Sep 7, 2014, 2:14:59 AM9/7/14
to Stephen Larson, Michael Currie, openworm-discuss, John Ballentine, Tom Portegys
Here is a very simple example working in Travis: data about the resting potential of a Purkinje cell, downloaded programmatically from neuroelectro.org and used to test a completely useless "model" designed to produce a membrane potential trace equal to the mean of that data + 1 standard deviation, resulting in a Z-score of 1.


Next in that particular use case would be getting the real models to run in Travis (installing NEURON and many other things), but for the purposes of OpenWorm the above should serve as an example recipe for model testing by script, for those who want to see how it works in a simple case.
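That recipe can be reproduced offline in a few lines; here made-up resting potentials stand in for the values downloaded from neuroelectro.org:

```python
import statistics

# Made-up resting potentials (mV) standing in for the Purkinje cell data
# that the real example downloads from neuroelectro.org.
observed_vm = [-65.0, -63.0, -67.0, -64.0, -66.0]
mean = statistics.mean(observed_vm)
std = statistics.stdev(observed_vm)

# The deliberately useless "model": it outputs mean + 1 standard deviation...
model_prediction = mean + std

# ...so its Z-score against the data is 1 by construction.
z_score = (model_prediction - mean) / std
print(z_score)  # 1.0, up to floating-point rounding
```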

