Demo jupyter notebooks with SymPy


Björn Dahlgren

Aug 30, 2015, 12:12:08 PM
to sympy
As you may know, the IPython guys are showcasing the new jupyter notebook here:

https://try.jupyter.org/

They have a subfolder "communities" where they invite groups to publish example notebooks:
https://github.com/jupyter/docker-demo-images

I think it would be a nice addition to have a communities/sympy subfolder with some notebooks.
One example that comes to mind is this notebook Ondřej posted a while back:
http://nbviewer.ipython.org/gist/certik/6469476

We could create a repo "sympy-demo-notebooks" under github.com/sympy/ where we could peer
review the notebooks before making a PR at jupyter/docker-demo-images.

Just a thought, what do you guys think?

Best,
Björn

Aaron Meurer

Aug 30, 2015, 5:00:43 PM
to sy...@googlegroups.com
That sounds like a good idea. Having them in a separate repo would make it less of a big deal to include inline plots in the notebooks. It would still be good to doctest them somehow, though, if that's possible.
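
A minimal sketch of how such a check could look, using nbformat and nbconvert's ExecutePreprocessor to execute each notebook and fail if any cell raises (the notebook name is just a placeholder):

import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

# Execute the notebook top to bottom; preprocess() raises
# CellExecutionError if any cell fails.
nb = nbformat.read("demo.ipynb", as_version=4)
ep = ExecutePreprocessor(timeout=600, kernel_name="python3")
ep.preprocess(nb, {"metadata": {"path": "."}})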

Aaron Meurer


Björn Dahlgren

Apr 7, 2016, 6:09:16 AM
to sympy


On Sunday, 30 August 2015 23:00:43 UTC+2, Aaron Meurer wrote:
That sounds like a good idea. Having them in a separate repo would make it less of a big deal to include inline plots in the notebooks.

Yes, I have strong feelings against checking any inline plots (or generated binary data in general) into the SymPy repository.
The issue recently surfaced here: https://github.com/sympy/sympy/pull/10869

Regarding doctesting the actual notebooks: I have been doing this in my projects for a while (rendering notebooks as part of the CI process, and thus never checking in any output data). See e.g. http://hera.physchem.kth.se/~chempy/branches/master/examples/ for an example.
The rendering (and testing) is achieved by using nbconvert:

ipython2 nbconvert --to=html --debug --ExecutePreprocessor.enabled=True *.ipynb

In case someone wants to try setting this up: the CI servers would need to push the rendered artifacts somewhere on success.
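
Programmatically, the same execute-and-render step could look roughly like this (a sketch using nbconvert's Python API; the file names are placeholders):

import nbformat
from nbconvert import HTMLExporter
from nbconvert.preprocessors import ExecutePreprocessor

# Mirror the CLI invocation above: run the notebook, then export to HTML.
exporter = HTMLExporter()
exporter.register_preprocessor(ExecutePreprocessor(timeout=600), enabled=True)

nb = nbformat.read("example.ipynb", as_version=4)
body, _ = exporter.from_notebook_node(nb)
with open("example.html", "w") as f:
    f.write(body)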

Ashutosh Saboo

Apr 7, 2016, 12:58:26 PM
to sympy
Hi Björn and Aaron,

Seems like a good idea, Björn.

Although, as I already told you, Aaron, I have started working on IPython notebooks for SymPy Gamma here - https://sympy-gamma-testing.appspot.com/. There seems to be some problem with Jupyter itself, though; I'll try to fix that as well. I am also working on including plot figures and other cards' data in the IPython notebook. Your opinion on this, Aaron?

I am not sure, but would that somehow help with this idea of yours as well, Björn?

Thanks,
Ashutosh Saboo

Ondřej Čertík

Apr 7, 2016, 1:47:44 PM
to sympy
Hi Björn,
I converged to the same workflow. I write a simulation code in Fortran
(checked into git), which generates a log file (not checked in). Then I
have a Jupyter notebook that parses the log file and generates graphs.
I check in the notebook, but without output cells. Then it's small, the
"git diff" is very readable (say, if I improve a plot in a new commit), etc.
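
Stripping the outputs before committing can be automated with a few lines of nbformat (a sketch; the notebook name is a placeholder):

import nbformat

# Remove output cells and execution counts so the committed .ipynb
# stays small and "git diff" stays readable.
path = "analysis.ipynb"
nb = nbformat.read(path, as_version=4)
for cell in nb.cells:
    if cell.cell_type == "code":
        cell.outputs = []
        cell.execution_count = None
nbformat.write(nb, path)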

Finally, and I don't have it set up yet, the goal is for the CI to run
the test (Fortran code) on some very small problem (for example, if I
use FFT and need 256^3 and 20,000 time steps to get fully converged
results, I would run it with 16^3 PW and 20 time steps, so that it runs
in a few seconds), and then to test that the notebook runs using
nbconvert, exactly as you posted, to check that the analysis still works.

That way things should be 100% robust, and I can regenerate the graphs
any time I want, yet no big files or binary blobs are checked into a
git repository.

Ondrej