Jupyter dashboards / dashboards server: transplanting the kernel


Paul A

Jul 8, 2016, 5:41:39 AM
to Project Jupyter
Is it possible to "transplant" the kernel onto the server?

I.e., you develop a notebook on your laptop, click Deploy as Dashboard on the Jupyter Dashboards Server, close your laptop lid, and somehow the dashboard keeps running.

If there's such a workflow, how would the dashboards server replicate not only the Python virtualenv, but also the environment variables that the notebook may depend on?

Phuoc Do

Jul 15, 2016, 2:17:18 AM
to Project Jupyter
You can publish your dashboard notebook to GitHub and use nbviewer to view it online. The disadvantage is that your notebook needs to be public.

Notebook viewer:


Notebook code:


Phuoc Do

Peter Parente

Jul 16, 2016, 9:35:37 PM
to Project Jupyter
Hi Paul,

You can't transplant the kernel per se. You can publish your notebook as a standalone web application on the dashboard server.

The text and diagrams on the dashboards wiki here illustrate the concept: https://github.com/jupyter-incubator/dashboards/wiki

> If there's such a workflow, how would the dashboards server replicate not only the Python virtualenv, but also the environment variables that the notebook may depend on?

There's no magic here. If your notebook depends on pandas, some special env var, some local JPEG images, etc., you need to ensure that the kernel gateway providing the kernel for the dashboard has pandas installed, that you start it with the same environment variable, that you bundle your JPEG images with your notebook, and so on. Some of this is automated by the dashboard bundling process, but other parts, namely resolving kernel-side dependencies, remain a manual process.
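
To make that concrete, here is a rough sketch of the kinds of kernel-side dependencies a dashboard notebook might have. The package, environment variable name, and file path below are purely illustrative examples, not anything the dashboards project itself defines:

    import os

    import pandas as pd                  # pandas must be installed in the kernel gateway's environment
    from IPython.display import Image

    # The kernel gateway process must be started with this same (hypothetical)
    # variable set, e.g. in its Docker image or launch script.
    db_url = os.environ["ANALYTICS_DB_URL"]

    # Local assets the notebook references must be bundled along with it so the
    # deployed dashboard can find them.
    Image("images/logo.jpg")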

That said, in my experience, using the same Docker image for the notebook server and the kernel gateway, and putting one-off pip/conda commands right in the notebook so they run on first execution, can take you a long way without much hassle.
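
For example, a first cell along these lines installs kernel-side dependencies only when they are missing, so the same notebook runs on both the notebook server and the kernel gateway. This is just a sketch of the idiom; the ensure() helper and the package names are arbitrary examples, not part of any Jupyter API:

    import importlib
    import subprocess
    import sys

    def ensure(package, module_name=None):
        """Install `package` with pip into this kernel's environment if it isn't importable yet."""
        name = module_name or package
        try:
            importlib.import_module(name)
        except ImportError:
            subprocess.check_call([sys.executable, "-m", "pip", "install", package])

    ensure("pandas")
    ensure("Pillow", module_name="PIL")  # installed as Pillow, imported as PIL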

Phuoc Do

Jul 18, 2016, 2:52:04 AM
to Project Jupyter
Hi Peter,

As far as I understand, a Jupyter dashboard is bundled and imported into the dashboard server. This creates extra work whenever the dashboard needs updating. I wonder if there is a way to create a server that reads data directly from Jupyter (e.g. through a data API), so that new analysis data would be picked up without any bundling work.

Phuoc Do

Peter Parente

Jul 19, 2016, 11:59:24 AM
to Project Jupyter
> As far as I understand, a Jupyter dashboard is bundled and imported into the dashboard server.

You are correct.

> This creates extra work whenever the dashboard needs updating. I wonder if there is a way to create a server that reads data directly from Jupyter (e.g. through a data API), so that new analysis data would be picked up without any bundling work.

We took the bundling approach on purpose for one main reason: it creates a clean separation between the development environment (notebook server) and the production environment (dashboard server). Now, you could reverse what we did and make deployment a pull-from-dashboard-server proposition instead of a push-from-notebook-server one, but you'd still need to send the same assets across. And I think the two have the same net effect anyway, because you'll want a human in the loop deciding when a notebook is fit for deployment as a separate dashboard. Automatically redeploying a notebook on every save or on a timer is something we purposely avoided: just because I want to save my work doesn't mean I also want to immediately make that version available as a dashboard to other users.

Cheers,
Pete