Docker jupyter kernel


Marius van Niekerk

Jun 4, 2015, 3:16:25 PM
to jup...@googlegroups.com
Is there a simple way to start a kernel as a docker container?  I don't want to have to start my notebook server as a container. 

I tried to set one up with a kernel.json like 

{
 "display_name": "Python 2",
 "language": "python",
 "argv": [
  "docker", "run", "-v", "/home/username:/home/username", "jupyter/username",
  "/opt/conda/envs/python2/bin/python",
  "-m",
  "IPython.kernel",
  "-f",
  "{connection_file}"
 ]
}

This starts up the kernel fine, but it seems that there are some issues with missing port mappings?

[I 13:41:17.633 NotebookApp] Creating new notebook in /notebooks
[I 13:41:18.821 NotebookApp] Kernel started: ef649e50-da43-42c5-99a0-8d026dff2178
NOTE: When using the `ipython kernel` entry point, Ctrl-C will not work.

To exit, you will have to explicitly quit this process, by either sending
"quit" from a client, or using Ctrl-\ in UNIX-like environments.



To connect another client to this kernel, use:
    --existing kernel-ef649e50-da43-42c5-99a0-8d026dff2178.json
[W 13:41:28.949 NotebookApp] Timeout waiting for kernel_info reply from ef649e50-da43-42c5-99a0-8d026dff2178

Any suggestions?

Marius

Matthias Bussonnier

Jun 4, 2015, 3:44:57 PM
to jup...@googlegroups.com

On Jun 4, 2015, at 12:16, Marius van Niekerk <marius.v...@gmail.com> wrote:

Is there a simple way to start a kernel as a docker container?  I don't want to have to start my notebook server as a container. 

I tried to set one up with a kernel.json like 

{
 "display_name": "Python 2",
 "language": "python",
 "argv": [
  "docker", "run", "-v", "/home/username:/home/username", "jupyter/username",
  "/opt/conda/envs/python2/bin/python",
  "-m",
  "IPython.kernel",
  "-f",
  "{connection_file}"
 ]
}

I would replace the docker run with a custom script that reads the {connection_file} and sets up the port redirection. For the kernel inside the docker container,
you should be able to have a static connection file, as docker will be the thing mapping the ports.
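
For illustration, a minimal sketch of what such a wrapper could look like in Python. The script name docker-kernel.py and the /connection.json mount point are made up; the kernel.json argv would become something like ["python", "/path/to/docker-kernel.py", "{connection_file}"]. It also bakes in the 0.0.0.0 bind address that Marius only arrives at later in this thread, and it does no shutdown handling:

#!/usr/bin/env python
# docker-kernel.py (hypothetical): launch an IPython kernel in a docker
# container, reusing the ports from the notebook server's connection file.
import json
import subprocess
import sys

connection_file = sys.argv[1]  # filled in from {connection_file}

with open(connection_file) as f:
    cfg = json.load(f)

# Inside the container, 127.0.0.1 is not the host's 127.0.0.1, so rewrite
# the bind address before handing the file to the containerized kernel.
cfg["ip"] = "0.0.0.0"
inner_file = "/tmp/kernel-docker-%d.json" % cfg["shell_port"]
with open(inner_file, "w") as f:
    json.dump(cfg, f)

# Publish every ZMQ port on the same host port, so the connection file the
# notebook server wrote stays valid from the outside.
port_args = []
for name in ("shell_port", "iopub_port", "stdin_port",
             "control_port", "hb_port"):
    port_args += ["-p", "%d:%d" % (cfg[name], cfg[name])]

cmd = (["docker", "run", "--rm",
        "-v", "%s:/connection.json" % inner_file]
       + port_args
       + ["jupyter/username",
          "/opt/conda/envs/python2/bin/python",
          "-m", "IPython.kernel", "-f", "/connection.json"])
sys.exit(subprocess.call(cmd))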

— 

Marius van Niekerk

Jun 4, 2015, 4:35:29 PM
to jup...@googlegroups.com
I assume for the connection file inside the kernel I'd have to change the key and signature_scheme to match those mentioned in the {connection_file}

and then make my container EXPOSE a set of static ports like

  "stdin_port": 60000,
  "control_port": 60001,
  "hb_port": 60002,
  "shell_port": 60003,
  "iopub_port": 60004

Or do I not really need to care about the key / signature?

thanks
-Marius

Thomas Kluyver

Jun 4, 2015, 8:29:02 PM
to jup...@googlegroups.com
On 4 June 2015 at 13:35, Marius van Niekerk <marius.v...@gmail.com> wrote:
I assume for the connection file inside the kernel I'd have to change the key and signature_scheme to match those mentioned in the {connection_file}

and then make my container EXPOSE a set of static ports like

  "stdin_port": 60000,
  "control_port": 60001,
  "hb_port": 60002,
  "shell_port": 60003,
  "iopub_port": 60004

Or do I not really need to care about the key / signature?

You can use static port numbers inside the container; the ports it exposes to other things outside the container should be the ports from the connection file that it receives when we launch the kernel.

If the outer system has other users, you should probably still care about the message signing, because without it, another user could connect to the ports exposed by the docker container and start sending messages to the kernel. Also, while no key means no signatures, some kernels might not like getting signed messages when they don't think there's a key.
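
To make that concrete, a static connection file baked into the container might look like the sketch below. The ports are the ones Marius proposed; ip is 0.0.0.0 so the kernel is reachable through Docker's port mapping (which is where this thread eventually lands); the key would have to be copied in from the real {connection_file} at launch time, since the notebook server generates a fresh one per kernel:

{
 "transport": "tcp",
 "ip": "0.0.0.0",
 "stdin_port": 60000,
 "control_port": 60001,
 "hb_port": 60002,
 "shell_port": 60003,
 "iopub_port": 60004,
 "signature_scheme": "hmac-sha256",
 "key": "<copy from the outer connection file>"
}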

Thomas

Kyle Kelley

Jun 4, 2015, 10:31:58 PM
to jup...@googlegroups.com
Wonderful!

I've been hoping to do this for some time; the notebook was an easy target for us to start with. The next piece I'd want inside a kernel is access to my data, resources, etc., which I assume you'd have mounted in some uniform way?

MinRK

Jun 4, 2015, 10:39:36 PM
to jup...@googlegroups.com
I think perhaps a custom KernelManager, rather than kernel, would be a better choice here. This would let you change how connection information is provided, and use the Docker Python API.
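
A rough sketch of that idea (not a tested implementation): subclass jupyter_client's KernelManager and override its internal _launch_kernel hook to start a container via the Docker SDK for Python instead of a local process. Note that _launch_kernel is internal API (recent jupyter_client versions use kernel provisioners for this kind of customization instead), and the adapter class exists only because KernelManager expects a Popen-like object back:

import json

import docker  # Docker SDK for Python (pip install docker)
from jupyter_client import KernelManager


class ContainerProcess(object):
    """Just enough of the subprocess.Popen interface for KernelManager."""

    def __init__(self, container):
        self.container = container

    def poll(self):
        self.container.reload()
        return None if self.container.status == "running" else 0

    def wait(self, timeout=None):
        return self.container.wait(timeout=timeout)

    def send_signal(self, signum):
        self.container.kill(signal=signum)

    def kill(self):
        self.container.kill()


class DockerKernelManager(KernelManager):
    image = "jupyter/username"  # image name from the kernel.json above

    def _launch_kernel(self, kernel_cmd, **kw):
        client = docker.from_env()
        with open(self.connection_file) as f:
            cfg = json.load(f)
        # Publish the five ZMQ ports from the connection file unchanged.
        ports = {"%d/tcp" % cfg[k]: cfg[k]
                 for k in ("shell_port", "iopub_port", "stdin_port",
                           "control_port", "hb_port")}
        # The connection file is mounted in; its "ip" still needs to be
        # 0.0.0.0 for the kernel to be reachable through the mapping.
        container = client.containers.run(
            self.image, kernel_cmd, detach=True, ports=ports,
            volumes={self.connection_file: {"bind": self.connection_file,
                                            "mode": "ro"}})
        return ContainerProcess(container)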

Marius van Niekerk

Jun 5, 2015, 10:12:49 AM
to jup...@googlegroups.com
@rgbkrk: I assume once I can get it to listen and expose ports properly, it should be easy enough to grab volumes from existing containers for data.

So my first attempt is this guy

https://gist.github.com/mariusvniekerk/09062bc8974e5d1f4af6

It seems to spin up the container fine, but the NotebookApp gets a timeout.

Suggestions?

The custom KernelManager starts sounding like a better idea (particularly when it comes to shutting this thing down; since the sockets don't seem to respond to messages, you end up having to docker stop these a lot).

-Marius

Marius van Niekerk

Jun 8, 2015, 12:30:31 PM
to jup...@googlegroups.com
Got it working.

Needed to make the kernel talk via 0.0.0.0 inside the container. Seems localhost inside is not the same as outside.

Paul Yuchao Dong

Aug 9, 2016, 1:38:06 AM
to Project Jupyter
Sorry to revive a concluded case ~

I like the idea a lot - I can just use Jupyter to start different kernels under docker container instances.

However, since this was written, Jupyter seems to have been upgraded significantly; I tried to create the files in the gist, but I cannot even get the kernel to show up in the dropdown list...

Would there be an update to this really nice setup?

thanks,
Paul

Erik Stephens

Aug 10, 2016, 7:45:55 PM
to Project Jupyter
Maybe a bit more than what you need, but I think this will be more future-proof:


--
Erik

Paul Yuchao Dong

Aug 11, 2016, 2:52:55 AM
to Project Jupyter
Wow, this is indeed a lot more involved; I will try to get this set up at home tonight.

Spawning a JupyterHub and then connecting to different notebook servers seems a great way to go, but wouldn't it be a bit slow?

James

May 24, 2017, 5:10:34 PM
to Project Jupyter
Hey, sorry to revive this thread again, but having docker container kernels (and not whole jupyter server systems) would be very useful for me. My use case is having certain hard-to-build scientific software installed within the container. That way you could call out to it using Python's subprocess calls from within the notebook. My goal would be to make several kernels, accessible from the same notebook server, to act as a toolkit of sorts for my lab. Ideally, having the kernels in containers would make them easy to share and install in sister labs at other institutions for use in their Jupyter ecosystems. Thank you for any guidance!
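
For illustration, from a notebook whose kernel runs in such a container, calling one of those tools looks like any local subprocess call (mylab-tool and its arguments are made-up names, not a real package):

import subprocess

# The hard-to-build tool is baked into the kernel's container image, so
# from the notebook's point of view it is just a local executable.
result = subprocess.run(
    ["mylab-tool", "--input", "data.fastq", "--out", "results.txt"],
    capture_output=True, text=True, check=True)
print(result.stdout)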

Stojan Jovanović

Jul 25, 2017, 5:10:56 AM
to Project Jupyter
Hi James,

I'm currently building something very similar to what you're talking about.

I've currently got it set up so that I can access multiple Docker containers, each holding isolated machine learning models, through a Jupyter notebook (located in a third container), via SSH.

It wasn't super difficult to do, although I'm not claiming it was done very elegantly. 

If you're interested, you can take a look here https://github.com/stojan211287/DockerSSH. I've uploaded a minimal example, consisting of one "drone" and one "overlord" container. The overlord issues commands via SSH, the drone complies and delivers.

As it stands now, I've based the images on Alpine 3.6 and am currently using them as base images for further development: the overlord gets Jupyter installed on top of it, and the drone, for example, can host scikit-learn.
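
As a sketch of the pattern (hostnames, paths, and the script name are illustrative, not taken from the DockerSSH repo), a notebook cell in the overlord can run a command in the drone over SSH, assuming the containers share a Docker network and key-based SSH auth is already set up:

import subprocess

# Run a scikit-learn training script inside the "drone" container from a
# notebook cell in the "overlord" container, over SSH.
output = subprocess.check_output(
    ["ssh", "root@drone", "python", "/models/train.py"],
    text=True)
print(output)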

Ashwin Srinath

Jul 26, 2017, 8:23:08 AM
to jup...@googlegroups.com
We have used Singularity (http://singularity.lbl.gov/) containers in Jupyter Notebooks with relative ease. Some notes available here:

https://github.com/clemsonciti/singularity-in-jupyter-notebook

Thanks,
Ashwin


Ken Jiiii

Jul 10, 2018, 4:01:53 AM
to Project Jupyter
Hi guys,

I think I have the same use case and I was wondering whether this discussion is still up to date.
The idea is to have Jupyter running on a local machine that has one or more Docker containers running at the same time. These containers provide, for example, different Python versions like 3.6 and 3.7.
Now the question is how to add an external kernel to Jupyter. The kernel is, of course, running in the Docker container.

@Marius, is this approach (https://gist.github.com/mariusvniekerk/09062bc8974e5d1f4af6) still valid?

I have also read that it is possible to connect via SSH to a remote kernel in Jupyter, but in that case SSH needs to be configured in the container.

Can anybody tell me what solution is still working for them?

Thanks a lot in advance and kind regards!



Matt Morgis

Feb 1, 2019, 1:59:36 PM
to Project Jupyter
For anyone in 2019 looking to do this, we built a prototype here: https://github.com/tamera-lanham/ipython-kernel-docker

Similar to the approach in the gist, except that instead of a Python file running the container, we tell Jupyter to do it.



Luciano Resende

Feb 1, 2019, 6:37:22 PM
to jup...@googlegroups.com
Have a look at Jupyter Enterprise Gateway, which enables and manages remote kernels in multiple cluster environments, including container-based ones such as Docker, Swarm, and Kubernetes:

https://jupyter.org/enterprise_gateway/
https://github.com/jupyter/enterprise_gateway

Please let us know if you have specific questions after checking it out.



--
Luciano Resende
http://twitter.com/lresende1975
http://lresende.blogspot.com/

Ray Hilton

Feb 3, 2019, 5:21:44 PM
to jup...@googlegroups.com
The idea of splitting out the kernels into separate containers just makes a lot of sense to me. I've used Enterprise Gateway and it works pretty well.

However, we had one big gotcha with this approach: the filesystem for the kernel is not the same as in Jupyter, and our users are used to being able to reference other files they upload/see in the Jupyter UI. I tried a few approaches to this (mounting the same home directory into the kernel, copying files to the kernel), but they all resulted in more complexity and there were consistency issues.

Given that I was running everything in k8s and that the kernels share so much with the notebook image, it just seemed easier to keep them in one image for now.

I’d be very curious to see how others solved or circumvented this.

Ray

 


Luciano Resende

Feb 3, 2019, 5:43:50 PM
to jup...@googlegroups.com
On Sun, Feb 3, 2019 at 2:21 PM Ray Hilton <ray.h...@eliiza.com.au> wrote:
>
> The idea of splitting out the kernels into separate container just makes a lot of sense to me. I’ve used enterprise gateway and it works pretty well.
>

Glad to hear that.

> However, we had one big gotcha with this approach; the filesystem for the kernel is not the same as in Jupyter - our users are used to being able to reference other files they upload/see in the Jupyter UI. I tried a few approaches to this (mounting same home directory into kernel, copying files to kernel) but they all resulted in more complexity and there were consistency issues.
>
> Given I was running everything in k8s and that the kernels shares so much with the notebook image, it just seemed easier to keep them in one image for now.
>
> I’d be very curious to see how others solved or circumvented this.

Indeed, we were discussing this exact issue a couple of days ago. Assuming you have the files available in both the Jupyter and kernel filesystems (e.g. via mounts), we still noticed some issues where the work dirs were not mirrored between Jupyter and the kernel, and we should have a proper fix for that in the next few days. In the meantime, a workaround is to add a notebook cell that adds the notebook folder to the path (e.g. sys.path.append('/my_notebooks/model')), as in the cell below.
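
As a notebook cell (the path is the example from above):

import sys

# Workaround until the work-dir mirroring fix lands: make files that sit
# next to the notebook importable from the remote kernel.
sys.path.append('/my_notebooks/model')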

Ray Hilton

Feb 3, 2019, 6:19:12 PM
to jup...@googlegroups.com
I've also been entertaining the idea of using something like Pachyderm in our workflow, so that filesystem context is much more deliberate and versioned. I think this might be a better approach than trying to keep the FS in sync.

 
