DockerSpawner + JupyterHub + JupyterLab + multiple networks

Mariusz Wasik

Aug 7, 2019, 4:56:43 PM
to Project Jupyter

Hi,

I have a challenge :)

1. What I have:
a) Docker
b) two networks: [frontend] and [backend]
c) JupyterHub running in the [frontend] network
d) JupyterLab spawned (by DockerSpawner) into the [frontend] network
Info: the above configuration works.

2. The database runs in the [backend] network.

Problem:
The spawned JupyterLab, and any program created inside a notebook, can't see the database
(because JupyterLab runs in a different network).

*** The challenge (problem) ***

Is it possible for the spawned JupyterLab to be connected to both the [frontend] and [backend] networks?
Or is there another way for JupyterLab to see a database placed in another Docker network?


Best regards


Jason Anderson

Aug 8, 2019, 3:53:38 PM
to jup...@googlegroups.com
Hi Mariusz,

I don't think DockerSpawner supports creating a container and attaching it to more than one network. Docker itself supports this, but there is no code in the spawner to perform the additional "attach". What you would need is an additional network-attach call, after the create_container call, that connects the spawned container to the [backend] network.

Absent that, you have some other, weirder options. One is to run a very trim NAT container that is itself attached to both the [frontend] and [backend] networks -- let's call it frontend-db. That container just needs some iptables rules that listen on the frontend interface and forward all traffic to the 'database' host on the backend interface. Your JupyterLab containers could then reach the database via the frontend-db host (rather than the 'database' host).
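
If you'd rather not hand-write iptables rules, a tiny userspace forwarder can do the same job inside frontend-db. A rough, untested Python stand-in (the hostname 'database' and port 5432 are assumptions for your setup, not anything your config mandates):

import asyncio

DB_HOST, DB_PORT = "database", 5432   # assumed backend hostname/port
LISTEN_PORT = 5432                    # port exposed on the [frontend] side

async def pipe(reader, writer):
    # Copy bytes one way until EOF, then close the other side.
    try:
        while True:
            data = await reader.read(65536)
            if not data:
                break
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(client_reader, client_writer):
    # For each frontend connection, open one to the real database on the
    # backend network and shuttle bytes in both directions.
    db_reader, db_writer = await asyncio.open_connection(DB_HOST, DB_PORT)
    await asyncio.gather(pipe(client_reader, db_writer),
                         pipe(db_reader, client_writer))

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", LISTEN_PORT)
    async with server:
        await server.serve_forever()

asyncio.run(main())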

Another option is performing the network attach by extending the DockerSpawner class: define your own spawner class that extends DockerSpawner and overrides the create_object method, calling the parent implementation and then attaching the additional network. You can define the class inline in jupyterhub_config.py.
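
An untested sketch of what that could look like, written against a recent dockerspawner (the class and network names are just examples; create_object and docker-py's connect_container_to_network are the relevant real methods):

from dockerspawner import DockerSpawner

class MultiNetworkSpawner(DockerSpawner):
    backend_network = "backend"  # assumed name of your second network

    async def create_object(self):
        # Let DockerSpawner create the container on [frontend] as usual...
        obj = await super().create_object()
        # ...then attach it to [backend] as well. self.docker() runs the
        # docker-py APIClient call on DockerSpawner's background executor.
        await self.docker(
            "connect_container_to_network",
            self.container_name,
            self.backend_network,
        )
        return obj

c.JupyterHub.spawner_class = MultiNetworkSpawner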

This question made me realize that all the spawned containers are indeed on the same network and can communicate with each other. I don't think there is an option to isolate them further, but that could be an interesting security improvement.

Hope that gives you some ideas,
/Jason

David Kemp

Aug 30, 2019, 8:58:35 AM
to Project Jupyter
If you're using the SwarmSpawner, you can use SwarmSpawner.extra_task_spec - this is basically your last chance to override the arguments that get passed to docker-py's TaskTemplate.

Something like:

c.SwarmSpawner.extra_task_spec = {'networks': ['backend', 'frontend']}  # UNTESTED!

DockerSpawner, however, would need some fiddling with extra_create_kwargs and docker-py's create_networking_config, which would probably have to happen in a pre_spawn_hook.
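
Something like this, equally untested - and note the Docker API has historically accepted only a single network endpoint at container-create time, and DockerSpawner already sets a networking config for its own network, so you may still need a connect-after-create as Jason described:

import docker

def attach_backend(spawner):
    # create_networking_config and create_endpoint_config are real
    # docker-py APIClient helpers; the network name "backend" is an
    # assumption. This may clash with the networking_config that
    # DockerSpawner builds for its own network_name.
    client = docker.APIClient()
    spawner.extra_create_kwargs = {
        "networking_config": client.create_networking_config(
            {"backend": client.create_endpoint_config()}
        )
    }

c.Spawner.pre_spawn_hook = attach_backend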

Hope this gives you some ideas. If you're using an IDE like PyCharm, you can easily navigate to the source code of DockerSpawner/SwarmSpawner and see what's going on, and then look up the arguments for those methods in docker-py's documentation.