Many thanks in advance!
Best regards,
Sherry
--
You received this message because you are subscribed to the Google Groups "Project Jupyter" group.
To unsubscribe from this group and stop receiving emails from it, send an email to jupyter+unsubscribe@googlegroups.com.
To post to this group, send email to jup...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/jupyter/3fa00644-1ea0-419d-8bbe-3dfc0d67fe95%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
On Mon, Feb 6, 2017 at 3:54 PM, <nju08e...@gmail.com> wrote:

> Hi,
>
> We know a user can create as many notebook instances as they like in one Jupyter notebook server. When we integrate Spark's PySpark into the Jupyter notebook (i.e. using ipykernel), every notebook instance that is created initializes a PySpark shell (that is, a driver). Since we run Spark in client mode, all of the started drivers run on the same host. So when a user goes into crazy mode, creating many, many notebook instances, many drivers all start on one host, which easily leads to a shortage of available resources.
>
> So I am wondering: does the Jupyter notebook have some mechanism to deal with or avoid this kind of issue?

Jupyter itself has no mechanism to deal with this. It would have to be done at the library level - i.e. check for other instances and do one of:

- connect to running instances if possible
- refuse to start if too many are running
- etc.

-Min
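The "refuse to start if too many are running" option could be sketched roughly like this. Everything here is hypothetical: the process-counting heuristic, the limit, and the function names are not part of any Jupyter or Spark API.

```python
import subprocess

# Hypothetical guard: before launching a new PySpark driver, count how many
# are already running on this host and refuse to start past a limit.
MAX_DRIVERS = 4  # arbitrary example limit


def count_running_drivers():
    """Count processes on this host whose command line mentions pyspark.

    Uses `ps` output as a rough heuristic; not an official Spark/Jupyter API.
    """
    out = subprocess.run(["ps", "-eo", "args"], capture_output=True, text=True)
    return sum(1 for line in out.stdout.splitlines() if "pyspark" in line)


def maybe_start_driver(start_fn):
    """Call start_fn() only if we are under the limit; otherwise refuse."""
    if count_running_drivers() >= MAX_DRIVERS:
        raise RuntimeError("Too many PySpark drivers already running on this host")
    return start_fn()
```

A real implementation would want something sturdier than grepping `ps` (e.g. a lock file or a registry of driver PIDs), but the shape is the same.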
If you're using JupyterHub, however, then there are spawners which have some ways to control the resources available to each user. For example, SystemdSpawner:
https://github.com/jupyterhub/systemdspawner
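For example, a `jupyterhub_config.py` sketch assuming the `jupyterhub-systemdspawner` package is installed; option names and values here are taken from its README and should be verified against the repo:

```python
# jupyterhub_config.py -- sketch assuming SystemdSpawner is installed
# (pip install jupyterhub-systemdspawner); verify options in its README.
c.JupyterHub.spawner_class = 'systemdspawner.SystemdSpawner'

# Per-user resource caps enforced via systemd cgroups (illustrative values):
c.SystemdSpawner.mem_limit = '4G'   # cap each user's memory at 4 GB
c.SystemdSpawner.cpu_limit = 2.0    # cap each user at 2 CPU cores
```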
On 7 February 2017 at 11:02, MinRK <benja...@gmail.com> wrote:
Realistically, you have to run either the whole Jupyter server or each notebook kernel in a container (Docker, LXC, ...). Then you can set limits on the container(s), and/or restrict the number of simultaneously running kernels through the Jupyter kernel manager.

Without containers, a user can consume basically unlimited memory and CPU even from a single Python notebook kernel.
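On the kernel-manager side, as far as I know there is no stock option for a hard cap on kernel count, but the notebook server can at least cull idle kernels (options available since notebook 5.1). A `jupyter_notebook_config.py` sketch, with illustrative values:

```python
# jupyter_notebook_config.py -- sketch: automatically shut down idle kernels
# (culling options exist in notebook >= 5.1; values here are illustrative).
c.MappingKernelManager.cull_idle_timeout = 3600  # cull kernels idle > 1 hour
c.MappingKernelManager.cull_interval = 300       # check every 5 minutes
c.MappingKernelManager.cull_connected = False    # spare kernels with open clients
```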
There is this:
https://github.com/pshved/timeout
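A similar per-process cap can also be set from the Python standard library on Linux via `resource.setrlimit`. This is a generic sketch of the idea, not tied to the timeout script above; the function name and limit values are illustrative:

```python
import resource
import subprocess
import sys


def run_limited(cmd, mem_bytes=1024 * 1024 * 1024, cpu_seconds=60):
    """Run `cmd` with hard address-space and CPU-time limits (Linux only).

    The limits are installed in the child process, before exec, via preexec_fn.
    """
    def set_limits():
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
    return subprocess.run(cmd, preexec_fn=set_limits)


# A small workload fits comfortably under the limits:
ok = run_limited([sys.executable, "-c", "x = list(range(1000))"])

# An allocation far over the 1 GB memory cap fails inside the child:
too_big = run_limited([sys.executable, "-c", "x = bytearray(2 * 1024 ** 3)"])
```

Note that `preexec_fn` is not safe in multithreaded parents; a production version would use a wrapper script instead.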