One thing I'd like more clarity on before continuing is the relationship between the Notebook interface and the kernels: how tightly integrated are they?
For instance, does the same Python interpreter have to be used to run both the notebook server and the IPython kernels?
My thinking is: to build a desktop app, would it make more sense to bundle the notebook server and a Python interpreter in the app itself, and allow it to connect to (possibly remote) kernels installed under a separate Python interpreter? If the same interpreter has to be used, then the notebook server would have to ship with every dependency someone might want. This is something I've always wanted to avoid, because I think people should be free to set up their computation environment however they like.
The alternative is for the desktop app to be simply a "cruft-less" browser window that displays the notebook running within the kernel.
The downside to this is that it's harder to customise the UI/UX to take advantage of the fact that you aren't limited by running in a browser. In the experiments I have done so far, I found you end up with very fragile monkey-patching of the notebook JS, with the risk of it breaking if someone changes anything. A particular issue is the conflict between the browser-side RequireJS usage and the server-side Node.js "require" system.
If I understood the architecture correctly, the notebook server and IPython kernel processes shouldn't need to share the same interpreter - but I'd like to know more about this, and about how it might work to start a kernel in a different interpreter from within the notebook UI.
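For what it's worth, the kernelspec mechanism seems to allow exactly this: each kernel is described by a kernel.json file whose "argv" names the interpreter that should run it, so it need not be the interpreter running the notebook server. A sketch of what such a spec might look like (the path and display name here are made up for illustration):

```python
import json

# Hypothetical kernel.json for a kernel living in a separate environment.
# "argv" names the interpreter the kernel runs under -- not necessarily
# the interpreter running the notebook server.
kernel_spec = {
    "argv": [
        "/opt/envs/science/bin/python",  # made-up path to a second interpreter
        "-m", "ipykernel_launcher",
        "-f", "{connection_file}",       # filled in by the server at launch time
    ],
    "display_name": "Python (science env)",
    "language": "python",
}

print(json.dumps(kernel_spec, indent=1))
```

If I've read things right, registering a spec like this (e.g. by running the kernel-install command from the other environment) is what makes the kernel show up in the notebook's kernel menu.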
---
You received this message because you are subscribed to the Google Groups "Project Jupyter" group.
So the current notebook implementation does assume that an IPython kernel is installed for the current interpreter?
Where does the Notebook get information about which kernels are installed, and where to find them?
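As far as I can tell, the server just scans a handful of well-known directories for per-kernel folders containing a kernel.json. A rough sketch of that lookup (the directory list here is an assumption on my part; the real search path comes from the platform-specific IPython/Jupyter config machinery):

```python
import glob
import json
import os

# Directories of the form <prefix>/kernels/<name>/kernel.json are scanned.
# This list is illustrative, not the authoritative search path.
SEARCH_DIRS = [
    os.path.expanduser("~/.local/share/jupyter/kernels"),
    "/usr/local/share/jupyter/kernels",
    "/usr/share/jupyter/kernels",
]

def find_kernels():
    """Map kernel name -> parsed kernel.json for every spec found."""
    kernels = {}
    for d in SEARCH_DIRS:
        for spec_file in glob.glob(os.path.join(d, "*", "kernel.json")):
            name = os.path.basename(os.path.dirname(spec_file))
            with open(spec_file) as f:
                kernels[name] = json.load(f)
    return kernels

print(sorted(find_kernels()))
```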
This was because I had understood that when you started an IPython notebook instance, the notebook was served from the IPython kernel process, rather than from the notebook server process. What actually happens (correct me if I'm wrong) is that the notebook lives in the Tornado server process, and when you run a given cell it sends those commands to the kernel, which does the processing and sends back a response; the Notebook system (combining server-side Python and client-side JS) then converts that response into content to display in the cell.
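That matches my reading of the messaging spec: the server serialises each cell execution as an execute_request message and ships it to the kernel over ZeroMQ. A sketch of the message shape (field names follow the IPython messaging protocol as I understand it; the session and username values are placeholders):

```python
import uuid
from datetime import datetime, timezone

def execute_request(code):
    """Build an execute_request message dict in the shape the kernel expects."""
    return {
        "header": {
            "msg_id": uuid.uuid4().hex,
            "msg_type": "execute_request",
            "session": uuid.uuid4().hex,  # placeholder session id
            "username": "user",           # placeholder
            "date": datetime.now(timezone.utc).isoformat(),
        },
        "parent_header": {},
        "metadata": {},
        "content": {
            "code": code,
            "silent": False,
            "store_history": True,
            "user_expressions": {},
            "allow_stdin": True,
        },
    }

msg = execute_request("1 + 1")
print(msg["header"]["msg_type"])
```

The kernel's reply (execute_reply on the shell channel, plus output messages on the IOPub channel) is what the server relays back to the browser for display.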
At present, the server side Python mostly passes through messages from the kernel to the browser, but we plan to build more intelligence into it in the future.
What kind of intelligence? This would heavily affect whether it makes more sense to embed the Tornado notebook server or port it to Node. What about notebook security? Is this implemented in the server or the kernel (I see a lot of auth code in the server)? If you wanted to provide a kernel cluster that you could connect to from a desktop app, I guess you would either need security in the kernel itself, or some kind of security middleware on your server that the desktop app could talk to.
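On the security question: as I understand it, the kernel side does have its own (minimal) layer - each message on the wire is HMAC-signed with a shared key taken from the kernel's connection file, while the web-facing auth lives in the server. A sketch of the signing scheme (the key here is invented; in practice it comes from the connection file):

```python
import hashlib
import hmac
import json

KEY = b"hypothetical-shared-key"  # really read from the kernel's connection file

def sign(parts):
    """HMAC-SHA256 over the serialised message frames, as in the wire protocol."""
    h = hmac.new(KEY, digestmod=hashlib.sha256)
    for p in parts:
        h.update(p)
    return h.hexdigest()

header = json.dumps({"msg_type": "execute_request"}).encode()
parent_header = b"{}"
metadata = b"{}"
content = json.dumps({"code": "1 + 1"}).encode()

print(sign([header, parent_header, metadata, content]))
```

So a remote kernel would reject messages from anyone who doesn't hold the key, but that's transport integrity rather than the kind of user-facing auth a cluster front-end would need.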