Spark integration with Jupyter


Alejandro Guerrero

Aug 18, 2015, 2:02:49 PM
to Project Jupyter, Alejandro Guerrero Gonzalez
Hi!

I know about findspark and the ability to run pyspark on the Python kernel but I was wondering if there are efforts going on now to more closely integrate Jupyter and Spark.
Think of integration like what Zeppelin/Hue are enabling for Spark.
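
For reference, the findspark route I mean is roughly this (the app name is just an example):

    import findspark
    findspark.init()  # locate SPARK_HOME and put pyspark on sys.path

    import pyspark
    sc = pyspark.SparkContext(appName="jupyter-spark")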

I'd be interested to participate.

Best,
Alejandro

Matthias Bussonnier

Aug 18, 2015, 2:12:38 PM
to jup...@googlegroups.com, Alejandro Guerrero Gonzalez
Hi Alejandro. 



On Aug 18, 2015, at 20:02, Alejandro Guerrero <agg....@gmail.com> wrote:

I know about findspark and the ability to run pyspark on the Python kernel but I was wondering if there are efforts going on now to more closely integrate Jupyter and Spark.
Think of integration like what Zeppelin/Hue are enabling for Spark.

We would be happy to integrate better with any library you want to use.
The Jupyter part is language agnostic, so a closer Jupyter-Spark integration might not be
exactly what you expect; it might be more of an IPython-Spark integration we are looking for.


I'd be interested to participate.

If you have something specific in mind that could be done, or done better, please fire off a proposal.
We can direct you toward how it can be done and how you can contribute.

Already looking forward to your contribution. 
-- 
M

Best,
Alejandro


Auberon López

Aug 19, 2015, 1:11:11 PM
to Project Jupyter, a...@microsoft.com
Hi Alejandro,

Another point of integration we're looking into is the creation of magics that work with Spark. Here's a simple one for Spark SQL in pyspark:
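
Something along these lines; a rough sketch that assumes a SQLContext is already in the notebook namespace as sqlContext:

    from IPython.core.magic import register_cell_magic

    @register_cell_magic
    def sql(line, cell):
        # Run the cell body as a Spark SQL query and hand back a pandas
        # DataFrame, which the notebook renders as a table.
        return sqlContext.sql(cell).toPandas()

After that, a cell like

    %%sql
    SELECT name, age FROM people WHERE age > 21

comes back as a pandas DataFrame.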


I think a small collection of magics like this can reproduce much of the functionality of Zeppelin in Jupyter.

-Auberon

Brian Granger

Aug 19, 2015, 1:15:01 PM
to Project Jupyter, Alejandro Guerrero Gonzalez
Another area of integration we are thinking about is having custom
representations for Spark objects in the notebook. We don't have anyone
actively working on this, but we are more than willing to engage in
discussions here about that.

Another area is visualizations/plotting UIs for data-frame-like objects.

Also, Auberon, has there been any progress on making pyspark pip/conda
installable?

Alejandro, can you comment more on what specific things you are interested in?



--
Brian E. Granger
Associate Professor of Physics and Data Science
Cal Poly State University, San Luis Obispo
@ellisonbg on Twitter and GitHub
bgra...@calpoly.edu and elli...@gmail.com

Auberon López

Aug 19, 2015, 1:46:49 PM
to Project Jupyter, a...@microsoft.com, elli...@gmail.com
There are a few tweaks that I'm applying to an old PR to make pyspark pip installable. I haven't yet looked into the process for conda, but I'll do that soon.

Brian Granger

Aug 21, 2015, 1:56:25 AM
to Auberon López, Project Jupyter, Alejandro Guerrero Gonzalez
Alejandro,

Auberon and I were talking today about some of this. Some things that
Auberon is working on:

* He is going to submit his sparksql magic for inclusion in pyspark.
* He is going to work on this issue to change the default log level for PySpark:

https://issues.apache.org/jira/browse/SPARK-9226

We also talked more about the rich representations of Spark
objects. I think the approach that makes the most sense is to build
better representations of pandas data frames first. Then when we want
a nice repr of a Spark object we can create a local instance of that
data as a pandas data frame and use that repr. The qgrid project
already provides nice reprs of pandas data frames:

https://github.com/quantopian/qgrid

We could investigate writing a small amount of code that would use
qgrid as the default repr for Spark objects. Are you interested in
working on that?
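
As a very rough sketch of the idea (spark_df stands in for any Spark DataFrame, and it assumes qgrid is installed):

    import qgrid  # https://github.com/quantopian/qgrid

    # Pull a bounded slice of the Spark DataFrame down to the driver as a
    # pandas DataFrame...
    local_df = spark_df.limit(1000).toPandas()

    # ...and render it with qgrid's interactive, sortable/filterable grid.
    qgrid.show_grid(local_df)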

Cheers,

Brian

Matthias Bussonnier

Aug 21, 2015, 9:54:02 AM
to jup...@googlegroups.com, Auberon López, Alejandro Guerrero Gonzalez
If you want uniformity of magics, check with Doug Blank and his
MetaKernel; he has a wrapper around wrapper kernels that allows common
magics like cd, ls, ...
--
M

Alejandro Guerrero

Aug 24, 2015, 8:02:49 PM
to Project Jupyter, aubero...@gmail.com, a...@microsoft.com, elli...@gmail.com
Hi Matthias, Brian, Auberon,

I think the work you guys have planned seems very exciting. It will be great to have a Zeppelin-like experience from Jupyter with the magics and the rich visualization.

I am thinking of creating a kernel that allows scala/python/R spark integration and then doing rich visualization of the Spark objects.

I'd create the spark code submission story for this kernel, while integrating with the rich visualization part Brian and Auberon mentioned.

How can we work together on this? How can I become a code reviewer for the work Auberon will be doing?
Are you guys interested in becoming code reviewers for the kernel I'm describing?

Best,
Alejandro

Peter Wang

Aug 25, 2015, 1:18:06 AM
to Project Jupyter, Mateusz Paprocki, Bryan Van de ven, aubero...@gmail.com, a...@microsoft.com, Brian Granger
On Mon, Aug 24, 2015 at 7:02 PM, Alejandro Guerrero <agg....@gmail.com> wrote:
Hi Matthias, Brian, Auberon,

I think the work you guys have planned seems very exciting. It will be great to have a Zeppelin-like experience from Jupyter with the magics and the rich visualization.

I am thinking of creating a kernel that allows scala/python/R spark integration and then doing rich visualization of the Spark objects.

Just wanted to mention that Bokeh has interfaces for Python, Scala, and R:


Additionally, Mateusz Paprocki, who wrote bokeh-scala, is a Bokeh core dev and wrote an IScala kernel for IPython: https://github.com/mattpap/IScala
(CCing him directly on this thread)

We don't have a good table widget right now in Bokeh.  I like Brian's idea of seeing if we can just make qgrid work quickly, but I also think that we should try to converge on a good solid table widget that can be driven by a Python-server-side model, and that hopefully multiple projects can use (jupyter, bokeh, phosphor, quantopian, maybe even beaker etc.?)


Cheers,
Peter

Matthias Bussonnier

Aug 25, 2015, 4:15:51 AM
to jup...@googlegroups.com, Mateusz Paprocki, Bryan Van de ven, aubero...@gmail.com, a...@microsoft.com, Brian Granger
On Aug 25, 2015, at 07:17, Peter Wang <pw...@continuum.io> wrote:

On Mon, Aug 24, 2015 at 7:02 PM, Alejandro Guerrero <agg....@gmail.com> wrote:
Hi Matthias, Brian, Auberon,

I think the work you guys have planned seems very exciting. It will be great to have a Zeppelin-like experience from Jupyter with the magics and the rich visualization.

I am thinking of creating a kernel that allows scala/python/R spark integration and then doing rich visualization of the Spark objects.


+1 on what Peter said below.
I am also not sure you actually want a kernel; you shouldn't need a full kernel for what you're asking.
You should “just” need to add rich objects to (py)Spark and other libraries.
Though I'm not that familiar with Spark.

Auberon is working on making PySpark pip-installable [1][2]; I would suggest you give a hand there,
and once this is done you can most likely start to improve pyspark in more ways.

Thoughts ?

-- 
M
[2] I apparently write “pip-installable” often enough that my suggestion engine picked it up.

Auberon López

Aug 25, 2015, 2:40:15 PM
to Project Jupyter, mateusz....@continuum.io, bry...@continuum.io, aubero...@gmail.com, a...@microsoft.com, elli...@gmail.com
+1 that what you're describing shouldn't need a custom kernel to work.

If there are deeper changes that you're considering that do require a custom kernel, I'd encourage you to look at IBM's spark kernel before writing your own: https://github.com/ibm-et/spark-kernel

-Auberon

Alejandro Guerrero

Aug 25, 2015, 3:06:41 PM
to Project Jupyter, mateusz....@continuum.io, bry...@continuum.io, aubero...@gmail.com, a...@microsoft.com, elli...@gmail.com
Matthias, PySpark would only give me Python code execution, not Scala and R, correct?

Auberon, what scenario are you thinking of when you want to make Spark pip-installable? Would Spark/Jupyter work on a single machine or are you thinking of installing IPython in every Spark cluster?

I was thinking of going with the custom kernel approach because I was evaluating using Livy to do the execution on a remote Spark installation. I knew about IBM's Spark kernel, but I liked the flexibility of having a REST endpoint for other apps to use as well (a REST endpoint vs. having to consume IBM's kernel client library).

What do you think?

Gino Bustelo

Aug 26, 2015, 10:04:45 AM
to Project Jupyter, mateusz....@continuum.io, bry...@continuum.io, aubero...@gmail.com, a...@microsoft.com, elli...@gmail.com
@Alejandro... Livy looks interesting... I would say that we (IBM) have built experimental REST services on top of a running Spark Kernel. So it is doable, plus you can still use the same kernel code in the Jupyter ecosystem. You basically translate the REST request into a message over the kernel protocol. Having said that... there are other, more promising interfaces to kernels using an approach similar to Thebe over websockets.

Matthias Bussonnier

Aug 26, 2015, 10:21:48 AM
to jup...@googlegroups.com, mateusz....@continuum.io, bry...@continuum.io, aubero...@gmail.com, a...@microsoft.com, Brian Granger
On Aug 26, 2015, at 16:04, Gino Bustelo <lbus...@gmail.com> wrote:

@Alejandro... Livy looks interesting... I would say that we (IBM) have built experimental REST services on top of a running Spark Kernel. So it is doable, plus you can still use the same kernel code in the Jupyter ecosystem. You basically translate the REST request into a message over the kernel protocol. Having said that... there are other, more promising interfaces to kernels using an approach similar to Thebe over websockets.


That looks interesting. I think that Doug Blank might also want to pitch in with MetaKernel, which allows many languages in
one kernel by using %%%lang syntax, IIRC.



On Tuesday, August 25, 2015 at 2:06:41 PM UTC-5, Alejandro Guerrero wrote:
Matthias, PySpark would only give me Python code execution, not Scala and R, correct?

You can also use custom IPython magics to intersperse Python/Scala/R syntax.
The current %%R magic would not do that, but I guess it is possible to have a %%Rspark
and a %%scalaspark (or whatever name) that talk to the same Spark as the Python one.
The MetaKernel should provide such a feature already.


Auberon, what scenario are you thinking of when you want to make Spark pip-installable? Would Spark/Jupyter work on a single machine or are you thinking of installing IPython in every Spark cluster?

That would just make local pyspark easier to install, but by setting environment variables correctly
you would be able to use either a local or a cluster Spark.

We would like every Spark cluster to get IPython, but this should not require or relate to IPython at all.
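
Something like this, for instance (the variable names are the standard Spark ones; the path and host are placeholders):

    import os

    # Local mode: point findspark/pyspark at an unpacked Spark distribution.
    os.environ['SPARK_HOME'] = '/opt/spark'

    # Cluster mode: the same pyspark can target a remote master instead.
    os.environ['PYSPARK_SUBMIT_ARGS'] = (
        '--master spark://cluster-host:7077 pyspark-shell')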


I was thinking of going the custom Kernel approach because I was evaluating using Livy to do the execution on a remote Spark installation. I knew about IBM's Spark kernel, but I liked the flexibility of having a REST endpoint for other apps to use as well (REST endpoint vs. having to consume IBM's Kernel client library).


In any case I would build on top of an existing kernel, as in your case I think
there are many things that you do not want/need to reimplement.

-- 
M

Brian Granger

Aug 28, 2015, 5:06:09 PM
to Alejandro Guerrero, Project Jupyter, Auberon Lopez, Alejandro Guerrero Gonzalez
Alejandro,

> I think the work you guys have planned seems very exciting. It will be great
> to have a Zeppelin-like experience from Jupyter with the magics and the rich
> visualization.
>
> I am thinking of creating a kernel that allows scala/python/R spark
> integration and then doing rich visualization of the Spark objects.

I would first start by looking at the existing Spark/Scala, Python and R
kernels. I think that will clarify the abstractions in our
architecture. Having one kernel that supports multiple languages best
matches the model provided by our magic commands in the Python kernel.
For example, there is already an %R magic in the Python kernel and it
wouldn't be too difficult to do a Scala/Spark magic (in addition to
the one that Auberon has done).

Any particular reason you want to write a *new* kernel?

>
> I'd create the spark code submission story for this kernel, while
> integrating with the rich visualization part Brian and Auberon mentioned.
>
> How can we work together on this? How can I become a code reviewer for the
> work Auberon will be doing?

We use a fairly standard and completely open development model. Here
are some tips on how to engage with the community:

* Start to install and use our existing features (you have probably
already done this).
* Dig into our existing code/repos and learn about the implementation
and design of the parts of the code you are interested in.
* Start to help with code review. Yes please - no permission needed!
* Find small things to start working on and submit PRs for those.
* Gradually build up experience and knowledge about the code and
development model until you can tackle something bigger.

> Are you guys interested in becoming code reviewers for the kernel I'm
> describing?

We encourage the community to develop kernels as projects separate
from the main jupyter org. An example of this is the spark kernel that
IBM developed:

https://github.com/ibm-et/spark-kernel

IBM did this with essentially no interaction with us. But again, I
don't think that writing a new kernel makes sense when R, Python and
Scala kernels already exist. For the nice visualizations/rich display
that Hue and Zeppelin offer, you don't need a kernel as much as
integration with existing visualization libraries. This is where
digging into our existing architecture will show you a lot of the best
ways to go.

Cheers,

Brian

Alejandro Guerrero

Aug 31, 2015, 7:06:43 PM
to Project Jupyter, agg....@gmail.com, aubero...@gmail.com, a...@microsoft.com, elli...@gmail.com
Thanks for the detailed explanation Brian.

We want to enable the ability to submit Spark code from a local Jupyter installation to a remote Spark cluster. IBM's Spark kernel does not enable that scenario out of the box. We believe that Livy is best suited for this, as it already provides a REST endpoint.

As for the Jupyter way to land this, the kernel (new or existing alike) would need to keep some Spark-related state (e.g. the URL of the REST endpoint, the state of the cluster, configurations...).
I was thinking of implementing a wrapper kernel that would take care of maintaining that state, using magics to indicate the different languages of Spark code to submit to the remote cluster. I arrived at this approach by reading code for different kernels and reading your discussion group and mailing lists. A thread I found enlightening was: http://mail.scipy.org/pipermail/ipython-dev/2014-August/014770.html
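
Concretely, the skeleton I have in mind is something like this (a sketch on top of the standard wrapper-kernel machinery; submit_to_livy is a hypothetical helper, stubbed out here):

    from ipykernel.kernelbase import Kernel

    class LivyKernel(Kernel):
        implementation = 'livy'
        implementation_version = '0.1'
        banner = 'Spark via Livy'
        language_info = {'name': 'scala',  # switched per cell via magics
                         'mimetype': 'text/x-scala',
                         'file_extension': '.scala'}

        def submit_to_livy(self, code):
            # Placeholder: a real implementation would POST `code` to the
            # Livy REST API and poll for the result.
            return 'submitted: %s' % code

        def do_execute(self, code, silent, store_history=True,
                       user_expressions=None, allow_stdin=False):
            output = self.submit_to_livy(code)
            if not silent:
                self.send_response(self.iopub_socket, 'stream',
                                   {'name': 'stdout', 'text': output})
            return {'status': 'ok', 'execution_count': self.execution_count,
                    'payload': [], 'user_expressions': {}}

    if __name__ == '__main__':
        from ipykernel.kernelapp import IPKernelApp
        IPKernelApp.launch_instance(kernel_class=LivyKernel)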

The existing Scala/R kernels would not enable the scenario I described, as the code to be executed by Spark would not run on the machine that has Jupyter at all. In essence, Jupyter would become a very nice Spark submission engine that renders nice visualizations.

As you mentioned, for the nice visualizations, I would only need to integrate the wrapper kernel for Livy with the existing visualization libraries, or the work Auberon is doing. That's why I wanted to know what Auberon was doing :)

Does that make sense Brian? Am I confused? I'm wondering if a call would be beneficial. Would you be able to talk for a half hour on this topic?

Best,
Alejandro

Peter Parente

Aug 31, 2015, 8:56:04 PM
to Project Jupyter, agg....@gmail.com, aubero...@gmail.com, a...@microsoft.com, elli...@gmail.com

The existing Scala/R kernels would not enable the scenario I described, as the code to be executed by Spark would not run on the machine that has Jupyter at all. In essence, Jupyter would become a very nice Spark submission engine that renders nice visualizations.


The ability to launch Jupyter kernels remotely from the Notebook server is one of the driving use cases for the Kernel Provisioning and Gateway API Kyle and I have been drafting over in a hackpad (https://jupyter.hackpad.com/Kernel-Provisioning-Service-API-sZx2qqNHnY1).

My current thinking is that it would be beneficial to extract an API matching the one that currently exists on the server side of Jupyter Notebook for kernel CRUD operations as well as Websocket communication with running kernels. The client-facing API would be compatible with the new jupyter-js-services package and friendly to any language that has support for HTTP / Websockets. On the backend, the gateway could potentially plug into any cluster manager (e.g., Mesos, Kubernetes, Docker Swarm, single-host just-run-it, ...) to launch kernels, assuming the ability to write "drivers" for such purposes. After launching a kernel, the gateway itself would serve as a websocket-to-0mq proxy to fix the "impedance mismatch" between web clients and kernels on a cluster.
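
To illustrate just the proxying piece, a schematic tornado + pyzmq relay could look like this (the address and route are placeholders, and real traffic would be serialized Jupyter protocol messages rather than raw frames):

    import zmq
    from tornado import ioloop, web, websocket
    from zmq.eventloop.zmqstream import ZMQStream

    class ChannelsHandler(websocket.WebSocketHandler):
        """Relay traffic between a browser websocket and a kernel 0mq socket."""

        def open(self):
            # Placeholder address standing in for the shell channel of an
            # already-provisioned kernel; a real gateway would look this up.
            sock = zmq.Context.instance().socket(zmq.DEALER)
            sock.connect('tcp://kernel-host:50001')
            self.stream = ZMQStream(sock)
            self.stream.on_recv(self.forward_to_browser)

        def on_message(self, message):
            # Browser -> kernel.
            self.stream.send(message if isinstance(message, bytes)
                             else message.encode('utf-8'))

        def forward_to_browser(self, frames):
            # Kernel -> browser.
            self.write_message(b''.join(frames), binary=True)

        def on_close(self):
            self.stream.close()

    app = web.Application([(r'/api/kernels/channels', ChannelsHandler)])
    app.listen(8888)
    ioloop.IOLoop.current().start()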

Many details to hash out. Many alternatives. Many other use cases. I hope to post again when there's more to say.

Brian Granger

Sep 1, 2015, 12:37:18 PM
to Alejandro Guerrero, Project Jupyter, Auberon Lopez, Alejandro Guerrero Gonzalez
Alejandro,

> We want to enable the ability to submit Spark code from a local Jupyter
> installation to a remote Spark cluster. IBM's Spark kernel does not enable
> that scenario out of the box. We believe that Livy is best suited for this,
> as it already provides a REST endpoint.
>
> As for the Jupyter way to land this, the kernel (new or existing alike)
> would need to keep some Spark-related state (e.g. the URL of the REST
> endpoint, the state of the cluster, configurations...).
> I was thinking of implementing a wrapper kernel that would take care of
> maintaining that state, using magics to indicate the different languages
> of Spark code to submit to the remote cluster. I arrived at this approach
> by reading code for different kernels and reading your discussion group
> and mailing lists. A thread I found enlightening was:
> http://mail.scipy.org/pipermail/ipython-dev/2014-August/014770.html

I think you are still missing the fundamental abstraction that Jupyter
exposes. The existing R, Python and Scala kernels are capable of
running *any* code in those languages.

If Livy exposes a REST endpoint, you can simply use any HTTP client
library in R/Python/Scala to talk to Livy. But that is not a new
kernel, it is just regular code running in one of the existing
kernels. A kernel is just a process that runs any code. For example,
the Python example code that is in the Livy README here:

https://github.com/cloudera/hue/tree/master/apps/spark/java#spark-example

can just be pasted into the Python kernel and used immediately. It
will *just work*. Same with the R example code:

https://github.com/cloudera/hue/tree/master/apps/spark/java#sparkr-example

Now those APIs are a bit painful for users, so you might want to wrap
them in a magic function, etc. But a new kernel just doesn't make
sense for this.
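
To make that concrete, here is a rough sketch of driving Livy with requests from the Python kernel (the endpoint paths follow the Livy README; livy-host is made up):

    import json, time, requests

    livy = 'http://livy-host:8998'
    headers = {'Content-Type': 'application/json'}

    # Create an interactive Spark session.
    r = requests.post(livy + '/sessions',
                      data=json.dumps({'kind': 'spark'}), headers=headers)
    session_url = livy + r.headers['location']

    # Submit a statement and poll until the result is available.
    r = requests.post(session_url + '/statements',
                      data=json.dumps({'code': '1 + 1'}), headers=headers)
    statement_url = livy + r.headers['location']
    result = requests.get(statement_url, headers=headers).json()
    while result['state'] != 'available':
        time.sleep(1)
        result = requests.get(statement_url, headers=headers).json()
    print(result['output'])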

> The existing Scala/R kernels would not enable the scenario I described, as
> the code to be executed by Spark would not run on the machine that has
> Jupyter at all. In essence, Jupyter would become a very nice Spark
> submission engine that renders nice visualizations.

Part of the difficulty of building your own kernel is that you lose
all of the spectacular libraries that already exist in Python/R/Scala
(pandas, ggplot, dplyr) that users will also want to use.

>
> As you mentioned, for the nice visualizations, I would only need to
> integrate the wrapper kernel for Livy with the existing visualization
> libraries, or the work Auberon is doing. That's why I wanted to know what
> Auberon was doing :)
>
> Does that make sense Brian? Am I confused? I'm wondering if a call would be
> beneficial. Would you be able to talk for a half hour on this topic?

Yes, I can do that later today (after 2pm). What times do you have available?

Alejandro Guerrero

Sep 1, 2015, 2:20:36 PM
to Project Jupyter, agg....@gmail.com, aubero...@gmail.com, a...@microsoft.com, elli...@gmail.com
Hi Brian,

Now that makes sense. I knew that the existing R, Python, Scala kernels can run any code and things would just work.

I didn't know magics were that powerful. I'll take a look.

As for the meeting, I can do any time after 3. Do you want me to schedule something? Where should I send the invite to?

Thanks!
Alejandro

Brian Granger

Sep 2, 2015, 1:12:59 AM
to Alejandro Guerrero, Project Jupyter, Auberon Lopez, Alejandro Guerrero Gonzalez
Let's try for tomorrow morning. Why don't you ping me on the
jupyter/jupyter gitter channel when you are around.

Brian

Brian Granger

Sep 5, 2015, 10:30:14 PM
to Alejandro Guerrero, Jeremy Freeman, Dan Gisolfi, Scott Sanderson, Project Jupyter, Auberon Lopez, Alejandro Guerrero Gonzalez
Hi all, I wanted to update the community on some discussion we had
this week with Alejandro and Auberon about Spark+Jupyter stuff.

# Spark magics

* Auberon and Alejandro are going to create a jupyter incubation
project to write a set of IPython magics for working with
PySpark/SparkR/Scala from Python. In particular the focus is going to
be on creating a uniform API for working with local and remote
clusters (through Livy:
https://github.com/cloudera/hue/tree/master/apps/spark/java).
* The Jupyter incubation process is being discussed here
https://github.com/jupyter/governance/pull/3 and will hopefully be
approved this weekend sometime.
* It would be great to get comments on their incubation when it is
submitted (will be posted to this list).

# Rich display/viz of Pandas data frames

The idea is for the above Spark magics to return Pandas
DataFrames whenever a concrete representation of an RDD/DataFrame is
requested. There is strong interest in developing better rich
representations of DataFrames in the notebook, both for tabular data
itself, as well as common statistical visualizations.

## Tabular data

As an initial starting point for display of tabular data, we are going
to look at qgrid, which has been developed at Quantopian:

https://github.com/quantopian/qgrid

Minimally we will submit some pull requests to qgrid to enable qgrid
as the default rich repr of DataFrames.
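
As a sketch of how that could plug in via IPython's display machinery (not a settled design, just the shape of it):

    import pandas as pd
    import qgrid
    from IPython import get_ipython

    def _show_with_qgrid(df):
        # Render the DataFrame as an interactive qgrid widget instead of
        # the default static HTML table.
        qgrid.show_grid(df)

    # Run the function whenever a DataFrame is the result of a cell.
    ip = get_ipython()
    ip.display_formatter.ipython_display_formatter.for_type(
        pd.DataFrame, _show_with_qgrid)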

## Visualization

There are a number of excellent visualization libraries in Python:

http://matplotlib.org/
http://bokeh.pydata.org/en/latest/
https://plot.ly/
http://lightning-viz.org/
http://mpld3.github.io/
http://stanford.edu/~mwaskom/software/seaborn/

But, after lots of conversations with various folks this summer,
including the developers of these viz libraries, it seems that there
are some missing pieces in the Python+viz ecosystem: namely, high-level
statistical visualization such as Tableau
(http://www.tableau.com/) and Jeff Heer's vega-lite
(https://github.com/uwdata/vega-lite) and polestar
(https://github.com/uwdata/polestar). As an example of what is
starting to be possible, here is a notebook showing polestar working in
the notebook:

http://nbviewer.ipython.org/github/uwdata/ipython-vega/blob/master/Example.ipynb

I had excellent conversations at PyData Seattle with Peter Wang
(Bokeh), Thomas Caswell (Matplotlib), Jake VanderPlas (mpld3), Matt Sundquist
(Plotly) and Jeff Heer. The idea that I was proposing is that our
community starts to adopt the vega-lite spec for specifying high-level
visualizations. There is still a lot to be worked out, but here is the
idea:

* Write a user-focused high-level plotting API whose sole goal is to
emit vega-lite specs.
* Write code in Matplotlib, Bokeh, Plotly that can consume those
vega-lite specs and produce the relevant visualizations.
* Write new, notebook-focused UIs (maybe polestar?) that can emit
those same vega-lite specs without requiring the user to code.
* Hook it all up in a reactive way using traitlets.

The benefit of this approach is that we won't end up with 6 different
high level plotting APIs and that each existing plotting library can
continue to focus on what it does best. This will allow users to also
customize their high level visualizations using the native
matplotlib/bokeh/plotly APIs as needed.
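
To give a flavor of what an emitted spec could look like, here is a hand-written example in the spirit of vega-lite (the syntax is approximate, since the spec is still evolving): a scatter plot of column "b" against column "a", expressed as a plain Python dict:

    spec = {
        "data": {"values": [
            {"a": 1, "b": 28}, {"a": 2, "b": 55}, {"a": 3, "b": 43},
        ]},
        "mark": "point",
        "encoding": {
            "x": {"field": "a", "type": "quantitative"},
            "y": {"field": "b", "type": "quantitative"},
        },
    }

The high-level plotting API would emit dicts like this, and each backend would interpret them with its own renderer.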

I encourage folks who are interested in this work to start thinking
about this direction and provide feedback here. I am guessing that we
will start to create a Jupyter Enhancement Proposal over the next
month that starts to rough out the UIs and APIs for this.

Cheers,

Brian

Matthias Bussonnier

Sep 7, 2015, 8:23:10 AM
to jup...@googlegroups.com, Alejandro Guerrero, Jeremy Freeman, Dan Gisolfi, Scott Sanderson, Auberon Lopez, Alejandro Guerrero Gonzalez
Brian,

I think that most of the content of this mail would have been useful
as a top-level thread, and not as a response to a Spark thread.
Most people who are not interested might have unsubscribed from the
Spark thread, and they will miss it.
Especially since you ask for feedback at the end of the mail.

I think that some of this content might be nice in a weekly recap.

Thanks,
--
M

Brian Granger

Sep 7, 2015, 3:34:16 PM
to Project Jupyter, Alejandro Guerrero, Jeremy Freeman, Dan Gisolfi, Scott Sanderson, Auberon Lopez, Alejandro Guerrero Gonzalez
Yeah, I thought the same thing after I sent it. I will post to the
main list soon about the different aspects of this.

Cheers,

Brian