> However, I have some gripes on the near-term issues:
What do you mean by "near-term"? The whole article proposes a really
long-term view.
> Pure python is important up until you have a community large enough to
> take care of the packaging and distribution for you. Anaconda itself
> does not fix this unless CA has taken on the burden of packaging. I
> would love to have C-compiled code to boost performance, but I don't
> think we're there yet...
In my lab we've been working on a cross-platform matrix build system,
and it is not that complicated. It does require some upfront work and
investment, but then you get a system that builds conda packages
automatically for you at every commit (or PR, or release...). Note
that this is made relatively easy thanks to conda and binstar.
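To give a rough idea of what one job in such a matrix boils down to,
here is a purely schematic sketch (the recipe path and Python versions
are hypothetical placeholders, not our actual configuration):

```python
# Hypothetical sketch of one job in a conda/binstar build matrix.
import subprocess

RECIPE = "conda-recipe"            # placeholder path to the conda recipe
PYTHON_VERSIONS = ["2.7", "3.4"]   # one matrix axis; the OS is the other

for py in PYTHON_VERSIONS:
    # Build the package for this Python version.
    subprocess.check_call(["conda", "build", RECIPE, "--python", py])
    # Ask conda-build where it put the result, then push it to binstar.
    pkg = subprocess.check_output(
        ["conda", "build", RECIPE, "--python", py, "--output"]
    ).decode().strip()
    subprocess.check_call(["binstar", "upload", pkg])
```

The CI service then runs something like this across operating systems
for you on every push, which is where the "matrix" part comes in.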
I agree that pure Python makes packaging and distribution very
straightforward, but I don't think that alone justifies locking the
project into Python. I think other platforms like R, Julia, or even
MATLAB could potentially benefit from VisPy, and that will never
happen if the entire codebase is in Python. That said, sticking with
pure Python totally makes sense in the first few years (say, 5 years)
of the project.
> I'm not sure that's accurate. I've spent a lot of time profiling
> VisPy, and the performance issues almost always appear where we are
> doing unnecessary work--rebuilding large structures when we should be
> modifying the existing structures. It just takes a lot more work to
> implement it the right way, and we cut corners in many places in order
> to have a functional prototype. Throwing C++ or LLVM at this problem
> isn't going to automatically solve it, especially given how much work
> we have left to do in Python.
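To make the quoted point concrete, here is a purely schematic sketch
with a hypothetical numpy-backed vertex array (not VisPy's actual
internals):

```python
import numpy as np

# A hypothetical numpy-backed vertex array standing in for whatever
# structure actually backs a visual.
positions = np.zeros((1000000, 3), dtype=np.float32)

# Slow pattern: rebuild the whole structure to change one element,
# paying a full allocation and copy (and a full re-upload) each time.
def move_point_rebuild(positions, index, new_point):
    out = positions.copy()
    out[index] = new_point
    return out

# Cheap pattern: modify the existing structure in place, so only the
# small slice that actually changed needs to be touched and re-uploaded.
def move_point_in_place(positions, index, new_point):
    positions[index] = new_point
    return positions
```

Neither C++ nor LLVM changes which of those two patterns gets written.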
To be clear, I wasn't proposing throwing C++ or LLVM at the current
approach; as you say, that would make no sense at all. Rather, the
proposed idea is radically different, and in a sense more limited, in
that you probably lose some of the dynamic aspects of visualizations.
For example, I am not proposing to implement a dynamic scene graph at
all, at least not in a first version. I think you can get a good
number of interesting visualizations without a dynamic scene graph.
In other words, if VisPy can cover 100% of the use cases, the system
I propose would only cover 90% or so. But I expect it to be much less
complex. Again, it's all rather speculative, and the only way to know
would be to experiment with the idea once Vulkan is released. That's
something I'd like to do, and it can be done completely independently
of the normal development of VisPy.
> Did you mean we cannot support them on the browser or on the desktop?
> It has always been the plan to support them wherever the OpenGL
> implementation allows it, and to fall back to slower techniques when
> necessary.
I meant on the desktop. I know it's the plan, but as far as I know no
one has actually done it yet? It might not be that simple; for
example, I suspect you'd have to bypass GLIR somehow and fall back to
raw OpenGL...