I can appreciate the benefit of ivisual/jupyter/glowscript and any framework that uses WebGL and GPU rendering over CPU rendering. The speedup in my limited tests is (as expected) incredible. However, I don't have the time or skills to make any of these competitive with classic VPython in terms of coverage, and I need that coverage: all of my rather large codebase for this project is in classic VPython, and I am on a tight schedule. So I decided to try implementing custom UV mapping in classic VPython by changing materials.py. I had no success with this, due to my limited knowledge of VPython's inner workings and my lack of experience with GLSL.
However, I have managed to create a fully functional Wavefront .obj parser for VPython, which I mentioned I was working on earlier. Assuming the .obj file is in the standard form and is accompanied by a .mtl file of the same name (the two are companion files), convert_obj_multi.py will create a VPython scene from the .obj and .mtl files, and will even pull the image files in from their location on disk using PIL and apply them to the correct faces (not thoroughly tested, but preliminary tests look good) using VPython's standard "cubic" mapping. You can change the mapping, of course; it won't matter much, since I can't get the correct mapping anyway.
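To give a feel for what the parser does, here is a heavily simplified sketch (this is not the actual convert_obj_multi.py, and the function name and return structure are my own invention for illustration): it walks the .obj text, collecting vertices ("v" records) and texture coordinates ("vt" records), and groups faces ("f" records) under the most recent "o objectName" record.

```python
def parse_obj(lines):
    """Sketch: parse .obj text into {object_name: {"faces": [...]}}.

    Each face is a list of (vertex_xyz, uv_or_None) pairs.
    """
    objects = {}
    verts, uvs = [], []          # .obj indices are global across objects
    current = None
    for line in lines:
        parts = line.split()
        if not parts or parts[0].startswith('#'):
            continue                         # skip blanks and comments
        tag = parts[0]
        if tag == 'o':                       # start of a new named object
            current = objects.setdefault(parts[1], {"faces": []})
        elif tag == 'v':                     # geometric vertex: x y z
            verts.append(tuple(float(c) for c in parts[1:4]))
        elif tag == 'vt':                    # texture coordinate: u v
            uvs.append(tuple(float(c) for c in parts[1:3]))
        elif tag == 'f' and current is not None:
            # each face entry is v_idx/vt_idx[/vn_idx]; indices are 1-based
            face = []
            for vertex in parts[1:]:
                idx = vertex.split('/')
                v = verts[int(idx[0]) - 1]
                vt = uvs[int(idx[1]) - 1] if len(idx) > 1 and idx[1] else None
                face.append((v, vt))
            current["faces"].append(face)
    return objects
```

The real converter additionally reads the companion .mtl file to resolve material names to image paths and builds VPython faces objects from the grouped data.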
To get the correct mapping, another parameter would have to be passed to the material: a list of vectors representing texture coordinates into the given image files (as the material's data parameter). That data would then have to be processed: the image file sampled at specific points and broken up according to the texture coordinates, with interpolation done across each edge (where an edge is the straight line between any two texture coordinates of a given face). I'm a bit tired, so that may not be entirely clear, but I know how it would have to be done; I just don't know how to make VPython and GLSL do what I need. Regardless, I am handing this off to any volunteer who wants to take up the challenge, and if someone does, please let me know how you do it.
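For anyone picking this up: the interpolation step described above is exactly what a GLSL rasterizer does for free when per-vertex texture coordinates are passed as varyings. As a plain-Python sketch of the idea (names and signature are mine, not anything in VPython), here is barycentric interpolation of a triangle's UVs at an interior point:

```python
def barycentric_uv(p, a, b, c, uv_a, uv_b, uv_c):
    """Interpolate UVs at 2D point p inside triangle (a, b, c).

    a, b, c are the triangle's 2D vertex positions; uv_a, uv_b, uv_c
    are the texture coordinates assigned to those vertices.
    """
    def cross(o, u, v):
        # z-component of (u - o) x (v - o): twice the signed area
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])

    area = cross(a, b, c)
    wa = cross(p, b, c) / area   # barycentric weight of vertex a
    wb = cross(p, c, a) / area   # barycentric weight of vertex b
    wc = 1.0 - wa - wb           # weights sum to 1
    return (wa * uv_a[0] + wb * uv_b[0] + wc * uv_c[0],
            wa * uv_a[1] + wb * uv_b[1] + wc * uv_c[1])
```

In a shader you would instead declare the UV as a varying in the vertex shader and sample the texture with it in the fragment shader; the hardware performs this same weighted blend per pixel.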
Bruce, also: the convert_obj_multi.py file is pretty stable and useful, despite not getting the mapping right due to my limitations. So if you want to host it among your contributed programs, you are more than welcome to (assuming it goes out under the GNU General Public License, so that everyone is free to use, modify, and distribute it; I assume that is the case anyway, I just think it should stay open-source). One caveat: it depends on PIL, which is an external library (it also uses time and os, but those are in the standard library). Not sure if that matters to you. I'm using Python 2.7, of course.
Below are sample screenshots of a scene I designed in Blender: how it is supposed to look (rendered in Blender) and how VPython actually renders it. Feel free to ask me how the converter works, but basically it creates a container object holding a list of faces objects. All of the faces objects in this container share the same frame, and each one is derived from one object in the parsed .obj file (denoted in the .obj file as "o objectName", per the standard).
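In outline, the container structure looks something like the following. This is an illustrative mock, not the actual convert_obj_multi.py API: the class names are mine, and the placeholder stands in for VPython's visual.frame and faces objects.

```python
class SceneObject(object):
    """Stands in for one VPython faces object, built from one 'o' record."""
    def __init__(self, name, frame):
        self.name = name
        self.frame = frame        # every object shares the one scene frame

class ObjScene(object):
    """Top-level container: one shared frame, one entry per .obj object."""
    def __init__(self, object_names):
        self.frame = object()     # placeholder for visual.frame()
        self.objects = [SceneObject(n, self.frame) for n in object_names]
```

Keeping everything in one frame means the whole parsed scene can be moved, rotated, or scaled as a unit by transforming that single frame.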
Attached are:
convert_obj_multi.py (source code)
ArchSceneBlenderRender.png (how the scene is supposed to look)
ArchScenePythonCubicMapping.png (how it actually looks)
ArchUVUnwrap.png (the UV mapping for the Arch in the scene)
SphereUVUnwrap.png (the UV mapping for the Sphere in the scene)
Arch.obj (the Wavefront .obj file I focused my testing on)
Arch.mtl (the Wavefront .mtl file)
Arch.png (the texture image file for the Arch)
Sphere.png (the texture image file for the Sphere)