This is certainly true.
> - it performs error
> checking, making error diagnosis much easier.
By default, pyglet performs a GL error check after every call.
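(For reference, that checking is controlled by pyglet's debug_gl option. A
minimal sketch, assuming a pyglet 1.x-style setup where the option has to be
set before pyglet.gl is imported:)

    import pyglet

    # Disable the per-call glGetError check that pyglet adds by default
    # (great for diagnosis, but it costs a Python-level check per GL call).
    pyglet.options['debug_gl'] = False

    # Import the GL bindings only after the option is set, so the wrappers
    # are built without the error-checking layer.
    from pyglet import gl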
> However, this suggests a brilliant compromise: Use PyOpenGL for all
> your OpenGL calls to begin with. This will be easiest to code and get
> working. Then, if and when you want to optimise for performance,
> replace just the half-dozen OpenGL calls inside your innermost render
> loop with the pyglet bindings. This will give you the max performance,
> while still allowing you to use friendly PyOpenGL for 95% of your
> work.
This approach is a little dangerous, as it could introduce obscure bugs
where the translation from PyOpenGL to pyglet isn't done entirely
correctly.
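For the record, the hybrid being suggested looks roughly like this (an
untested sketch with made-up names, not anyone's actual project). Both
libraries expose the same C entry points, which is why the translation is
mostly mechanical, and also why it's easy to get one of those mechanical
bits subtly wrong:

    # Friendly, error-checked PyOpenGL for the 95% of setup and state code.
    from OpenGL import GL as pgl
    # Thin ctypes bindings from pyglet for the hot path only.
    from pyglet import gl

    def setup():
        # One-off state setup: conversion and error checking don't matter here.
        pgl.glClearColor(0.0, 0.0, 0.0, 1.0)
        pgl.glEnable(pgl.GL_DEPTH_TEST)

    def draw_sprites(sprites):
        # Innermost render loop: the handful of calls worth switching over.
        # 'sprites' and its x/y/display_list attributes are hypothetical.
        for s in sprites:
            gl.glLoadIdentity()
            gl.glTranslatef(s.x, s.y, 0.0)
            gl.glCallList(s.display_list)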
And of course you can get the maximum performance by compiling your
OpenGL-using module with Cython ;-)
PyOpenGL also has some Cython-based accelerators, but I've not looked
into where they're used.
Richard
The "1.5 to 3 times slower than pyglet bindings" I mentioned was
measured using PyOpenGL-accelerate, the Cython module for PyOpenGL. I
may have been doing it wrong. For one thing, Mike expects I'll see
better performance using VBOs and uniforms instead of vertex arrays
and modelview transforms. On the other hand, I expect using those will
improve performance with the pyglet bindings too.
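For concreteness, the VBO-style path with pyglet 1.x's graphics module looks
roughly like this (a sketch, not the benchmark code being discussed).
pyglet.graphics keeps the vertex data in a buffer object where available, so
the per-frame Python work collapses to a single draw call:

    import pyglet
    from pyglet import gl

    # Built once: positions and colours for two points, stored by pyglet.
    vlist = pyglet.graphics.vertex_list(
        2,
        ('v2f', (10.0, 15.0, 30.0, 35.0)),   # x, y per vertex
        ('c3B', (255, 0, 0, 0, 255, 0)),     # r, g, b per vertex
    )

    def on_draw():
        # Per frame: one Python-level call instead of one per vertex/attribute.
        vlist.draw(gl.GL_POINTS)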
Cheers,
Jonathan
Tone was intended to be helpful ;-) Any bugs introduced in a
transition would typically be quite repeatable and obvious. Now,
switching from pyglet to C-based calls via Cython, there's a source of
real crashing bugs and other oddities :-) [oh, how I haven't missed
you, Bus Error]
> Cythoning is going well, in half-hour snatches over breakfast and
> lunch! Doubled my frame rate with the low-hanging fruit (type tags in
> my inner render loop.) Now I'm doing judicious bits of my matrix
> class, which should leave me with 100% 'white' source lines in my
> inner render loop ('white' as in the colored output of "cython -a".)
> Published results soon.
Awesome. Indeed I've also found that Cython'ing mostly helps when
applied to intensive Python calculation code, rather than just
Cython'ing some random calls into OpenGL. My core "render a bunch of
landscape segments" code is Cython'ed because it operates on actual C
structures, so I've not actually compared how it performs to a vanilla
Python loop.
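For illustration, the kind of hot spot that benefits looks like the
pure-Python loop below (a hypothetical example, not anyone's actual matrix
class). It's line after line of float arithmetic and indexing, which is
exactly what cdef'ing the types in the .pyx version strips the interpreter
overhead from:

    def transform_vertices(m, vertices):
        """Apply a 3x3 row-major matrix (9 floats) to (x, y, z) tuples."""
        out = []
        for x, y, z in vertices:
            out.append((
                m[0] * x + m[1] * y + m[2] * z,
                m[3] * x + m[4] * y + m[5] * z,
                m[6] * x + m[7] * y + m[8] * z,
            ))
        return out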
A little judicious use can go a long way :-)
Richard
Geez, OpenGL, way to confuse the hell out of me by calling a second,
completely and utterly different thing a Vertex Array (Object)!
Those similarly confused and in the dark may read this:
http://www.opengl.org/wiki/Vertex_Array_Object
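The short version, as a sketch (PyOpenGL calls; the second function assumes
a GL 3.0+ context is current): a client-side vertex array points GL at data
sitting in your own process every draw, while a Vertex Array Object is just
a server-side record of which buffers and attribute bindings are enabled.

    from OpenGL import GL

    def draw_with_client_vertex_array(vertex_data):
        # Old-style "vertex array": GL reads the data from your process each
        # time you draw (vertex_data is a flat list of x, y, z floats).
        GL.glEnableClientState(GL.GL_VERTEX_ARRAY)
        GL.glVertexPointer(3, GL.GL_FLOAT, 0, vertex_data)
        GL.glDrawArrays(GL.GL_TRIANGLES, 0, len(vertex_data) // 3)
        GL.glDisableClientState(GL.GL_VERTEX_ARRAY)

    def make_vertex_array_object():
        # "Vertex Array Object": holds no vertex data itself, just a bundle
        # of attribute/buffer bindings you can restore with one bind call.
        vao = GL.glGenVertexArrays(1)
        GL.glBindVertexArray(vao)
        # ... glBindBuffer / glVertexAttribPointer / glEnableVertexAttribArray ...
        GL.glBindVertexArray(0)
        return vao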
Richard
-Casey
That's a very, very useful high-level description of what's behind
OpenGL's current design!
I've been longing for such an explanation for a long time. Thanks a
lot for it!!!
Cheers,
Olivier
I worry, though, that if I use OpenGL 3 or higher, end users who
aren't AAA gaming fanatics won't be able to run it. For example, I
consider myself a gamer, but none of the three PCs I use at home and
at work supports higher than OpenGL 2.1. Nor did any of my wife's three
PCs, until she took delivery of a new Alienware last week. Is this a
realistic worry, or is OpenGL 3+ penetration higher than I estimate?
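A runtime check with a fallback code path is one way to hedge against that;
a minimal sketch with pyglet's gl_info module (assuming pyglet 1.x, and that
a context has been created first):

    import pyglet
    from pyglet.gl import gl_info

    # gl_info reads the version from the current context, so create one first.
    window = pyglet.window.Window(visible=False)

    if gl_info.have_version(3, 0):
        print('GL 3+ available:', gl_info.get_version())
    else:
        print('Falling back to the GL 2.1 path:', gl_info.get_version())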