Are all OpenGL functions available in pyglet?


viper

Nov 27, 2010, 7:44:38 AM
to pyglet-users
Hi,

Some quick questions I hope someone will be able to answer for me:

1) What OpenGL functions are available in pyglet? Is there somewhere I
can look to see the complete list?

2) Does pyglet support OpenGL 4.1 functions? I found this link:
http://codeflow.org/entries/2009/jul/31/gletools-advanced-pyglet-utilities/
for a tool that apparently adds advanced functions to pyglet. Does
pyglet now encompass the functions of this tool?

I'm trying to learn OpenGL. Since I know some basic python programming
I thought I'd try to learn OpenGL using python. Any help with the
above questions would be greatly appreciated.

Cheers

Florian Bösch

Nov 28, 2010, 8:49:17 AM
to pyglet-users
I wrote gletools, and everything it does stands on top of pyglet's
OpenGL API wrapping. Gletools wraps some of OpenGL's functionality in a
higher-level way than the direct OpenGL API (i.e. it saves you a ton
of spaghetti code).

A new article of mine is up on codeflow about OpenGL 4
tessellation shading, http://codeflow.org/entries/2010/nov/07/opengl-4-tessellation/
for which pyglet (the development version from Mercurial) provides the
necessary OpenGL API functions, so yes, pyglet supports OpenGL 4.

I would suggest this tutorial on OpenGL, http://www.arcsynthesis.org/gltut/
as well as buying the red book (programming guide), blue book (reference
manual), orange book (GLSL) and SuperBible. To program GPUs
effectively it helps a lot if you're versed in vector and matrix
maths.

A word of warning though, 3d programming is a pretty extensive topic,
and if you want to explore it in its depth it's easy to spend years or
decades on it.

Jonathan Hartley

Nov 30, 2010, 2:15:33 PM
to pyglet-users
On Nov 27, 12:44 pm, viper <viper...@gmail.com> wrote:
> Hi,
>
> Some quick questions I hope someone will be able to answer for me:
>
> 1) What OpenGL functions are available in pyglet? Is there somewhere I
> can look to see the complete list?
>
> 2) Does pyglet support OpenGL 4.1 functions? I found this link:http://codeflow.org/entries/2009/jul/31/gletools-advanced-pyglet-util...
> for a tool that apparently adds advanced functions to pyglet. Does
> pyglet now encompass the functions of this tool?
>
> I'm trying to learn OpenGL. Since I know some basic python programming
> I thought I'd try to learn OpenGL using python. Any help with the
> above questions would be greatly appreciated.
>
> Cheers


To see the complete list of OpenGL functions supported by pyglet,
you can look at the source code. Check out or browse the
source in Mercurial at http://code.google.com/p/pyglet/source.

Alternatively, if you have installed pyglet, the source will already
be in your Python directory, e.g. on Windows
C:\Python27\Lib\site-packages\pyglet\gl.py. To find where it is on
your system, start a Python prompt and type:

>>> import pyglet
>>> pyglet.__file__

In that same directory, there will also be other modules containing
the extra OpenGL stuff outlined in the pyglet.gl docs:
http://www.pyglet.org/doc/api/pyglet.gl-module.html


An alternative to using the pyglet OpenGL bindings is to use the
PyOpenGL project. Happily, this is dead simple in conjunction with
pyglet: you can make a pyglet project as normal, but wherever you
want to call an OpenGL function, you can simply import it from
OpenGL.GL instead of from pyglet.

The advantage of using PyOpenGL is that it is generally friendlier and
more Pythonic than pyglet's OpenGL bindings - it performs error
checking, making error diagnosis much easier. It also performs type
conversion of the parameters you pass it, meaning that, if you wish,
you can often just pass simple Python lists instead of allocating
ctypes arrays. The OpenGL functions offered by PyOpenGL are documented here:
http://pyopengl.sourceforge.net/documentation/

The disadvantage of using PyOpenGL is that all the extra work it does
for you makes it a little slower - in practice from 1.5 to 3 times
slower than using pyglet's OpenGL bindings.
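To make the contrast concrete, here is a minimal sketch of the two calling styles, using only the standard library's ctypes (pyglet defines GLfloat as an alias of ctypes.c_float, so plain ctypes stands in for it here); the glLightfv call mentioned in the comments is just an illustrative GL entry point, not something this snippet actually invokes:

```python
import ctypes

# pyglet's raw bindings expect ctypes values; GLfloat is an alias of
# ctypes.c_float, so plain ctypes stands in for it here.
GLfloat = ctypes.c_float

position = [1.0, 2.0, 3.0, 1.0]

# pyglet style: allocate the ctypes array yourself before a call like
#   glLightfv(GL_LIGHT0, GL_POSITION, position_array)
position_array = (GLfloat * len(position))(*position)

# PyOpenGL style would simply be
#   glLightfv(GL_LIGHT0, GL_POSITION, position)
# because PyOpenGL converts the Python list to a C array for you.

print(list(position_array))  # [1.0, 2.0, 3.0, 1.0]
```

The round trip through ctypes is exactly the boilerplate PyOpenGL saves you.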

However, this suggests a brilliant compromise: Use PyOpenGL for all
your OpenGL calls to begin with. This will be easiest to code and get
working. Then, if and when you want to optimise for performance,
replace just the half-dozen OpenGL calls inside your innermost render
loop with the pyglet bindings. This will give you the max performance,
while still allowing you to use friendly PyOpenGL for 95% of your
work.

Cheers,

Jonathan

Richard Jones

Nov 30, 2010, 4:05:11 PM
to pyglet...@googlegroups.com
On Wed, Dec 1, 2010 at 6:15 AM, Jonathan Hartley <tar...@tartley.com> wrote:
> The advantage of using PyOpenGL is that it is generally friendlier &
> more Pythonic than pyglet's OpenGL bindings

This is certainly true.


> - it performs error
> checking, making error diagnosis much easier.

By default pyglet performs a gl error check after every call.


> However, this suggests a brilliant compromise: Use PyOpenGL for all
> your OpenGL calls to begin with. This will be easiest to code and get
> working. Then, if and when you want to optimise for performance,
> replace just the half-dozen OpenGL calls inside your innermost render
> loop with the pyglet bindings. This will give you the max performance,
> while still allowing you to use friendly PyOpenGL for 95% of your
> work.

This approach is a little dangerous as it could introduce obscure bugs
where the translation of PyOpenGL to pyglet isn't done entirely
correctly.

And of course get the maximum performance by compiling your
OpenGL-using module using cython ;-)

PyOpenGL also has some cython-based accelerators, but I've not looked
into where they're used.


Richard

Jonathan Hartley

Nov 30, 2010, 6:26:16 PM
to pyglet-users
On Nov 30, 9:05 pm, Richard Jones <r1chardj0...@gmail.com> wrote:
Hey. Thanks for the corrections, much appreciated.

From your tone I infer I should beware of intermittent and
unrepeatable problems. I'll add some sort of stress test to my suite,
as a smidgeon of insurance. Other ideas welcome. I assume you're not
going so far as to advise not doing it? Doubling my frame rate is not
to be sniffed at.

Cythoning is going well, in half-hour snatches over breakfast and
lunch! Doubled my frame rate with the low-hanging fruit (type tags in
my inner render loop.) Now I'm doing judicious bits of my matrix
class, which should leave me with 100% 'white' source lines in my
inner render loop ('white' as in the colored output of "cython -a".)
Published results soon.

The "1.5 to 3 times slower than pyglet bindings" I mentioned was
measured using PyOpenGL-accelerate, the Cython module for PyOpenGL. I
may have been doing it wrong. For one thing, Mike expects I'll see
better performance using VBOs and uniforms instead of vertex arrays
and modelview transforms. On the other hand, I expect using those will
also provide better performance with pyglet bindings too.

Cheers,

Jonathan

Tristam MacDonald

Nov 30, 2010, 6:37:27 PM
to pyglet...@googlegroups.com
On Tue, Nov 30, 2010 at 6:26 PM, Jonathan Hartley <tar...@tartley.com> wrote:
> The "1.5 to 3 times slower than pyglet bindings" I mentioned was
> measured using PyOpenGL-accelerate, the Cython module for PyOpenGL. I
> may have been doing it wrong. For one thing, Mike expects I'll see
> better performance using VBOs and uniforms instead of vertex arrays
> and modelview transforms. On the other hand, I expect using those will
> also provide better performance with pyglet bindings too.
>
> Cheers,
>
>  Jonathan

VAO will likely do more for you than anything else in this respect. It basically reduces the whole sequence of OpenGL commands for rendering an object to BindVAO -> Render, cutting out all the calls to bind each buffer and pointer separately.

While this doesn't actually make much difference at the OpenGL end (the driver basically just does the other bind calls for you), it represents a huge savings in calls across that expensive ctypes <--> python bridge.

--
Tristam MacDonald
http://swiftcoder.wordpress.com/

Richard Jones

Nov 30, 2010, 6:53:03 PM
to pyglet...@googlegroups.com
On Wed, Dec 1, 2010 at 10:26 AM, Jonathan Hartley <tar...@tartley.com> wrote:
> From your tone I infer I should beware of intermittent and
> unrepeatable problems.

Tone was intended to be helpful ;-) Any bugs introduced in a
transition would typically be quite repeatable and obvious. Now,
switching from pyglet to C-based calls via cython, there's a source of
real crashing bugs and other oddities :-) [oh, how I haven't missed
you, Bus Error]


> Cythoning is going well, in half-hour snatches over breakfast and
> lunch! Doubled my frame rate with the low-hanging fruit (type tags in
> my inner render loop.) Now I'm doing judicious bits of my matrix
> class, which should leave me with 100% 'white' source lines in my
> inner render loop ('white' as in the colored output of "cython -a".)
> Published results soon.

Awesome. Indeed I've also found that the cython'ing mostly helps when
applied to intensive Python calculation code, rather than just
cython'ing some random calls into OpenGL. My core "render a bunch of
landscape segments" code is cython'ed because it operates on actual C
structures so I've not actually compared how it performs to a vanilla
Python loop.

A little judicious use can go a long way :-)


Richard

Jonathan Hartley

Dec 1, 2010, 9:50:52 AM
to pyglet-users
On Nov 30, 11:53 pm, Richard Jones <r1chardj0...@gmail.com> wrote:
Hey. Thanks very much once again.

I think I am being pretty judicious - I've just done the twenty lines
in my innermost loop around 'glDrawElements', and now I'm just doing
a small portion of my 'matrix' class, the data members of which are
being passed to glMultMatrix inside that same loop. It should only
take a couple of hours, I'm just a crap coder with poor time
management, is all.

It sounds like I should be using my matrices to set uniforms (and
modify shaders accordingly) instead of calling glMultMatrix. I'm
hopeful that change will be relatively straightforward too.

Also, thank you very much Tristam; your suggestion definitely goes
onto my list. Sounds like I should do that sooner rather than later,
and (possibly) resume Cythoning, etc., when the code is more stable.
I'll take a look at it then.

Wish I had a three week vacation to bury myself in this.

Florian Bösch

Dec 1, 2010, 4:44:53 PM
to pyglet-users
I think if the call overhead to OpenGL is your rendering bottleneck,
then you're not making effective use of modern OpenGL. There are so many
ways to avoid calls (and the implied bus transfer). It doesn't really
matter whether you code in C or Python or how slow your API calls are;
at some point, bus transfer/API call speed is *always* too slow.

Jonathan Hartley

Dec 2, 2010, 9:16:30 AM
to pyglet-users
Hey Florian. Thanks for the input.

I think the ideas touched on above (moving from arrays to VBOs, using
VAOs) plus using interleaved arrays, are all taking me in the
direction you suggest, yes?

I'm just doing one call to glDrawElements for each modelview transform
change in my scene. Presumably after I've done all of the above, this
should be my next target - to put all my object orientations and
positions into a big dynamic VBO (as matrices?) and having the shader
transform vertices right from object space to eye space. At that point
my entire scene could be rendered with a single call to
glDrawElements. Presumably this sort of thing is the logical
conclusion of the direction you are recommending?

I still have a bunch to do before I get there though. Roll on the cold
winter evenings.

Richard Jones

Dec 2, 2010, 3:45:41 PM
to pyglet...@googlegroups.com
On Fri, Dec 3, 2010 at 1:16 AM, Jonathan Hartley <tar...@tartley.com> wrote:
> I think the ideas touched on above (moving from arrays to VBOs, using
> VAOs) plus using interleaved arrays, are all taking me in the
> direction you suggest, yes?

Geez, OpenGL, way to confuse the hell out of me calling a second,
completely and utterly different thing a Vertex Array (Object)!

Those similarly confused and in the dark may read this:
http://www.opengl.org/wiki/Vertex_Array_Object


Richard

Tristam MacDonald

Dec 2, 2010, 4:59:24 PM
to pyglet...@googlegroups.com

Despite the potential for confusion, it is a sensible nomenclature. In OpenGL parlance, an 'object' always encapsulates a set of state: VBOs encapsulate vertex buffer state, FBOs encapsulate framebuffer state, TFOs encapsulate transform feedback state, and so on...

Florian Bösch

Dec 3, 2010, 6:20:47 AM
to pyglet-users
On Dec 2, 3:16 pm, Jonathan Hartley <tart...@tartley.com> wrote:
> I think the ideas touched on above (moving from arrays to VBOs, using
> VAOs) plus using interleaved arrays, are all taking me in the
> direction you suggest, yes?
More or less, but there's a bit more to it.

> I'm just doing one call to glDrawElements for each modelview transform
> change in my scene. Presumably after I've done all of the above, this
> should be my next target - to put all my object orientations and
> positions into a big dynamic VBO (as matrices?) and having the shader
> transform vertices right from object space to eye space. At that point
> my entire scene could be rendered with a single call to
> glDrawElements. Presumably this sort of thing is the logical
> conclusion of the direction you are recommending?

Generally the idea of all this new stuff in OpenGL is to avoid per-frame
bus transfer. You typically have something on the order of 0.5GB
- 1.5GB of free transfer capacity on the bus when the system is under
load (this is because the transfer from main memory to the CPU also
blocks time until you can put the data back on the bus again). Usually
you want more than 60 FPS rendering. If you divide your spare capacity
by the frames per second you get to between 10MB-25MB per frame!
That's not a whole lot; if you push it you might get to 50MB per
frame, but that's it: even very high end systems won't let you put
more than 3GB/s on the bus when not idle. 50MB is very little data
when you talk of graphics. Often the geometry you want to render
alone occupies a couple hundred megabytes.
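To put numbers on that division (rough figures, of course), a quick sketch:

```python
# Rough per-frame bus budget: spare bus capacity divided by the target
# frame rate, using the estimates from the text above.
def per_frame_budget_mb(spare_gb_per_s, fps):
    """Spare bus capacity per frame, in megabytes (1 GB = 1000 MB here)."""
    return spare_gb_per_s * 1000.0 / fps

low = per_frame_budget_mb(0.5, 60)   # about 8 MB per frame
high = per_frame_budget_mb(1.5, 60)  # 25 MB per frame
print(round(low, 1), round(high, 1))  # 8.3 25.0
```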

Graphics cards these days are practically their own (super)computer.
They have their own backplane/bus (typically at clock rates and
bit-widths far exceeding PCI), their own high speed memory
(VRAM) and their own processors (the GPU cores). So the
solution to avoiding per-frame CPU<->GPU bus transfer is to preload as
much of your data as you can into VRAM. To facilitate this there is a
variety of buffers for different purposes (although they tend to become
general purpose buffers).

There are different buffer concepts; they do not necessarily map to a
specific function, but rather to a concept of how to use these
functions.

- Texture: This is the oldest buffer of all; it has its own API and you
use it to put a chunk of texture data into VRAM. The reason it's old
is because classically textures were the bus bottleneck (before
the geometry explosion of the late 1990s).
- VBO: Vertex Buffer Object. The idea is to preload (possibly
generic) buffers with the data you want, and at frame render
time bind these buffers and issue the draw calls (which are the same
as for unbuffered array rendering). The difference is in what OpenGL
does when you do this: it uses its own VRAM copy of the data
instead of waiting for the data to arrive on the bus.
- FBO: Framebuffer Object. This buffer became necessary because it is
desirable to capture rasterized output into a texture for various
post-processing/computing effects, and not be restricted to screen
dimensions. It has its own API and is less a buffer than a way to
render into textures.
- PBO: Pixel Buffer Object. Very similar to VBOs, but uses a different
enumerant (GL_PIXEL_PACK_BUFFER_ARB). On ATI cards you can use a PBO as
a texture (or framebuffer attachment). On NVIDIA cards you can use it
to copy texture data into that buffer. This is useful for a technique
labeled RTVBO (render to vertex buffer object), a way to get
rasterizing-stage calculations into a geometry-containing buffer.
- VAO: Vertex Array Object. Similar to the FBO, it does not represent
its own buffer, but rather is a state definition binding a variety of
OpenGL state together and setting it with one bind call. It's
generally not as useful as the other buffers, with one exception: if
you issue a lot of draw calls, it can save quite a bit of time
(presumably because the driver can change its state more efficiently
than you can issue the individual state changes).
- TFO: Transform Feedback Object. This stands for its own data and
has its own API. The idea of this buffer is to be able to
capture geometry *before* it is rasterized (just before the fragment
shader) into a buffer (and count how many primitives were captured).
This is useful for a variety of computations that happen at the
geometry stages, where you either want to perform them less frequently
than once per frame, or have some results you reuse in the same
frame in a second geometry pass.
- Uniform buffers: This kind of buffer allows you to pass a uniform
array of values into a shader without incurring bus transfer; the
values for the uniform live in VRAM.

I think you can see the common theme of this jungle of buffers: avoid
per-frame CPU work and bus transfer. There was some work from the
superbuffers group at Khronos to unify all that into a single
buffer/API, but it hasn't gotten through yet (or perhaps never will).

There are yet more concepts that help you avoid per-frame CPU work:
- Instancing: Comes in various flavors; the aim is to render the same
geometry with different parameters many times from a single draw call.
- Deferred shading/lighting: Avoid having to update buffers per frame
with CPU-computed light information, and implement per-pixel lighting
as a function of rasterizing light bounding volumes into an already
rendered scene.
- Post-processing effects generally.
- Texture/FBO ping pong: A variation of post processing; ping pong
between two (or cycle between more than 2) textures attached to an FBO
to perform some general computation on the GPU (like a blur filter for
instance, or erosion effects, edge detection etc.)
- Geometry ping pong: Involves switching rendering between two (or
cycling between more than two) transform feedback buffers to perform
some geometry buildup and computation (like instancing and fractals or
L-systems).
- GPU skinning: pass all parameters (bone matrices, weights, mesh)
required for skinning into the pipeline (from buffers), and use the
vertex shader to transform the mesh according to each vertex's weights
for the relevant matrices.
- Multi pass alpha/coverage post processing: gives you nice alpha
blending without having to order your primitives on the CPU before
rendering (see GPU Pro)
- Raycasting into volumetric data from shaders: Various effects like
accentuated fog/clouds, ambient occlusion for volumetric data on the
GPU, displacement mapping, contour reconstruction etc.
- and many more.... (it's mind boggling how many different things
people do)

Casey Duncan

Dec 3, 2010, 11:56:28 AM
to pyglet...@googlegroups.com
Learning all of this seems all well and good, but why do I get the
feeling it will all be nearly useless knowledge 6 months to a year
from now? I'm probably being cynical, but that feeling makes me
reluctant to really dive into this, particularly for hobby work. If I
was getting paid big bucks to know this stuff, that'd be different 8^)

-Casey


Florian Bösch

Dec 3, 2010, 12:53:16 PM
to pyglet-users
On Dec 3, 5:56 pm, Casey Duncan <casey.dun...@gmail.com> wrote:
> Learning all of this seems all well and good, but why do I get the
> feeling it will all be nearly useless knowledge 6 months to a year
> from now? I'm probably being cynical, but that feeling makes me
> reluctant to really dive into this, particularly for hobby work. If I
> was getting paid big bucks to know this stuff, that'd be different 8^)

Well, I can say:
1) The concepts are about the same for Direct3D.
2) Most of that stuff had been brewing for decades before getting
something like ARB status. Judging by the fact that textures are still
around, it's safe to say most of it will stay around for decades to
come.
3) Khronos has gotten most of the new hardware stuff out of their
system into a standard now; OpenGL 5, 6, 7, 8 or 10 won't simply make
it irrelevant.
4) Not learning new things because things change seems like a
paradoxical attitude anyway.
5) If you won't learn new things in OpenGL/Direct3D because they're
new, then you'll be doomed to write slow, ugly-looking graphics apps
compared to your peers who did put in the time to keep up.

Tristam MacDonald

Dec 3, 2010, 1:51:46 PM
to pyglet...@googlegroups.com

I would add to that list that once you get into it, the new ways of doing things are really much cleaner and simpler. The only caveat is that you do need to know a bit more of the theory than you used to.

Olivier Dormond

Dec 3, 2010, 7:53:59 AM
to pyglet...@googlegroups.com
Wow!

That's a very, very useful high-level description of what's behind the
current OpenGL design! I've been longing for such an explanation for a
long time. Thanks a lot for it!!!

Cheers,


Olivier

Florian Bösch

Dec 4, 2010, 5:24:12 AM
to pyglet-users
> I would add to that list that once you get into it, the new ways of doing
> things are really much cleaner and simpler. The only caveat is that you
> do need to know a bit more of the theory than you used to.

I can fully subscribe to that. Actually I must admit that I find
modern OpenGL so much more convenient that I'll probably forget about
supporting anything below 4 (you can do some forward-compatible coding
in older versions (like generic attribs), but there are also some things,
like uniform function pointers or tessellation shaders, which I find I
wouldn't want to miss).

And the need for more theory isn't really a caveat either. Sure,
it does mean you'll have to put in a bit more time at first. But this
will save you tons of time later. For instance, OpenGL 4 requires you
to compute your own matrices and pass them into shaders as uniforms.
It is good to know the matrix math. It is also good to be able to
replicate the pipeline transform in your own application code (for
instance if you want to be able to position UI elements in a 3d
scene). You're also independent of the OpenGL matrix stack,
which basically makes for less code and a more flexible way to do
things.
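A minimal pure-Python sketch of the kind of matrix helpers you end up writing once the fixed-function stack is gone (names and layout here are illustrative, not any particular library's API):

```python
# 4x4 row-major matrices as nested lists; translations sit in the last
# column, suitable for column-vector multiplication.

def identity():
    return [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

def translation(x, y, z):
    m = identity()
    m[0][3], m[1][3], m[2][3] = x, y, z
    return m

def mat_mul(a, b):
    # Standard matrix product; this is what replaces glMultMatrix.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Composing two translations sums their offsets.
m = mat_mul(translation(1, 0, 0), translation(2, 3, 0))
print([row[3] for row in m])  # [3.0, 3.0, 0.0, 1.0]
```

A flattened version of such a matrix is what you would upload to a shader uniform.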

For me the new way of doing things is happiness all around :)

Jonathan Hartley

Dec 4, 2010, 5:32:20 AM
to pyglet-users

Hey Casey.
I'm a bit more optimistic about it. The core of the old pre-3.0 opengl
contained concepts that were stable for decades. I feel like this new
functionality has been brewing for a while, so there has been a flurry
of updates recently, but I'm hopeful the new stuff, with refinements,
will go on to be relatively stable for decades again.

I may be wrong. We'll see.

Jonathan Hartley

Dec 6, 2010, 5:04:42 AM
to pyglet-users
Thanks heaps for all the opengl overview, people. Really useful and
interesting for me, too.

I worry, though, that if I use OpenGL 3 or higher, end users who
aren't AAA gaming fanatics won't be able to run it. For example, I
consider myself a gamer, but none of the three PCs I use at home and
work can support higher than OpenGL 2.1. Nor could my wife's three PCs,
until she took delivery of a new Alienware last week. Is this a
realistic worry, or is OpenGL 3+ penetration higher than I estimate?

Tristam MacDonald

Dec 6, 2010, 7:17:12 AM
to pyglet...@googlegroups.com
On Mon, Dec 6, 2010 at 5:04 AM, Jonathan Hartley <tar...@tartley.com> wrote:
> I worry, though, that if I use OpenGL 3 or higher, end users who
> aren't AAA gaming fanatics won't be able to run it. For example, I
> consider myself a gamer, but none of the three PCs I use at home and
> work can support higher than OpenGL 2.1. Nor could my wife's three PCs,
> until she took delivery of a new Alienware last week. Is this a
> realistic worry, or is OpenGL 3+ penetration higher than I estimate?

Sadly, no. If you are aiming for a more casual demographic, then 2.1 plus some extensions is about as high as you want to pitch it.

All Macs are still running 2.x plus a heap of extensions, and are likely to remain that way at least until Mac OS Lion arrives in the summer. All netbooks are stuck with OpenGL 2.1 or less (those damn Intel graphics cards). Most cheap PCs also ship with Intel integrated graphics, so they're similarly stuck.

Florian Bösch

Dec 7, 2010, 6:48:33 AM
to pyglet-users
On Dec 6, 11:04 am, Jonathan Hartley <tart...@tartley.com> wrote:
> or is opengl3+ penetration higher than I estimate?

I think the question's more complex, but the simple thing first.
In the Steam survey, http://store.steampowered.com/hwsurvey/videocard/
we can look at DX11-capable cards (which is synonymous with OpenGL 4
ready) and DX10-capable cards, which is synonymous with OpenGL 3 ready
(correct me if you think that is a wrong assumption). We have these
absolute hardware numbers:

DX11: Jul. 08.62%, Aug. 10.17%, Sep. 11.93%, Oct. 12.99%, Nov. 14.15%
DX10: Jul. 71.56%, Aug. 71.74%, Sep. 73.52%, Oct. 73.19%, Nov. 73.16%

That's something between 8%-17% *relative* growth for DX11. The
percentages cited are exclusive tiers, which of course means that total
support for DX10 is around 80%-90% of Steam users.
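To make the arithmetic explicit (using the November figures above, which are exclusive tiers):

```python
# Exclusive survey tiers: a DX11 card also supports DX10, so
# DX10-or-better support is the two shares added together.
dx11_share = 14.15  # November DX11 figure from the table above
dx10_share = 73.16  # November DX10 figure from the table above
total_dx10_or_better = dx11_share + dx10_share
print(round(total_dx10_or_better, 2))  # roughly 87%, i.e. the 80%-90% ballpark
```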

Now I don't know how representative Steam users are of whatever group
of people you target. Maybe very representative, maybe barely. They're
certainly very representative of the people Valve targets, and Valve
is doing pretty well for themselves (so it would appear that it's also
a demographic with enough spare capital).

So what you need to consider is:
- What demographic do you target?
- What are the hardware capabilities of that demographic?
- What is the spending capacity of that demographic on a product like
yours?
- How much of a graphics quality tradeoff are you going to accept to
reach your targeted demographic?
- How much more work are you willing to put in for how many more
percent people reached?
- How is the hardware/driver landscape going to change during the time
you do this project?
- How relevant are all those decisions by the time you're done (could
be years from now)?
- Factoring in risks like project overruns, hardware changes,
spending capacity, etc., what balance do you strike in work investment
vs. the returns you will target?

Those are seriously difficult questions, and I don't think there's one
single right answer. There isn't even one simple wrong one. It is all
relative and saddled with so many degrees of uncertainty that even if
you're fairly certain what hardware you're aiming for, you could still
make a bad decision.

Jonathan Hartley

Dec 7, 2010, 9:35:19 AM
to pyglet-users
Good points throughout. Thanks for the numbers, which do seem
relevant, and for broadening my perception of the issue.