This might be a bit out of place here, but I was wondering: when you compile a display list, does the number of calls to glVertex3f (or other per-vertex calls) have an impact on the time it takes to execute that display list afterwards?
And if so, would using glDrawArrays significantly reduce that time?
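For reference, the glDrawArrays path looks roughly like this. This is a minimal sketch assuming PyOpenGL-style bindings; flatten() is a hypothetical helper that packs triangle vertices into the flat coordinate list glVertexPointer expects:

```python
def flatten(triangles):
    """Flatten [[(x, y, z), ...], ...] triangles into one flat coordinate list."""
    data = []
    for tri in triangles:
        for vertex in tri:
            data.extend(vertex)
    return data

# With the data flattened, a single draw call replaces many glVertex3f calls:
#   glEnableClientState(GL_VERTEX_ARRAY)
#   glVertexPointer(3, GL_FLOAT, 0, (GLfloat * len(data))(*data))
#   glDrawArrays(GL_TRIANGLES, 0, len(data) // 3)
```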
I'm having an issue where calling a display list (containing about 70 faces) 100 times drops my framerate into the low 20s on a GeForce 9800X2. CPU usage is also a measly 40-50%, so the processor shouldn't be the bottleneck, and since I'm using glCallList it shouldn't resend all the vertex data down the pipeline each frame, right?
I guess it will be worth a shot then, but unfortunately it will require a rather difficult rewrite to make sure I can group all triangles and quads into their own respective arrays.
The data I'm reading comes from a Wavefront OBJ file, so triangles and quads are mixed without much regard.
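Splitting the mixed faces doesn't have to mean separate arrays, though: any convex n-gon from the OBJ file can be fan-triangulated, so everything ends up in one triangle array. A minimal sketch, where `faces` is assumed to be a list of vertex-index tuples parsed from the OBJ:

```python
def triangulate(faces):
    """Fan-triangulate each convex n-gon (n >= 3) into triangles.

    A triangle passes through unchanged; a quad (a, b, c, d) becomes
    (a, b, c) and (a, c, d), and so on for larger faces.
    """
    tris = []
    for face in faces:
        for i in range(1, len(face) - 1):
            tris.append((face[0], face[i], face[i + 1]))
    return tris
```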
Would there be any performance reason to use pyglet's vertex_list over the regular glDrawArrays/glDrawElements?
This is basically an alternative opinion to Tristam's -- I haven't
measured the performance difference. It's my understanding that
whether you use vertex arrays or immediate mode (glVertex3f, etc.)
should make no difference once the display list is compiled (of
course, it will take longer to specify the vertex data while compiling
the display list, but we usually don't care much about load times).
Probably not. I expect they both end up putting vertex
data in the display list in the same format.
--
Greg
> Graphics cards don't draw quads, they will be split up into triangles by the
> driver anyway. You are probably better off pre-triangulating (most exporters
> and modelling packages have this option), and then rendering everything as
> triangles.

Done. A bit of a downer to this IMO is the disk size, but I guess in a world where Google throws gigabytes at your head you shouldn't bicker about 1-2k more or less :)
def compile(self):
    self.gllist = glGenLists(1)
    glNewList(self.gllist, GL_COMPILE)
    # ivlists is a dict: key = material name, value = IndexedVertexList
    for mat, vlist in self.ivlists.items():
        # Disabling the next statement gives me a frame boost
        glMaterialfv(GL_FRONT, GL_DIFFUSE, materials[mat]['diffuse'])
Do notice this is while compiling the list; apparently the compiled list also executes the glMaterialfv and glBindTexture calls each frame, which (I suspect) is pushing the texture down the pipeline every frame (ouch, poor PCIe bus ;)
You would not expect manual iteration (in Python) over the objects to go faster than a glCallList, would you?
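One way to cut down those per-frame binds is to sort the draw calls by material, so each glMaterialfv/glBindTexture pair runs once per material instead of once per object. A rough sketch of the grouping step (group_by_material and the dict-shaped objects are assumptions for illustration, not pyglet API):

```python
from collections import defaultdict

def group_by_material(objects):
    """Bucket renderable objects by material name.

    Each object is assumed to carry a 'material' key; the render loop can
    then bind each material once and draw its whole bucket.
    """
    buckets = defaultdict(list)
    for obj in objects:
        buckets[obj['material']].append(obj)
    return buckets

# Hypothetical render loop using the buckets:
#   for mat, objs in group_by_material(scene).items():
#       glMaterialfv(GL_FRONT, GL_DIFFUSE, materials[mat]['diffuse'])
#       for obj in objs:
#           glCallList(obj['gllist'])
```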
While vertex arrays are deprecated in GL 3, they are still present in GL ES.
Alex.
WRT fixed-functionality pipeline deprecation, unfortunately I have
found that there are still many many machines out there (for example
the prev gen iBooks, etc) that do not have shader support. Unless you
are writing hardcore games targeted for the latest hardware (which
seems highly unlikely in the hobby sphere given their big content
budgets), or simply don't care about supporting a large fraction of
the installed hardware, fixed-function coding is a fact of life. Of
course this is changing but it will be a few years before shader
support reaches a critical mass IMO for the hobby "market".
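A runtime capability check is one way to cope: use shaders where available and fall back to fixed-function otherwise. A minimal sketch, assuming the GL_EXTENSIONS string has already been fetched (e.g. via glGetString(GL_EXTENSIONS) or pyglet's gl_info) and passed in:

```python
def has_shader_support(extensions_string):
    """Return True if the classic ARB shader extensions are all advertised."""
    required = {
        'GL_ARB_shader_objects',
        'GL_ARB_vertex_shader',
        'GL_ARB_fragment_shader',
    }
    available = set(extensions_string.split())
    return required <= available
```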
As that is happening I imagine better shader support will appear in
projects like Pyglet, and other high-level gaming/graphics libraries
(though Pyglet's 2D aspirations make this somewhat less compelling).
-Casey
On Mon, Feb 9, 2009 at 9:58 PM, Tristam MacDonald <swift...@gmail.com> wrote: