(0.0, 0.0, 0.0, 0.015625, 0.0, 0.0, 0.015625, 0.015625, 0.0, 0.0,
0.015625, 0.0)
With a size of 1023x1023 (or any non-power-of-2 size), they are
pixel-based:
(0.0, 0.0, 0.0, 16.0, 0.0, 0.0, 16.0, 16.0, 0.0, 0.0, 16.0, 0.0)
Is this intentional?
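The pattern above can be sketched in a few lines (this is an assumption drawn only from the observed output, not from pyglet's documented contract: normalized 0.0-1.0 coordinates when the atlas size is a power of two, pixel coordinates otherwise):

```python
def is_power_of_two(n):
    # True for 1, 2, 4, ..., 1024; False for 1023, 1025, ...
    return n > 0 and (n & (n - 1)) == 0

def expected_tex_coord(pixel, atlas_size):
    """What the tuples above suggest pyglet reports for a sub-image
    edge at `pixel` inside an atlas of width/height `atlas_size`."""
    if is_power_of_two(atlas_size):
        return pixel / float(atlas_size)  # normalized (GL_TEXTURE_2D style)
    return float(pixel)                   # pixel-based (rectangle style)

print(expected_tex_coord(16, 1024))  # → 0.015625, matching the first tuple
print(expected_tex_coord(16, 1023))  # → 16.0, matching the second tuple
```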
I don't have any previous experience with that portion of pyglet, but
looking at the source code for TextureAtlas (image/atlas.py:208 from
hg tip), it appears that it intends to return a TextureRegion with the
same pixel-dimensions as the original image.
So my guess would be that that's a bug.
That seems like a pretty big image to be making an atlas of. Are you
sure this behavior happens with _any_ image with a size that's a power
of 2? (e.g. 64x64?)
~ Nathan
> --
> You received this message because you are subscribed to the Google Groups "pyglet-users" group.
Yeah, I saw where it was doing that; I just wanted to know if I could
count on that behavior or if it was a bug that would eventually be
fixed. I've written a C extension to blit the tiles, and that code
needs the coordinates to be consistently one way or the other. For
now, I can control it via the size of the atlas.
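Rather than controlling it through the atlas size, one option is to normalize on the Python side before handing coordinates to the C extension. A minimal sketch (the `normalized` flag and the 12-float u/v/r layout of `tex_coords` are assumptions based on the tuples quoted earlier in the thread):

```python
def tex_coords_in_pixels(tex_coords, tex_width, tex_height, normalized):
    """Convert a 12-float tex_coords tuple (u, v, r for each of the four
    corners) to pixel units, so downstream code always sees one format."""
    if not normalized:
        # Already pixel-based (the non-power-of-2 case); pass through.
        return tuple(tex_coords)
    out = []
    for i in range(0, 12, 3):
        u, v, r = tex_coords[i:i + 3]
        out.extend((u * tex_width, v * tex_height, r))
    return tuple(out)
```

With this in place the C blitter only ever has to handle pixel coordinates, whichever texture target pyglet picked.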
> That seems like a pretty big image to be making an atlas of.
I am using the atlas to store a currently unknown number of 16x16
tiles. I just picked 1024x1024 as a starting point.
>Are you sure this behavior happens with _any_ image with a size that's a power
> of 2? (e.g. 64x64?)
So far I've only added 16x16 and 256x16 images, but both come back one
way or the other depending on whether the atlas size is a power of 2.
Thanks
That is what it is doing.
Do you have any idea why this works:
...
self.tileAtlas = pyglet.image.atlas.TextureAtlas(1025, 1025)  # non-power-of-2
... add some 16x16 images
glEnable(self.tileAtlas.texture.target)
glBindTexture(self.tileAtlas.texture.target, self.tileAtlas.texture.id)
glBegin(GL_QUADS)
glTexCoord2i(0,0)
glVertex3i(0,0,0)
glTexCoord2i(1025,0)
glVertex3i(1025,0,0)
glTexCoord2i(1025,1025)
glVertex3i(1025,1025,0)
glTexCoord2i(0,1025)
glVertex3i(0,1025,0)
glEnd()
and displays the whole texture on the screen.
But this does not work:
...
self.tileAtlas = pyglet.image.atlas.TextureAtlas(1024, 1024)  # power-of-2
... add some 16x16 images
glEnable(self.tileAtlas.texture.target)
glBindTexture(self.tileAtlas.texture.target, self.tileAtlas.texture.id)
glBegin(GL_QUADS)
glTexCoord2f(0.0, 0.0)
glVertex3f(0.0, 0.0, 0.0)
glTexCoord2f(1.0, 0.0)
glVertex3f(1.0, 0.0, 0.0)
glTexCoord2f(1.0, 1.0)
glVertex3f(1.0, 1.0, 0.0)
glTexCoord2f(0.0, 1.0)
glVertex3f(0.0, 1.0, 0.0)
glEnd()
If I add:
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
before the glBegin, it does display the texture, but it nearly freezes
my video and I get about 0.01 fps.
Never mind. I was assuming that since I was calling glVertex3f instead
of glVertex3i, I had to use 0.0-1.0 coordinates instead of pixel
coordinates. This works:
glEnable(self.tileAtlas.texture.target)
glBindTexture(self.tileAtlas.texture.target, self.tileAtlas.texture.id)
glBegin(GL_QUADS)
glTexCoord2f(0.0, 0.0)
glVertex3f(0.0, 0.0, 0.0)
glTexCoord2f(1.0, 0.0)
glVertex3f(1024.0, 0.0, 0.0)
glTexCoord2f(1.0, 1.0)
glVertex3f(1024.0, 1024.0, 0.0)
glTexCoord2f(0.0, 1.0)
glVertex3f(0.0, 1024.0, 0.0)
glEnd()