glBindTexture(GL_TEXTURE_2D, texture_id)
glTexSubImage2D(
    GL_TEXTURE_2D,     # target
    0,                 # level
    3,                 # xoffset
    3,                 # yoffset
    1,                 # width
    1,                 # height
    GL_RGBA,           # format
    GL_UNSIGNED_BYTE,  # type
    (1, 0, 0, 1)       # pixels
)
But it gives me the error: ctypes.ArgumentError: argument 9: <type
'exceptions.TypeError'>: wrong type
Obviously the last argument I'm passing to glTexSubImage2D is
incorrect. How do I correctly tell the glTexSubImage2D method that I
want the specified pixel to be red? And how do I get the original
color of that pixel in the first place for the purpose of making it
slightly more transparent?
Best regards,
Dex
You need to pass a ctypes pointer in, probably made with a ctypes string buffer.
However, keep in mind that this is going to be terrible from a
performance standpoint - OpenGL is not designed for pixel access in
this manner. A better approach would be to use render-to-texture with
an FBO, and render points as needed to the texture.
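To make the ctypes suggestion concrete, here's a sketch (assuming texture_id is a valid texture and a GL context is current): build the last argument as a ctypes array of unsigned bytes, with 0-255 components rather than the 0.0-1.0 floats in the original tuple.

```python
import ctypes

# One RGBA pixel, opaque red. GL_UNSIGNED_BYTE means each component
# is an integer 0-255, not a 0.0-1.0 float.
pixel = (ctypes.c_ubyte * 4)(255, 0, 0, 255)

# Then, with a current GL context and the texture bound:
# glTexSubImage2D(GL_TEXTURE_2D, 0, 3, 3, 1, 1,
#                 GL_RGBA, GL_UNSIGNED_BYTE, pixel)

# To read the original color back (slow - see the caveat above),
# fetch the whole mip level into a similar buffer:
# buf = (ctypes.c_ubyte * (width * height * 4))()
# glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, buf)
```

There is no GL call to read a single texel, which is another reason per-pixel readback is costly.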
--
Tristam MacDonald
http://swiftcoder.wordpress.com/
I suppose a better question than my original may be: if I'm
exclusively using 2D graphics, and if my intent is to have a lot of
pixel-level manipulation, is OpenGL even the right tool? I realize
that OpenGL can handle what I'm after, but is the purely-2D usage of
OpenGL unorthodox, or will it cause problems in the future should the
project grow?
Best regards,
Dex
On Mar 30, 2:51 pm, Tristam MacDonald <swiftco...@gmail.com> wrote:
That's not to say that you cannot move data from CPU to GPU, though
minimizing that is best as well, and it is helpful to do so in bulk as
much as possible, therefore minimizing individual transfer operations.
Reading from GPU to CPU is especially bad, because it requires the GPU
to operate synchronously with the CPU, typically ruining any
opportunity for parallelism.
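To illustrate the bulk-transfer point (a sketch; the 64x64 region size is made up): build the whole region in a CPU-side buffer first, then upload it with a single call instead of thousands of 1x1 updates.

```python
import ctypes

# Hypothetical 64x64 RGBA region, filled with opaque red on the CPU.
W, H = 64, 64
buf = (ctypes.c_ubyte * (W * H * 4))()
for i in range(0, len(buf), 4):
    buf[i:i + 4] = [255, 0, 0, 255]

# One upload for the whole region (inside a GL context), instead of
# W*H separate 1x1 glTexSubImage2D calls:
# glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H,
#                 GL_RGBA, GL_UNSIGNED_BYTE, buf)
```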
So in the spirit of the above, I would suggest the following
approaches to a "dissolve":
1. If you must do this by manipulating pixels individually, the best
approach would be to use a shader (fragment shaders can
manipulate the output in this way). Since these run entirely on the
GPU, there is no transfer cost to worry about, and essentially zero
CPU cost (assuming the shader runs on the GPU). However, this requires
you to write a GLSL shader program, and this method may not be
supported on certain less capable graphics processors. Mobile and
bargain integrated graphics processors are a sore spot, though it gets
better over time as they slowly return to dust.
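A minimal fragment shader for such a dissolve might look like the following GLSL (a sketch; the uniform names are made up). Each fragment samples a noise texture and is discarded once the ramping dissolve_amount passes its noise value:

```python
# Hypothetical GLSL fragment shader for a noise-threshold dissolve.
# 'dissolve_amount' is a uniform you ramp from 0.0 to 1.0 over time.
DISSOLVE_FRAG = """
uniform sampler2D image;
uniform sampler2D noise;
uniform float dissolve_amount;

void main() {
    float n = texture2D(noise, gl_TexCoord[0].st).r;
    if (n < dissolve_amount)
        discard;  /* this texel has "dissolved away" */
    gl_FragColor = texture2D(image, gl_TexCoord[0].st);
}
"""
```

You'd compile and link this with the usual glCreateShader/glCreateProgram calls and set dissolve_amount each frame.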
2. Another, less general, but simpler approach would be to load a
texture "flipbook" that contained a series of mask images that have
"holes" (using alpha values or whatnot) that multiply as you progress
through them. Parts of this texture would be blended by composing it
with the texture being dissolved, either using multitexture, the
stencil buffer or any other composition technique you desire. This
dissolve would not be random, but could appear less predictable by
randomizing the mask's orientation each time it's used (i.e., flipping
one of 4 ways, or making it large enough so that it could be oriented
at an arbitrary angle each time).
This requires you to create the mask flipbook texture, which could be
done with the CPU and loaded once at program startup. Running the
dissolve would be essentially like running an animation: you would
manipulate the texture coordinates of the mask texture over time to
display each "frame". If you wanted to be clever, you could use a 3D
mask texture, which could smoothly interpolate between mask frames,
making the effect look smooth even if applied slowly with relatively
few mask frames.
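Generating the flipbook on the CPU at startup might look like this sketch (the function name and frame counts are made up). Giving each pixel one random "death time" guarantees that later frames are strict supersets of the holes in earlier ones:

```python
import random

def make_dissolve_masks(width, height, frames):
    """Build single-channel alpha masks; frame i hides roughly
    i/(frames-1) of the pixels, and holes only ever accumulate."""
    # One random threshold per pixel, chosen once, so the holes in
    # frame i+1 always include the holes in frame i.
    death = [random.random() for _ in range(width * height)]
    masks = []
    for i in range(frames):
        threshold = i / (frames - 1)
        masks.append(bytes(0 if t < threshold else 255 for t in death))
    return masks
```

Each mask could then be wrapped in a pyglet ImageData with format 'A' and uploaded once, never touching the CPU again during the effect.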
3. The simplest approach would be to use a "fade" rather than a
dissolve. That is, just draw the image with progressively lower alpha
color values over time until it is fully transparent. This is not a
true dissolve, but is trivial to implement because it requires no
compositing.
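The fade reduces to a one-line alpha ramp (a sketch; the helper name is made up):

```python
def fade_alpha(elapsed, duration):
    """Linear fade: fully opaque at t=0, fully transparent at t=duration."""
    return max(0.0, 1.0 - elapsed / duration)
```

In pyglet you could feed this straight into a sprite each frame, e.g. sprite.opacity = int(255 * fade_alpha(t, 2.0)), or use it as the alpha in glColor4f with blending enabled.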
-Casey
Thanks again!
Dex
OpenGL can be useful for some kinds of 2D work, but if all
you're doing is calculating images pixel by pixel on the CPU
and then displaying them, you may find that OpenGL just gets
in the way.
On the other hand, if you could find some way of using shaders
to do your pixel calculations, you could take advantage of
hardware acceleration.
--
Greg