First off, I think throwing ctypes in there is unnecessarily
complicating things. The standard way to do this would be
cdef extern from "GL/gl.h":  # link against the OpenGL library when building the extension
    ctypedef int GLenum
    ctypedef int GLint
    ctypedef int GLsizei
    void glMultiDrawArrays(GLenum mode, GLint *first, GLsizei *count,
                           GLsizei primcount)

from libc.stdlib cimport malloc, free
def draw_stuff(mode, first_py, count_py):
    cdef GLsizei primcount = len(first_py)
    cdef GLint* first_c = <GLint*>malloc(sizeof(GLint) * primcount)
    cdef GLsizei* count_c = <GLsizei*>malloc(sizeof(GLsizei) * primcount)
    try:
        for i in range(primcount):
            first_c[i] = first_py[i]
            count_c[i] = count_py[i]
        glMultiDrawArrays(mode, first_c, count_c, primcount)
    finally:
        free(first_c)
        free(count_c)
Of course you may have a more Pythonic way of storing your data than a
pair of lists of ints, but that's the general idea. The line
xxx_c[i] = xxx_py[i] converts the Python object (it could be a ctypes
c_long, or any other convertible-to-int object) to a C int of the
right size.
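As an aside, if your data already lives in ctypes, the same marshalling can be done in pure Python with a ctypes array, with no malloc/free needed. This is just a sketch of the conversion step, not tied to the Cython code above, and it assumes GLint is a 32-bit int (true on common platforms):

```python
import ctypes

GLint = ctypes.c_int  # assumption: GLint is a 32-bit int on this platform

first_py = [0, 4, 8]

# (GLint * n)(*values) builds a contiguous C array from a Python list,
# converting each element to a C int as it goes; the array object can be
# passed directly where a GLint* is expected
first_c = (GLint * len(first_py))(*first_py)

print(list(first_c))
```

The array owns its memory, so it is freed automatically when the Python object is garbage-collected.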
If you really want to get glMultiDrawArrays from ctypes, you could do
the cast you have above (but without the [0]) and then call the
resulting function pointer in just the same way.
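To illustrate that last point with a library that is easy to load, here is a sketch of the cast-and-call pattern using libc's abs in place of glMultiDrawArrays (the library lookup and the use of abs are my assumptions, not part of the original answer):

```python
import ctypes
import ctypes.util

# Assumption: we're on a platform where the C library can be located this way
libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")

# Declare the prototype: int abs(int)
ABS_PROTO = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)

# Cast to that prototype -- note there is no [0] dereference -- and then
# call the resulting function pointer directly
abs_ptr = ctypes.cast(libc.abs, ABS_PROTO)
print(abs_ptr(-7))  # 7
```

For glMultiDrawArrays you would load the GL library instead and build a CFUNCTYPE prototype matching its declared signature.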