I'm using the NDK to stream images to an ImageView. My refresh rate is at least 20 Hz, so rendering performance is very important to me.
On startup I create a temporary Bitmap object to draw frames into, since that's the most efficient way I'm aware of to draw frames onto the canvas. When it comes time to draw the next frame, I must do two steps: copy the frame from native memory into the temporary Bitmap (via AndroidBitmap_lockPixels()), then draw that Bitmap onto the canvas.

The Canvas.drawBitmap(int[] colors, ...) overload looks like it could skip the temporary Bitmap, but it's no better:
- It only accepts a Java int array. That means you must use JNI to allocate a temporary jintArray and copy your frame out of native memory into it before calling the function.
- Internally it creates a temporary bitmap, copies your color array into it, then draws that bitmap. That's even worse than my current setup (at least I reuse my temporary Bitmap).
I wish I could just store frames directly as Bitmaps (and not worry about copying native memory into a temporary Bitmap object), but that's not a viable option because Bitmap memory is stored on the VM heap, not native memory. I would quickly blow through the VM heap if I just created Bitmaps. Keep in mind that when streaming you buffer the next few frames, and that each ImageView has its own buffer. It adds up. One of the advantages of using the NDK is that I can allocate a lot more memory than I could from Java, and my allocations don't come out of the VM heap.
What would be great is to skip the "copy frame from native memory to temporary bitmap" step above, and draw directly from native memory to the canvas using an NDK function. The API might look like this:
#include <android/canvas.h>

int AndroidCanvas_drawBitmap(jobject canvas, const void* colors,
                             AndroidBitmapFormat format, etc.);
The NDK already provides raw bitmap pixel access via AndroidBitmap_lockPixels(), so this wouldn't be much of a leap.
Thanks!