My current code is as follows:
final static int W = 320;
final static int H = 480;
static short[] sPixels = new short[W*H];
static ShortBuffer sScreenBuff = ShortBuffer.allocate(W*H);
static Bitmap sScreenBitmap = Bitmap.createBitmap(W, H, Bitmap.Config.RGB_565);
void drawFrame(Canvas canvas) // Java
{
    xUpdateScreen(sPixels);  // call native C code
    sScreenBuff.clear();     // reset position; copyPixelsFromBuffer may have advanced it
    sScreenBuff.put(sPixels);
    sScreenBuff.rewind();
    sScreenBitmap.copyPixelsFromBuffer(sScreenBuff);
    canvas.drawBitmap(sScreenBitmap, 0, 0, null);
}
// Native C
short* g_Buff = NULL;
void Java_xxx_xInit(...)
{
g_Buff = (short*)malloc(W*H*sizeof(short));
...
}
void Java_xxx_xUpdateScreen(JNIEnv* jEnv, jobject jObj, jshortArray jArr)
{
    // update offscreen g_Buff
    ....
    // plain C JNI syntax; jEnv->SetShortArrayRegion(...) only compiles as C++
    (*jEnv)->SetShortArrayRegion(jEnv, jArr, 0, BUFF_SIZE, g_Buff);
}
I have a couple of questions here:
(1). In the native C code, SetShortArrayRegion has to copy all the data from g_Buff into the Java array, which is a performance cost.
In the Java code, sScreenBuff.put(sPixels) copies the whole buffer again.
And then sScreenBitmap.copyPixelsFromBuffer(sScreenBuff) copies it a third time.
Three full buffer copies per frame looks ugly. How can I optimize this code?
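One way to cut the copies, sketched below under the assumption that the native side writes straight into a direct NIO buffer via JNI's GetDirectBufferAddress: allocate the ShortBuffer as a view of ByteBuffer.allocateDirect instead of using ShortBuffer.allocate, hand the buffer to the native code once, and let xUpdateScreen fill it in place. That removes both the SetShortArrayRegion copy and the put() copy, leaving only copyPixelsFromBuffer. The plain-Java part of the idea (the Android-only Bitmap calls are left as comments):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;

public class DirectBufferSketch {
    static final int W = 320, H = 480;

    // A direct buffer lives in native memory, so JNI code can write into it
    // through GetDirectBufferAddress() with no copy at all.
    static final ByteBuffer sDirect =
            ByteBuffer.allocateDirect(W * H * 2).order(ByteOrder.nativeOrder());
    static final ShortBuffer sScreenBuff = sDirect.asShortBuffer();

    public static void main(String[] args) {
        // Stand-in for the native side: write one pixel where
        // Java_xxx_xUpdateScreen would write through the raw pointer.
        sScreenBuff.put(0, (short) 0xF800); // pure red in RGB_565

        // Per frame, only one copy would remain:
        // sScreenBuff.rewind();
        // sScreenBitmap.copyPixelsFromBuffer(sScreenBuff);  // Android only
        System.out.println(sDirect.isDirect());
        System.out.println(Integer.toHexString(sScreenBuff.get(0) & 0xFFFF));
    }
}
```

On the C side, `short* p = (short*)(*jEnv)->GetDirectBufferAddress(jEnv, jBuff);` returns the raw pointer (a real JNI function; error checking omitted here).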
(2). After reading some Android OpenGL ES material, GLSurfaceView seems to be the best way to update the screen.
Does it update the physical screen buffer directly?
(3). For a GL solution, how do I render a raw pixel buffer to a GL surface? Where can I find some sample code?
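For what it's worth, GLSurfaceView does not hand you the physical framebuffer; rendering still goes through GL. The usual pattern for a raw pixel buffer is to upload it into a texture each frame with glTexSubImage2D and blit it with the OES draw-texture extension. A hedged, Android-only sketch (the android.opengl / khronos classes and GL constants are real, but this wiring is untested, and the 512x512 texture size is an assumption to satisfy GL ES 1.x power-of-two rules):

```java
import java.nio.ShortBuffer;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import javax.microedition.khronos.opengles.GL11;
import javax.microedition.khronos.opengles.GL11Ext;
import android.opengl.GLSurfaceView;

public class PixelBufferRenderer implements GLSurfaceView.Renderer {
    static final int W = 320, H = 480;
    private final ShortBuffer mPixels; // RGB_565 frame, ideally a direct buffer
    private final int[] mTex = new int[1];

    public PixelBufferRenderer(ShortBuffer pixels) { mPixels = pixels; }

    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        gl.glEnable(GL10.GL_TEXTURE_2D);
        gl.glGenTextures(1, mTex, 0);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, mTex[0]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D,
                GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D,
                GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_NEAREST);
        // Allocate the power-of-two texture once; only the W x H
        // sub-region is updated each frame.
        gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGB, 512, 512, 0,
                GL10.GL_RGB, GL10.GL_UNSIGNED_SHORT_5_6_5, null);
    }

    public void onSurfaceChanged(GL10 gl, int w, int h) {
        gl.glViewport(0, 0, w, h);
    }

    public void onDrawFrame(GL10 gl) {
        mPixels.rewind();
        // One copy per frame, performed by the GL driver.
        gl.glTexSubImage2D(GL10.GL_TEXTURE_2D, 0, 0, 0, W, H,
                GL10.GL_RGB, GL10.GL_UNSIGNED_SHORT_5_6_5, mPixels);
        // Blit via the OES_draw_texture extension; the crop rect flips Y.
        int[] crop = { 0, H, W, -H };
        ((GL11) gl).glTexParameteriv(GL10.GL_TEXTURE_2D,
                GL11Ext.GL_TEXTURE_CROP_RECT_OES, crop, 0);
        ((GL11Ext) gl).glDrawTexiOES(0, 0, 0, W, H);
    }
}
```

Hook it up with `glSurfaceView.setRenderer(new PixelBufferRenderer(sScreenBuff));` on the GLSurfaceView.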
(4). Bitmap only offers Config.RGB_565 and Config.ARGB_4444/8888; why no RGB_888 (without alpha)?
My own canvas is in RGB_888, and if I use a Bitmap with ARGB_8888 it is very, very slow due to the alpha blending.
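One common workaround, given that Bitmap configs stop at 565 and 8888: pack the RGB_888 pixels down to RGB_565 yourself when filling the short buffer (the same bit-twiddling works in the native C code). `packRGB565` below is a hypothetical helper name, not an Android API:

```java
public class Rgb565Pack {
    // Pack 8-bit R, G, B into one RGB_565 short: RRRRRGGG GGGBBBBB.
    static short packRGB565(int r, int g, int b) {
        return (short) (((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3));
    }

    public static void main(String[] args) {
        // Pure red, green, and blue as sanity checks.
        System.out.println(Integer.toHexString(packRGB565(255, 0, 0) & 0xFFFF));
        System.out.println(Integer.toHexString(packRGB565(0, 255, 0) & 0xFFFF));
        System.out.println(Integer.toHexString(packRGB565(0, 0, 255) & 0xFFFF));
    }
}
```

The cost is one shift-and-mask per pixel, which is usually much cheaper than per-pixel alpha blending on an ARGB_8888 Bitmap.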
- Eric