Hi all,
I'm working on a video player app. Scaling (via ffmpeg) is a major bottleneck, and I'm hoping to find something faster. I want to scale an RGB565 image to match the resolution of a SurfaceView created in Java. From what I can tell, a call to ANativeWindow_setBuffersGeometry is probably what I want, but I haven't been able to get a good image on screen with it - I get a scrambled image that makes me think things are close to working (the colors are right, etc.). I call SurfaceHolder.setFixedSize once in my Java activity to set the size of the surface, and that surface is passed via JNI to native code. A separate render thread does the ANativeWindow locking and memcpy.
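For context, the Java side looks roughly like this (simplified; nativeSetSurface and the view id are just placeholder names for what I actually have):

private native void nativeSetSurface(Surface surface); // JNI entry point (placeholder name)

// In the activity: fix the surface size once and hand the Surface to native code
SurfaceView surfaceView = (SurfaceView) findViewById(R.id.surface_view);
surfaceView.getHolder().addCallback(new SurfaceHolder.Callback() {
    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        holder.setFixedSize(704, 480);          // surface size used by the native renderer
        nativeSetSurface(holder.getSurface());  // pass the Surface down to native code
    }
    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }
    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        nativeSetSurface(null);                 // tell native code the surface is gone
    }
});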
I've seen a few posts in this group about similar issues, but so far I haven't had any luck getting it to work.
Condensed version of relevant code:
// Surface size set to 704x480 in Java
...
// jni init code (headers: <android/native_window.h>, <android/native_window_jni.h>)
const int bpp = 2; // bytes per pixel for RGB565
const int bufferWidth = 720;
const int bufferHeight = 576;
unsigned char* pixelBuffer = new unsigned char[bufferWidth*bufferHeight*bpp];
...
ANativeWindow* window = ANativeWindow_fromSurface(env, surface);
ANativeWindow_setBuffersGeometry(window, bufferWidth, bufferHeight, WINDOW_FORMAT_RGB_565);
...
// end init code
...
// inside render thread loop
ANativeWindow_Buffer buffer;
if (ANativeWindow_lock(window, &buffer, NULL) == 0)
{
    if (pixelBuffer)
    {
        memcpy(buffer.bits, pixelBuffer, bufferWidth * bufferHeight * bpp);
    }
    ANativeWindow_unlockAndPost(window);
}
Some specific questions:
1. Is ANativeWindow_setBuffersGeometry the correct API for what I need to do?
2. Do I call it once prior to rendering or on each pass? (right now I'm calling it once)
3. Am I at least on the right track? Any suggestions for debugging?
Thanks,
Nick