[Android] Video Capture and Encoding

Charles (gtcompscientist)

Dec 16, 2011, 10:53:46 AM12/16/11
to javacv
I've been struggling for a couple of days to get an image from the
camera that I can process.

My current setup:
- OpenCV2.3.1-android-arm
- ffmpeg-0.7.5-android-arm
- HTC Evo 3D running Android 2.3.4 (rooted, but with just a basic ROM
to remove CarrierIQ)
- Eclipse and the Android SDK, NDK, etc. are all the latest versions.

My goal:
- Capture an image somehow from OpenCV that I can turn into an
AVFrame and send to a custom video server.

Methods I have tried to get an image:
- CvCapture cap*
- FFMpegFrameGrabber
- OpenCVFrameGrabber

*Used this to get a CvCapture =
opencv_highgui.cvCreateCameraCapture(opencv_highgui.CV_CAP_ANDROID)
Also tried a lot of other device numbers just to see; none worked.

Other notes:
- I am very flexible about which codecs I use, but would prefer H.264
over H.263.
- Preferred resolution is CIF (352x288), but I would settle for QCIF
if the processing overhead is too much at the higher resolution.
- To send my video, all I need is an encoded video frame as a byte[].

Thanks - Charles

Samuel Audet

Dec 16, 2011, 11:05:41 AM12/16/11
to jav...@googlegroups.com
Hello,

Since Android has its own Java API for camera capture, why not simply
use that? I understand that the OpenCV guys added some hack to highgui
to capture from the native side, but if you're going with Java anyway,
I'd use the official API...

Samuel

Charles Anderson

Dec 16, 2011, 11:09:48 AM12/16/11
to jav...@googlegroups.com
I haven't been able to figure out how to capture video in a background thread using the official API.  Every example uses a surface holder that (as far as I can tell) has to be visible to provide updates. If there is an example of how to do this, then I'd be more than happy to use it.

Charles
--
Charles Anderson
charles.s...@gmail.com
www.aubreyandcharles.com
Smyrna, Georgia

Samuel Audet

Dec 16, 2011, 11:18:31 AM12/16/11
to jav...@googlegroups.com
Ah, I see... I never tested the OpenCV hack, but I know it's pretty
hacky. You may want to try some other precompiled version of OpenCV for
Android, or recompile JavaCV "properly" for the hack to work (I really
have no clue... it apparently needs files I don't have, but then it says
everything's fine and compiles anyway, go figure...). Please do let me
know if you figure it out, thanks!

Samuel

> opencv_highgui.cvCreateCameraCapture(opencv_highgui.CV_CAP_ANDROID)

Charles (gtcompscientist)

Dec 20, 2011, 10:47:32 PM12/20/11
to javacv
I've since figured out how to get the camera data without displaying
the preview, etc. However, my new issue is that I would like to be
able to have different codecs available from ffmpeg. Is there a way
to use the ffmpeg-android*.jar to access custom compiled ffmpeg
libraries (.so)?

Charles

Samuel Audet

Dec 20, 2011, 11:09:19 PM12/20/11
to jav...@googlegroups.com
On 2011-12-21 12:47, Charles (gtcompscientist) wrote:
> I've since figured out how to get the camera data without displaying
> the preview, etc. However, my new issue is that I would like to be

How did you accomplish this? Might be useful for others to know, thanks

> able to have different codecs available from ffmpeg. Is there a way
> to use the ffmpeg-android*.jar to access custom compiled ffmpeg
> libraries (.so)?

The ffmpeg-*-android-arm.zip packages *are* compiled versions of FFmpeg.
If the ABI of some other compiled version of FFmpeg is the same as
those, then we should be able to use them directly with JavaCV, if this
was your question

Samuel

Charles Anderson

Dec 20, 2011, 11:18:38 PM12/20/11
to jav...@googlegroups.com
So, the first piece is how I got the camera data without displaying a preview. You still have to set everything up as though you are going to display it, but set the View to INVISIBLE. Then set the appropriate callback so your video processing thread can handle the "video" data.

My view looks like this:
------------------------------------------------------------------
import java.io.IOException;
import java.util.List;

import android.content.Context;
import android.hardware.Camera;
import android.hardware.Camera.Size;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class TestVideoPreview extends SurfaceView implements
        SurfaceHolder.Callback {

    SurfaceHolder mHolder;
    Camera mCamera;
    Camera.PreviewCallback previewCallback;
    boolean created = false;

    public TestVideoPreview(Context parent,
            Camera.PreviewCallback previewCallback) {
        super(parent);
        this.previewCallback = previewCallback;

        // Install a SurfaceHolder.Callback so we get notified when the
        // underlying surface is created and destroyed.
        mHolder = getHolder();
        mHolder.addCallback(this);
        mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
        mCamera = Camera.open();
        try {
            mCamera.setPreviewDisplay(mHolder);
        } catch (IOException exception) {
            mCamera.release();
            mCamera = null;
            // TODO: add more exception handling logic here
        }
        Camera.Parameters parameters = mCamera.getParameters();

        List<Size> sizes = parameters.getSupportedPreviewSizes();
        Size optimalSize = getOptimalPreviewSize(sizes, 352, 288);
        parameters.setPreviewSize(optimalSize.width, optimalSize.height);

        mCamera.setParameters(parameters);
        if (previewCallback != null) {
            mCamera.setPreviewCallback(previewCallback);
        }
        mCamera.startPreview();
        created = true;
    }

    public void surfaceCreated(SurfaceHolder holder) {
        if (created)
            return;
        // The Surface has been created, acquire the camera and tell it where
        // to draw.
        mCamera = Camera.open();
        try {
            mCamera.setPreviewDisplay(holder);
        } catch (IOException exception) {
            mCamera.release();
            mCamera = null;
            // TODO: add more exception handling logic here
        }
    }

    public void surfaceDestroyed(SurfaceHolder holder) {
        // Surface will be destroyed when we return, so stop the preview.
        // Because the CameraDevice object is not a shared resource, it's very
        // important to release it when the activity is paused.
        mCamera.stopPreview();
        mCamera.release();
        mCamera = null;
        created = false;
    }

    private Size getOptimalPreviewSize(List<Size> sizes, int w, int h) {
        final double ASPECT_TOLERANCE = 0.05;
        double targetRatio = (double) w / h;
        if (sizes == null)
            return null;

        Size optimalSize = null;
        double minDiff = Double.MAX_VALUE;

        int targetHeight = h;

        // Try to find a size that matches the requested aspect ratio and height
        for (Size size : sizes) {
            double ratio = (double) size.width / size.height;
            if (Math.abs(ratio - targetRatio) > ASPECT_TOLERANCE)
                continue;
            if (Math.abs(size.height - targetHeight) < minDiff) {
                optimalSize = size;
                minDiff = Math.abs(size.height - targetHeight);
            }
        }

        // Cannot find one matching the aspect ratio; ignore the requirement
        if (optimalSize == null) {
            minDiff = Double.MAX_VALUE;
            for (Size size : sizes) {
                if (Math.abs(size.height - targetHeight) < minDiff) {
                    optimalSize = size;
                    minDiff = Math.abs(size.height - targetHeight);
                }
            }
        }
        return optimalSize;
    }

    public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
        // Now that the size is known, set up the camera parameters and begin
        // the preview.
        Camera.Parameters parameters = mCamera.getParameters();

        List<Size> sizes = parameters.getSupportedPreviewSizes();
        Size optimalSize = getOptimalPreviewSize(sizes, w, h);
        parameters.setPreviewSize(optimalSize.width, optimalSize.height);

        mCamera.setParameters(parameters);
        if (previewCallback != null) {
            mCamera.setPreviewCallback(previewCallback);
            // Camera.Size size = parameters.getPreviewSize();
            // byte[] data = new byte[size.width*size.height*
            // ImageFormat.getBitsPerPixel(parameters.getPreviewFormat())/8];
            // mCamera.addCallbackBuffer(data);
        }
        mCamera.startPreview();
    }
}
------------------------------------------------------------------

Just pass in a callback to handle the frames.
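[Editor's note] For reference, a rough sketch of how the callback's byte[] is sized. On most devices the default preview format is NV21, a 4:2:0 format at 12 bits per pixel, so one frame is width * height * 3 / 2 bytes. The class and method names below are made up for illustration; they are not part of the Android API.

```java
// Sketch: sizing the byte[] that Camera.PreviewCallback.onPreviewFrame()
// delivers, assuming the default NV21 preview format (12 bits per pixel).
public class PreviewBufferMath {

    // Size in bytes of one preview frame for a 4:2:0 format
    // such as NV21 or YV12: full-resolution Y plane plus
    // quarter-resolution U and V.
    static int previewBufferSize(int width, int height) {
        return width * height * 3 / 2;
    }

    public static void main(String[] args) {
        // CIF, the resolution targeted in this thread
        System.out.println(previewBufferSize(352, 288)); // 152064
    }
}
```

If you use Camera.setPreviewCallbackWithBuffer() instead of setPreviewCallback(), this is the size of the buffer to hand to addCallbackBuffer(), which lets the camera reuse your buffer instead of allocating a new one per frame.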

The second piece of my question is back to ffmpeg. I understand that the provided zip contains compiled libraries. The problem I'm running into is that, down the road, I want to be able to build in more codecs (there are many that aren't enabled currently, such as H.264 and VP8). I may be able to use the current libraries for a proof of concept, but ultimately the ability to build those libraries from source (see: http://bambuser.com/opensource) would be a HUGE help, provided the result still worked with the interface provided within javacv.

Charles

Charles (gtcompscientist)

Dec 21, 2011, 7:24:10 PM12/21/11
to javacv
So, I have made some progress, but am stuck once more with no video
being encoded. I feel like I am missing something obvious.

This is the code that sets up the codecs:

public TestVideoBase()
{
    // Instantiate encoders/decoders, set up screen for surface
    Logging.Log("Instantiated new " + this.getClass());
    avcodec.avcodec_register_all();
    avcodec.avcodec_init();
    avformat.av_register_all();

    mCodec = avcodec.avcodec_find_encoder(avcodec.CODEC_ID_H263);
    if (mCodec == null)
    {
        Logging.Log("Unable to find encoder.");
        return;
    }
    Logging.Log("Found encoder.");

    mCodecCtx = avcodec.avcodec_alloc_context();
    mCodecCtx.bit_rate(300000);
    mCodecCtx.codec(mCodec);
    mCodecCtx.coded_width(VIDEO_WIDTH);
    mCodecCtx.coded_height(VIDEO_HEIGHT);
    mCodecCtx.time_base(new AVRational(VIDEO_FPS));
    mCodecCtx.pix_fmt(avutil.PIX_FMT_YUV420P);
    mCodecCtx.codec_id(avcodec.CODEC_ID_H263);
    mCodecCtx.codec_type(avutil.AVMEDIA_TYPE_VIDEO);

    // avcodec_open() returns 0 on success, negative on error
    if (avcodec.avcodec_open(mCodecCtx, mCodec) < 0)
    {
        Logging.Log("Unable to open encoder.");
        return;
    }
    Logging.Log("Encoder opened.");

    mPicSize = avcodec.avpicture_get_size(avutil.PIX_FMT_YUV420P,
            VIDEO_WIDTH, VIDEO_HEIGHT);
    mPic = new AVPicture(mPicSize);
    mFrame = avcodec.avcodec_alloc_frame();
}

This is where the Camera.PreviewCallback data is processed:

public void onPreviewFrame(byte[] data, Camera camera)
{
    // Goal here is to encode and then decode the data completely
    Logging.Log("Got a preview frame.");
    BytePointer picPointer = new BytePointer(data);
    BytePointer bBuffer = new BytePointer(mPicSize);

    // avpicture_fill() returns the size of the image data on success,
    // negative on error
    if (avcodec.avpicture_fill((AVPicture) mFrame, picPointer,
            avutil.PIX_FMT_YUV420P, VIDEO_WIDTH, VIDEO_HEIGHT) < 0)
    {
        Logging.Log("Couldn't convert preview to AVPicture");
        return;
    }
    Logging.Log("Converted preview to AVPicture");
    VCAP_Package vPackage = new VCAP_Package();

    // encode the image
    int size = avcodec.avcodec_encode_video(mCodecCtx, bBuffer,
            mPicSize, mFrame);
    int totalSize = size;
    while (size > 0)
    {
        Logging.Log("Encoded '" + size + "' bytes.");
        // Get any delayed frames (a null frame flushes the encoder)
        size = avcodec.avcodec_encode_video(mCodecCtx, bBuffer, mPicSize,
                null);
        totalSize += size;
    }
    Logging.Log("Finished encoding. (" + totalSize + ")");
}

I always get nothing to encode. Any suggestions to fix this or
requests for more information to help me narrow this down?

Thanks,
Charles

Charles (gtcompscientist)

Dec 22, 2011, 5:08:42 AM12/22/11
to javacv
I've solved the first part of my problem. I was setting the
"coded_width" and "coded_height" properties, but not the width/height
properties of the codec context, which caused the encode call to fail
in avcodec_check_dimensions(). However, after fixing that I'm actually
in MORE of a bind, because I am now getting a segfault with a HUGE
memory dump. Does anybody have any suggestions on how to debug this?

Charles
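[Editor's note] For anyone hitting the same wall, here is a pure-Java sketch of roughly the sanity check that avcodec_check_dimensions() performs; the real FFmpeg code guards against integer overflow somewhat differently, so treat this as an approximation. With only coded_width()/coded_height() set, the context's width and height stayed 0, so the check failed; the fix is to also call width() and height() on the codec context.

```java
// Approximation of the dimension sanity check the encoder runs.
// Class and method names here are illustrative only.
public class DimensionCheck {

    static boolean checkDimensions(int width, int height) {
        // Dimensions must be positive and small enough not to
        // overflow internal size computations.
        return width > 0 && height > 0
                && (long) width * height < Integer.MAX_VALUE / 8;
    }

    public static void main(String[] args) {
        System.out.println(checkDimensions(0, 0));     // before the fix: width/height never set
        System.out.println(checkDimensions(352, 288)); // after calling width()/height()
    }
}
```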

Charles (gtcompscientist)

Dec 22, 2011, 3:19:46 PM12/22/11
to javacv
To aid in solving this I've created a Google Code project space that
exactly replicates the issue when the main activity is started.

The project is here: http://code.google.com/p/test-video-encode/

I'm desperate for any help I can get at this point.

Charles

Charles (gtcompscientist)

Dec 22, 2011, 7:48:55 PM12/22/11
to javacv
I'm still trying to gather more data on what's going on here.

I've found a way to [start to] diagnose the source of the crash. The
crash appears to originate at:

MPV_encode_picture + 0x002c (44)
avcodec_encode_video + 0x0090 (144)

Can anybody help me figure out where this is? The best I can tell,
it's something in MPV_frame_end(); beyond that I'm pretty much
clueless.

Charles

Samuel Audet

Dec 23, 2011, 8:00:21 AM12/23/11
to jav...@googlegroups.com
On 2011-12-21 13:18, Charles Anderson wrote:
> So, the first piece is how I got the camera data without displaying a
> preview. You still have to set everything up as though you are going to
> display it and just set the View to INVISIBLE. And then set the
> appropriate callback for your video processing thread to handle the
> "video" data.

Ah, nice trick. It's a bit hacky and probably not very efficient, but it
has the advantage of working on all devices... thanks!

> The second piece of my question is back to ffmpeg. I understand that
> the provided zip is compiled libraries. The problem I'm running into is
> that I want to be able to build more codecs in (there are many that
> aren't active currently, such as H.264 and VP8) down the road. I may be
> able to use the current libraries for proof of concept, but ultimately
> the ability to build those libraries from source (see:
> http://bambuser.com/opensource) would be a HUGE help if it still worked
> with the interface provided within javacv.

So, simply apply the patch I provide in the package, and recompile from
source. I am not sure where the problem is ...?

Samuel

Samuel Audet

Dec 23, 2011, 8:02:27 AM12/23/11
to jav...@googlegroups.com
On 2011-12-22 09:24, Charles (gtcompscientist) wrote:
> I always get nothing to encode. Any suggestions to fix this or
> requests for more information to help me narrow this down?

Hum, why not start with FFmpegFrameRecorder and work your way from
there? AFAIK, FFmpegFrameRecorder works fine, so try to do the same
thing as that.

Samuel

Charles (gtcompscientist)

Dec 23, 2011, 5:21:01 PM12/23/11
to javacv
> has the advantage of working on all devices... thanks!
Thanks for that... I looked through the implementation of the
preview pieces and I'm pretty sure it's actually done fairly
efficiently.

> So, simply apply the patch I provide in the package, and recompile from
> source. I am not sure where the problem is ...?
I will have to look more into this. I have apparently missed
something.

> Hum, why not start with FFmpegFrameRecorder and work your way from
> there? AFAIK, FFmpegFrameRecorder works fine, so try to do the same
> thing as that.
OK... I have been trying this and can't seem to get it right, either.
The data I am getting is a byte[] of YUV420P, but I can't figure out
how to load the data into an IplImage to use FFmpegFrameRecorder
directly. So, I tried to essentially follow the same steps and can't
get that to work either. I feel like I'm just missing something being
initialized somewhere in this chain to get the frames. Part of the
issue with the frame recorder is also that I don't want to write a
file; I am looking to stream the data. Do you have any suggestions
for where to look/try?

Charles
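[Editor's note] For reference, a sketch of the plane layout inside a packed YUV420P byte[], which is the arithmetic involved when filling an AVPicture's data pointers by hand; the helper names are made up for illustration and are not JavaCV API. Note that Android's NV21 preview format stores the chroma as interleaved V/U after the Y plane, so it must be converted to planar YUV420P before these offsets apply.

```java
// Sketch: plane offsets inside a packed YUV420P buffer of
// width*height*3/2 bytes -- the Y plane first, then U, then V.
public class Yuv420Planes {

    static int ySize(int w, int h)     { return w * h; }
    static int uOffset(int w, int h)   { return ySize(w, h); }         // U follows Y
    static int vOffset(int w, int h)   { return ySize(w, h) * 5 / 4; } // V follows U
    static int totalSize(int w, int h) { return ySize(w, h) * 3 / 2; }

    public static void main(String[] args) {
        int w = 352, h = 288; // CIF
        System.out.println(uOffset(w, h));   // 101376
        System.out.println(vOffset(w, h));   // 126720
        System.out.println(totalSize(w, h)); // 152064
    }
}
```

avpicture_fill() computes these offsets itself for PIX_FMT_YUV420P, which is why passing the whole byte[] with the correct width and height is normally enough, once the data really is planar.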

Samuel Audet

Dec 27, 2011, 4:41:00 AM12/27/11
to jav...@googlegroups.com
On 2011-12-24 07:21, Charles (gtcompscientist) wrote:
>> has the advantage of working on all devices... thanks!
> Thanks on that... I actually looked through the implementation of the
> preview pieces and I'm pretty sure that it's actually done fairly
> efficiently.

The preview is done efficiently, yes, but if it's invisible, does it
actually skip any drawing entirely?

>> Hum, why not start with FFmpegFrameRecorder and work your way from
>> there? AFAIK, FFmpegFrameRecorder works fine, so try to do the same
>> thing as that.
> OK... I have been trying this and can't seem to get it right, either.
> The data I am getting is a byte[] of YUV420P, but I can't figure out
> how to load the data into the IplImage to use the FFMpegFrameRecorder
> directly. So, I tried to essentially follow the steps and can't get
> that to work either. I feel like I'm just missing something being
> initialized somewhere in this chain to get the frames. Part of the
> issue with the frame recorder is also that I don't want to write a
> file, I am looking to stream the data. Do you have any suggestions
> for where to look/try?

Ah, I am afraid I have never tried that. I am sure FFmpeg can be
tortured in many ways, but I have not attempted to encode a stream with
it, alas... That's something FFmpeg experts could help you with, though.

Samuel
