> I want to start a discussion on how the Camera API could be substantially
> improved regarding the performance of preview callbacks. As far as I am able
> to tell (by browsing the git repository), no coding has been done to address
> this issue. The core of the problem is that the glue between the native
> Camera code and the Java framework (in the form of
> android_hardware_Camera.cpp) allocates a
> the Java framework (in the form of android_hardware_Camera.cpp) allocates a
> new Java array on each preview frame delivered to the PreviewCallback
> supplied by the application. In short, this generates crazy amounts of
> garbage and kills performance. A defect has been reported
> at: http://code.google.com/p/android/issues/detail?id=2794
Yes, I'd like to chime in on this - I've encountered the same issue, and
it's absolutely killing the performance.
> Having read some of the source code I remain ignorant as to why the API
> operates in the way it does, and have assumed that the API is strongly
> moulded by hardware/driver constraints. So I've tried to remain within
> conservative (but guessed) boundaries for how the API can be adjusted.
I've got another suggestion that ought to fit in as a backwards
compatible enhancement that doesn't break existing applications using
the current API.
In addition to the current PreviewCallback interface, another interface,
GetBufferCallback, would be added. This interface has one method,
byte[] getPreviewBuffer(int size), which is called for each frame that
would be returned by onPreviewFrame.
If no GetBufferCallback is set, the API would work just as it does now,
allocating a new byte[] for each frame; otherwise the buffer returned by
the callback is used for returning the next frame in onPreviewFrame.
If the getPreviewBuffer method returns null, it signals that the
application doesn't want any frame at the moment (e.g. still busy
processing the last one).
Then all buffer management is up to the application, which can choose to
reuse one single buffer for all processing, or use a pool of buffers.
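A minimal sketch of how this proposal might look, assuming the interface and class names above (none of this is an existing API; the buffer-reuse strategy shown is just one possible application-side policy):

```java
// Hypothetical GetBufferCallback interface as proposed above.
interface GetBufferCallback {
    // Called once per frame, before delivery. Return a buffer of at least
    // `size` bytes, or null to signal "skip this frame".
    byte[] getPreviewBuffer(int size);
}

// One possible application-side strategy: reuse a single buffer and drop
// frames while the previous one is still being processed.
class SingleBufferSource implements GetBufferCallback {
    private byte[] buffer;
    private boolean busy;

    public synchronized byte[] getPreviewBuffer(int size) {
        if (busy) {
            return null; // previous frame still in flight: drop this one
        }
        if (buffer == null || buffer.length < size) {
            buffer = new byte[size]; // allocate only when the frame grows
        }
        busy = true;
        return buffer;
    }

    // The application calls this when it has finished processing a frame.
    public synchronized void frameDone() {
        busy = false;
    }
}
```

With this policy the steady state allocates nothing at all: every delivered frame lands in the same array, and frames that arrive too early are simply skipped.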
Does this sound sensible? I could try hacking this together as a proper
patch if you want to try my idea out - the modifications it requires
really are quite minimal.
I think it would be a good idea to collect the use cases of the API
that we want to create.
For the Camera.PreviewCallback I can see one main use case: getting
access to the raw frames, without compression, for further processing.
(Simply displaying the frames is not an issue, since there is an extra
API for that.)
Now there are basically 2 variants that I can see:
1/a: Process the frames in java code
1/b: Process the frames in native code
I think that for such resource-intensive algorithms, like image
processing, the use of native code will increase, so the API should
try to be efficient for such use cases.
For handling the actual callback I would like to make the following proposal:
byte[] toArray(byte[] buffer)
This interface implements the Dispose design pattern, which can
probably be seen as a corner case of the reference-counted object
design pattern.
A Camera implementation would allocate a set of objects that implement
the CameraBuffer interface. These objects are filled with the image
data and passed to the callback. When the application is done with the
processing, it calls the release() method on the buffer, which in turn
tells the Camera implementation that the buffer can be reused for the
next frame.
The frame data can be accessed using the toArray() method. It follows
the pattern used elsewhere in the Java runtime libraries (e.g.
Collection.toArray(T[])): if the parameter is not null and big enough
to hold the image data, it is used; otherwise a new array is allocated.
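A sketch of the proposed interface, plus a trivial in-memory implementation so the toArray() semantics can be demonstrated (all names here are illustrative assumptions, not an existing API):

```java
// Hypothetical CameraBuffer interface as described above.
interface CameraBuffer {
    // Copy the frame into `buffer` if it is non-null and large enough;
    // otherwise allocate and return a new array (Collection.toArray style).
    byte[] toArray(byte[] buffer);

    // Hand the buffer back to the camera implementation for reuse.
    void release();
}

// Trivial stand-in implementation, only for demonstrating the semantics.
class FakeCameraBuffer implements CameraBuffer {
    private final byte[] frame;
    private boolean released;

    FakeCameraBuffer(byte[] frame) {
        this.frame = frame;
    }

    public byte[] toArray(byte[] buffer) {
        if (buffer == null || buffer.length < frame.length) {
            buffer = new byte[frame.length]; // caller's array too small
        }
        System.arraycopy(frame, 0, buffer, 0, frame.length);
        return buffer;
    }

    public void release() {
        released = true; // a real implementation would recycle the memory
    }

    boolean isReleased() {
        return released;
    }
}
```

An application that passes the same array to toArray() on every frame thus pays for the allocation once and only copies thereafter.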
Why not simply have a byte[] getBuffer() method in the interface? Here
comes the 1/b use case.
If this object is passed to native code, an appropriate NDK API may be
used to get direct access to the memory buffer allocated by the camera
implementation, potentially without copying it.
const char *CameraBuffer_Get_Pointer(int handle); // takes the value of
the CameraBuffer's handle() method
This solution has the following advantages in my opinion:
- It makes the handling of callbacks more efficient, no GC hell
- It keeps the buffer allocation handling in the camera
implementation. This way, depending on the camera hardware, the
implementation can perform additional memory optimizations.
- The interface makes sure that the image data is only copied into a
Java array when it is needed. If it will be processed by an algorithm
implemented in native code, it can be passed through without copying.
- The application may still apply its own additional buffer management
What do you think?
There are of course many details to be worked out, e.g. whether
putting a releaseBuffer(CameraBuffer) method into the Camera class
makes more sense than using release() on CameraBuffer. I am not sure
whether the handle() method is necessary or not.
Also, instead of changing PreviewCallback, probably a new name will
have to be introduced to maintain backwards compatibility.
> Maintaining a recyclable pool of buffers available to the camera
> implementation does prevent the application from blocking the camera thread
> and would reduce garbage collection. But it introduces other considerations
> - how large should the pool be? What happens when the pool is exhausted?
The pool size could be determined by the implementation, based on the
amount of available memory on the device. It could even change the pool
size when conditions change (e.g. if the user starts another
application).
It would probably be a good idea to add an API which can set the
minimum size of the pool.
If the pool is exhausted, then the Camera would simply drop the frame.
Maybe it would make sense to change the callback interface so the
Camera can report dropped frames; then the application could detect
this situation and act on it (e.g. increase the pool size if possible).
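The pool behaviour described above can be sketched as follows (a hypothetical illustration only; the class and method names are my own assumptions): buffers are recycled through a fixed pool, and frames are dropped and counted when the pool is exhausted.

```java
import java.util.ArrayDeque;

// Hypothetical sketch of a recyclable preview-buffer pool.
class PreviewBufferPool {
    private final ArrayDeque<byte[]> free = new ArrayDeque<byte[]>();
    private int droppedFrames;

    PreviewBufferPool(int count, int frameSize) {
        for (int i = 0; i < count; i++) {
            free.add(new byte[frameSize]);
        }
    }

    // Called by the camera when a frame arrives; null means "frame dropped".
    synchronized byte[] acquire() {
        byte[] b = free.poll();
        if (b == null) {
            droppedFrames++; // pool exhausted: record the drop
        }
        return b;
    }

    // Called by the application when it is done with a buffer.
    synchronized void release(byte[] b) {
        free.add(b);
    }

    // Lets the application detect exhaustion and react, e.g. grow the pool.
    synchronized int droppedFrames() {
        return droppedFrames;
    }
}
```

Exposing the drop counter (or a per-drop callback) is what would let the application notice exhaustion and grow the pool, as suggested above.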
> 2009/10/4 Gergely Kis <gerge...@gmail.com>