What's in raw data from PictureCallback of camera?


Xster

Feb 9, 2009, 6:57:47 PM
to Android Developers
Hi,
Our university is intending to use the Android as a platform for
mobile image analysis. We're wondering what kind of information is
returned in the raw format when android.hardware.Camera.takePicture()
is called with a raw Camera.PictureCallback. I can't seem to find more
information about it on the http://code.google.com/android/reference/android/hardware/Camera.PictureCallback.html
page.

Thanks,
Xiao

Dave Sparks

Feb 10, 2009, 3:01:29 AM
to Android Developers
On the G1, no data is returned - only a null pointer. The original
intent was to return an uncompressed RGB565 frame, but this proved to
be impractical.
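
Had the raw callback delivered the intended uncompressed RGB565 frame, each pixel would be a 16-bit value packing 5 bits of red, 6 of green, and 5 of blue. A minimal sketch of unpacking such a pixel into 8-bit channels (the class and method names are illustrative, not part of any Android API):

```java
public class Rgb565 {
    // Expand a 16-bit RGB565 pixel into 8-bit R, G, B channels.
    // The high bits are replicated into the low bits so that the
    // maximum 5-bit value 0x1F maps to 0xFF, not 0xF8.
    public static int[] toRgb888(int pixel565) {
        int r5 = (pixel565 >> 11) & 0x1F;
        int g6 = (pixel565 >> 5) & 0x3F;
        int b5 = pixel565 & 0x1F;
        int r = (r5 << 3) | (r5 >> 2);
        int g = (g6 << 2) | (g6 >> 4);
        int b = (b5 << 3) | (b5 >> 2);
        return new int[] { r, g, b };
    }

    public static void main(String[] args) {
        int[] white = toRgb888(0xFFFF);
        System.out.println(white[0] + "," + white[1] + "," + white[2]); // 255,255,255
    }
}
```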

gjs

Feb 10, 2009, 9:17:59 PM
to Android Developers
Hi,

I'm hoping that Android will support this in future for RGB888 and
(at least) 8 MP images.

I know this is a big ask, probably requiring a much larger process
heap size (> 16MB and a high-speed bus/memory), but there are already
8 MP camera phones available on other platforms.

And yes, I know these phones do not yet support loading such large
images into (Java) application memory, but I'm hoping the Android
architecture can accommodate this in the not too distant future.

Regards

Dave Sparks

Feb 10, 2009, 11:27:29 PM
to Android Developers
Highly unlikely. Applications are restricted to a heap of 16MB. An 8MP
image in RGB888 will use 32MB.
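
The arithmetic behind that figure, assuming each RGB888 pixel occupies a full 32-bit word (as it would when held in a Java int[]) and counting 1MB as 10^6 bytes:

```java
public class HeapMath {
    // Bytes needed to hold an image of the given size entirely in memory.
    // The 4-bytes-per-pixel figure assumes 32-bit-aligned RGB888 storage,
    // which is an assumption, not something stated in the thread.
    public static long bytesFor(long megapixels, int bytesPerPixel) {
        return megapixels * 1_000_000L * bytesPerPixel;
    }

    public static void main(String[] args) {
        System.out.println(bytesFor(8, 4)); // 32000000, i.e. 32MB vs. a 16MB heap
    }
}
```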

I am inclined to deprecate that API entirely and replace it with hooks
for native signal processing.

gjs

Feb 11, 2009, 1:00:13 AM
to Android Developers
Hi David,

Thanks for the quick reply. I think access via native signal
processing hooks is fine, provided this includes access to the -full-
'snapshot' buffer as well as the 'preview' buffer; RGB format would be
nice as well.

And yes, a 16MB heap limit is a luxury for a phone application right
now, but I think it will become limiting over time and on different
platforms, e.g. MIDs, netbooks, etc. (For that matter, Android would
be a great OS for dedicated SLR and video cameras. ;-)

Regards

blindfold

Feb 11, 2009, 3:57:07 AM
to Android Developers
Hi David,

> I am inclined to deprecate that API entirely and replace it with hooks
> for native signal processing.

Can you please elaborate on that? First of all, I am using the
existing APIs in my app and don't like the idea of things breaking
through deprecation of camera APIs that are in the official SDK 1.0.

Secondly, nearly *every* Android API *is* already a hook to native
processing - but a platform-neutral hook. I'd really hate to see
everything move into JNI, for instance, though I hope that is not what
you meant. I also don't see why you couldn't create a separate camera
"heap" (a 4n MB reserved memory area for an n-megapixel camera) that
one could communicate with and get image chunks from through Android
APIs that *could* be upward-compatible extensions of the existing
camera callback APIs (ugly as those are). After all, there is no
fundamental difference between communicating with a camera and
communicating with an external memory block.

For image processing purposes, communication can be line by line, as a
good trade-off between speed (a whole image line gets transferred per
- slow - Android call), memory needs (never load an entire hi-res
snapshot into the app's limited heap), and algorithmic needs (storing
a couple of lines within the Android heap suffices for most filtering,
feature detection and subsampling operations, although camera API
*extensions* that further support this are welcome).

Are there any image processing experts involved in the camera API
decision making? No offense, but the camera API for SDK 1.0 gives me
the impression that things were - perhaps due to time pressure -
thrown together without *any* background in (real-time) image
processing as required for future augmented reality applications and
location-based image processing.
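
The line-by-line scheme sketched above - never holding more than a few scan lines in the app heap - can be illustrated with a plain 3x3 box blur. This is a hypothetical sketch, not an Android API; a real implementation would pull each line from the camera buffer on demand rather than from an in-memory array:

```java
public class LineFilter {
    // 3x3 box blur. For each output line y, only input lines
    // y-1, y, y+1 are touched, so a streaming implementation
    // needs just a three-line rolling buffer regardless of image height.
    public static int[][] boxBlur(int[][] image) {
        int h = image.length, w = image[0].length;
        int[][] out = new int[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int sum = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        sum += image[y + dy][x + dx];
                out[y][x] = sum / 9;
            }
        }
        return out;
    }
}
```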

Regards

Dave Sparks

Feb 11, 2009, 5:03:50 AM
to Android Developers
I'm talking about deprecating the raw picture callback that has never
worked. It won't affect any existing applications. As for the camera
API in SDK 1.0: It was never intended for signal processing. It was
intended only for taking snapshots. It just happens that creative
people like yourself have found other uses for it.

I certainly don't want to break your application. I do want to give
you a better API in the future. Rather than trying to do all your
image processing in Java, wouldn't you prefer to have built-in native
signal processing kernels that are optimized for the platform?

blindfold

Feb 11, 2009, 7:08:45 AM
to Android Developers
Thank you David, I feel relieved to hear that. :-)

> Rather than trying to do all your image processing in Java, wouldn't you
> prefer to have built-in native signal processing kernels that are optimized
> for the platform?

Yes, of course. One can parameterize and wrap under an API a number of
useful low-level image (and audio) operations, such as various
filtering operations, edge detection, corner detection, segmentation,
convolution and so on. That is certainly very nice to have, but not
enough.

I would want to implement my own pixel-level (and audio-byte-level)
processing algorithms that do not fit within the pre-canned
categories, while benefiting from the platform and CPU independence of
Android (no JNI if I can avoid it). It would be purely computational
code with loops and conditional branches, operating on arrays, array
elements and scalars. At that level, C code (minus any pointer use)
and Java code actually look almost the same. There is no real need for
any fancy data structures at *that* level, which covers the
"expensive" parts of the processing, so I feel that a simple,
lightweight, targeted JIT compiler could go a long way toward meeting
all these needs. I really would not mind if it compiled only a
(computational) subset of Java, and it may leave code optimizations to
the developer by compiling rather straightforwardly. (I cannot help
being reminded of FORTRAN-77 on old IBM mainframes getting compiled
almost 1-to-1 to easy-to-read machine code.)

So perhaps rather than a built-in native signal processing *kernel* I
am here thinking of a built-in native signal processing (JIT)
*compiler*. ;-)

Regards

Dave Sparks

Feb 11, 2009, 11:29:35 PM
to Android Developers
I think we'll be able to give you something that will meet your needs.
It's always a balancing act between taking the time to get the API
just right and getting a product to market.

Keep making suggestions, we are listening.

blindfold

Feb 12, 2009, 12:36:51 PM
to Android Developers
Fine! As a comparatively easy to design-and-implement yet very
powerful solution to boost the computational performance of Android,
you might consider adding a basic vector function API. Think
java.lang.System.arraycopy() and java.util.Arrays.fill(), but then
much enriched to cover most computational needs in media and signal
processing. For instance, array_add(a, b) would add arrays a and b
through native code, potentially further exploiting SIMD based
hardware acceleration if/once available, making it all nicely future-
proof. If the arrays are long enough, say > 100, the overhead of the
Android vector function calls over the native code execution times
becomes negligible. This approach easily extends to logical operators,
say array_xor(a, b), left/right shift operators (needed for
convolution type of processing and typical pixel-level image filtering
operations involving nearest neighbors within a couple of image
lines), multiplication by and addition of scalars, and so on.
Importantly, it also to a large extent covers (replaces) the need for
costly looping over conditional branches at Android level through the
use of mask vectors, as you can find explained in section 2.5 of the
online tutorial

http://www.kernel.org/pub/linux/kernel/people/geoff/cell/ps3-linux-docs/ps3-linux-docs-08.06.09/CellProgrammingTutorial/BasicsOfSIMDProgramming.html
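
The mask-vector idea from that tutorial - replacing a per-element conditional branch with bitwise operations on a 0/-1 integer mask - looks like this in plain Java (the names are illustrative, not a proposed API):

```java
public class MaskSelect {
    // Branch-free element-wise select: out[i] = mask[i] ? a[i] : b[i].
    // mask[i] must be 0 (false) or -1, i.e. 0xFFFFFFFF (true), so that
    // AND-ing with it either clears or preserves every bit.
    public static int[] select(int[] mask, int[] a, int[] b) {
        int[] out = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            out[i] = (a[i] & mask[i]) | (b[i] & ~mask[i]);
        }
        return out;
    }
}
```

A comparison such as `a[i] > b[i]` can likewise be turned into such a mask vector by a native operator, so whole filtering pipelines can run without data-dependent branches.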

An image processing example of how to use vector operators to
implement Prewitt or Sobel edge enhancement can be found in Table 1
(p. 30) of

http://wsnl.stanford.edu/papers/icdsc07_mapp.pdf
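
In the spirit of the proposed operators, the vertical Sobel response for one output line can be composed entirely from whole-line add, subtract and shift primitives, with no per-pixel branching. The helper names below are hypothetical stand-ins for the suggested array_add()-style API, written here as plain Java loops:

```java
public class Sobel {
    // Whole-line primitives (illustrative stand-ins for a native vector API).
    static int[] add(int[] a, int[] b) {
        int[] r = new int[a.length];
        for (int i = 0; i < a.length; i++) r[i] = a[i] + b[i];
        return r;
    }
    static int[] sub(int[] a, int[] b) {
        int[] r = new int[a.length];
        for (int i = 0; i < a.length; i++) r[i] = a[i] - b[i];
        return r;
    }
    // Shift a line by d pixels, clamping at the borders.
    static int[] shift(int[] a, int d) {
        int[] r = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            int j = Math.min(Math.max(i + d, 0), a.length - 1);
            r[i] = a[j];
        }
        return r;
    }
    // Vertical Sobel response Gy for the line between 'above' and 'below':
    // Gy = (1 2 1)*below - (1 2 1)*above, built from three line operations each.
    public static int[] sobelGy(int[] above, int[] below) {
        int[] top = add(add(shift(above, -1), shift(above, 1)), add(above, above));
        int[] bot = add(add(shift(below, -1), shift(below, 1)), add(below, below));
        return sub(bot, top);
    }
}
```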

Having support for a basic set of vector operators may replace a
(likely more restrictive) image processing kernel, as well as avoid
the alternative of developing a dedicated compiler for a Java language
subset (a balancing act with a possible future generic JIT compiler).
A first generation implementation - for an early performance boost to
Android - can simply be realized by implementing the array operators
as for-loops in C code, generating native code from that and linking
the object code through JNI (all under the hood, such that the Android
application developer never sees any JNI stuff!). For computational
work in media and signal processing this will likely give an order of
magnitude speed improvement over the Dalvik interpreter, and/or
improve battery life because far fewer CPU cycles are spent on a given
task. The memory footprint will remain small too because it involves
only a small amount of code underneath the API. Later speed
optimizations may use platform dependent hand-coded assembly and
hardware acceleration.

Any future JIT compiler support in Android would be complementary, by
improving efficiency in higher level processing. The vector operators
would not become obsolete because a JIT compiler would in all
likelihood never reach the efficiency for low level processing
obtainable through hand-crafted or even hardware accelerated
implementations of the small set of vector operators under the
proposed media and signal processing API.

Of course it remains the responsibility of the Android application
programmer to make good use of the vector operators to replace the
most costly Android for-loops and conditional branches.

Regards

gjs

Feb 12, 2009, 5:59:20 PM
to Android Developers
Hi David,

I too would appreciate some of these suggested operations for image
processing, particularly the arraycopy, xor and shift operations,
primarily for image edge detection. I just hope that you can access
multiple full-width 'scan lines' in a random-I/O fashion allowing
backtracking, not just a single pass over the raw image data - which
is probably obvious, but I will state it anyway.

On a different note, could you also consider providing the ability to
add more sensor details to the EXIF data for JPEG camera images?
Camera name, picture orientation, and location (NET/GPS lat, lon, alt,
time of day, etc.) is a good first step, but I'd also like provision
for -

GPS - bearing, speed & accuracy
Orientation - azimuth, pitch, roll
Acceleration - x, y, z
Magnetic field - x, y, z

- where these sensors are available.

As well as some of the standard camera metadata - shutter speed,
aperture, ISO speed, lens, zoom & flash settings, etc. (where
available).

Ideally I would like the ability to add additional EXIF data to any
JPEG image I create, not just camera images, and to preserve (extract
and re-inject) EXIF data across JPEG edits.

Some of this may conflict with existing EXIF 'standards' and
limitations (EXIF data must fit into a single 64KB segment for JPEG
images, etc.), so an XMP segment may be more appropriate for some of
this?

My vision (pun intended) is that Panoramio, Picasa, Google Maps and
others could put some of these additional camera image attributes to
good use.

Thanks for listening.

Regards





Dave Sparks

Feb 13, 2009, 12:01:17 PM
to Android Developers
Some good suggestions.

Please write up a feature request in the bug tracker. Otherwise I'll
probably never remember how to find this thread and we'll lose track
of these good ideas.

Just in terms of expectations, advanced features like this are
probably several quarters out. Right now, we're working on improving
the common use cases. However, if there's an opportunity to implement
a feature in such a way that it enables more advanced use cases, I am
definitely in favor of spending the extra time on it, even if we
aren't able to take full advantage of it right away.

gjs

Feb 16, 2009, 1:08:41 AM
to Android Developers, garyjam...@gmail.com
Hi David,

OK done, see http://code.google.com/p/android/issues/detail?id=2022

Many thanks !

regards