How is video display size and orientation set in Android?


jasperr

Dec 14, 2008, 9:58:12 PM
to android-platform
I'm a newbie to Android and want to explore the video display flow
under Android.

I browsed the video surface output in opencore and found that it
cannot get the full video display info such as coordinates and
orientation. So it just allocates buffers from the pmem heap according
to the original video size and posts the buffers out of opencore (to a
layer buffer module).

So my questions are:
(1) Where are the video display size and orientation set?

(2) Which module does the scaling and rotation? Is it done in hardware
or software?

(3) Can anybody give a further explanation of how the surface flinger
and layer buffer work?

(4) Is there any way to let the framework base libs (surface flinger
and layer buffer) output some log info like opencore does?
It would be very helpful for me when tracing the code.

Many thanks
Jasperr

Dave Sparks

Dec 15, 2008, 11:10:24 AM
to android-platform
The video display size comes from the surface allocated by the
application that creates the MediaPlayer and is passed as an ISurface.
SurfaceFlinger is notified by the Window Manager of any changes in
position, size, or orientation.
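Roughly, the handoff looks like the stand-in sketch below. These are
invented illustrative types, not the real ISurface/MediaPlayer classes;
the point is only that the target size and orientation come from the
surface the app created, not from the decoded video.

// Hypothetical stand-in types; the real classes are ISurface,
// MediaPlayer and SurfaceFlinger in the platform tree.
#include <cstdio>
#include <memory>

struct VideoSurface {                 // plays the role of ISurface
    int width, height, orientation;   // kept up to date by the Window Manager
};

class Player {                        // plays the role of MediaPlayer
public:
    // The app creates the surface and passes it in; the player never
    // decides the display size or orientation itself.
    void setVideoSurface(std::shared_ptr<VideoSurface> s) { surface_ = std::move(s); }

    void showFrame(int srcW, int srcH) const {
        // The scaling/rotation target comes from the surface, i.e. from
        // the app and the Window Manager, not from the video dimensions.
        std::printf("scale %dx%d -> %dx%d, rotate %d degrees\n",
                    srcW, srcH, surface_->width, surface_->height,
                    surface_->orientation);
    }
private:
    std::shared_ptr<VideoSurface> surface_;
};

int main() {
    auto s = std::make_shared<VideoSurface>(VideoSurface{480, 320, 90});
    Player p;
    p.setVideoSurface(s);   // app hands its surface to the player
    p.showFrame(640, 360);  // decoded frame is fitted to the surface
}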

SurfaceFlinger does the scaling and rotation. In the case of the G1,
this allows it to take full advantage of the display processor, which
also does color conversion.

pmem is the driver that allocates physically contiguous memory for
devices like the display processor, DSP, and camera. For security, we
don't pass around physical addresses in user space. Instead we pass
around a file descriptor that is backed by the physical memory and can
be mapped into the user process space as virtual memory. A kernel
driver can then map the file descriptor back to physical addresses
when needed.
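The user-space side of that looks roughly like the POSIX sketch below:
open the pmem device, mmap the fd to get a virtual mapping, and hand
the fd (plus an offset) to other processes instead of a physical
address. The /dev/pmem node name is an assumption here and varies by
device, so treat this as a sketch rather than the exact driver API.

// Minimal sketch of how a process uses a pmem-style fd: the physical
// address never appears in user space, only the fd and an offset.
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    const size_t kHeapSize = 4 * 1024 * 1024;   // enough for a few video frames

    // Device node name is an assumption; real devices use names like
    // /dev/pmem_adsp for particular heaps.
    int fd = open("/dev/pmem", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    // Map the physically contiguous region into this process as
    // ordinary virtual memory.
    void* base = mmap(nullptr, kHeapSize, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    // A decoder would write frames here, then hand (fd, offset) to
    // SurfaceFlinger; a kernel driver maps the fd back to physical
    // addresses for the display processor.
    std::printf("mapped %zu bytes at %p, fd=%d\n", kHeapSize, base, fd);

    munmap(base, kHeapSize);
    close(fd);
    return 0;
}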

When the video MIO gets a new frame to display, it passes the fd/
offset to SurfaceFlinger, which in turn programs the display
processor to scale, position, rotate, and color convert the frame.
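Something like the loop below, with stand-in names; the real path of
that era goes through the ISurface buffer registration and post calls,
which I'm paraphrasing here rather than quoting.

// Illustrative only: a video MIO cycling through frame slots in a
// shared heap and posting each one to the compositor by (fd, offset).
#include <cstddef>
#include <cstdio>

struct FrameHeap {       // stand-in for the registered pmem-backed heap
    int fd;              // fd shared with SurfaceFlinger at registration time
    size_t frameBytes;   // bytes per decoded frame
    int frameCount;      // number of frame slots in the heap
};

// Stand-in for the per-frame post: only the offset needs to cross the
// process boundary, since the compositor already holds the fd.
void postBuffer(const FrameHeap& heap, size_t offset) {
    std::printf("post fd=%d offset=%zu\n", heap.fd, offset);
}

int main() {
    FrameHeap heap{/*fd=*/42, /*frameBytes=*/460800, /*frameCount=*/4};

    // The decoder writes each new frame into the next slot, then tells
    // the compositor which slot to scale, position, rotate and convert.
    for (int frame = 0; frame < 8; ++frame) {
        size_t offset = (frame % heap.frameCount) * heap.frameBytes;
        postBuffer(heap, offset);
    }
}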

If there is any logging in SurfaceFlinger, it can probably be turned
on by setting the preprocessor symbol LOG_NDEBUG to 0. I'm not sure
that it has any. If not, you can add your own logging using the LOGV
macro and using LOG_NDEBUG to enable or disable it.
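The usual pattern in the native framework code is something like this
(standard <utils/Log.h> usage of that release; the tag string here is
just an example):

// LOG_NDEBUG must be defined as 0 *before* including Log.h, otherwise
// the LOGV macro compiles away to nothing.
#define LOG_NDEBUG 0
#define LOG_TAG "SurfaceFlinger"   // example tag; pick your own
#include <utils/Log.h>

static void logPostBuffer(int fd, long offset)
{
    // Shows up in `adb logcat` at the verbose level under the tag above.
    LOGV("postBuffer: fd=%d offset=%ld", fd, offset);
}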

jasperr

Dec 16, 2008, 4:40:39 AM
to android-platform

Thank you very much, Dave!

I saw there is handling of rotation and scaling in the blitter
(BlitHardware.cpp).
Does SurfaceFlinger use this blitter for these operations?

And could you explain further the difference between "surface" and
"layer"?
I saw there are LayerBitmap, LayerBlur, LayerBuffer, etc. It's
the LayerBuffer that receives the output from the video surface MIO
of opencore.

I'm confused about the relationship between these modules. Thanks
again!

Best Regards
Jasperr

Dave Sparks

Dec 16, 2008, 11:51:22 AM
to android-platform
I'm not really familiar with the SurfaceFlinger code - only where the
media framework intersects with it. However, I'll try to answer your
questions:

SurfaceFlinger does use the hardware for scaling, rotation, and color
conversion. BlitHardware.cpp is an abstraction layer for the hardware
blitter.

Surface is the class that the application uses, and Layer is the remote
object in SurfaceFlinger that corresponds to the surface. There is a
process boundary between the app and SurfaceFlinger, and all
communication takes place via the binder interface.

LayerBitmap is the normal drawing surface that apps use. LayerBlur and
LayerBuffer are special layers that have no backing store. LayerBlur
simply blurs everything behind it, which is used to make modal dialogs
stand out from the background. LayerBuffer, as you surmised, is the
"push buffer" layer used for video.