Android and Skia interaction


Manu Bharghav Reddy

Jun 1, 2011, 1:31:27 AM
to android-...@googlegroups.com
hi,
I am new to Android. I was going through the source code to figure out how images are displayed on the screen. As we all know, Canvas is the API responsible for all drawing on the screen, and Skia is the 2D graphics engine that does the graphics part of rendering, decodes images, and much more. But I couldn't find any code where Skia copies the pixel arrays to the frame buffer. I am eager to know which class actually copies the pixel arrays or bitmaps to the frame buffer.

Can anyone tell me where SurfaceFlinger gets involved in the whole process of view rendering?

extrapedestrian

Jun 1, 2011, 6:43:56 AM
to android-platform
hi,

I've been analyzing Android graphics, and this is what I have so far. It's not that simple: there is no single class that copies pixel arrays to the framebuffer.

SurfaceFlinger is responsible for allocating surfaces and drawing Layers. Layers are drawn using OpenGL ES, which can be implemented in hardware or in software (pixelflinger). DisplayHardware.cpp uses FramebufferNativeWindow.cpp to open the framebuffer device. FramebufferNativeWindow allocates the framebuffer memory surface and returns it. DisplayHardware then initializes OpenGL ES and EGL with this native surface. SurfaceFlinger.cpp calls the draw function on each Layer (LayerBase.cpp), and the Layers use OpenGL ES calls to draw; somewhere inside OpenGL ES, the Layer image is drawn to the framebuffer surface as a rectangle made of two triangles.

Skia sits above all this, rendering onto the surface (canvas) it is given, and doesn't concern itself with Layers, flipping, blending, and so on.

Manu Reddy

Jun 7, 2011, 1:37:09 AM
to android-...@googlegroups.com
hi,

thank you for helping me out.

I have one more doubt. Consider an application that just wants to draw an image to the screen (an ImageView). First the ViewRoot gets initialized and a surface gets allocated, which creates a Layer and gets some buffers allocated for drawing the image to the screen. From here on, where does the image actually get decoded? (To the best of my knowledge this happens in Skia; correct me if I am wrong.) Then, once the surface is set, it is handed to SurfaceFlinger, which uses OpenGL to draw it.

About the part where the actual drawing to the surface happens: I think the image is applied as a texture to the view box (two triangles making a rectangle). Can you tell me where exactly this drawing of rectangles happens?

--
You received this message because you are subscribed to the Google Groups "android-platform" group.
To post to this group, send email to android-...@googlegroups.com.
To unsubscribe from this group, send email to android-platfo...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/android-platform?hl=en.




--
Manu Reddy
3rd year, CSE
IIT Ropar,
Punjab

extrapedestrian

Jun 9, 2011, 5:13:51 AM
to android-platform
The image is decoded with Skia (not hardware accelerated) and drawn to a bitmap.

If the device has graphics hardware, pixelflinger won't be involved at all.

Layer.cpp onDraw() calls drawWithOpenGL. LayerBase.cpp drawWithOpenGL calls glDrawArrays (which draws an array of two triangles for one rectangular surface).

glDrawArrays is a standard OpenGL ES API: if you have a hardware GPU, the function is loaded from the hardware OpenGL ES library; if you don't have a GPU, it is loaded from the software OpenGL ES implementation (which uses pixelflinger).

Now, I couldn't find where the actual bit copy is performed here. If you find it, please tell me.

I think every canvas has one Layer, like layers in Photoshop. If Skia wants to draw a transparent image, it doesn't know what is under the image in the framebuffer, so it draws to the Layer with the transparent pixels left empty. Then SurfaceFlinger draws the Layers so that OpenGL blends them correctly.


Zach Pfeffer

Jun 9, 2011, 6:05:14 PM
to android-...@googlegroups.com

Is it smart about which images it sends down to the hardware and which it decodes in software, e.g. handling small images in software and big images in hardware?

Dianne Hackborn

Jun 9, 2011, 6:49:13 PM
to android-...@googlegroups.com
That question doesn't really make sense, because that is not how it works.  SurfaceFlinger makes a surface for the window.  SurfaceFlinger uses hardware for compositing those *entire* window surfaces to the screen when the contents of a window change.  All drawing done *inside* the window/surface (that is, the entire draw traversal of a single view hierarchy) is software rendering into the surface.

In Android 3.0 an application can turn on hardware accelerated drawing, in which case the surface the system creates for a window is an OpenGL drawing surface instead of a software frame buffer.  Here all drawing to the surface via the view hierarchy goes through OpenGL, which presumably is hardware accelerated.  This is still composited through hardware to the screen by SurfaceFlinger when a window on the display changes.

You also really don't want to switch between hardware and software drawing in a particular drawing context.  This will overall be a net loss: you can't have the surface buffer in whatever special layout the hardware works best at (for example Tegra 2 likes to use some non-linear layout), you need to go through a very expensive context switch each time you change (have to flush the GL drawing buffer and wait for those commands to complete before you can start drawing in software), and things like blending can be much more expensive since an OpenGL surface's memory may be a *lot* more expensive to read back than normal RAM.

Anyway SurfaceFlinger just doesn't let you mix software and hardware drawing for a particular frame of a surface.  You can either get a raw frame buffer in RAM to do software drawing in to, or an OpenGL context for hardware, not both.






--
Dianne Hackborn
Android framework engineer
hac...@android.com

Note: please don't send private questions to me, as I don't have time to provide private support, and so won't reply to such e-mails.  All such questions should be posted on public forums, where I and others can see and answer them.

manu

Jun 10, 2011, 2:02:38 AM
to android-...@googlegroups.com
hi,

Is there any good source that explains the details of how the drawing into the window or surface is done? I mean, where does the drawing into the surface actually happen (I know it happens via the Canvas), and who tells the canvas where to draw?

I have one more question: how come no one has tried changing the Android 2.3 source code to enable hardware acceleration? Big companies with good research capability are using Android (though obviously not as good as Google's). It would require modifying the Canvas API to render into the surface using OpenGL. I agree it's not as simple as it sounds, but isn't it still worth a try?

Dianne Hackborn

Jun 10, 2011, 2:37:43 AM
to android-...@googlegroups.com
On Thu, Jun 9, 2011 at 11:02 PM, manu <manubh...@gmail.com> wrote:
> is there any good source which explains the details about how the drawing in the window or surface is done. I mean where actually the drawing into the surface happens( I know it happens in the canvas), who is telling where the canvas should draw into.?

Mostly the source.  Start with Canvas and follow it.  Pre-3.0, this is all just Skia code; it's just software drawing.  I don't know what you are really looking for beyond that.
 
> i have one more question, how cum no one tried changing the android source code of 2.3  to enable hardware acceleration. because big companies with good research capability are using android( obviously not as good as Google). it would require modifying the canvas api to use openGL. to render into the surface. (I agree its not as simple as it sounds)but still its worth the try??

Worth a try?  Worth to who?

Romain Guy

Jun 10, 2011, 2:40:57 AM
to android-...@googlegroups.com
>> i have one more question, how cum no one tried changing the android source
>> code of 2.3  to enable hardware acceleration. because big companies with
>> good research capability are using android( obviously not as good
>> as Google). it would require modifying the canvas api to use openGL. to
>> render into the surface. (I agree its not as simple as it sounds)but still
>> its worth the try??
>
> Worth a try?  Worth to who?


Doing so on Android < 3.0 would be a bad idea for compatibility.

--
Romain Guy
Android framework engineer
roma...@android.com

Note: please don't send private questions to me, as I don't have time to provide private support.  All such questions should be posted on public forums, where I and others can see and answer them.

Manu Reddy

Jun 10, 2011, 3:14:56 AM
to android-...@googlegroups.com
hi,

OK, all the companies that use your OS on their handsets can benefit if their UIs are more responsive, faster, and so on. People are shipping phones with good graphics hardware, and if you have the hardware, why not go the extra mile and provide the software functionality that takes advantage of it?

I am new to Android; can you tell me the nature of those compatibility issues, or point me to sources where I can find them? What I am trying to do is this: I have hardware that supports PVRTC decompression (PowerVR SGX540) in the hardware itself. Android itself doesn't support PVRTC, so I am trying to implement this, and if I get lucky I might get some good performance.

PS: @Romain Guy, I watched your keynote videos at Google I/O and at the Devoxx conference. They were really helpful in understanding the graphics subsystem of Android, as going through the code was slightly painful. Is there any place where I can find more detailed documentation of the architecture of the Android graphics subsystem?






Manu Reddy

Jun 10, 2011, 6:40:51 AM
to android-...@googlegroups.com
hi, can anyone tell me what compatibility issues arise when we try to render the views in hardware using OpenGL, instead of the software rendering currently employed in versions earlier than 3.0?

崔凯

Jun 10, 2011, 3:53:56 AM
to android-...@googlegroups.com

Thanks,
Kai

Dianne Hackborn

Jun 10, 2011, 12:47:14 PM
to android-...@googlegroups.com
Well, for starters: you may have noticed that part of this in 3.0 was a new API for applications to *opt in* to the feature.  Not even the implementation in 3.0 is able to do it in a way that maintains compatibility with all existing applications.  You can watch Romain's presentation and read the documentation about what developers should be aware of when opting in to hardware acceleration, for a variety of compatibility issues.


manu

Jun 13, 2011, 6:13:26 AM
to android-...@googlegroups.com
How many OpenGL contexts can be in operation at a single time? And how is the context related to the window system?

Dianne Hackborn

Jun 13, 2011, 3:39:46 PM
to android-...@googlegroups.com
It depends on the hardware and its drivers.


Nishanth Peethambaran

Jul 28, 2011, 3:25:52 AM
to android-...@googlegroups.com
Hi,

I am relatively new to Android. I have been looking into the 2.3 source for the last two months, mostly in frameworks/base (the SurfaceFlinger code) and on the kernel side.

My understanding goes as below.
From app to final display, you can broadly split the path into two phases: the app composing into layer buffers, and SurfaceFlinger composing the layer buffers into the framebuffer.

The first phase involves the app creating a layer with SurfaceFlinger, which could be a Layer (Layer.cpp) or a LayerBuffer (LayerBuffer.cpp), ignoring LayerDim and LayerBlur. A Layer internally allocates a surface that is bidirectionally linked to the layer, and passes the surface back to the app, which then keeps using the surface to do its drawing. My guess here is that LayerBuffer is a push mechanism used by apps like camera and media playback: they allocate the buffers, register them with the surface, and then use the surface's 'post' function to push the buffers to SurfaceFlinger. A Layer normally works like a pull mechanism (guessing so); it is similar to LayerBuffer concept-wise but doesn't use the post mechanism. Apps like games and normal 2D menus use this interface to SurfaceFlinger.

Apps like camera and media playback get frames directly from a media decoder (s/w or h/w) or the camera sensor, do a color conversion if required, and pass the frame to the LayerBuffer through post. Each app has two buffers (at least in the current code) with SurfaceFlinger, for double buffering.

Other apps also share two buffers with SurfaceFlinger. Drawing into such a buffer can happen using Skia or OpenGL, depending on what needs to be drawn. (I still need to figure out how SurfaceFlinger is signalled that a surface has been updated.) Here platform providers could use a hardware engine to speed up the OpenGL or Skia library.

In short, every app that needs a window creates a layer with SurfaceFlinger, which provides a surface; two buffers are shared (over the Binder IPC interface), and the app can use OpenGL or Skia to draw into the surface.

SurfaceFlinger runs in a loop, threadLoop (called repeatedly; see Threads.cpp), which identifies the layers that have been updated by their apps (I still need to figure out how this is done). Then, based on the drawing order and which layers have updates, SurfaceFlinger identifies the layers/surfaces that have to be drawn and composes them into the framebuffer using OpenGL. The rectangle within the framebuffer that was updated is passed to the framebuffer driver, followed by a framebuffer flip. The framebuffer driver has a HAL interface and is finally encapsulated in gralloc and DisplayHardware. SurfaceFlinger passes the dirty rectangle to the framebuffer so that the driver needs to copy only this region, if the framebuffer supports partial updates.

Please do correct me if my understanding is wrong.

Note:
I am not looking at exactly vanilla Android code; it is platform-customized code.

- Nishanth Peethambaran

shakti patnaik

Jul 29, 2011, 3:07:35 AM
to android-...@googlegroups.com
Hi,

Thanks for sharing your knowledge.

I want to know more about Skia: how Skia stores data into the buffer, and how the Skia library interacts with SurfaceFlinger after it has stored the data into the buffer.

Thanks and regards,

Shakti