Mobile Gfx Performance


Bas Schouten

Sep 8, 2011, 11:42:19 AM
to dev-pl...@lists.mozilla.org
Hi all,

As the mobile browser becomes an increasingly important target, it has become clear over recent months that graphics often comes up as a performance hotspot in the mobile Android browser. OpenGL accelerated layers is one of the projects intended to address part of this; however, it has become increasingly clear that significant improvements in content drawing performance are needed in order to match or beat the competition.

With the Azure project coming along, we now have a new API that can map directly to a variety of drawing APIs, which will effectively allow us to implement different backend solutions for content drawing with relative ease. From this perspective we're planning to address the issue of content drawing performance, and we'd like to open a discussion on how to plan and prioritize the possible approaches. To elaborate a little, there are three basic options that have been discussed:

1. A Skia backend for Azure

This option revolves around implementing the Azure API on top of Skia, the rendering system used by Google Chrome on Windows, Linux and Android.

Complexity: Low
Failure probability: <5%
Required investment: Estimated 2 months for one engineer

Advantages:
- Low risk, low investment
- A straightforward way to reach parity with at least part of the competition, and to stay there
- Good to have as a benchmark for desktop as well

Disadvantages:
- External dependency on the Skia project
- Essentially unlikely ever to bring our performance to levels higher than the competition's

2. The 'Emerald' project - our own Hardware Accelerated 2D rendering

This option is a project that we have been planning (and doing some work on) for a little longer. The long-term goal is to create our own GPU-accelerated 2D graphics rendering system (something like Direct2D, or Skia's work-in-progress GPU backend).

Complexity: High
Failure probability estimate: 40%
Required investment: Estimated 6-12 months for 2-3 engineers

Advantages:
- The approach will allow us to attempt to truly 'innovate' in the area of graphics performance
- Consistent, cross platform accelerated rendering (OGL/DirectX based)
- An accelerated rendering system where we have complete control (unlike with Direct2D)

Disadvantages:
- A chance that we simply fail to deliver something that performs considerably better than third-party renderers like Direct2D or Skia
- Large resource investment required

3. A 'quick and dirty' software rasterizer for Azure

This option involves creating an 'as simple as possible' software scanline rasterizer, optimized for mobile devices, that strikes the right 'performance vs. quality' trade-off for the mobile market (considering screen sizes, usage patterns, etc.).

Complexity: Medium
Failure probability estimate: 30%
Required investment: Estimated 3-6 months for two engineers

Advantages:
- Not too complicated
- Potentially could deliver superior performance
- Software rasterization of this kind is a well-documented problem

Disadvantages:
- Few certainties - we may end up with something no faster than what we already have
- Considerable amount of time involved that would have to be taken away from other projects
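For readers unfamiliar with option 3, the core of a scanline rasterizer is small. Here's a minimal illustrative sketch in Python (a real implementation would be C++ with fixed-point math, SIMD, and edge tables; this just shows the algorithm): for each row, find where polygon edges cross the pixel-center height, sort the crossings, and fill between pairs.

```python
import math

def fill_polygon(points, width, height):
    """Fill a polygon into a width x height coverage grid.

    One scanline at a time: find where polygon edges cross the
    pixel-center height, sort the crossings, and fill between
    pairs (even-odd rule). No antialiasing -- quality is
    deliberately traded for speed, as option 3 proposes.
    """
    grid = [[0] * width for _ in range(height)]
    n = len(points)
    for y in range(height):
        yc = y + 0.5  # sample at the pixel center
        xs = []
        for i in range(n):
            x0, y0 = points[i]
            x1, y1 = points[(i + 1) % n]
            # Half-open interval test so shared vertices count once.
            if (y0 <= yc < y1) or (y1 <= yc < y0):
                t = (yc - y0) / (y1 - y0)
                xs.append(x0 + t * (x1 - x0))
        xs.sort()
        # Fill between pairs of crossings.
        for j in range(0, len(xs) - 1, 2):
            left = max(0, math.ceil(xs[j] - 0.5))
            right = min(width, math.ceil(xs[j + 1] - 0.5))
            for x in range(left, right):
                grid[y][x] = 1
    return grid
```

The 'well-documented problem' point above refers to exactly this family of algorithms; most of the engineering effort goes into the quality/speed refinements (antialiasing, clipping, incremental edge stepping) layered on top of this loop.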

Any input or ideas would be appreciated and will help us pick the best strategies to get us into the best possible situation in the next 12 months.

Thanks on behalf of the Gfx team!
Bas

Chris Lord

Sep 8, 2011, 12:26:59 PM
to dev-pl...@lists.mozilla.org
On Thu 08 Sep 2011 16:42:19 BST, Bas Schouten wrote:
> Hi all,
>
> As the mobile browser becomes an increasingly important target, it has become clear over recent months that graphics often comes up as a performance hotspot in the mobile Android browser. OpenGL accelerated layers is one of the projects intended to address part of this; however, it has become increasingly clear that significant improvements in content drawing performance are needed in order to match or beat the competition.
>
> With the Azure project coming along, we now have a new API that can map directly to a variety of drawing APIs, which will effectively allow us to implement different backend solutions for content drawing with relative ease. From this perspective we're planning to address the issue of content drawing performance, and we'd like to open a discussion on how to plan and prioritize the possible approaches. To elaborate a little, there are three basic options that have been discussed:
>
> 1. A Skia backend for Azure
>
> This option revolves around implementing the Azure API on top of Skia, the rendering system used by Google Chrome on Windows, Linux and Android.

I think that option (2) would be the most beneficial for the project
and the free-software community in general, but it is indeed the
hardest... I wonder how we'll get around issues with e10s when using GL
acceleration on the content side too, on Android at least.

I like the idea of (3), but if we embarked down that path, it'd
probably be worth doing it as a cairo backend? A colleague from my
previous job is working on this at the moment (a faster,
deferred-rendering based software rasteriser for Cairo) and getting
promising results. It might be worth contacting him about it before
going down that path (pip...@gimp.org)

--Chris

Robert O'Callahan

Sep 8, 2011, 8:36:47 PM
to Chris Lord, dev-pl...@lists.mozilla.org
On Fri, Sep 9, 2011 at 4:26 AM, Chris Lord <cl...@mozilla.com> wrote:

> I think that option (2) would be the most beneficial for the project and
> the free-software community in general, but it is indeed the hardest... I
> wonder how we'll get around issues with e10s when using GL acceleration on
> the content side too, on Android at least.
>

We can either remote the GL calls, or remote Azure calls. I think remoting
Azure calls would make the most sense.
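To make the trade-off concrete, here is a hypothetical sketch of what remoting at the Azure level could look like (all names invented for illustration; this is not the real Azure API): the content process records high-level draw commands into a buffer, which is shipped across the process boundary and replayed against the real backend on the other side. Remoting at this level means far fewer, higher-level messages than remoting individual GL calls would.

```python
import json

class RecordingDrawTarget:
    """Content-process side: records draw calls instead of executing
    them. Hypothetical stand-in for an Azure DrawTarget; the method
    names here are invented for the sketch."""

    def __init__(self):
        self.commands = []

    def fill_rect(self, x, y, w, h, color):
        self.commands.append(("fill_rect", [x, y, w, h, color]))

    def stroke_line(self, x0, y0, x1, y1, color):
        self.commands.append(("stroke_line", [x0, y0, x1, y1, color]))

    def serialize(self):
        # In practice this would be shared memory or IPC messages,
        # not JSON; JSON just keeps the sketch self-contained.
        return json.dumps(self.commands)

def replay(serialized, target):
    """Compositor-process side: replay the recorded commands
    against a real (e.g. GL-backed) draw target."""
    for name, args in json.loads(serialized):
        getattr(target, name)(*args)
```

Usage would be: the content process draws into a `RecordingDrawTarget`, sends the serialized buffer, and the compositor calls `replay` with its GPU-backed target.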

Rob
--
"If we claim to be without sin, we deceive ourselves and the truth is not in
us. If we confess our sins, he is faithful and just and will forgive us our
sins and purify us from all unrighteousness. If we claim we have not sinned,
we make him out to be a liar and his word is not in us." [1 John 1:8-10]

Robert O'Callahan

Sep 8, 2011, 8:38:17 PM
to Bas Schouten, dev-pl...@lists.mozilla.org
On Fri, Sep 9, 2011 at 3:42 AM, Bas Schouten <bsch...@mozilla.com> wrote:

> As the mobile browser becomes an increasingly important target, it has
> become clear over the recent months that graphics often comes up as a
> performance hotspot on the mobile android browser. OpenGL accelerated layers
> is one of the projects which intends to address part of this, however it has
> become increasingly clear that significant improvements are needed in
> content drawing performance in order to get equal to or better performance
> than the competition.
>

Part of this means getting gfxContext running on top of Azure, and
hopefully, eventually, modifying gfxContext/nsIRenderingContext-using code
to use Azure directly. That's a lot of work in itself, although it's
relatively low risk and the latter part can be done incrementally.

Benoit Jacob

Sep 9, 2011, 8:27:02 AM
to Bas Schouten, dev-pl...@lists.mozilla.org
Can't we have it both ways? First do the skia backend, which as you
said could be done quickly and gives immediate benefits; then start
working on option 2 (Emerald) or 3.

A couple of questions about Skia:
- can it target D3D? D3D10? or only GL. (And then D3D via ANGLE).
- would it be a reasonable "option 1.5" to contribute directly to
Skia, in the same way as we've been doing for Cairo?
- is there a significant overhead in mapping the Azure API onto Skia?

Another thing:

2011/9/8 Bas Schouten <bsch...@mozilla.com>:
> 2. The 'Emerald' project - our own Hardware Accelerated 2D rendering
>
> Advantages:
> - The approach will allow us to attempt to truly 'innovate' in the area of graphics performance

I would elaborate a bit on that as it's really important. 'Innovate'
here is important not just for the product, but also for the
engineers. Obviously working on Emerald is more interesting work than
depending on Google to develop Skia for us. It's a good thing for us
that interesting work is done at Mozilla. So if we were to rely on
Skia entirely, I hope we could contribute to its improvement.
Otherwise it'd be kind of sad and would be an argument to add in favor
of doing Emerald.

Cheers
Benoit

Bas Schouten

Sep 9, 2011, 1:30:11 PM
to Benoit Jacob, dev-pl...@lists.mozilla.org
Hi, I'll try to answer some questions here, as best I can, according to my personal views.

----- Original Message -----
> From: "Benoit Jacob" <jacob.b...@gmail.com>
> To: "Bas Schouten" <bsch...@mozilla.com>
> Cc: dev-pl...@lists.mozilla.org
> Sent: Friday, September 9, 2011 12:27:02 PM
> Subject: Re: Mobile Gfx Performance
> Can't we have it both ways? First do the skia backend, which as you
> said could be done quickly and gives immediate benefits; then start
> working on option 2 (Emerald) or 3.

Yes, this was in no way meant as an 'exclusive or' list.

>
> A couple of questions about Skia:
> - can it target D3D? D3D10? or only GL. (And then D3D via ANGLE).

I believe the latter, but I might be wrong. Although I wouldn't be surprised if that changed for performance reasons; at some point they'll likely want to be competitive with Direct2D on Windows.

> - would it be a reasonable "option 1.5" to contribute directly to
> Skia, in the same way as we've been doing for Cairo?

Of course this is an option, and I do believe we should contribute to Skia where we can, even if we also pursue options 2/3. So essentially this 'is' option 1. There are a number of reasons why I wouldn't want to invest in Skia the way we would in Emerald:

- Skia is controlled by Google. We do not have control over its priorities, API, etc. One reason for the Azure project is to gain this control.
- I don't think Skia is the 'optimal' API for a hardware accelerated 2D renderer. For Azure we had an opportunity to start from scratch: we took a close look at different APIs like Direct2D, Skia and Cairo, and made choices for Azure that make it (what we believe to be) the optimal API for the future.
- It doesn't give us an ability to innovate and do something 'better' than the competition. Essentially all the advantages and disadvantages listed in Option 1 still apply.

Similarly, of course, for JS we could use V8, and instead of Gecko use WebKit. But in the end that's not what's going to bring innovation and competition to the browser market. I think healthy competition in this field will in the end make all browsers better, just like IE9 pushed us and other vendors to ramp up graphics performance through Direct2D. Obviously this is just my personal opinion.

> - is there a significant overhead in mapping the Azure API onto Skia?

Significant is hard to define here; personally, I'm convinced you could devise a benchmark that exposes the mapping overhead just by pushing on the painful points of the mapping. I think and hope the overhead will not be large for 'general' rendering, but it's hard to be sure as long as the backend isn't fully implemented yet.
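As a rough illustration of how such a microbenchmark might be structured (purely a sketch with invented names; the real Azure and Skia types are C++): put a 'backend' call behind an adapter that must translate its arguments on every call, then time both paths on a call-heavy workload. The translation in the adapter plays the role of the Azure-to-Skia mapping cost.

```python
import timeit

class Backend:
    """Stands in for the underlying renderer (e.g. Skia)."""
    def __init__(self):
        self.total = 0.0

    def draw(self, x0, y0, x1, y1):
        self.total += (x1 - x0) + (y1 - y0)

class Adapter:
    """Stands in for the mapping layer: it must convert its own
    representation (a rect tuple) into the backend's arguments on
    every call -- that per-call conversion is the overhead being
    measured."""
    def __init__(self, backend):
        self.backend = backend

    def draw_rect(self, rect):
        x, y, w, h = rect  # per-call translation cost
        self.backend.draw(x, y, x + w, y + h)

backend = Backend()
adapter = Adapter(backend)

direct = timeit.timeit(lambda: backend.draw(0, 0, 4, 4), number=100_000)
mapped = timeit.timeit(lambda: adapter.draw_rect((0, 0, 4, 4)), number=100_000)
# Note: a call-heavy microbenchmark like this deliberately
# exaggerates the mapping cost; 'general' rendering does far more
# work per call, which dilutes the same overhead.
```

This is exactly the 'pushing on the painful points' caveat above: the measured ratio depends entirely on how little work each call does.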

Bas

David Mandelin

Sep 9, 2011, 2:36:24 PM
to dev-pl...@lists.mozilla.org, Bas Schouten
On 9/9/2011 10:30 AM, Bas Schouten wrote:
> Hi, I'll try to answer some questions here, as best I can, according to my personal views.
>
> ----- Original Message -----
>> From: "Benoit Jacob" <jacob.b...@gmail.com>
>> To: "Bas Schouten" <bsch...@mozilla.com>
>> Cc: dev-pl...@lists.mozilla.org
>> Sent: Friday, September 9, 2011 12:27:02 PM
>> Subject: Re: Mobile Gfx Performance
>> Can't we have it both ways? First do the skia backend, which as you
>> said could be done quickly and gives immediate benefits; then start
>> working on option 2 (Emerald) or 3.
>
> Yes, this was in no way meant as an 'exclusive or' list.

To me, #2 is most appealing, then #3, then #1, because that's the order
from biggest win to smallest. And that is also the order from longest
time-to-completion to shortest. So I could imagine us just trying #2,
then if that doesn't work, #3, then if that doesn't work, #1. But if we
just did the dumb thing of carrying each one all the way through, that's
up to 20 months all told, which seems too long.

- Can we 'fail fast' on either #2 or #3? Say, for Emerald, how long
would it take to prove/disprove whether it can beat Skia or D2D?

- Can we do experiments that might take a week or a few weeks to get
more information about which is most likely to turn out best?

>> A couple of questions about Skia:
>> - can it target D3D? D3D10? or only GL. (And then D3D via ANGLE).
>
> I believe the latter, but I might be wrong. Although I wouldn't be surprised if that changed for performance reasons; at some point they'll likely want to be competitive with Direct2D on Windows.
>
>> - would it be a reasonable "option 1.5" to contribute directly to
>> Skia, in the same way as we've been doing for Cairo?
>
> Of course this is an option, and I do believe we should contribute to Skia where we can, even if we also pursue options 2/3. So essentially this 'is' option 1. There are a number of reasons why I wouldn't want to invest in Skia the way we would in Emerald:
>
> - Skia is controlled by Google. We do not have control over its priorities, API, etc. One reason for the Azure project is to gain this control.
> - I don't think Skia is the 'optimal' API for a hardware accelerated 2D renderer. For Azure we had an opportunity to start from scratch: we took a close look at different APIs like Direct2D, Skia and Cairo, and made choices for Azure that make it (what we believe to be) the optimal API for the future.
> - It doesn't give us an ability to innovate and do something 'better' than the competition. Essentially all the advantages and disadvantages listed in Option 1 still apply.
>
> Similarly, of course, for JS we could use V8, and instead of Gecko use WebKit. But in the end that's not what's going to bring innovation and competition to the browser market. I think healthy competition in this field will in the end make all browsers better, just like IE9 pushed us and other vendors to ramp up graphics performance through Direct2D. Obviously this is just my personal opinion.

All good points. We gained a little experience with this last year in JS.

- We borrowed the assembler from Nitro (JSC (WebKit)) to use in JM.
We're very happy with that decision--it totally did the job and saved us
a lot of work. We're currently planning to use it in IM, although we
will probably change it more this time.

- To get regex compilation, instead of building our own, we took
the Yarr regex engine from JSC. I think that was also the right thing to
do, but it hasn't brought about the same level of happiness as the
assembler. The good: Personally, I've not yet been convinced that
compiling JS regexes is all that important in the real world, so I'm not
excited about having someone go off and spend the time to build a whole
regex compiler and advance the state of the art there. Also, we get to
share the work of finding and fixing crashes and security bugs with
Apple, which is nice. The less good: it's a pretty big, complicated code
base, and no one here is terribly interested in living inside Yarr and
becoming a big-time contributor. So, we're kind of dependent on Apple there.

Mapping this back to your points:

- We didn't care to innovate in assemblers (because there's not much to
do) or regex compilers (because we don't see great value to the web), so
we didn't need or want to build our own. Mobile graphics seems very
different: an area ripe for innovation.

- We did need to change the assembler APIs somewhat for JM. We didn't
try to upstream those changes because they were fairly small and we
didn't expect Apple to be interested. We will probably want to change
them more for IM. We might want to upstream those, but if Apple doesn't
want them, we're OK with that. It seems like you have a similar
situation with Skia, where your desire to change the APIs is probably
stronger than your desire to upstream everything.

- With Yarr, having a code base largely controlled by Apple and without
much involvement from our engineers is definitely limiting, but livable,
because awesome regex technology isn't a priority for us. With the
assembler, not so much, just because assemblers are such a simple,
stable technology.

>> - is there a significant overhead in mapping the Azure API onto Skia?
>
> Significant is hard to define here; personally, I'm convinced you could devise a benchmark that exposes the mapping overhead just by pushing on the painful points of the mapping. I think and hope the overhead will not be large for 'general' rendering, but it's hard to be sure as long as the backend isn't fully implemented yet.
>
> Bas

Dave

Robert O'Callahan

Sep 10, 2011, 9:46:21 AM
to David Mandelin, Bas Schouten, dev-pl...@lists.mozilla.org
On Sat, Sep 10, 2011 at 6:36 AM, David Mandelin <dman...@mozilla.com> wrote:

> - Can we 'fail fast' on either #2 or #3? Say, for Emerald, how long
> would it take to prove/disprove whether it can beat Skia or D2D?
>
> - Can we do experiments that might take a week or a few weeks to get
> more information about which is most likely to turn out best?
>

Matt Woodrow already has Azure-Skia running well enough that we can do
meaningful performance comparisons using it for some important <canvas>
workloads, on Skia's CPU-only backend. Integrating Skia's GL backend will
take a bit more work, but I'm sure it won't take Matt long :-).

Prototyping some of Bas's Emerald ideas in GL enough to get good performance
comparisons might not take long either.

What I find difficult about decisions on using third-party code is that they
often depend greatly on the future evolution of that code. It's often easy
to pick holes in third-party code and say "we could do better!" but in the
time it would take us to get to that better state, someone else (or even we
ourselves) could have improved the third-party code past that point. On the
other hand, you can buy into third-party libraries on the expectation that
other people are going to help you fix the limitations you see, and then be
disappointed.

In other words, we can do some simple performance comparisons quickly now,
and we should, but will they still be valid in a year?