
B2G Performance Testing

Dave Hunt

Jan 23, 2013, 5:45:02 AM
to mozilla...@lists.mozilla.org
We now have basic B2G performance tests reporting to
https://datazilla.mozilla.org/b2

There are a few things worth noting:

1. The version number is taken from the device (these are running
against nightly unagi engineering builds with the latest master Gaia
flashed on top). Until recently this was 1.0.0-prerelease, but it now
appears to be 1.0.0.0-prerelease, which is why you will see two charts.

2. The revision identifier for these tests in DataZilla is the git
commit hash for Gaia; however, DataZilla is currently only set up to work
with Mercurial, so any links to the commit are broken. One of the next
steps is to switch this to use GitHub and leverage the API to detect and
report performance regressions.

3. Clicking the summary chart at the top of the results page will
display a larger version of the chart below. In this chart, each point
represents a test run, and clicking data points will display detailed
results for the test run in an additional chart below. If you then click
a data point in this chart, a final chart showing the data submitted for
that application/metric will be displayed.

It's also worth a brief explanation of what these tests do. For now they
simply open the Phone, Messages, and Settings apps and take a
measurement of the time before the first paint event is fired. Each app
is launched 30 times, and these tests are triggered when a new nightly
build is available, or when Gaia master branch is updated.

I encourage and look forward to your questions, feedback, and
suggestions. :)

For those interested, the source code for these tests can be found here:
http://hg.mozilla.org/users/tmielczarek_mozilla.com/b2gperf

Cheers,
Dave Hunt
Automation Development

Dave Hunt

Jan 23, 2013, 9:37:05 AM
to mozilla-...@lists.mozilla.org
Apologies for the broken link. This should be:
https://datazilla.mozilla.org/b2g/

Dietrich Ayala

Jan 23, 2013, 1:17:00 PM
to Dave Hunt, Taras Glek, dev-...@lists.mozilla.org
This is *awesome*, thanks so much.

A couple of questions:

1. Does each datapoint in the graph represent the 30-runs of each app
combined? (e.g.: 30 * NumberOfAppsTested)

2. Is first-paint the right metric to use? Cc'ing Taras Glek for input.

3. Can we get measurements per-app? This will help detect per-app
regressions, as well as communicate performance behavior to partners.

4. Can we add tests with user data? Eg, a common test our partners are
doing is "add 1000 contacts, restart, load Contacts app". Telemetry showed
us in Firefox desktop that testing without user data is of limited value.

5. I'm worried that testing "hot" doesn't give us visibility into the worst
case performance. The pathologically bad performance cases are a lot of
what got Firefox a bad performance rap, so we should ensure we're testing
only cold start, or test it separately maybe?


On Wed, Jan 23, 2013 at 2:45 AM, Dave Hunt <dh...@mozilla.com> wrote:

> We now have basic B2G performance tests reporting to
> https://datazilla.mozilla.org/b2
> http://hg.mozilla.org/users/tmielczarek_mozilla.com/b2gperf
>
> Cheers,
> Dave Hunt
> Automation Development
> _______________________________________________
> dev-gaia mailing list
> dev-...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-gaia
>

Peter La

Jan 23, 2013, 1:28:12 PM
to Dietrich Ayala, Taras Glek, Dave Hunt, dev-...@lists.mozilla.org
This is fantastic. :)

To Dietrich's points, I'd like to add:

1. Let's separate 'cold' and 'hot' startup times. I'm not sure if the 30 times includes 1 'cold' startup time, but if it does, it probably shouldn't. Except for apps that depend on hardware (like Camera), or that need to load data upon launch, most apps' hot startup times haven't been a big issue; they generally take less than 1 second, or about the time it takes for the screen transition animation. It's the cold startup times that tend to take anywhere from 2.5-5 seconds, sometimes more.

2. In addition to time to first paint, I think a useful metric to add is the time it takes to get to an interactable state, i.e. when you can start tapping on things. The latter is really important for a good user experience.

Peter.
_______________________________________________
dev-gaia mailing list
dev-...@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-gaia

Taras Glek

Jan 23, 2013, 1:43:29 PM
to Dietrich Ayala, Dave Hunt, dev-...@lists.mozilla.org
On 1/23/2013 10:17 AM, Dietrich Ayala wrote:
> This is *awesome*, thanks so much.
>
> A couple of questions:
>
> 1. Does each datapoint in the graph represent the 30-runs of each app
> combined? (e.g.: 30 * NumberOfAppsTested)
>
> 2. Is first-paint the right metric to use? Cc'ing Taras Glek for input.
How is firstPaint obtained?


Measuring load time for web apps is poorly defined. navigationTiming api
provides some interesting datapoints. Not sure if it's hooked up
properly on B2G.

responseStart - navigationStart = latency until app can actually start
doing stuff. I think this is equivalent to time to .main measurement in
desktop apps.
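
For concreteness, reading that from the Navigation Timing API would look
something like this (just a sketch, assuming the timing attributes are
actually populated on B2G):

  // Sketch: reading startup latency from the Navigation Timing API.
  // Assumes window.performance.timing is hooked up on B2G.
  var t = window.performance.timing;
  var latency = t.responseStart - t.navigationStart; // in ms
  console.log('responseStart - navigationStart: ' + latency + 'ms');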

firstPaint does seem like a good indicator to track. However we also
want to measure when we can respond to events. The web is extremely
unfriendly in this aspect. My best approximation for that has been to
start a .requestAnimationFrame loop in <head> and measure latency
between each iteration until onLoad fires. There, every iteration over
16.6ms = janky. The downside to injecting code is that it's likely to
negatively bias what we are measuring.
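
Concretely, the idea is something like the sketch below (illustrative
only, not code from any harness; the 16.6ms budget assumes a 60Hz
display):

  // Sketch: run from a <script> in <head>. Record the gap between
  // requestAnimationFrame callbacks until onLoad fires; gaps well over
  // the 16.6ms frame budget indicate jank.
  var loaded = false;
  var gaps = [];
  var last = window.performance.now();
  function tick(now) {
    gaps.push(now - last);
    last = now;
    if (!loaded) {
      window.requestAnimationFrame(tick);
    }
  }
  window.requestAnimationFrame(tick);
  window.addEventListener('load', function () {
    loaded = true;
    var janky = gaps.filter(function (gap) { return gap > 16.6; });
    console.log(janky.length + ' of ' + gaps.length + ' frames over budget');
  });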


>
> 3. Can we get measurements per-app? This will help detect per-app
> regressions, as well as communicate performance behavior to partners.
>
> 4. Can we add tests with user data? Eg, a common test our partners are
> doing is "add 1000 contacts, restart, load Contacts app". Telemetry
> showed us in Firefox desktop that testing without user data is of
> limited value.
>
> 5. I'm worried that testing "hot" doesn't give us visibility into the
> worst case performance. The pathologically bad performance cases are a
> lot of what got Firefox a bad performance rap, so we should ensure
> we're testing only cold start, or test it separately maybe?
>
>
> On Wed, Jan 23, 2013 at 2:45 AM, Dave Hunt <dh...@mozilla.com
> _______________________________________________
> dev-gaia mailing list
> dev-...@lists.mozilla.org <mailto:dev-...@lists.mozilla.org>
> https://lists.mozilla.org/listinfo/dev-gaia
>
>

Andrew Sutherland

Jan 23, 2013, 5:44:36 PM
to dev-...@lists.mozilla.org
On 01/23/2013 05:45 AM, Dave Hunt wrote:
> It's also worth a brief explanation of what these tests do. For now
> they simply open the Phone, Messages, and Settings apps and take a
> measurement of the time before the first paint event is fired. Each
> app is launched 30 times, and these tests are triggered when a new
> nightly build is available, or when Gaia master branch is updated.

Would it also be possible to let apps issue their own events? The
e-mail app currently isn't fully started up until our (huge chunk of) JS
loads, we issue an IndexedDB query, and then get the query back and do
some startup based on that. It would be great for us to be able to
dispatch an event at the end of that so we could get that data-point
into datazilla as part of this canonical set of metrics. Other apps that
might need to do some database stuff before they are 100% ready seem
like they could also benefit from emitting such an event.
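
For illustration, I'm picturing something like the following at the end
of our startup path (the event name here is invented, not a proposal for
a final API):

  // Hypothetical: fired by the e-mail app once the IndexedDB query has
  // returned and the startup work based on it is complete.
  window.dispatchEvent(new CustomEvent('x-email-startup-complete', {
    detail: { app: 'email' }
  }));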


A related question is that for e-mail I would like to start trying to
add even more data points than that, capturing things like the time to
synchronize a mail account for hotmail, gmail, yahoo, local dovecot,
etc. I don't think we're ready to immediately run that on a cron job,
but I'd like to get started with that at least locally. Should I stand
up a local instance of datazilla, or can I be granted access to a 'dataset'
bucket to stash things on a testing datazilla instance/the full
datazilla server/other?

Thanks,
Andrew

Alex Keybl

Jan 23, 2013, 5:53:21 PM
to Dave Hunt, dev-...@lists.mozilla.org
Hi Dave,

Do you plan to have B2G performance testing set up for both mozilla-b2g18/v1-train (v1.x) and mozilla-central/master (v2)? Future v1.x features will be developed on v2, so having some perf information there prior to uplifting to v1.x makes sense. Thanks!

-Alex
> _______________________________________________
> dev-gaia mailing list
> dev-...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-gaia

Zbigniew Braniecki

Jan 24, 2013, 3:08:11 AM
to Dietrich Ayala, Dave Hunt, dev-...@lists.mozilla.org
On Wednesday, January 23, 2013 at 10:43:29 AM UTC-8, Taras Glek wrote:
> Measuring load time for web apps is poorly defined. navigationTiming api
> provides some interesting datapoints. Not sure if it's hooked up
> properly on B2G.

It actually seems to be. It provides pretty reliable data that matches the data from external points of view (like system events and even physical camera recordings).

The way we measure app startup now is via the ttl view, which uses the difference between the appwillopen event and mozbrowserloadend (which matches the performance API loadEnd event).
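
Conceptually the measurement is just the delta between those two events,
something like this sketch (the real hookup lives in the System app's
window manager; 'frame' here stands in for the app's mozbrowser iframe):

  // Sketch only: the delta between appwillopen and mozbrowserloadend.
  var start;
  window.addEventListener('appwillopen', function () {
    start = Date.now();
  });
  frame.addEventListener('mozbrowserloadend', function () {
    console.log('app startup: ' + (Date.now() - start) + 'ms');
  });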

The way I would like to improve that (bug 825137) is by measuring between the tap event on an app icon and mozbrowserloadend.

We actually recently stopped using firstPaint for measuring app startup, as it was bogus and didn't match real-life user experience.

Cheers,
g.

Dave Hunt

Jan 24, 2013, 5:45:01 AM
to Dietrich Ayala, Taras Glek, dev-...@lists.mozilla.org, Jonathan Eads

On 1/23/13 6:17 PM, Dietrich Ayala wrote:
> This is *awesome*, thanks so much.
>
> A couple of questions:
>
> 1. Does each datapoint in the graph represent the 30-runs of each app
> combined? (e.g.: 30 * NumberOfAppsTested)

Yes, that's exactly what the first graph represents. When you click
datapoints you will accumulate data in the second graph, which breaks the
results down by app/metric. Clicking a datapoint in that chart shows the
individual values gathered for the selected app/metric.

>
> 2. Is first-paint the right metric to use? Cc'ing Taras Glek for input.
>
> 3. Can we get measurements per-app? This will help detect per-app
> regressions, as well as communicate performance behavior to partners.

I don't believe it's possible in the current DataZilla UI to show a
breakdown per app/metric. This should be available in the updated UI,
but in order to take advantage of it I believe we need to tweak DataZilla
to support git in addition to hg. I've CC'd Jonathan Eads, who may be
able to clarify this.

>
> 4. Can we add tests with user data? Eg, a common test our partners are
> doing is "add 1000 contacts, restart, load Contacts app". Telemetry
> showed us in Firefox desktop that testing without user data is of
> limited value.

This should be possible, and sounds similar to the Firefox endurance
tests, which execute a number of iterations with a number of 'entities'.
At the moment there is no individual setup per application.

>
> 5. I'm worried that testing "hot" doesn't give us visibility into the
> worst case performance. The pathologically bad performance cases are a
> lot of what got Firefox a bad performance rap, so we should ensure
> we're testing only cold start, or test it separately maybe?

This is something I've been thinking about too. Currently these tests
include one cold start, and the remaining launches are warm. It
shouldn't be too difficult to restart B2G between launches to give us
more accurate results for cold starts. Would there still be value in
recording metrics for warm launches?
> _______________________________________________
> dev-gaia mailing list
> dev-...@lists.mozilla.org <mailto:dev-...@lists.mozilla.org>
> https://lists.mozilla.org/listinfo/dev-gaia
>
>

--
*Dave Hunt*
QA, Mozilla Corporation
dh...@mozilla.com

Dave Hunt

Jan 24, 2013, 5:51:22 AM
to Peter La, Dietrich Ayala, Taras Glek, dev-...@lists.mozilla.org

On 1/23/13 6:28 PM, Peter La wrote:
> This is fantastic. :)
>
> To Dietrich's points, I'd like to add:
>
> 1. Let's separate 'cold' and 'hot' startup times. I'm not sure if the 30 times includes 1 'cold' startup time, but if it does, it probably shouldn't.

Yes, there's currently 1 cold launch. I can look into enhancing the
test run to include both hot and cold launches.

> Except for apps that depend on hardware (like Camera), or that need to load data upon launch, most apps' hot startup times haven't been a big issue; they generally take less than 1 second, or about the time it takes for the screen transition animation. It's the cold startup times that tend to take anywhere from 2.5-5 seconds, sometimes more.
>
> 2. In addition to time to first paint, I think a useful metric to add is the time it takes to get to an interactable state, i.e. when you can start tapping on things. The latter is really important for a good user experience.

Do you have suggestions for events we could listen to in order to
determine that this state has been reached?

>
> Peter.
>
> ----- Original Message -----
> From: "Dietrich Ayala" <auto...@gmail.com>
> To: "Dave Hunt" <dh...@mozilla.com>
> Cc: "Taras Glek" <tg...@mozilla.com>, dev-...@lists.mozilla.org
> Sent: Wednesday, January 23, 2013 1:17:00 PM
> Subject: Re: B2G Performance Testing
>
> This is *awesome*, thanks so much.
>
> A couple of questions:
>
> 1. Does each datapoint in the graph represent the 30-runs of each app
> combined? (e.g.: 30 * NumberOfAppsTested)
>
> 2. Is first-paint the right metric to use? Cc'ing Taras Glek for input.
>
> 3. Can we get measurements per-app? This will help detect per-app
> regressions, as well as communicate performance behavior to partners.
>
> 4. Can we add tests with user data? Eg, a common test our partners are
> doing is "add 1000 contacts, restart, load Contacts app". Telemetry showed
> us in Firefox desktop that testing without user data is of limited value.
>
> 5. I'm worried that testing "hot" doesn't give us visibility into the worst
> case performance. The pathologically bad performance cases are a lot of
> what got Firefox a bad performance rap, so we should ensure we're testing
> only cold start, or test it separately maybe?
>
>
> On Wed, Jan 23, 2013 at 2:45 AM, Dave Hunt <dh...@mozilla.com> wrote:
>
>> We now have basic B2G performance tests reporting to
>> https://datazilla.mozilla.org/b2
>>
>> There are a few things worth noting:
>>
>> 1. The version number is taken from the device (these are running against
>> nightly unagi engineering builds with the latest master Gaia flashed on top).
>> Until recently this was 1.0.0-prerelease, but it now appears to be
>> 1.0.0.0-prerelease, which is why you will see two charts.
>>
>> 2. The revision identifier for these tests in DataZilla is the git commit
>> hash for Gaia; however, DataZilla is currently only set up to work with
>> Mercurial, so any links to the commit are broken. One of the next steps is
>> to switch this to use GitHub and leverage the API to detect and report
>> performance regressions.
>>
>> 3. Clicking the summary chart at the top of the results page will display
>> a larger version of the chart below. In this chart, each point represents a
>> test run, and clicking data points will display detailed results for the
>> test run in an additional chart below. If you then click a data point in
>> this chart, a final chart showing the data submitted for that
>> application/metric will be displayed.
>>
>> It's also worth a brief explanation of what these tests do. For now they
>> simply open the Phone, Messages, and Settings apps and take a measurement
>> of the time before the first paint event is fired. Each app is launched 30
>> times, and these tests are triggered when a new nightly build is available,
>> or when Gaia master branch is updated.
>>
>> I encourage and look forward to your questions, feedback, and suggestions.
>> :)
>>
>> For those interested, the source code for these tests can be found here:
>> http://hg.mozilla.org/users/tmielczarek_mozilla.com/b2gperf
>>
>> Cheers,
>> Dave Hunt
>> Automation Development
>> _______________________________________________
>> dev-gaia mailing list
>> dev-...@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-gaia
>>
> _______________________________________________
> dev-gaia mailing list
> dev-...@lists.mozilla.org

Dave Hunt

Jan 24, 2013, 6:00:36 AM
to Taras Glek, Dietrich Ayala, dev-...@lists.mozilla.org

On 1/23/13 6:43 PM, Taras Glek wrote:
> On 1/23/2013 10:17 AM, Dietrich Ayala wrote:
>> This is *awesome*, thanks so much.
>>
>> A couple of questions:
>>
>> 1. Does each datapoint in the graph represent the 30-runs of each app
>> combined? (e.g.: 30 * NumberOfAppsTested)
>>
>> 2. Is first-paint the right metric to use? Cc'ing Taras Glek for input.
> How is firstPaint obtained?

By waiting for the mozbrowserfirstpaint event. See
http://hg.mozilla.org/users/tmielczarek_mozilla.com/b2gperf/file/428f249f6b2f/launchapp.js#l32
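
In essence the approach is along these lines (a simplified sketch, not
the actual code at that link; 'frame' stands for the app's mozbrowser
iframe):

  // Simplified: time from triggering the launch to the frame's first paint.
  var start = Date.now();
  frame.addEventListener('mozbrowserfirstpaint', function onPaint() {
    frame.removeEventListener('mozbrowserfirstpaint', onPaint);
    console.log('time to first paint: ' + (Date.now() - start) + 'ms');
  });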

>
>
> Measuring load time for web apps is poorly defined. navigationTiming
> api provides some interesting datapoints. Not sure if it's hooked up
> properly on B2G.
>
> responseStart - navigationStart = latency until app can actually start
> doing stuff. I think this is equivalent to time to .main measurement
> in desktop apps.
>
> firstPaint does seem like a good indicator to track. However we also
> want to measure when we can respond to events. The web is extremely
> unfriendly in this aspect. My best approximation for that has been to
> start a .requestAnimationFrame loop in <head> and measure latency
> between each iteration until onLoad fires. There, every iteration over
> 16.6ms = janky. The downside to injecting code is that it's likely to
> negatively bias what we are measuring.

I'm not entirely sure I follow this. Do you have some example code I
could look over?

>
>
>>
>> 3. Can we get measurements per-app? This will help detect per-app
>> regressions, as well as communicate performance behavior to partners.
>>
>> 4. Can we add tests with user data? Eg, a common test our partners
>> are doing is "add 1000 contacts, restart, load Contacts app".
>> Telemetry showed us in Firefox desktop that testing without user data
>> is of limited value.
>>
>> 5. I'm worried that testing "hot" doesn't give us visibility into the
>> worst case performance. The pathologically bad performance cases are
>> a lot of what got Firefox a bad performance rap, so we should ensure
>> we're testing only cold start, or test it separately maybe?
>>
>>
>> On Wed, Jan 23, 2013 at 2:45 AM, Dave Hunt <dh...@mozilla.com
>> _______________________________________________
>> dev-gaia mailing list
>> dev-...@lists.mozilla.org <mailto:dev-...@lists.mozilla.org>

Dave Hunt

Jan 24, 2013, 6:06:02 AM
to Alex Keybl, Jonathan Griffin, dev-...@lists.mozilla.org

On 1/23/13 10:53 PM, Alex Keybl wrote:
> Hi Dave,
>
> Do you plan to have B2G performance testing set up for both mozilla-b2g18/v1-train (v1.x) and mozilla-central/master (v2)? Future v1.x features will be developed on v2, so having some perf information there prior to uplifting to v1.x makes sense. Thanks!

We're currently running these on daily unagi-eng builds, which I
*believe* are still based on mozilla-b2g18 (CC'ing Jonathan Griffin to
confirm). These are then flashed with the latest from Gaia's master
branch. It would certainly make sense to also run against other branches.

If possible, could you point me towards any nightly engineering builds
and appropriate Gaia branches for additional coverage?

Thanks,
Dave

>
> -Alex
>> _______________________________________________
>> dev-gaia mailing list
>> dev-...@lists.mozilla.org

Dave Hunt

Jan 24, 2013, 7:32:20 AM
to Zbigniew Braniecki, mozilla....@googlegroups.com, dev-...@lists.mozilla.org, Dietrich Ayala
Thanks Zbigniew,

I've had success with adding a listener for the mozbrowserloadend event.
Do you think it's worthwhile gathering time to paint in addition to load
end, or just load end?

Cheers,
Dave

Dave Hunt

Jan 24, 2013, 9:12:13 AM
to mozilla-...@lists.mozilla.org, Jonathan Eads
On 1/23/13 10:44 PM, Andrew Sutherland wrote:
> On 01/23/2013 05:45 AM, Dave Hunt wrote:
>> It's also worth a brief explanation of what these tests do. For now
>> they simply open the Phone, Messages, and Settings apps and take a
>> measurement of the time before the first paint event is fired. Each
>> app is launched 30 times, and these tests are triggered when a new
>> nightly build is available, or when Gaia master branch is updated.
>
> Would it also be possible to let apps issue their own events? The
> e-mail app currently isn't fully started up until our (huge chunk of) JS
> loads, we issue an IndexedDB query, and then get the query back and do
> some startup based on that. It would be great for us to be able to
> dispatch an event at the end of that so we could get that data-point
> into datazilla as part of this canonical set of metrics. Other apps that
> might need to do some database stuff before they are 100% ready seem
> like they could also benefit from emitting such an event.

Sounds reasonable to me. If there were a way of determining whether this
additional event will be fired, then I could conditionally wait for it in
the performance tests. It would then be available in DataZilla as an
additional datapoint, such as email_custom_event_name.

>
>
> A related question is that for e-mail I would like to start trying to
> add even more data points than that, capturing things like the time to
> synchronize a mail account for hotmail, gmail, yahoo, local dovecot,
> etc. I don't think we're ready to immediately run that on a cron job,
> but I'd like to get started with that at least locally. Should I stand
> up a local instance of datazilla, can I be granted access to a 'dataset'
> bucket to stash things on a testing datazilla instance/the full
> datazilla server/other?

I've CC'd Jonathan Eads, who should be able to help you with these
DataZilla questions.

>
> Thanks,
> Andrew

Dave Hunt

Jan 24, 2013, 9:20:33 AM
to mozilla-...@lists.mozilla.org
On 1/24/13 10:51 AM, Dave Hunt wrote:
>
> On 1/23/13 6:28 PM, Peter La wrote:
> Yes, there's currently 1 cold launch. I can look into enhancing the
> test run to include both hot and cold launches.

Thinking about this more, how are we defining hot vs cold here?
Currently the performance tests launch the app and then kill it. At this
point the app is not running in the background. Would launching it again
mean this is a cold launch, or would we need to go one further and
reboot the device?

The current performance results show that only the Phone application has
a longer initial launch time, but I wonder if this is more likely due to
it being the first launch after the device is flashed. Perhaps after
flashing we should wait for the device to 'settle', as I have seen
mentioned in bug 814981
(https://bugzilla.mozilla.org/show_bug.cgi?id=814981#c17).

Jonathan Griffin

Jan 24, 2013, 11:54:55 AM
to Dave Hunt, Alex Keybl, dev-...@lists.mozilla.org
Yes, the daily unagi_eng builds use mozilla-b2g18. As soon as bug 823775 is done, we'll be able to use m-c versions of unagi_eng builds in Jenkins automation as well.

Jonathan


On 1/24/13 3:06 AM, Dave Hunt wrote:
> On 1/23/13 10:53 PM, Alex Keybl wrote:
>> Hi Dave,
>>
>> Do you plan to have B2G performance testing set up for both mozilla-b2g18/v1-train (v1.x) and mozilla-central/master (v2)? Future v1.x features will be developed on v2, so having some perf information there prior to uplifting to v1.x makes sense. Thanks!
>
> We're currently running these on daily unagi-eng builds, which I *believe* are still based on mozilla-b2g18 (CC'ing Jonathan Griffin to confirm). These are then flashed with the latest from Gaia's master branch. It would certainly make sense to also run against other branches.
>
> If possible, could you point me towards any nightly engineering builds and appropriate Gaia branches for additional coverage?
>
> Thanks,
> Dave
>
>> -Alex
>>
>> On Jan 23, 2013, at 2:45 AM, Dave Hunt <dh...@mozilla.com> wrote:
>>
>>> Cheers,
>>> Dave Hunt
>>> Automation Development
>>> _______________________________________________
>>> dev-gaia mailing list
>>> dev-...@lists.mozilla.org
>>> https://lists.mozilla.org/listinfo/dev-gaia
>
> --
> Dave Hunt
> QA, Mozilla Corporation
> dh...@mozilla.com

Dietrich Ayala

Jan 24, 2013, 11:59:14 AM
to Dave Hunt, mozilla-...@lists.mozilla.org
> Thinking about this more, how are we defining hot vs cold here? Currently
> the performance tests launch the app and then kill it. At this point the app
> is not running in the background. Would launching it again mean this is a
> cold launch, or would we need to go one further and reboot the device?

Reboot is likely best for truly cold start. There are image caches,
SQLite caches, filesystem-level caching, all kinds of stuff that
contributes to "hot" start. Taras could probably list more. The fact
that you're not seeing much difference could also be because there's
no user data. The difference between hot & cold with actual user data is
where a lot of those behind-the-scenes caches come into play.

>
> The current performance results show that only the Phone application has a
> longer initial launch time, but I wonder if this is more likely due to it
> being the first launch after the device is flashed. Perhaps after flashing
> we should wait for the device to 'settle' as I have seen mentioned in bug
> 814981 (https://bugzilla.mozilla.org/show_bug.cgi?id=814981#c17)

Probably a good idea. Firefox desktop has a number of services that are
not critical to bringing up the window, so they stagger in over the
first minute or so after the first window comes up. I don't know if
Firefox OS has a similar scenario. Maybe Peter can provide info on why
he did that in his tests.

Dave Hunt

Jan 25, 2013, 3:14:17 PM
to mozilla...@lists.mozilla.org, Andreas Gal, Clint Talbert, Jonathan Eads
You can now see the results broken down by application, and the metric
used is now based on the mozbrowserloadend event rather than
mozbrowserfirstpaint. This brings the results closer to actual app launch
times. Note that these are all 'cold' launches, in that all applications
are killed between launches.

Thanks,
Dave

On 1/23/13 10:45 AM, Dave Hunt wrote:
> We now have basic B2G performance tests reporting to
> https://datazilla.mozilla.org/b2g

David Clarke

Jan 29, 2013, 8:04:42 PM
to Dave Hunt, Andreas Gal, dev-...@lists.mozilla.org, Clint Talbert, Jonathan Eads
Dave,
Curious what type of hardware you are running these tests on? Is this
using a unagi/otoro or the emulator?

Thanks

-David

----- Original Message -----
From: "Dave Hunt" <dh...@mozilla.com>
To: dev-...@lists.mozilla.org
Cc: "Andreas Gal" <ag...@mozilla.com>, "Clint Talbert" <ctal...@mozilla.com>, "Jonathan Eads" <je...@mozilla.com>
Sent: Friday, January 25, 2013 12:14:17 PM
Subject: Re: B2G Performance Testing

Dave Hunt

Jan 30, 2013, 2:49:11 AM
to David Clarke, Andreas Gal, dev-...@lists.mozilla.org, Clint Talbert, Jonathan Eads
They're running on an unagi in Haxxor, MV.

Dave Hunt