
Good tests for real-world browser memory usage?


Mike Schroepfer

Oct 5, 2007, 3:07:19 PM
Hey Folks,

Thanks to the really hard work of DBaron, Sayre, Rob, Alice, Graydon and
countless others I'm forgetting, we've got much better visibility into
Firefox memory usage than ever before. In particular, we have tools to find
memory leaks, and through Talos we can measure memory usage during a Tp
run (which loads ~2000 pages), both between builds:

http://graphs.mozilla.org/#spst=range&spstart=1180657203&spend=1191603512&bpst=cursor&bpstart=1180657203&bpend=1191603512&m1tid=9&m1bl=0&m1avg=0

and during a test run:

http://graphs.mozilla.org/dgraph.html#spst=range&spstart=0&spend=2001&bpst=cursor&bpstart=0&bpend=2001&name=tp_Working%20Set&m1tid=29127&m1bl=0&m1avg=0

During these runs the working set maxes out at around 90MB on trunk and
70MB on the 1.8 branch. However, many of the complaints we get about
Firefox memory usage are about cases where it goes much higher than this
(> 150MB). Also, in a really quick-and-dirty informal test (logging into
gmail and google reader and loading cnn.com on a 2GB Vista system) I get:

IE7: 86.5MB
Safari3: 78.7MB
Trunk: 63.3MB
Firefox2.0.0.7: 53.9MB

This leads me to a bunch of questions:

1) The number reported on the Talos tests is the number at the end of
the run. Should this number be the peak instead (or in addition)?

2) Looking at the second graph above, you see memory usage settle into
a sine-wave-like pattern with 4 or so local maxima spaced about 5
minutes apart. The peak of these goes up about 700KB each time (not
much given the number of pages loaded). Over a long enough run, should
we expect these maxima to stabilize (i.e. never increase beyond a
certain value)?

3) Is there a user-initiated way to purge all caches (esp BFCache)?

4) These tests clearly do not reproduce the conditions causing people
to have much larger memory use. Obviously missing are user interaction,
XMLHttpRequest, and other interactive elements. This is also a clean
profile with no extensions. So, any ideas on how we can build a more
real-world test suite, such as:
* Many more page loads
* Playback of user events
* Running with top 10 extensions
* Running Mochi or other testsuites

If we tried to construct a more "real-world" user-event-driven test,
what do people think is both representative and useful? E.g. log into
gmail, search for a message, log into flickr, do stuff, etc...
Even spot-testing key builds of FF by hand might give us a useful
measuring stick.

Best,

Schrep

Robert Sayre

Oct 5, 2007, 3:43:12 PM
to Mike Schroepfer
Mike Schroepfer wrote:
>
> If we tried to construct a more "real-world" user-event-driven test,
> what do people think is both representative and useful? E.g. log into
> gmail, search for a message, log into flickr, do stuff, etc... Even
> spot-testing key builds of FF by hand might give us a useful measuring
> stick.

We should make one in CoScripter.[1]

1.) log in to gmail
2.) click a label
3.) find a message with a link to a youtube video
4.) visit youtube and watch the clip
5.) visit scripting.com
6.) click RSS link
7.) add feed to google reader
8.) visit yahoo
9.) search on some things
etc.

I think it's a great idea.

- Rob

[1] http://services.alphaworks.ibm.com/coscripter/browse/about

Jonas Sicking

Oct 5, 2007, 8:46:42 PM
> 1) The number reported on the Talos tests is the number at the end of
> the run. Should this number be the peak instead (or in addition)?

Yes, I think the peak is the most interesting number here. The number at
the end seems like it would be more random depending on where in the
cycle we stop.
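
Roughly what I have in mind, as a sketch only (loadPage() and
sampleWorkingSetBytes() here are made-up stand-ins for whatever the
harness actually does; this is not real Talos code):

  function runPageset(pages, loadPage, sampleWorkingSetBytes) {
    var peak = 0;
    for (var i = 0; i < pages.length; i++) {
      loadPage(pages[i]);
      // sample after every page load and keep the maximum
      var now = sampleWorkingSetBytes();
      if (now > peak)
        peak = now;
    }
    // report both the peak and the end-of-run value
    return { peak: peak, atEnd: sampleWorkingSetBytes() };
  }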

> 2) Looking at the second graph above, you see memory usage settle into
> a sine-wave-like pattern with 4 or so local maxima spaced about 5
> minutes apart. The peak of these goes up about 700KB each time (not
> much given the number of pages loaded). Over a long enough run, should
> we expect these maxima to stabilize (i.e. never increase beyond a
> certain value)?

This is very interesting, and I have no idea why there's a pattern like
that. Is this loading the 400 Alexa pages and cycling through them 5
times? And what is actually being measured: the total memory allocated
(malloc() - free()) or something else?

> 3) Is there a user-initiated way to purge all caches (esp BFCache)?

There is an nsIObserver notification, "memory-pressure", that various
caches are supposed to honor, and it looks like the bfcache is observing
it.

However, there is no UI to cause such a notification to be sent out. It
would be trivial to write an extension that does that, though.
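
A minimal sketch of what such an extension (or any chrome-privileged
script) would do; the observer service contract ID and the
"memory-pressure" topic are real, and "heap-minimize" is, as far as I
know, the usual data payload:

  // Broadcast "memory-pressure" so observers (bfcache, various other
  // caches) drop whatever they can.
  var os = Components.classes["@mozilla.org/observer-service;1"]
                     .getService(Components.interfaces.nsIObserverService);
  os.notifyObservers(null, "memory-pressure", "heap-minimize");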

> 4) These tests clearly do not reproduce the conditions causing people
> to have much larger memory use. Obviously missing are user interaction,
> XMLHttpRequest, and other interactive elements. This is also a clean
> profile with no extensions. So, any ideas on how we can build a more
> real-world test suite, such as:
> * Many more page loads

> * Running Mochi or other testsuites

We've talked with the QA folks about running mochitest and reftest while
checking for leaks. This will not only add leak checking for a lot more
pages, but also give better coverage of new features, as those should
hopefully get tests in one of the two suites. Hopefully this will happen
in the not too distant future.

We've also discussed adding a separate suite that simply tests for
crashes, leaks and assertions, though this will probably mostly be
useful to avoid regressions.

> * Playback of user events
> * Running with top 10 extensions

Both of these would be great ideas. In general I think that while we're
getting in better shape when it comes to leaks in the browser, we still
have a lot of work to do fixing them in extensions.

> If we tried to construct a more "real-world" user-event-driven test,
> what do people think is both representative and useful? E.g. log into
> gmail, search for a message, log into flickr, do stuff, etc... Even
> spot-testing key builds of FF by hand might give us a useful measuring
> stick.

I really like Sayre's idea of using CoScripter because it can be
automated, and it's easy to add more actions. If we could create a
collection of CoScripter testcases that live in CVS, like we currently
have HTML testcases, and run them on a regular basis, that would rock!

/ Jonas

Peter Van der Beken

Oct 6, 2007, 6:19:24 AM
Jonas Sicking wrote:
> We've talked with the QA folks about running mochitest and reftest while
> checking for leaks. This will not only add checking for a lot more
> pages, but also get better testing for new features as they should
> hopefully get tests in either suite. Hopefully this will happen in a not
> too distant future.

I just fixed a leak when loading any RSS feed
(https://bugzilla.mozilla.org/show_bug.cgi?id=398270). I don't think
there are any RSS feeds in the Talos pageset?

This is an interesting case because, besides leaking a number of objects
at shutdown, it also kept other objects unnecessarily alive for the life
of the application (releasing them only at shutdown). Opening a feed in FF
is probably not a common task, but just opening one made our general
memory usage go up a bit for the rest of the life of the application.
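
Just to illustrate the keep-alive half of that (a made-up example, not
the actual code from bug 398270): a component that stashes a reference
and only drops it at shutdown reports nothing as leaked at shutdown, but
it keeps the document and everything it entrains alive for the whole
session:

  // Illustration only (not the real feed code): a global cache that
  // keeps the last feed document alive until shutdown.
  var gLastFeedDoc = null;

  function onFeedPreviewLoaded(doc) {
    gLastFeedDoc = doc;        // strong reference kept "just in case"
  }

  var gShutdownObserver = {
    observe: function(subject, topic, data) {
      if (topic == "xpcom-shutdown")
        gLastFeedDoc = null;   // released so late it no longer helps
    }
  };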

Peter

Robert Sayre

Oct 7, 2007, 4:54:57 PM
to Peter Van der Beken
Peter Van der Beken wrote:
> Jonas Sicking wrote:
>> We've talked with the QA folks about running mochitest and reftest
>> while checking for leaks. This will not only add checking for a lot
>> more pages, but also get better testing for new features as they
>> should hopefully get tests in either suite. Hopefully this will happen
>> in a not too distant future.
>
> I just fixed a leak when loading any RSS feed
> (https://bugzilla.mozilla.org/show_bug.cgi?id=398270). I don't think
> there are any RSS feeds in the Talos pageset?

Yeah, this is actually a good case, since the feed preview page does
complicated things. I've fixed leaks on trunk there like 3 or 4 times
now, so I can promise you that we've been regressing it.

- Rob

rocal...@gmail.com

Oct 7, 2007, 6:47:26 PM
On Oct 6, 1:46 pm, Jonas Sicking <jo...@sicking.cc> wrote:
> Yes, I think the peak is the most interesting number here. The number at
> the end seems like it would be more random depending on where in the
> cycle we stop.

Now that we're using timed caching in various places, I think we
should also measure memory usage after the browser has gone idle for a
few minutes. I hypothesize that people care more about Firefox's
memory usage while they're not using it.
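
Concretely, that could just be one extra sample taken a few minutes after
the last page load (sketch only; sampleWorkingSetBytes() is a made-up
hook for however the harness reads the working set):

  // After the run, wait a few minutes with no activity so timed caches
  // can expire, then take one more working-set sample.
  function sampleAfterIdle(sampleWorkingSetBytes, callback) {
    var IDLE_MS = 5 * 60 * 1000;   // "a few minutes"
    setTimeout(function() {
      callback(sampleWorkingSetBytes());
    }, IDLE_MS);
  }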

Rob

Mike Schroepfer

Nov 7, 2007, 7:44:05 PM
Here's an even simpler case:


http://www.tripadvisor.com/HotelDateSearch?d=607162&returnTo=__2F__Hotels__2D__g187514__2D__Madrid__2D__Hotels__2E__html&fromPop=false&from=HotelsButton

(make sure you allow pop-ups)

1) Load above URL
2) Click on 'check rates'
3) allow all windows to open
4) close them all except 1 and go to about:blank

On Vista (Working Set):

IE7: 35MB -> 150MB -> 118MB
Minefield: 35MB -> 118MB -> 94MB (88MB after "clear caches" with Stuart's
extension)
Safari3: 25MB -> 140MB -> 110MB
Safari3:25MB->140MB->110MB

Where does all the memory go between steps 3 and 4?

PS: if you repeat this several times, the max memory number continues to
rise.

Cheers,

Schrep

smaug

Nov 13, 2007, 8:34:30 AM
Mike Schroepfer wrote:
> Here's an even simpler case:
>
>
> http://www.tripadvisor.com/HotelDateSearch?d=607162&returnTo=__2F__Hotels__2D__g187514__2D__Madrid__2D__Hotels__2E__html&fromPop=false&from=HotelsButton
>
>
> (make sure you allow pop-ups)
>
> 1) Load above URL
> 2) Click on 'check rates'
> 3) allow all windows to open
> 4) close them all except 1 and go to about:blank
>
> On Vista (Working Set):
>
> IE7: 35MB -> 150MB -> 118MB
> Minefield: 35MB -> 118MB -> 94MB (88MB after "clear caches" with Stuart's
> extension)
> Safari3: 25MB -> 140MB -> 110MB
>

So you have at least one extension.
I tested this on XP, without *any* extensions or extra themes. Starting
Minefield takes ~24MB here; Safari takes 21MB.

about:blank -> all windows open -> close all except the first window and
set it to about:blank:
Minefield: 24MB -> 50MB (peak 57MB) -> 44MB
Safari: 21MB -> 80MB (peak 82MB) -> 70MB
IE7: 18MB -> 73MB (peak 73MB) -> 53MB

Before testing I cleared all the caches/histories/cookies/etc. and
restarted the browsers. The numbers are from Task Manager, so I'm not
sure how reliable they are. It is strange that the max memory usage is
so much lower than on Vista.

BTW, on Linux (64-bit), which I normally use, I've noticed something new.
Running Minefield + ChatZilla for over a week now, with several tabs open
(gmail, gcalendar and tbox have been open all the time), usually 10-15
tabs, and ChatZilla running all the time, memory usage seems to have
levelled off at a certain point. (This is a build before the current PNG
leak.) So at least on the pages I use, we aren't leaking too much anymore,
and stability is quite good too :)

-Olli
