
Is SM2 mail a slug compared to SM1 for everyone?


Felix Miata

Dec 1, 2009, 4:27:00 PM
I finally got around to migrating my SM1 profile to SM2 and trying to use SM2
instead on my main eCS 3.2GHz P4 system with 2G of RAM. This is painful.
After 5-6 days, moving from message to message or using delete-and-next in
SM1 would get pretty slow, so around day 6 I would usually restart the whole
suite, which, due to the way I use the browser, generally takes 20 minutes or
more. SM2 does mail slow right off the bat. Since I download around 400 new
messages on average per day, email is pretty important to me. I don't know if
I can live with this.

Migration did not carry over any passwords, as far as I've been able to tell
so far, and its attempt at filters was bonkers. I had to delete
msgFilterRules.dat and rebuild my extensive filter collection from scratch,
because each and every new message subject to a filter (close to 100% of
them) would generate a modal complaint about the Inbox.msf file. Good thing
the format is plain text, so that I could open the old file directly instead
of mucking with two SM filter panels at once.
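(For anyone tempted to hand-edit theirs: msgFilterRules.dat is just
key="value" lines, one per attribute. From memory, and varying by version, a
single rule looks roughly like this, where the rule name, folder URI and
condition are made-up examples:

name="warpzilla list"
enabled="yes"
type="1"
action="Move to folder"
actionValue="mailbox://nobody@Local%20Folders/warpzilla"
condition="AND (subject,contains,SM2)"
)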

For a while I thought fonts were going to be a problem (some muddy, some
spindly), but it turns out I just needed more reboots than expected to get
the required mods into effect. I'm too burned out from spending all day
yesterday fighting with it to remember details.

Anyway, thanks to Peter and the other contributors who made 2.0 happen, even
though mail is like chilled molasses.
--
" We have no government armed with power capable of
contending with human passions unbridled by morality and
religion." John Adams, 2nd US President

Team OS/2 ** Reg. Linux User #211409

Felix Miata *** http://fm.no-ip.com/

Peter Weilbacher

Dec 2, 2009, 3:32:39 AM
On 01/12/09 22:27, Felix Miata wrote:
> I finally got around to migrating my SM1 profile to SM2 and trying to use SM2
> instead on my main eCS 3.2GHz P4 system with 2G of RAM. This is painful.
> After 5-6 days, moving from message to message or using delete-and-next in
> SM1 would get pretty slow, so around day 6 I would usually restart the whole
> suite, which, due to the way I use the browser, generally takes 20 minutes or
> more. SM2 does mail slow right off the bat. Since I download around 400 new
> messages on average per day, email is pretty important to me. I don't know if
> I can live with this.

I have seen other reports about this on OS/2 and I vaguely remember
similar complaints on other platforms. So there is some hope for a fix in
future 2.0.x releases.
--
Please    | Official Warpzilla Ports: http://www.mozilla.org/ports/os2/
reply in  |
newsgroup | Enhanced OS/2 builds: http://pmw-warpzilla.sf.net/
          | Steve's Warpzilla Tips: http://www.os2bbs.com/os2news/Warpzilla.html

Bob Plyler

Dec 3, 2009, 11:37:46 AM
Felix Miata wrote:
> I finally got around to migrating my SM1 profile to SM2 and trying to use SM2
> instead on my main eCS 3.2GHz P4 system with 2G of RAM. This is painful.
> After 5-6 days, moving from message to message or using delete-and-next in
> SM1 would get pretty slow, so around day 6 I would usually restart the whole
> suite, which, due to the way I use the browser, generally takes 20 minutes or
> more. SM2 does mail slow right off the bat. Since I download around 400 new
> messages on average per day, email is pretty important to me. I don't know if
> I can live with this.

I couldn't use the mail/newsgroup stuff in SM2. I couldn't get anything
done.

I changed to Firefox/Thunderbird.

Bob Plyler

Ray Davison

Dec 5, 2009, 9:18:25 PM
Felix Miata wrote:
> I finally got around to migrating my SM1 profile to SM2 and trying to use SM2
> instead on my main eCS 3.2GHz P4 system with 2G of RAM. This is painful.
> After 5-6 days, moving from message to message or using delete-and-next in
> SM1 would get pretty slow, so around day 6 I would usually restart the whole
> suite, which, due to the way I use the browser, generally takes 20 minutes or
> more. SM2 does mail slow right off the bat. Since I download around 400 new
> messages on average per day, email is pretty important to me. I don't know if
> I can live with this.

Since SM2X first came out I have reported that scrolling from one
message to another takes about six seconds. Is this anything like what
you are talking about?


>
> Migration did not carry over any passwords, as far as I've been able to
> tell so far, and its attempt at filters was bonkers.

So try it my way. I do not seem to have lost anything: mail filters,
passwords, cookies, address book...

Ray

Peter Brown

Dec 6, 2009, 9:09:43 AM
Hi Felix

Felix Miata wrote:
> I finally got around to migrating my SM1 profile to SM2 and trying to use SM2
> instead on my main eCS 3.2GHz P4 system with 2G of RAM. This is painful.
> After 5-6 days, moving from message to message or using delete-and-next in
> SM1 would get pretty slow, so around day 6 I would usually restart the whole
> suite, which, due to the way I use the browser, generally takes 20 minutes or
> more. SM2 does mail slow right off the bat. Since I download around 400 new
> messages on average per day, email is pretty important to me. I don't know if
> I can live with this.
>


No problems with slow email here - but I doubt if I get any more than 20
messages a day.

I wonder if this is related to how long you keep SM2 running? I rarely
leave the system running overnight.


> Migration did not carry over any passwords, as far as I've been able to tell
> so far, and its attempt at filters was bonkers. I had to delete
> msgFilterRules.dat and rebuild my extensive filter collection from scratch,
> because each and every new message subject to a filter (close to 100% of
> them) would generate a modal complaint about the Inbox.msf file. Good thing
> the format is plain text, so that I could open the old file directly instead
> of mucking with two SM filter panels at once.
>


No problem with migrating profiles either.


> For a while I thought fonts were going to be a problem (some muddy, some
> spindly),


Did you read the bit about fonts in the readme.txt?


> but it turns out I just needed more reboots than expected to get
> the required mods into effect.


Not sure why you would need to reboot when changing SM fonts...


> I'm too burned out from spending all day
> yesterday fighting with it to remember details.
>
> Anyway, thanks to Peter and the other contributors who made 2.0 happen, even
> though mail is like chilled molasses.


Regards

Pete

Felix Miata

Dec 6, 2009, 9:24:56 PM
On 2009/12/05 18:18 (GMT-0800) Ray Davison composed:

> Felix Miata wrote:

>> I finally got around to migrating my SM1 profile to SM2 and trying to use SM2
>> instead on my main eCS 3.2GHz P4 system with 2G of RAM. This is painful.
>> After 5-6 days, moving from message to message or using delete-and-next in
>> SM1 would get pretty slow, so around day 6 I would usually restart the whole
>> suite, which, due to the way I use the browser, generally takes 20 minutes or
>> more. SM2 does mail slow right off the bat. Since I download around 400 new
>> messages on average per day, email is pretty important to me. I don't know if
>> I can live with this.

> Since SM2X first came out I have reported that scrolling from one
> message to another takes about six seconds. Is this anything like what
> you are talking about?

IIRC I saw some mention many many moons ago, probably yours. I didn't time
the switches, but 6 sec sounds like the right ballpark. Most of my movement
from message to message involves use of the delete key or button. My largest
root mail folder is currently just under 7M, with 30M total in 28 regular
folders and 82M in 5 subfolders, the largest file of which is 8.6M.

>> Migration did not carry over any passwords, as far as I've been able to
>> tell so far, and its attempt at filters was bonkers.

> So try it my way. I do not seem to have lost anything: mail filters,
> passwords, cookies, address book,,,

Too late now. I'm back on 1.1.18 until I can see that this problem has
disappeared. I can't function with mail operating as if on a 486 with
minimal RAM.

Felix Miata

Dec 6, 2009, 9:45:38 PM
On 2009/12/06 14:09 (GMT) Peter Brown composed:

> Felix Miata wrote:

>> ... SM2 does mail slow right off the bat. Since I download around 400 new
>> messages on average per day, email is pretty important to me. I don't know
>> if I can live with this.

> I wonder if this is related to how long you keep SM2 running? I rarely
> leave the system running overnight.

The email slowness occurs even on fresh startup without first bothering to
open CZ or any web pages.

>> For a while I thought fonts were going to be a problem (some muddy, some
>> spindly),

> Did you read the bit about fonts in the readme.txt?

I copied it to a handier location and renamed it README-sm200.txt in order
to more readily come back and reread it as often as necessary, to make sure
I got it all as right as I could.

>> but it turns out I just needed more reboots than expected to get
>> the required mods into effect.

> Not sure why you would need to reboot when changing SM fonts...

Font management on OS/2 stinks. I'm using ft2lib260 for pmshell.exe, SM1 and
FF2, and had to be sure SM2 & FF3 weren't trying to use it too. What I had
before giving up:

http://fm.no-ip.com/SS/Moz/innotekfontalias.png
http://fm.no-ip.com/SS/Moz/sm-os2fonts.png
http://fm.no-ip.com/SS/Moz/sm2cz-os2.png
http://fm.no-ip.com/SS/Moz/smprefs-downloads.png

SM2 is still installed, but I can't use it like it is for mail, and won't run
both SM1 & SM2 at the same time except for brief testing.

Ray Davison

Dec 7, 2009, 12:47:11 PM
Felix Miata wrote:
>
> IIRC I saw some mention many many moons ago, probably yours. I didn't time
> the switches, but 6 sec sounds like the right ballpark. Most of my movement
> from message to message involves use of the delete key or button. My largest
> root mail folder is currently just under 7M, with 30M total in 28 regular
> folders and 82M in 5 subfolders, the largest file of which is 8.6M.

I have never seen any correlation between delay and file size. I first
encountered this in SM 1X and filed it here:
https://bugzilla.mozilla.org/show_bug.cgi?id=338762

If you read thru that you will see how I demonstrated it at a meeting on
two machines and later narrowed the version insertion point of the
problem to an afternoon. If I removed both Flash and Java plugins the
delay went away. The problem returned with the first SM 2X I tried, and
without plugins.


>
> Too late now. I'm back on 1.1.18 until I can see that this problem has
> disappeared. I can't function with mail operating as if on a 486 with
> minimal RAM.

I was told in both SM 1X and 2X that it was too late to fix it.

I run OS/2 and Windows SM on the same profiles. Windows does not have the
problem, so I do not suspect the profiles. And yes, I have also tried them
on their own profiles.

Ray

Felix Miata

Dec 7, 2009, 9:35:08 PM
On 2009/12/07 12:47 (GMT-0800) Ray Davison composed:

> Felix Miata wrote:

>> IIRC I saw some mention many many moons ago, probably yours. I didn't time
>> the switches, but 6 sec sounds like the right ballpark. Most of my movement
>> from message to message involves use of the delete key or button. My largest
>> root mail folder is currently just under 7M, with 30M total in 28 regular
>> folders and 82M in 5 subfolders, the largest file of which is 8.6M.

> I have never seen any correlation between delay and file size. I first
> encountered this in SM 1X and filed it here:
> https://bugzilla.mozilla.org/show_bug.cgi?id=338762

I probably saw it long ago.

> If you read thru that you will see how I demonstrated it at a meeting on
> two machines and later narrowed the version insertion point of the
> problem to an afternoon. If I removed both Flash and Java plugins the
> delay went away. The problem returned with the first SM 2X I tried, and
> without plugins.

I don't install _any_ plugins on eCS. I run both eCS and Linux 24/7, so I
just turn to Linux when I need to open a PDF or Flash. Still, I did have to
remove npnulos2.dll in order to make SM2 usable. That was first tested on the
old boot with MOZ_PLUGIN_PATH set. I've since rebooted without it.
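(MOZ_PLUGIN_PATH is the environment variable Mozilla apps consult for the
plugin directory; on OS/2 it is set with a CONFIG.SYS line such as
SET MOZ_PLUGIN_PATH=X:\MOZILLA\PLUGINS, where the path is a made-up example.)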

>> Too late now. I'm back on 1.1.18 until I can see that this problem has
>> disappeared. I can't function with mail operating as if on a 486 with
>> minimal RAM.

> I was told in both SM 1X and 2X that it was too late to fix it.

DLL removal made v2 usable for email. Now I have to see if it can stay up
long enough at a time. The first day it crashed after about 10 hours. If it
can't make at least a couple days without a crash, I'll have to go back to
1.1.18 again.

Can setting NSPR_OS2_NO_HIRES_TIMER=1 ever avoid crashing? Right now this is
not set.
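(Presumably that would mean a SET NSPR_OS2_NO_HIRES_TIMER=1 line in
CONFIG.SYS, or in the shell before launching SM, as with any OS/2
environment variable.)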

Oliver Kluge

Dec 16, 2009, 7:54:05 PM
Ilya Zakharevich wrote:
> Eh? What planet you are from?

What kind of question is this?

> x) First of all, to get 1.5GB of data IN ONE CHUNK at 2Mb/sec would
> take more than an hour.

Sure, obviously I forgot to multiply by 8 because we were talking of
bytes, not bits; sorry for that. But that contributes even more to my
point.

> x) Second, such "massive web pages" usually come in many smaller
> chunks (images + I do not know what). I usually download them
> via wget first; with connection overhead, wget would get about
> 200MB/hour (with 1.5Mb/sec connection).

200 MBytes per hour? Now your speed is almost as slow as my wrongly
calculated speed. More than twice that should be achievable on large
objects, or your server is slower than your connection, because 200
MBytes per hour is only about 0.4 Mbit/s effective speed...
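To spell out the arithmetic: 200 MBytes/hour * 8 bits/Byte / 3600 s/hour is
about 0.44 Mbit/s, i.e. less than a third of a 1.5 Mbit/s connection.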

> x) Third, in my examples, FF has about 15x overhead. So it takes
> 100M (mostly of images) to make it use 1.5GB of memory.

I cannot confirm this. Sometimes I do open lots of large images in tabs
(when looking for stock photos, 50 or more tabs are common, each holding a
multimegapixel image), easily exceeding 100 MBytes of compressed (!)
image size. Even when I artificially reduce my RAM to 1 GByte I do not
see any really significant increase in disk cache size, so obviously it
all fits in 1 GByte, too.

And by the way: Why would there be a 15-fold increase in size?

> Your arithmetic looks very skew. What has compressed/uncompressed to
> do with how FF shows it? When it is parsed, initial format is
> irrelevant. Then

Are you implying that FF stores image objects uncompressed? That would
surely be a woeful waste of memory for no reason at all.

And since at any point one can store an image to disk, and it stores that
image in its native format, I doubt that FF re-compresses these images
(because that cannot be done losslessly) or stores them both compressed
and uncompressed in RAM.

> 6MPix * 4bytes/pix * (2 double-buffers + video memory)

24 bit image data stored in 32 bit? 2 double buffers for storing an
image object? Are you serious? Why should anyone do that? And you add
video memory to that? Objects get _stored_ in video memory? Sorry, I
don't think that's correct.

> is 72MB. (One can imagine more efficient implementations, but having
> measured FF memory usage, I try to be conservative...)
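(Spelled out, the quoted estimate is 6 MPix * 4 Bytes/pix = 24 MBytes per
copy, and three copies, i.e. two double-buffers plus one in video memory,
make 72 MBytes.)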

When I open 50 tabs with lots of megapixel images in them, my eCS tells
me that 1.5 GBytes of RAM are still available out of 2 GBytes (per Top). And
that cannot include swap, because I do have 2 GBytes reserved for swap on a
partition that contains only swapper.dat. Immediately after starting SM (I do
not run FF on this machine; I only have an FF installation on my trusty old
Thinkpad 701 butterfly with 40 MBytes of RAM and 2 GBytes of HDD) I have 1.8
GBytes available, so running Seamonkey to the brink of a crash (which happens
quite frequently) mostly increases usage by about 300-400 MBytes.

Yours
Oliver

Ilya Zakharevich

Dec 17, 2009, 11:38:36 AM
[A complimentary Cc of this posting was sent to
Oliver Kluge
<ok...@kluge-digital.de>], who wrote in article <1-SdnUnvxIi8HLTW...@mozilla.org>:

> > x) Second, such "massive web pages" usually come in many smaller
> > chunks (images + I do not know what). I usually download them
> > via wget first; with connection overhead, wget would get about
> > 200MB/hour (with 1.5Mb/sec connection).

> 200 MBytes per hour? Now your speed is almost as slow as my wrongly
> calculated speed. More than twice that should be achievable on large
> objects, or your server is slower than your connection, because 200
> MBytes per hour is only about 0.4 Mbit/s effective speed...

Sigh... Obviously, you do not understand the difference between
downloading 1GB in one chunk and downloading 1GB in 100000 chunks...
Might it be that your experience is with multi-threaded parallel
downloads vs. the serial download of wget?

But anyway: FF takes about the same order of magnitude of time to
download (+render) vs wget...
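As a crude model, assuming, say, 2 s of handshaking per request: total time
= size/rate + requests * per-request overhead. For 100 MBytes fetched in
1000 chunks at 1.5 Mbit/s, that is about 533 s of raw transfer plus 2000 s
of handshaking, i.e. roughly 140 MBytes/hour, the same ballpark as the wget
figure above.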

> > x) Third, in my examples, FF has about 15x overhead. So it takes
> > 100M (mostly of images) to make it use 1.5GB of memory.

> I cannot confirm this.

Lucky you... It might be that a few big images in different tabs are
handled better than many average-sized ones in one tab.

> And by the way: Why would there be a 15-fold increase in size?

Why not? It is written by people infatuated with 3-letter acronyms...

> Are you implying that FF stores image objects uncompressed?

??? Do you know ANY graphic program which does otherwise?

> > 6MPix * 4bytes/pix * (2 double-buffers + video memory)

> 24 bit image data stored in 32 bit?

Yes, you are right, I forgot about alpha. So it is 5bytes/pix...

> 2 double buffers for storing an image object?

I might have been unclear. 2 buffers for double-buffering...

> Are you serious? Why should anyone do that?

You are again asking silly questions...

> And you add video memory to that? Objects get _stored_ in video
> memory? Sorry, I don't think that's correct.

Feel free to believe what you want. I did not check the
implementation, so I'm just guessing how kindergarten programmers
would implement things...

Yours,
Ilya

Oliver Kluge

Dec 18, 2009, 4:17:01 PM
Ilya Zakharevich wrote:
> Sigh... Obviously, you do not understand the difference between
> downloading 1GB in one chunk and downloading 1GB in 100000 chunks...

100000 chunks? Again, are you talking about a webpage or about multimedia?

I have never seen a webpage with contents in three-digit megabyte sizes
that downloads in hundreds of thousands of chunks...

And I do get much more performance out of an Internet connection of that
speed. There is no 70% protocol overhead.

> But anyway: FF takes about the same order of magnitude of time to
> download (+render) vs wget...

Btw, FF does several parallel downloads from the same server. The number
is adjustable and shows in about:config.
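(If memory serves, the pref in question is
network.http.max-persistent-connections-per-server.)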

>> And by the way: Why would there be a 15fold increase in size?
>
> Why not? It is written by people infatuated with 3-letter acronims...

I won't reply to that one.

>> Are you implying that FF stores image objects uncompressed?
>
> ??? Do you know ANY graphic program which does otherwise?

FF is no "graphics program". The only reason why graphics programs store
images uncompressed is that it is necessary for manipulation.

FF just displays images and it doesn't make any sense to decode all
images to uncompressed and keep them in memory uncompressed, with the
one exception of the tab and the visible content as it has gotten the
expose event. There double buffers do make sense, but only for this one
tab. And of course video memory is only used for that.

>> 24 bit image data stored in 32 bit?
>
> Yes, you are right, I forgot about alpha. So it is 5bytes/pix...

8 bit alpha? In a browser?

>> And you add video memory to that? Objects get _stored_ in video
>> memory? Sorry, I don't think that's correct.
>
> Feel free to believe what you want. I did not check the
> implementation, so I'm just guessing how kindergarten programmers
> would implement things...

I am not sure if it is even permissible under OS/2 to write _anything_
directly into video memory. That is PM's job, and it may be circumvented
with DIVE for video playback. But even if it were possible to write into
video memory: there are only 64 MBytes of video memory here...

Yours
Oliver

Felix Miata

Dec 18, 2009, 6:04:22 PM
On 2009/12/18 22:17 (GMT+0100) Oliver Kluge composed:

> Ilya Zakharevich wrote:

>>> Are you implying that FF stores image objects uncompressed?

>> ??? Do you know ANY graphic program which does otherwise?

> FF is no "graphics program". The only reason why graphics programs store
> images uncompressed is that it is necessary for manipulation.

> FF just displays images and it doesn't make any sense to decode all
> images to uncompressed and keep them in memory uncompressed

I'm no programmer, but over many years of Mozilla development I've spent a
lot of time hanging out in the core Moz devs IRC channels, and have been able
to comprehend quite a bit of their discussion. IIUC, Ilya is right about this
- though it seems to make no sense to a core-dev outsider, image files are
just DOM objects converted on load to an internal format used by the DOM, and
grow immensely during the conversion.

William L. Hartzell

Dec 18, 2009, 6:19:47 PM
Felix Miata wrote:
> On 2009/12/18 22:17 (GMT+0100) Oliver Kluge composed:
>
>> Ilya Zakharevich wrote:
>
>>>> Are you implying that FF stores image objects uncompressed?
>
>>> ??? Do you know ANY graphic program which does otherwise?
>
>> FF is no "graphics program". The only reason why graphics programs store
>> images uncompressed is that it is necessary for manipulation.
>
>> FF just displays images and it doesn't make any sense to decode all
>> images to uncompressed and keep them in memory uncompressed
>
> I'm no programmer, but over many years of Mozilla development I've spent a
> lot of time hanging out in the core Moz devs IRC channels, and have been able
> to comprehend quite a bit of their discussion. IIUC, Ilya is right about this
> - though it seems to make no sense to a core-dev outsider, image files are
> just DOM objects converted on load to an internal format used by the DOM, and
> grow immensely during the conversion.
Why convert them? Could not the rendering be done at display time, i.e.
during WM_PAINT?
--
Bill
<Thanks, a Million>

Boris Zbarsky

Dec 18, 2009, 7:03:57 PM
On 12/18/09 3:04 PM, Felix Miata wrote:
> I'm no programmer, but over many years of Mozilla development I've spent a
> lot of time hanging out in the core Moz devs IRC channels, and have been able
> to comprehend quite a bit of their discussion. IIUC, Ilya is right about this
> - though it seems to make no sense to an core dev outsider, image files are
> just DOM objects converted on load to an internal format used by the DOM, and
> grow immensely during the conversion.

For what it's worth, I believe we drop the decoded data off a timer now.
And we're working on making the decoding itself lazy.

-Boris

Boris Zbarsky

Dec 18, 2009, 7:04:22 PM
On 12/18/09 3:19 PM, William L. Hartzell wrote:
> Why convert them? Could not the rendering be done at display time, i.e.
> during WM_PAINT?

Turns out that if done naively this sends painting performance (e.g.
scrolling) all to hell.

-Boris

Ilya Zakharevich

Dec 19, 2009, 2:32:22 PM
[A complimentary Cc of this posting was sent to
Oliver Kluge
<ok...@kluge-digital.de>], who wrote in article <Ve-dnU3whuvdbLbW...@mozilla.org>:

> > Sigh... Obviously, you do not understand the difference between
> > downloading 1GB in one chunk and downloading 1GB in 100000 chunks...

> 100000 chunks? Again, are you talking about a webpage or about multimedia?

I was talking about *your understanding*. ;-) The actual count is
about 1000.

> And I do get much more performance out of an Internet connection of that
> speed. There is no 70% protocol overhead.

Sigh... With wget, it is not "protocol overhead", but HTTP
handshaking overhead.

About the other stuff you wrote: you have a long way to go yet in your
understanding of "how programs work" (and how program developers work) -
or, if you wish, continue wearing rose-tinted glasses...

Yours,
Ilya

Stanimir Stamenkov

Dec 19, 2009, 3:36:17 PM
Sat, 19 Dec 2009 13:32:22 -0600, /Ilya Zakharevich/:

> [A complimentary Cc of this posting was sent to
> Oliver Kluge
> <ok...@kluge-digital.de>], who wrote in article<Ve-dnU3whuvdbLbW...@mozilla.org>:
>
>> And I do get much more performance out of an Internet connection of that
>> speed. There is no 70% protocol overhead.
>
> Sigh... With wget, it is not "protocol overhead", but HTTP
> handshaking overhead.

I believe wget reuses the same HTTP connection for multiple
requests to the same host. I see there's an option to disable
that behavior:

  --no-http-keep-alive    disable HTTP keep-alive (persistent connections).

--
Stanimir

Ilya Zakharevich

Dec 20, 2009, 2:53:09 PM
[A complimentary Cc of this posting was sent to
Stanimir Stamenkov
<s7a...@netscape.net>], who wrote in article <YaGdnSwBM6LYpLDW...@mozilla.org>:

> I believe wget reuses the same HTTP connection for multiple
> requests to the same host. I see there's an option to disable
> that behavior:

>   --no-http-keep-alive    disable HTTP keep-alive (persistent connections).

While you work with the same host, this must help indeed. But I'm not
sure that wget would be able to handle persistent connections to many
hosts simultaneously...

Yours,
Ilya

Joe Drew

Dec 21, 2009, 4:39:47 PM
to dev-te...@lists.mozilla.org, dev-po...@lists.mozilla.org

It's not possible to leave images compressed all the time, because you
have to decompress to draw. And, as Boris says, we have to draw them
frequently when we're scrolling.

We've mostly moved to a model where images are not decoded until they're
drawn, and the decoded data is thrown away after a little while, but
that still isn't entirely enabled (due to bugs), even on mozilla-central.

To see what's blocking that, take a look at the list of bugs marked with
[decodeondraw] in their whiteboard field:
https://bugzilla.mozilla.org/buglist.cgi?quicksearch=[decodeondraw]

Finally, even before "decode-on-draw" (which is more accurately named
"asynchronous decoding"), we cache 5 MB of decoded images to speed up
their redisplay. Those unused but cached images, like others, have their
decoded data thrown away after 30 seconds.

Any further questions, please ask.

Joe
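
To make that concrete, here is a minimal sketch of the decode-on-draw idea
in Python (hypothetical names, not Mozilla's actual code): compressed bytes
are kept permanently, pixels are produced only when an image is drawn, and a
timer drops decoded data that hasn't been drawn for 30 seconds.

import time

DECODED_TTL_SECONDS = 30          # unused decoded data lives this long (per Joe)

class Image:
    """Keeps compressed bytes always; decoded pixels only transiently."""
    def __init__(self, compressed: bytes):
        self.compressed = compressed
        self.decoded = None       # raw pixel data, e.g. 32-bit RGBA
        self.last_drawn = 0.0

    def _decode(self) -> bytes:
        # Stand-in for a real JPEG/PNG decoder; decoding expands the data.
        return self.compressed * 13

    def draw(self) -> bytes:
        # Decode lazily: only when the image is actually painted.
        if self.decoded is None:
            self.decoded = self._decode()
        self.last_drawn = time.monotonic()
        return self.decoded       # handed to the paint code

def expire_decoded(images):
    # Run off a timer: drop decoded pixels that haven't been drawn lately.
    # The compressed bytes stay, so a later draw() simply re-decodes.
    now = time.monotonic()
    for img in images:
        if img.decoded is not None and now - img.last_drawn > DECODED_TTL_SECONDS:
            img.decoded = None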

Joe

Dec 21, 2009, 4:49:42 PM
On Dec 18, 4:17 pm, Oliver Kluge <o...@kluge-digital.de> wrote:

> Ilya Zakharevich wrote:
> >> Are you implying that FF stores image objects uncompressed?
>
> > ???  Do you know ANY graphic program which does otherwise?
>
> FF is no "graphics program". The only reason why graphics programs store
> images uncompressed is that it is necessary for manipulation.
>
> FF just displays images and it doesn't make any sense to decode all
> images to uncompressed and keep them in memory uncompressed, with the
> one exception of the tab and the visible content as it has gotten the
> expose event. There double buffers do make sense, but only for this one
> tab. And of course video memory is only used for that.
>
> >> 24 bit image data stored in 32 bit?
> >
> > Yes, you are right, I forgot about alpha.  So it is 5bytes/pix...
>
> 8 bit alpha? In a browser?

Images are stored in 32 bits (4 bytes) per pixel. (Yes, 8 bits of
alpha. We need it to blend properly.) We don't double-buffer images
explicitly, but we do copy to the windowing system, which gets to do
what it wants with respect to memory usage. And we do manipulate
photos, because we need to colour-correct them, and they need to be
decompressed for that to work.
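On those numbers, Oliver's 6 MPix test image decodes to about 6,291,456 pix
* 4 Bytes/pix = 24 MBytes in RAM, rather than the 18 MBytes he computed at
3 Bytes/pix.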

Oliver Kluge

Dec 21, 2009, 6:19:09 PM
[Since my reply with the attached test image did not go through I repeat
this without the attachment]

Ilya Zakharevich wrote:
> About the other stuff you wrote: you have a long way to go yet in your
> understanding of "how programs work" (and how program developers work) -
> or, if you wish, continue wearing rose-tinted glasses...

Please watch your language.

I have studied computer science and electrical engineering at
university and I have worked 20 years in this business, so I don't think
it is appropriate for you to talk to me using this language.

To end this discussion, I have taken measurements. For that I created an
image in Adobe Photoshop that is precisely 6 MPix, 3072 x 2048 pixels, RGB,
8 bit per channel, no alpha, and converted it into JPEG using Photoshop's
ImageReady, compressing it to 1.397 MByte. Uncompressed this image takes up
exactly 18 MBytes. This image was copied several times, each copy getting a
new filename so Seamonkey would not reuse the data.

Starting Seamonkey 2 consumes 62 MBytes, with one tab and bookmarks open.

Loading image #1 consumes 18 MBytes.

Opening a new blank tab consumes another 7 MBytes.

Loading further images until there are 16 tabs with 16 images open
consumes another 217 MBytes.

That makes a total (including the overhead for the tabs) of 15.125
MBytes per tab. The cache is completely empty, as is swapper.dat.
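Spelled out: (18 + 7 + 217) MBytes / 16 tabs = 242 / 16 = 15.125 MBytes per
tab, on top of the 62 MByte baseline.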

So obviously images do not get stored uncompressed in RAM (except for
the one that is currently exposed, of course): an uncompressed image
would consume 18 MBytes each, and obviously that is not the case. Your
theory was that 72 MBytes get consumed per image; obviously that is also
not the case.

Yours
Oliver

[P.S.: Only after posting my reply with the test image did I close
Seamonkey entirely. Six MBytes did not get freed... Launching a new
Seamonkey with one empty tab and closing it again ate another MByte,
another launch yet another, and so on...]

Dave Yeo

Dec 21, 2009, 8:48:14 PM
On 12/21/09 03:19 pm, Oliver Kluge wrote:
>
> [P.S.: Only after posting my reply with the test image I closed
> Seamonkey entirely. Six MBytes did not get freed... Launching a new
> Seamonkey with one empty tab and closing it again ate another MByte,
> another launch yet another and so on...]

Yes, memory management is one of the weaknesses of our port, which
probably won't get fixed until a knowledgeable person steps up to the plate.
Dave
