Firefox Page Speed add-on bugs/quirks/inaccuracies


EnviroChem

Dec 7, 2009, 9:22:32 PM
to page-speed-discuss
I've spent much of today using the Page Speed add-on for Firefox and
have observed the following bugs/quirks/inaccuracies/annoyances:

1) INACCURACY: It falsely claims width/height clues are missing from
images when those clues are handled by a CSS file, as they are supposed
to be. It appears the add-on is only looking for the old HTML width and
height attributes, which have been superseded by CSS.

2) BUG: It falsely claims caching dates are too short on images, CSS
files and JavaScript files. I have used third-party header-reading tools
to ensure the recommended caching dates are in place. I also take issue
with the recommended one-month time frame: a couple of weeks would be
more than enough cache time for most users of a typical site, AND caches
holding files for over a month is TOO long for sites that are regularly
updated.

3) INACCURACY: Incorrectly advises applying gzip to image files.
Image files are already compressed. Applying gzip to them wastes CPU
cycles and can actually result in a larger download.

4) ANNOYING: Minimize DNS lookups and parallelize downloads across
hostnames are contradictory instructions. Also, the extra DNS lookups
are frequently caused by Google JS files (e.g. Google AdSense and
Analytics). The only way to reduce those DNS lookups would be to
remove the Google code or for Google to consolidate such files onto one
domain.

5) QUIRK: Page Speed Activity doesn't allow one to scroll left or
right along the timeline to see details that have scrolled off the
page. This isn't very helpful.

One of the pages I was using for my testing was http://environmentalchemistry.com

Overall the tool is very helpful for uncovering potential performance
issues, but the issues listed above also make it very frustrating to
use.

Sam Kerner

Dec 8, 2009, 10:20:38 AM
to page-spee...@googlegroups.com
On Mon, Dec 7, 2009 at 9:22 PM, EnviroChem
<k...@environmentalchemistry.com> wrote:
> I've spent much of today using The Page Speed add-on for Firefox and
> have observed the following bugs/quirks/inaccuracies/annoyances:
>
> 1) INACCURACY: It falsely claims width/height clues missing from
> images if those clues are handled by CSS file as they are supposed to
> be. It appears add-on is only looking for old HTML width and height
> attributes, which have been superseded by CSS.

Sounds like you found a bug. Can you give us a snippet of html
and css that shows the problem?

> 2) BUG: It falsely claims caching dates are too short on images, CSS
> files and Javascripts.  I have used third party header reading tools
> to ensure recommended caching dates are in place. I also take issue
> with the one month time frame recommended as a couple of weeks would
> be more than enough of a cache time for most users to a typical site
> AND cache engines caching files for over a month is TOO long for sites
> that are regularly updated.

Our reasoning is explained at
http://code.google.com/speed/page-speed/docs/caching.html .

We recommend setting cache headers to keep content cached for as
long as possible (which is one year), and using a fingerprint or
version number in the URL of all cacheable resources that change. For
example, suppose you want to serve a CSS file at:

http://www.example.com/path/my.css .

If it can be cached, and you want it to change, serve it from:

http://www.example.com/path/v1/my.css .

Next time you update it, change the path to:

http://www.example.com/path/v2/my.css .

This way, the fact that the old version is cached for a long time does
not mean that a user might get an outdated version.
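
To make that concrete, the only thing that changes in the page itself is
the reference (a minimal sketch; the link tag and paths are just
illustrative):

    <!-- before the update -->
    <link rel="stylesheet" type="text/css" href="http://www.example.com/path/v1/my.css">

    <!-- after the update: only the path changes, so long-cached copies of v1 simply stop being requested -->
    <link rel="stylesheet" type="text/css" href="http://www.example.com/path/v2/my.css">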

If you know for a fact that you will change a file next week, you
might decide that you want to set caching headers for a shorter amount
of time. I have seen too many smart and well-meaning people mess this
up to recommend it. Keeping track of what content is cached for how
long is hard, and getting it right is unrealistic for all but the
simplest sites.

> 3) INACCURACY: Incorrectly advises applying gzip to image files.
> Image files are already compressed.  Applying gzip to them wastes cpu
> cycles and can actually result in a larger download.

Page Speed should not recommend compressing images. Can you give
a URL that demonstrates this?

> 4) ANNOYING: Minimize DNS lookups and parallelize downloads across
> hostnames are contradictory instructions.

How so? Minimize DNS Lookups recommends not serving one resource
per domain, and serving javascript in the first 10% of the DOM from
the same host as the main document. Parallelize Downloads Across
Hostnames recommends balancing static resources across four hostnames
when at least one host serves 10 static resources.
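
As a rough sketch of how the two rules fit together (the hostnames below
are hypothetical, and the right split depends on the page):

    <!-- the main document and early javascript stay on the primary host -->
    <script type="text/javascript" src="http://www.example.com/js/header.js"></script>

    <!-- static resources are balanced across a small, fixed set of hostnames -->
    <img src="http://static1.example.com/images/photo1.jpg" alt="photo 1">
    <img src="http://static2.example.com/images/photo2.jpg" alt="photo 2">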

> Also the extra DNS lookups
> are frequently caused by Google JS files (e.g. Google AdSense and
> Analytics). The only way to reduce those DNS lookups would be to
> remove Google code or for Google to consolidate such files one domain.

See the thread "[page-speed-discuss] Recommendations are google
issues", posted yesterday.

> 5) QUIRK: Page Speed Activity doesn't allow one to scroll left or
> right along time line to see details that have scrolled off page.
> This isn't very helpful.

Yes, that could be improved.

> One of the pages I was using for my testing was http://environmentalchemistry.com
>
> Overall the tool is very helpful for uncovering potential performance
> issues, but the above listed issues also make it very frustrating to
> use.

Glad to hear that. Thanks for letting us know what is broken, so
we can fix it.

Sam

Richard Rabbat

Dec 8, 2009, 8:46:44 PM
to page-spee...@googlegroups.com
Sam, see inline.

On Tue, Dec 8, 2009 at 7:20 AM, Sam Kerner <ske...@google.com> wrote:
On Mon, Dec 7, 2009 at 9:22 PM, EnviroChem
<k...@environmentalchemistry.com> wrote:
> I've spent much of today using The Page Speed add-on for Firefox and
> have observed the following bugs/quirks/inaccuracies/annoyances:
>
> 1) INACCURACY: It falsely claims width/height clues missing from
> images if those clues are handled by CSS file as they are supposed to
> be. It appears add-on is only looking for old HTML width and height
> attributes, which have been superseded by CSS.

   Sounds like you found a bug.  Can you give us a snippet of html
and css that shows the problem?

http://environmentalchemistry.com/ has 2 images with width/height specified, but Page Speed still complains. The source shows:
<div class="BLAd" style="width:120px;margin:380px Auto 0"><a href="http://www.sciam.com/earth-and-environment" title="Environmental News" rel="nofollow"><img src="/images/SANetworkLogo.png" alt="Scientific American: Partner Network"></a></div>
Let's file a bug.
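
For comparison, a minimal sketch of the same image with its dimensions stated explicitly, first as attributes and then in CSS (the 120x60 size is assumed purely for illustration):

    <!-- dimensions as HTML attributes -->
    <img src="/images/SANetworkLogo.png" alt="Scientific American: Partner Network" width="120" height="60">

    <!-- or dimensions in a CSS rule -->
    <style type="text/css">.BLAd img { width: 120px; height: 60px; }</style>
    <img src="/images/SANetworkLogo.png" alt="Scientific American: Partner Network">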
I think this is a minor misunderstanding. Under the gzip rule, no image shows up, but the images could be further compressed by using better JPEG or PNG compression. So, under "Optimize images", the text says "Compressing x.png...", but it means using PNG compression.
EnviroChem, please confirm.
 
> 4) ANNOYING: Minimize DNS lookups and parallelize downloads across
> hostnames are contradictory instructions.

   How so?  Minimize DNS Lookups recommends not serving one resource
per domain, and serving javascript in the first 10% of the DOM from
the same host as the main document.  Parallelize Downloads Across
Hostnames recommends balancing static resources across four hostnames
when at least one host serves 10 static resources.

> Also the extra DNS lookups
> are frequently caused by Google JS files (e.g. Google AdSense and
> Analytics). The only way to reduce those DNS lookups would be to
> remove Google code or for Google to consolidate such files one domain.

   See the thread "[page-speed-discuss] Recommendations are google
issues", posted yesterday.

> 5) QUIRK: Page Speed Activity doesn't allow one to scroll left or
> right along time line to see details that have scrolled off page.
> This isn't very helpful.

  Yes, that could be improved.

> One of the pages I was using for my testing was http://environmentalchemistry.com
>
> Overall the tool is very helpful for uncovering potential performance
> issues, but the above listed issues also make it very frustrating to
> use.

   Glad to hear that.  Thanks for letting us know what is broken, so
we can fix it.

Sam



EnviroChem

Dec 14, 2009, 12:21:12 PM
to page-speed-discuss
Sorry, my message above was still being composed and its issues were
still being tested. It got sent before I was ready (I wasn't even aware
it had been sent).

On Dec 8, 10:20 am, Sam Kerner <sker...@google.com> wrote:
> On Mon, Dec 7, 2009 at 9:22 PM, EnviroChem
>
> <k...@environmentalchemistry.com> wrote:
> > I've spent much of today using The Page Speed add-on for Firefox and
> > have observed the following bugs/quirks/inaccuracies/annoyances:
>
> > 1) INACCURACY: It falsely claims width/height clues missing from
> > images if those clues are handled by CSS file as they are supposed to
> > be. It appears add-on is only looking for old HTML width and height
> > attributes, which have been superseded by CSS.
>
>     Sounds like you found a bug.  Can you give us a snippet of html
> and css that shows the problem?
>

This problem may have been a caching issue with the browser using
outdated CSS.

> > 2) BUG: It falsely claims caching dates are too short on images, CSS
> > files and Javascripts.  I have used third party header reading tools
> > to ensure recommended caching dates are in place. I also take issue
> > with the one month time frame recommended as a couple of weeks would
> > be more than enough of a cache time for most users to a typical site
> > AND cache engines caching files for over a month is TOO long for sites
> > that are regularly updated.
>
>     Our reasoning is explained at http://code.google.com/speed/page-speed/docs/caching.html .
>
>     We recommend setting cache headers to keep content cached for as
> long as possible (which is one year), and using a fingerprint or
> version number in the url of all catchable resources that change.  For
> example, suppose you want to serve a css file at:
>
>    http://www.example.com/path/my.css .
>
> If it can be cached, and you want it to change, serve it from:
>
>    http://www.example.com/path/v1/my.css .
>
> Next time you update it, change the path to:
>
>    http://www.example.com/path/v2/my.css .
>
> This way, the fact that the old version is cached for a long time does
> not mean that a user might get an outdated version.
>

This is a bastardized, evil way to create a mess, and it defeats the real
elegance of CSS. For instance, some of my CSS files are the same
across domains (main site, blog, partner branding) to keep the primary
skin of the site the same across all the sub-sites. I make ONE change
to ONE file and it affects all the sites seamlessly. Your proposal
would require updating ALL of the sub-sites (which also requires
getting 3rd parties involved) just to make what could be a minor
change or update. This is absolutely insane. A much simpler and
reasonably effective way to go with CSS and JS files is to keep a
reasonable cache length and then require cache servers to check the
modified date of those files. A week or a month is plenty of time for a
cache. Changing the CSS file path to force a new cache update just
isn't logical, and in the end you are not even saving the end user all
that much time if the site is hosted on a reasonably fast server/
connection, the CSS file is reasonably optimized, AND it is gzipped
before being served. In my case I've been seeing wait times from
request to received CSS file from my server of less than 100ms.

Very simply, long cache times for CSS and JS files are a solution to a
non-issue. They will cause more development and maintenance headaches
without ANY appreciable advantage to the end user.

>     If you know for a fact that you will change a file next week, you
> might decide that you want to set caching headers for a shorter amount
> of time.  I have seen to many smart and well meaning people mess this
> up to recommend it.  

If I knew what I was going to change next week or next month, I would
change it now and be done with it. This isn't how development works.

> Keeping track of what content is cached for how
> long is hard, and getting it right is unrealistic for all but the
> simplest sites.

EXACTLY! This is why recommending long cache times for JS & CSS will
cause more problems than it will resolve. What benefits the user most is
a cache time that allows the vast majority of a site's users to keep
support files in their browser cache for the duration of their session,
yet still makes sure that the next time they visit they get the proper
versions of those files. The advice to maintain really long cache times
for CSS & JS files WILL lead some developers to set their cache times
far too long without considering the ramifications. This will then end
up causing problems for some of their users the next time the developer
updates one of these files.

> > 3) INACCURACY: Incorrectly advises applying gzip to image files.
> > Image files are already compressed.  Applying gzip to them wastes cpu
> > cycles and can actually result in a larger download.
>
>     Page Speed should not recommend compressing images.  Can you give
> a URL that demonstrates this?

It is not doing that now; this may have been one of the notes I was
still investigating when my message was accidentally sent.

> > 4) ANNOYING: Minimize DNS lookups and parallelize downloads across
> > hostnames are contradictory instructions.
>
>     How so?  Minimize DNS Lookups recommends not serving one resource
> per domain, and serving javascript in the first 10% of the DOM from
> the same host as the main document.  Parallelize Downloads Across
> Hostnames recommends balancing static resources across four hostnames
> when at least one host serves 10 static resources.

This is a much clearer explanation of what those notes were trying
to convey than what I got from the Page Speed recommendations. The
reality is, however, that most developers will be able to do nothing
about the minimize DNS lookups suggestion, as the single-request domains
are typically for 3rd-party widgets like Google Analytics. The only way
to address the issue would be to remove said widget; oftentimes this
is not a viable solution. Removing the widget also wouldn't be a
desirable solution for Google, as frequently it would be their widgets
that would be getting removed.

Parallelizing downloads isn't even a viable option for most websites,
as it would require registering multiple domains and getting tertiary
sites set up just to handle this. This suggestion would create major
website management headaches and increase web hosting costs. If this
advice is going to be given, then there needs to be some kind of
guideline as to how much of a performance improvement it would
provide. This could be a massive retooling of a website, and we need to
know up front how much value such an undertaking would provide.
Without this knowledge, I'd suggest getting rid of this recommendation,
because it will be of no use to 90% of us and will only cause
confusion.

Suggestions in the Page Speed tool should be viable for the majority of
developers and should offer significant opportunities for improvement.
Neither the DNS suggestion nor the parallelizing suggestion meets this
requirement. At most they should be footnotes listing other things that
might be tried.

> > Overall the tool is very helpful for uncovering potential performance
> > issues, but the above listed issues also make it very frustrating to
> > use.
>
>     Glad to hear that.  Thanks for letting us know what is broken, so
> we can fix it.

I love the premise behind this tool and I greatly appreciate being
able to pinpoint page speed issues. My criticisms are aimed at
providing end-user feedback to help make the Page Speed tool better.

RELATED ISSUES:

1) I did notice that the Firebug plus Page Speed add-ons really
hammer the processor and slow Firefox and my computer to a crawl.
Google Chrome also slowed to a crawl when its developer tool was
opened. Opera, on the other hand, performed much better. I ended up
using Firefox with Page Speed only for the overall recommendations
(e.g. minimizing JS), but relied on Opera for performance testing
when trying to optimize PHP code, as it had less impact on my processor
and thus gave me a better idea of whether PHP tweaks improved
performance (I was looking at the wait time for file requests).

2) Some of the most serious issues I noticed during my testing and
fixing actually come from Google scripts. In particular, the
following scripts were throwing lots of CSS errors:
http://www.google.com/friendconnect/styles/gadgets-ltr.css?v=0.499.5
http://googleads.g.doubleclick.net/pagead/ads? {snipped query string}

It would be really nice if the Google developers who maintain the
scripts we use on our websites wrote cleaner code themselves. Maybe
Google should adopt a policy that the HTML/CSS generated by these
scripts must validate to W3C HTML/CSS specifications and must not
throw any JavaScript errors/warnings. It would certainly be leading by
example. I know there is always the excuse against validating code
because of the need to support legacy browsers, but I've been
supporting legacy browsers for years while still creating code that
validates to W3C specifications (I only recently cut off support for
IE5, mostly because I can no longer test against it).

Richard Rabbat

Dec 24, 2009, 12:07:57 AM
to page-spee...@googlegroups.com
Sam makes one of many possible recommendations with respect to version control.
Other models include adding a hash to the filename and making it never expire (basically, setting its expiry to a year). This is what GWT does, for example. If you maintain your website files by hand, this is not a trivial task. In general, you should set expiry for as long as you can.
A web development environment that takes care of all the links will simplify your life and will make sure that you don't end up with a bunch of 404s or unused resources wasting away in your browser.
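
A minimal sketch of the hash-in-filename style (the hash values are made up for illustration):

    <!-- the build step embeds a content hash in the filename -->
    <link rel="stylesheet" type="text/css" href="/path/my.a1b2c3d4.css">
    <!-- after my.css is edited, the build emits a new name, e.g. /path/my.e5f6a7b8.css -->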



On Mon, Dec 14, 2009 at 9:21 AM, EnviroChem <k...@environmentalchemistry.com> wrote:
Sorry, my message above was still being composed and issues tested. It
got sent before I was ready (I wasn't even aware it had been sent).

On Dec 8, 10:20 am, Sam Kerner <sker...@google.com> wrote:
> On Mon, Dec 7, 2009 at 9:22 PM, EnviroChem
>
> <k...@environmentalchemistry.com> wrote:
> > I've spent much of today using The Page Speed add-on for Firefox and
> > have observed the following bugs/quirks/inaccuracies/annoyances:
>
> > 1) INACCURACY: It falsely claims width/height clues missing from
> > images if those clues are handled by CSS file as they are supposed to
> > be. It appears add-on is only looking for old HTML width and height
> > attributes, which have been superseded by CSS.
>
>     Sounds like you found a bug.  Can you give us a snippet of html
> and css that shows the problem?
>

This problem may have been a caching issue with the browser using
outdated CSS.

Glad to know it's resolved.
You may disagree with a lot of the opinions here, since we believe that every millisecond counts. The 100 ms is just a slippery slope.
 

Very simply long cache times for CSS and JS files is a solution to a
non-issue. It will cause more development, maintenance headaches
without ANY appreciable advantage to the end user.

>     If you know for a fact that you will change a file next week, you
> might decide that you want to set caching headers for a shorter amount
> of time.  I have seen to many smart and well meaning people mess this
> up to recommend it.  

If I knew what I was going to change next week or next month, I would
change it now and be done with it.  This isn't how development works.

> Keeping track of what content is cached for how
> long is hard, and getting it right is unrealistic for all but the
> simplest sites.

EXACTLY! This is why recommending long cache times for JS & CSS will
cause more problems then it will resolve.  What is the most benefit to
the user is a cache time that allows the vast majority of users to a
site to maintain support files in their browser cache for the duration
of their session, yet still makes sure the next time they visit they
are getting proper versions of their files.  The advice to maintain
really long cache times for CSS & JS files WILL lead some developers
to change their cache time to too long of a period without considering
the ramifications.  This will then end up causing problems for some of
their users the next time the developer updates one of these files.

There are several schools of thought. Let's say you're using a JavaScript library at version 1.1. If you don't keep track of changes that the developer makes, you may end up pulling a new version that is incompatible with the functionality that you use. This doesn't happen often, but being able to test before you ship is sometimes a good idea. See, for example, how the APIs that code.google.com hosts allow you to specify a version number, which keeps you comfortable that you won't get regressions (whether latency- or functionality-related).
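
For example, with the hosted AJAX Libraries API you can pin a version explicitly (a sketch; the library and version number are just illustrative):

    <script type="text/javascript" src="http://www.google.com/jsapi"></script>
    <script type="text/javascript">
      // pinning an explicit version means a new release can't surprise you
      google.load("jquery", "1.3.2");
    </script>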


> > 3) INACCURACY: Incorrectly advises applying gzip to image files.
> > Image files are already compressed.  Applying gzip to them wastes cpu
> > cycles and can actually result in a larger download.
>
>     Page Speed should not recommend compressing images.  Can you give
> a URL that demonstrates this?

It is not doing it now, this may have been one of the notes I was
still figuring out what was going on when my message was accidentally
sent.

> > 4) ANNOYING: Minimize DNS lookups and parallelize downloads across
> > hostnames are contradictory instructions.
>
>     How so?  Minimize DNS Lookups recommends not serving one resource
> per domain, and serving javascript in the first 10% of the DOM from
> the same host as the main document.  Parallelize Downloads Across
> Hostnames recommends balancing static resources across four hostnames
> when at least one host serves 10 static resources.

This is a much more clear explanation of what those notes were trying
to convey than what I got from the Page Speed recommendations. The
reality is, however, most developers will be able to do nothing about
the minimize DNS lookups suggestion as the single request domains are
typically for 3rd party widgets like Google Analystics.  The only way
to address the issue would be to remove said widget, oftentimes this
is not a viable solution.  Removing the widget also wouldn't be a
desirable solution for Google as frequently it would be their widgets
that would be getting removed.

That is not the intent of the rule. You would be surprised how many websites hit just too many different servers. Looking at the results of this rule for 2 different sites (screenshots omitted), one has 12 and the other has 6 DNS lookups to web servers serving one resource each. It just adds up.

Parallelizing downloads isn't even a viable option for most websites
as it would require registering multiple domains and getting tertiary
sites set up just to handle this. This suggestion would create major
website management headaches and increase web hosting costs.  If this
advice is going to be given, then there needs to be some kind of
guideline as to how much of a performance improvement this would
provide.  This could be massive retooling of a website and we need to
know up front how much value such an undertaking would be provided.
Without this knowledge, I'd suggest getting rid of this recommendation
because it will be of no use to 90% of us and will only cause
confusion.


You can blame HTTP/1.1 section 8.1.4. It seems most browsers ignore this rule now, but IE6/IE7 (which have a sizable portion of the browser market) are limited to 2 connections per host.
I'll take the feedback to the FriendConnect team; unfortunately, I can't see the CSS for the snipped DoubleClick URL. Speed is one of many issues that a web developer has to optimize, and we personally love standards (as you might have seen from the numerous posts about HTML5, for example).
Thanks for the feedback and keep it coming :)

Richard

EnviroChem

Dec 24, 2009, 11:23:49 AM
to page-speed-discuss

On Dec 24, 12:07 am, Richard Rabbat <rab...@google.com> wrote:

> You may disagree with a lot of the opinions here, since we believe that
> every millisecond counts. the 100 ms is just a slippery slope.

I do see counting milliseconds as important; I spent a lot of time A-B
testing my .htaccess file and PHP to shave off every last millisecond
I could. The problem comes where shaving milliseconds via the
CSS file versioning recommendation becomes a development
nightmare. In my case, setting a cache length of one week for JS and
CSS files would serve the surfing habits of 90%+ of my user base.
There would be almost no gain for almost all of my user base in setting
cache lengths longer than 1 month (which I'm doing for some files like
images). I really see setting cache times of up to a year as obscene,
with no payback for anyone except really big sites (e.g. Google) that
lots of people return to day in and day out.

> there are several schools of thought. let's say you're using a javascript
> library at version 1.1. if you don't keep track of changes that the
> developer makes, you may end up pulling a new version that is incompatible
> with the functionality that you use. This doesn't happen often, but being
> able to test before you ship is sometimes a good idea. See for example how
> the api's that code.google.com hosts allow you to specify a version number
> that keeps you comfortable that you won't get regressions (whether latency
> or functionality related).

I can see that if a JS file is shared (like a library) and hosted on a
3rd-party site (e.g. FriendConnect), versioning would be a good
idea to keep things from breaking on other people's sites. I could
also see using versioning if there are major changes that could break
things when the wrong file is pulled due to caching, but for minor stuff
this doesn't make sense.

For instance, I use a dynamically generated JS file for a pull-down menu
(yes, there are static HTML menus as well, via site/section directory
pages). The contents of the menu are dynamically generated, but only
get updated when a new page is added to the site. It also pulls an
RSS feed from my blog. These menu items don't change all that often,
so it really doesn't matter if a user caches this JS for a couple of
days.

The real beauty of this JS menu is that it costs only 13kb of download
yet links to hundreds upon hundreds of pages on my site. As a separate
JS file, users download it once and can use it for their entire session.
If this menu were static HTML it would have added about 100kb to the
download and certainly couldn't be a part of every page. This makes it
much easier to jump directly from one page to another on my site
without using an intermediary directory page.

My point is that pushing most websites to extend cache times beyond
one week or one month (depending upon the type of resource) and to use
versioning will be of no value to most of their users, but it will
cause development headaches.


> That is not the intent of the rule. You would be surprised how many
> websites hit just too many different servers. Looking at the results of
> this rule for 2 different sites (screenshots omitted), one has 12 and
> the other has 6 DNS lookups to web servers serving one resource each.
> It just adds up.

Okay, I confess, I call two stats counters, Google Analytics and
Quantcast. I'm using the asynchronous version of GA, and Quantcast is
the very last thing that gets called before the closing </body></html>
tags. I use GA for its great stats, and one of my ad providers wants
Quantcast to provide 3rd-party stats for prospective advertisers. I
did end up killing FriendConnect and will roll my own page
recommendation code that has a smaller footprint. I also pull stuff
from three ad providers, but those scripts pay the bills and put
bread on the table, so they are a necessary "evil".
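
Roughly, the placement looks like this (a simplified sketch; the account
ID is a placeholder and the Quantcast tag itself is omitted):

    <!-- asynchronous Google Analytics snippet near the top of the page -->
    <script type="text/javascript">
      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-XXXXXXX-X']);
      _gaq.push(['_trackPageview']);
      (function() {
        var ga = document.createElement('script');
        ga.type = 'text/javascript';
        ga.async = true;
        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(ga, s);
      })();
    </script>

    ...

    <!-- the Quantcast tag is the very last thing before the closing tags -->
    </body></html>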

By chance, I recently registered a really short version of my domain
name for other purposes. It will be a cookieless domain, so I was able
to parallelize some CSS and core "skin" image files onto this
new domain, which should help. Content images will still be pulled
from the main domain, as it would take way too much work to change
these, but this should still improve things. Now if Webmaster Tools
would just give me new averages sans the old methods. I'm finding
there is a two-to-three-day lag before the full effect of changes is
realized on the "Site Performance" page.

> You can blame http 1.1 section 8.1.4. It seems most of the browsers ignore
> this rule now but ie6/ie7 (which have a sizable portion of the browser
> market) are limited to 2.

Yeah, I read up on that whole mess, and what a mess it is. There are
major shortcomings in the HTTP specs.

> I'll take the feedback to the friendconnect team; unfortunately, I can't
> see the css for the snipped DoubleClick url.

Like I mentioned above, I ended up killing off the FriendConnect
script because it was so very bloated and didn't provide enough value
to users. The DoubleClick problem comes and goes. I'm sure that if the
DoubleClick folks were made aware that Page Speed is detecting issues
with their code, they could find and resolve them.

> Speed is one of many issues
> that a web developer has to optimize and we personally love standards (as
> you might have seen from the numerous posts about HTML5 for example).

I've been as faithful to the HTML/CSS specifications as possible since
the days of HTML 3.0. I try very hard to make sure all my sites validate
to the HTML/CSS specifications and try to follow WCAG. Right now my
target is HTML 4.01 Strict and CSS 2.1. I'll wait on HTML5 until it gets
finalized before adopting it. What I'd really like to see is for ad
providers (e.g. AdSense) to make the generated code their scripts spit
out validate to the W3C HTML/CSS specifications and be free of JS
errors.

> thanks for the feedback and keep it coming :)

I'm all for anything that helps us improve the experience of visitors
to our websites.
