--
You received this message because you are subscribed to the Google Groups "mod-pagespeed-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mod-pagespeed-di...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/mod-pagespeed-discuss/f37c422c-eb09-45b4-befe-87db26ec8184%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Hi!
Okay! I got the impression that the X-Mod-Pagespeed header was always added, independent of resource type.
In my requests for JavaScript files I do get an ETag, but it does not have the format you indicate. For me it looks something like:
ETag: "41030-63d-524a7c278a1cf"
The value does not seem to have any static parts in it at all. I guess this ETag is added by Apache?
Also, for static resources like this I was expecting the max-age to be rewritten with a value much larger than the 300 that I set in the Apache config:
Cache-Control: max-age=300, public
I was also expecting the name of the resource to be rewritten with a hash code at the end. I guess this indicates that my JavaScript resources are not being rewritten at all?
The MPS settings that I am using right now are more or less default, but I have switched to CoreFilters and separately enabled collapse_whitespace and remove_comments.
I have tried some other settings as well; the sum of all my changes to pagespeed.conf is this:
ModPagespeedInPlaceResourceOptimization on
ModPagespeedFetchWithGzip on
SetOutputFilter DEFLATE
ModPagespeedRewriteLevel CoreFilters
ModPagespeedEnableFilters collapse_whitespace,remove_comments
ModPagespeedForbidFilters lazyload_images
Forbidding lazyload_images was an attempt to see if I could get a reference to the image with the hash code directly in the name, instead of the JavaScript code that is generated, but it did not make a difference; the checkImageForCriticality JavaScript is still injected in some places.
But these are all the changes I currently have in the configuration for a server that has now been running since yesterday and has had at least 15-20 refresh calls on this HTML/JavaScript application.
All whitespace and comments are removed from the first HTML page generated, and it does contain some PageSpeed-rewritten references, such as this CSS link: "resources/Theme1/add2home/style/A.addtohomescreen.css.pagespeed.cf.Avod5EX1Yk.css".
The response for this request has this ETag in the header:
ETag: W/"0"
But this client seems to dynamically download most resources via JavaScript, and this is where I believe the problems start.
For example, one of the first script tags that executes in this client is one that defines and downloads require.js:
<script type="text/javascript" data-main="product/mobile/init"
src="common/lib/require.js"></script>
This is not rewritten: the name reference does not include the hash code, and its max-age is still just 300 seconds.
Is this case not supported by MPS, or am I missing some configuration?
Thanks,
Håkan
We cannot change URLs of resources loaded from JavaScript; we can only optimize them in place. The in-place optimized results have the content hash in the ETag, but we cannot extend their cache TTL. Resources referenced from HTML and CSS get renamed to include the content hash, can be cached forever, and are served with ETag W/"0" because the URL is already unique.
To see the in-place optimized results, please refresh the page 2 or 3 times in a row within the 5-minute TTL. 20 refreshes per day will not keep the optimized assets live in the cache.
Let me explain the dynamics. Let's say MPS gets a request for foo.png, which is referenced from JavaScript. If the optimized version of foo.png is already in cache, it will be served. When it is served, if the cache entry is more than 80% expired (e.g. 4 minutes through a 5-minute TTL), it will be proactively freshened in the background after it is served. However, if we only get about a request per hour (~20 per day), the cached version of foo.png will have expired, and we won't serve our stale optimized version but will just let the origin one through. In that case, the original (Apache-generated) ETag will be served.
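To make the freshening rule above concrete, here is a minimal sketch (in Python, with illustrative names, not mod_pagespeed internals) of the "refresh in the background once more than 80% of the TTL has elapsed" behavior:

```python
# Hypothetical sketch of the proactive-freshening rule described above.
# A cached entry is served as long as it is fresh; once its age passes
# 80% of the TTL, a background refresh would be triggered after serving.
def should_freshen(age_seconds: float, ttl_seconds: float) -> bool:
    """Return True once the cached entry is at least 80% through its TTL."""
    return age_seconds >= 0.8 * ttl_seconds

# With a 5-minute (300 s) TTL, freshening kicks in at the 4-minute mark:
print(should_freshen(240, 300))  # 4 min into a 5-min TTL
print(should_freshen(120, 300))  # 2 min in: still comfortably fresh
```

This also shows why rarely-hit resources fall out of cache: if no request arrives during the 80-100% window, nothing triggers the freshen, and the entry simply expires.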
This does make it a little challenging to serve optimized results on pages that are only hit rarely. To make sure they are optimized, you can try setting the TTL longer and using mod_pagespeed's cache purging to update resources. Another alternative is ModPagespeedLoadFromFile, which bypasses the origin TTL entirely and just stats the files on every request. This works if your files are stored locally on your server, and not on an NFS disk.
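For reference, a ModPagespeedLoadFromFile mapping in pagespeed.conf looks roughly like this (the URL prefix and filesystem path below are placeholders for your own setup):

```apache
# Hypothetical sketch: map a URL prefix to a local directory so that
# mod_pagespeed reads (and stats) the files directly from disk on each
# request, instead of fetching them over HTTP and honoring the origin TTL.
ModPagespeedLoadFromFile "http://www.example.com/common/" "/var/www/html/common/"
```

With this in place, require.js under that prefix would be loaded from disk, so its optimized version does not depend on cache freshness against a 300-second origin TTL.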
Hope this helps!
-Josh