mod_pagespeed mod_expires serving unoptimized content


Real O'Neil

Jul 11, 2016, 4:21:08 PM
to mod-pagespeed-discuss
Apologies if this is a duplicate. Can't find my original topic, trying again as my browser was acting up at the time.

We have mod_pagespeed running on Apache 2.2.3-91 on a CentOS box. mod_expires is configured to expire some content in 10 minutes. PageSpeed is configured to cache in memory (the file cache lives on /dev/shm), and we have the core filters enabled as well as a few others. Here are the relevant Apache config entries:

ModPagespeed on
ModPagespeedInheritVHostConfig on
ModPagespeedInPlaceResourceOptimization off
ModPagespeedEnableCachePurge on
ModPagespeedPurgeMethod PURGE
ModPagespeedFileCachePath "/dev/shm/page_speed_cache"
ModPagespeedFileCacheSizeKb 1024000
ModPagespeedFileCacheInodeLimit 0
ModPagespeedEnableFilters prioritize_critical_css
ModPagespeedEnableFilters flatten_css_imports,inline_css
ModPagespeedFileCacheCleanIntervalMs 480000
ModPagespeedCriticalImagesBeaconEnabled false
ModPagespeedDisableFilters convert_meta_tags
ModPagespeedDisallow "*/cgi-bin/*"
ModPagespeedDisableFilters convert_jpeg_to_webp
AddOutputFilterByType MOD_PAGESPEED_OUTPUT_FILTER text/html
ModPagespeedLogDir "/var/log/pagespeed"
ModPagespeedRewriteLevel CoreFilters


ExpiresActive on
ExpiresDefault "access plus 10 minutes"

<Location /css_webfonts>
    ExpiresDefault "access plus 5 years"
</Location>

<Location /objects>
    ExpiresDefault "access plus 5 years"
</Location>

<Files *.html>
    Header unset Cache-Control
    Header unset Expires
</Files>

<Location /cgi-bin>
    Header set Cache-Control "no-cache, no-store"
    Header unset Expires
</Location>

<Location /plugins/fontawesome/css/font-awesome.min.css>
    ExpiresDefault "access plus 1 day"
</Location>



In our staging environment, directly after purging the cache and restarting Apache, a large test page is rendered with almost no optimization on the first view. Subsequent views show increasing numbers of URLs with '.pagespeed.' in them, indicating that more resources are getting optimized.
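For reference, purging between runs is done with the configured PURGE method; if I'm reading the cache-purge docs correctly, the wildcard form below flushes everything:

    # Purge the entire PageSpeed cache (relies on ModPagespeedEnableCachePurge
    # and ModPagespeedPurgeMethod PURGE from the config above)
    curl -X PURGE 'http://internal.test.domain/*'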

However, around the 10 minute mark it starts to serve unoptimized content until, presumably, the server has finished optimizing the content again after re-evaluating the expiry (at least that's what I think is happening). This is indicated by fewer '.pagespeed.' URLs showing up in the rendered HTML.

To test this, I ran a simple bash loop that greps the returned HTML for instances of '.pagespeed.' hashed URLs and exits once the count drops below the expected amount, with some extra bits so I can look at the returned HTML afterwards:

time while true; do
    # Grab headers and body of the test page
    curl -D- http://internal.test.domain 2>/dev/null 1> /tmp/ps_capture.txt
    # Count lines containing rewritten '.pagespeed.' URLs
    COUNT=$(grep -ic pagespeed /tmp/ps_capture.txt)
    printf '%s\r' "$COUNT"
    # 92 is the count we see once the page is fully optimized
    if [[ $COUNT -ne 92 ]]; then break; else sleep 1; fi
done


After about 10 minutes (give or take a minute or two), it returns a lower number and breaks out of the loop.

Adding 'ModPagespeedRewriteDeadlinePerFlushMs 0' fixes this completely by delaying the serving of the content until it's ready, but it introduces a server-side latency we'd rather do without.
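For completeness, this is how we applied that workaround in the config shown above; as I understand it, it removes the per-flush rewrite deadline so the HTML is held until rewrites finish:

    # Workaround: hold HTML until rewrites complete instead of falling back
    # to the unoptimized resource when the per-flush deadline expires
    ModPagespeedRewriteDeadlinePerFlushMs 0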

My question is whether there is any way to get PageSpeed to continue serving the previously optimized content until it has finished processing the newly evaluated/optimized versions.

Basically, if PageSpeed were still evaluating or minifying a very big .js file, for example, and someone navigated to the calling page, it would serve up the previously cached optimized version rather than sending the unoptimized version as-is.

If that's possible, can it also be tweaked to do so in the event the page hasn't been seen in a while?

If that isn't possible, any suggestions or best practices for keeping the serving of unoptimized content to a minimum when employing relatively short timeout periods with mod_expires?

And should PageSpeed be expiring any cached content, with or without headers, if the file itself hasn't been modified in any way? A hash of the file would be identical to the previous version, after all; perhaps there's a configuration option we missed that would produce that behaviour?

Thanks for any and all help.

Otto van der Schaaf

Jul 12, 2016, 6:40:35 PM
to mod-pagespeed-discuss
I don't think serving the stale optimized version of resources while revalidating the original is a current feature, and I don't know if such stale-while-revalidate-like functionality is planned. I do think that would be really nice, though.

For now, if your files are (mostly) static, you could take a look at LoadFromFile, which does not rely on revalidating over HTTP and should help pass your scripted test.
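Something along these lines in the vhost config, as a rough sketch (the URL prefix and directory here are only illustrative, since I don't know your document root):

    # Map a URL prefix to a filesystem directory so PageSpeed reads the
    # originals straight from disk instead of re-fetching them over HTTP
    ModPagespeedLoadFromFile "http://internal.test.domain/css_webfonts/" "/var/www/html/css_webfonts/"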

As for your last question about PageSpeed expiring cached content: I'm not sure I understand it correctly; could you give an example of what you observed and what you would have expected instead?

Otto




Real O'Neil

Jul 12, 2016, 7:30:39 PM
to mod-pagespeed-discuss
We'd looked into LoadFromFile but it would conflict with some of our SSI integration efforts at this time, unfortunately.

On the last point about expiring cached content: I suppose I just accidentally rephrased the original question there. The local 'stale' content is static and unchanged, yet it still gets served unoptimized pending a completed re-optimization, despite not needing to be.