Apologies if this is a duplicate; I can't find my original topic, so I'm trying again (my browser was acting up at the time).
We have mod_pagespeed running on Apache 2.2.3-91 on a CentOS box. We have mod_expires configured to expire some content after 10 minutes. Pagespeed is configured to cache in memory, and we have the core filters enabled along with a few others. Here are the relevant Apache config entries:
ModPagespeed on
ModPagespeedInheritVHostConfig on
ModPagespeedInPlaceResourceOptimization off
ModPagespeedEnableCachePurge on
ModPagespeedPurgeMethod PURGE
ModPagespeedFileCachePath "/dev/shm/page_speed_cache"
ModPagespeedFileCacheSizeKb 1024000
ModPagespeedFileCacheInodeLimit 0
ModPagespeedEnableFilters prioritize_critical_css
ModPagespeedEnableFilters flatten_css_imports,inline_css
ModPagespeedFileCacheCleanIntervalMs 480000
ModPagespeedCriticalImagesBeaconEnabled false
ModPagespeedDisableFilters convert_meta_tags
ModPagespeedDisallow "*/cgi-bin/*"
ModPagespeedDisableFilters convert_jpeg_to_webp
AddOutputFilterByType MOD_PAGESPEED_OUTPUT_FILTER text/html
ModPagespeedLogDir "/var/log/pagespeed"
ModPagespeedRewriteLevel CoreFilters
ExpiresActive on
ExpiresDefault "access plus 10 minutes"
<Location /css_webfonts>
ExpiresDefault "access plus 5 years"
</Location>
<Location /objects>
ExpiresDefault "access plus 5 years"
</Location>
<Files *.html>
Header unset Cache-Control
Header unset Expires
</Files>
<Location /cgi-bin>
Header set Cache-Control "no-cache, no-store"
Header unset Expires
</Location>
<Location /plugins/fontawesome/css/font-awesome.min.css>
ExpiresDefault "access plus 1 day"
</Location>
In our staging environment, directly after purging the cache and restarting Apache, a big test page is rendered with nearly no optimization on the first view. Subsequent views show an increasing number of URLs with '.pagespeed.' in them, indicating that more resources are getting optimized.
However, around the 10-minute mark, it starts serving unoptimized content until, presumably, the server has finished re-optimizing the content after re-evaluating the expiry (at least, that's what I think is happening). The indicator is fewer '.pagespeed.' URLs showing up in the rendered HTML.
To test this, I ran a simple bash loop that greps the returned HTML for '.pagespeed.' hashed URLs and exits once the count drops below the expected amount, with some extra bits so I can inspect the returned HTML afterwards:
time while true; do
  curl -D- http://internal.test.domain 2>/dev/null 1> /tmp/ps_capture.txt
  COUNT=$(grep -ci pagespeed /tmp/ps_capture.txt)
  echo -n "$COUNT"$'\r'
  if [[ $COUNT -ne 92 ]]; then break; else sleep 1; fi
done
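One caveat with the loop above, noted here as a sketch: `curl -D-` writes the response headers into the same capture file, and mod_pagespeed adds an `X-Mod-Pagespeed` response header, so a case-insensitive grep for "pagespeed" counts at least one header line, and it counts matching lines rather than individual rewritten URLs. A body-only variant that counts each '.pagespeed.' occurrence is sketched below (the expected baseline of 92 would need re-measuring with this counting method; the sample HTML is purely illustrative):

```shell
#!/bin/bash
# Sketch: count each '.pagespeed.' occurrence in the body only.
# Fetch headers to a separate file (curl -D /tmp/ps_headers.txt) so the
# X-Mod-Pagespeed response header doesn't inflate the count.
count_pagespeed_urls() {
  # $1: file containing the HTML body
  grep -o '\.pagespeed\.' "$1" | wc -l
}

# Illustration on a small inline sample with two rewritten URLs:
cat > /tmp/ps_sample.html <<'EOF'
<link href="/css/A.style.css.pagespeed.cf.HASH.css">
<script src="/js/app.js.pagespeed.jm.HASH.js"></script>
<script src="/js/vendor.js"></script>
EOF
count_pagespeed_urls /tmp/ps_sample.html
```

On the sample above this prints 2, one per rewritten URL, regardless of what the headers contain.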
After about 10 minutes (give or take a minute or two), it returns a lower number and breaks out of the loop.
Adding 'ModPagespeedRewriteDeadlinePerFlushMs 0' fixes this completely by delaying the serving of the content until it's ready, but it introduces server-side latency we'd rather do without.
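For what it's worth, the documented default for that directive is 10 ms per flush window, so a middle ground between the default and 0 might be a larger finite deadline; the value below is purely illustrative and untuned:

```apache
# Illustrative value only: give rewrites up to 100 ms per flush window
# before falling back to serving the unoptimized resource.
ModPagespeedRewriteDeadlinePerFlushMs 100
```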
My question is whether there is any way to get Pagespeed to continue serving the previously optimized content until it has finished processing the newly evaluated/optimized versions.
Basically, if it was still evaluating or minifying a very big .js file, for example, and someone navigated to the calling page, it would serve the previously cached optimized version rather than sending the unoptimized version as-is.
If that's possible, can it also be tweaked to do so even when the page hasn't been requested in a while?
If that isn't possible, are there any suggestions or best practices for keeping the serving of unoptimized content to a minimum when employing relatively short expiry periods with mod_expires?
And should Pagespeed be expiring any cached content at all, with or without headers, if the file itself hasn't been modified in any way? A hash of the file would be identical to the previous version, after all; perhaps there's a configuration option we missed that would produce that behaviour?
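The intuition behind that last question can be sketched in shell (file path is just an example): an unmodified file hashes to the same value regardless of its timestamps or cache headers, so a cache entry keyed on a content hash would still be valid after the headers expire:

```shell
#!/bin/bash
# Sketch: content hashing is stable when the bytes are unchanged,
# even if the mtime (or any response header) changes.
printf 'body { color: red; }\n' > /tmp/ps_hash_sample.css
HASH1=$(md5sum /tmp/ps_hash_sample.css | awk '{print $1}')
touch /tmp/ps_hash_sample.css   # bump mtime only; contents untouched
HASH2=$(md5sum /tmp/ps_hash_sample.css | awk '{print $1}')
if [ "$HASH1" = "$HASH2" ]; then echo "hashes match"; fi
```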
Thanks for any and all help.