Pagespeed resources sometimes hang


Shane Marsh

Oct 16, 2019, 4:37:09 AM
to mod-pagespeed-discuss
Very occasionally, we have PageSpeed-optimised resources that simply "hang". We have also had occasional reports from clients saying that a website isn't loading properly, though it's usually fixed by a refresh.

I've done some digging. Initially I believed the issue was being caused by Cloudflare (as they front our entire cluster of servers), but I've been running a select few sites with Cloudflare disabled to see if that was indeed the cause, and to my amazement this morning it seems the pagespeed module or nginx is the root cause.

I've managed to capture the problem in a speed test: https://www.webpagetest.org/result/191016_W3_cd509e20a17d0baa2caff6d381a14b68/

Has anyone seen this issue before? The config page is here for those needing more detail: https://www.diannemarshall.com/pagespeed_admin/

Shane :)

Longinos

Oct 16, 2019, 5:42:57 AM
to mod-pagespeed-discuss
Hi Shane
Maybe you have a large number of authorized domains. Are all of those domains running PageSpeed?
The pagespeed Domain directive tells PageSpeed which resources it may optimize, and every domain you list there MUST be running PageSpeed too.
A very large list of domains puts a high load on your server, because IPRO tries to optimize those external resources and can't.
Restrict the list to domains you know are running PageSpeed.
In that large list you also have a bunch of domains that are proxied without restricting their scope, for example:
https://maps.googleapis.com/ Auth ProxyDomain:https://static.salonguru.net/googleapismap/
With this you are storing all the content of the domain maps.googleapis.com. Do you need all that content?
Maybe all of this together puts the cache into a constant rebuild, deleting entries to make space for new ones, on top of having to wait for external resources to arrive before they can be stored in the cache/proxy.
As a start, try using only your own domain(s) in the pagespeed Domain directive and see what happens.
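That suggestion, as a minimal config sketch (the domains here are hypothetical placeholders; list only domains that actually run PageSpeed):

```nginx
# Only authorize domains that are themselves served by PageSpeed;
# anything not listed is simply left un-optimized rather than fetched.
pagespeed Domain https://www.example.com;
pagespeed Domain https://static.example.com;
```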

Shane Marsh

Oct 16, 2019, 6:01:34 AM
to mod-pagespeed-discuss
Hi Longinos, 

Thanks for your speedy reply. Yes, that long list of domains covers all the other sites running within the same installation, and they do all use PageSpeed. We had to list each domain like this because our client managers tend to copy content from one site to another, and we found that listing every domain in the system was the best way forward; otherwise the logs contained a lot of permission-denied errors and the content would not be optimised.

With regards to the proxied domains all going to static.salonguru.net, I think that list can be optimised a lot better. It came from me going through the message history and picking out some of the external domains that had permission-denied errors; I figured the more it was allowed to optimise, the better.

I will follow your idea, remove all the proxied domains, and see what happens. I know we also have room to extend the file system cache (it's a little small for the size of the installation, 60 GB). Each server has 2 GB of local shared memory, and 2 GB is available on each memcached server.

I'll come back to you when I have more info. 

Shane :)

Longinos

Oct 16, 2019, 12:37:24 PM
to mod-pagespeed-discuss
Hi Shane
About the proxied domains: if you set one up, you must restrict its scope to the directories you need, and only those. For example, suppose you will take photos from Pinterest and will only use photos from one user. If you do:

pagespeed MapProxyDomain http://www.pinterest.com https://static.salonguru.com/pinterest/

you are caching ALL the content from http://www.pinterest.com. So you need to restrict the content you will proxy:

pagespeed MapProxyDomain http://www.pinterest.com/user_directory https://static.salonguru.com/pinterest/

This will proxy only the content from that user.


Shane Marsh

Oct 17, 2019, 5:04:44 AM
to mod-pagespeed-discuss
Hi Longinos, 

I'll take more care when setting that up in future - thanks for your advice. I did remove all the proxied domains, but that has introduced other temporary issues: for example, sites are returning 404s because content from static.salonguru.net is no longer available. I've just cleared all our upstream caches and am hoping that's now resolved.

I'm still quite curious about these "random" resources that get held up: https://www.webpagetest.org/result/191017_JS_38a9624e32acb7af4c6d49683a504fd5/. One tiny 3.4 KB JS file timed out after 60 seconds. Strange.

I guess things are still settling while the entire cache is being rebuilt.

Shane :)

Longinos

Oct 17, 2019, 5:27:03 AM
to mod-pagespeed-discuss
Hi Shane

I have tried the diannemarshall site and saw no issues; if that site is bypassing Cloudflare, that's expected. For sites with Cloudflare in front, if Cloudflare is caching the HTML they will keep returning 404s, so a Cloudflare cache purge is needed.
I tried to rerun your webpagetest and can't see any problem file other than the ajax call.
I see you are requesting a bunch of external domains, so perhaps you need some "help" with this.
Try using a preconnect link for the domains you are requesting, something like this:
<link href="https://www.google-analytics.com" rel="preconnect">

Shane Marsh

Oct 17, 2019, 5:44:19 AM
to mod-pagespeed-discuss
Hi, 

I'm not too worried about Cloudflare, only because PageSpeed sets the HTML content to no-cache, which Cloudflare complies with, so they are only storing assets. However, we have an edge network that lies in between our backend web/PageSpeed servers and Cloudflare: five servers in total, two in the US, two in London and one in Sydney. They use AWS Route 53 for DNS and are nginx reverse proxies. It's on those servers that I cleared out the caches, as they are designed to cache HTML content (along with any site asset) for short periods of time. In a nutshell, our systems are complex, horizontally and vertically scaled, and we run our own primitive CDN. It has crossed my mind that the lag could be down to the reverse proxying, but I felt that if there were any delays, they would likely affect all assets, not just a single random file.

Yep, there are a lot of external resources. I'm tempted to give the system another day or two to settle before re-introducing any proxies. I'd appreciate any help with that :)

Thanks for the idea of prefetching - I'll look into that, although I don't think it has full cross-browser support?

Shane :)

Longinos

Oct 18, 2019, 3:51:03 AM
to mod-pagespeed-discuss
Hi Shane

Prefetching is link rel="preload" and is not supported by all browsers (Firefox has it disabled by default, for example), but I'm not talking about preload; I'm talking about preconnect.

Preload downloads the asset before it is needed; preconnect only makes the connection to the site (DNS, TCP connect, TLS negotiation) and is supported by the majority of browsers. The Edge browser has no support for the HTML version, but it does support the HTTP header version.

You can use preload when you know the exact file you will download (for Google Analytics, for example), but when you don't know the exact name of the file the page uses, you can use preconnect.

I see you are using WordPress on some sites; perhaps for the WP sites you can use the CAOS plugins, which make Google Fonts and Google Analytics/Tags load from local.
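The HTTP-header form of preconnect mentioned above can be emitted from nginx; a sketch (the analytics origin is just an example):

```nginx
# Header equivalent of <link rel="preconnect">, useful where the
# HTML version is unsupported (e.g. the Edge browser, per above):
add_header Link "<https://www.google-analytics.com>; rel=preconnect";
```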

Shane Marsh

Oct 18, 2019, 4:41:56 AM
to mod-pagespeed-discuss
Morning! 

OK great - thank you for your help. I will look through those plugins and preconnect with my colleagues :)

Can we come back to the initial query regarding the resources? I'm not convinced the hanging issue is entirely related to the proxied domains, as we are continuing to see issues with .js files. At least three sites had issues overnight. The load times are appalling: 65 seconds in one case.


The three filters we have activated for Javascript are: 
pagespeed EnableFilters rewrite_javascript;
pagespeed EnableFilters defer_javascript;
pagespeed EnableFilters combine_javascript;

Another thought has crossed my mind: is it beacon-related? What would happen if a visitor requests a .js resource from one server and the beacon gets returned to a different server? Would that cause this?

Shane :)

Shane Marsh

Oct 18, 2019, 5:00:47 AM
to mod-pagespeed-discuss
Hi, 

Another thought we have had is based on what WebPageTest says about the resources...

I can see the file has a .pagespeed extension, so I believe it has already been rewritten at this point? Could it be that js_defer is taking too long to fire in the resource? I'm going to test this by turning off the defer_javascript filter and see if that helps at all.

Any suggestions welcome. 

Shane :)

Longinos

Oct 20, 2019, 6:35:03 AM
to mod-pagespeed-discuss
Hi Shane

I don't know how the js_defer filter works because I don't use it, but my bet is on it: all the files that take too much time to download are handled by that filter.

My site is much simpler, and my approach to the JS render-blocking issue is to add the defer attribute and let the browser take care of it.

Same with the combine filters: I use a plugin to combine the CSS and JS files, save the result as a file, and cache it, so the combined file doesn't need to be created on each request. For combining and deferring I use the Autoptimize plugin, and to take care of preconnect/preload I use the Pre-Party hints plugin.
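The browser-native defer described above is just an attribute on the script tag (the script path here is a hypothetical example):

```html
<!-- The browser fetches this in parallel with parsing but executes it
     only after the document is parsed, so it never blocks rendering. -->
<script defer src="/wp-content/themes/example/app.js"></script>
```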

Shane Marsh

Oct 21, 2019, 5:05:29 AM
to mod-pagespeed-discuss
Morning!

Being totally honest, I am a little nervous about adopting Autoptimize. We used Frank's plugin before we went down the route of building custom servers, and at the time we tested all the plugins that were available; all of them had the same issue: when JavaScript was deferred inside the browser, it broke. We discovered that not all plugin developers enqueue correctly, causing endless "function does not exist" errors, and we reached the conclusion that "proper" optimisation had to happen outside WordPress, which is what led us to the mod_pagespeed tools in the first place. Additionally, as our servers are now clustered, we'd have to put the cache that Autoptimize creates on an NFS filesystem. NFS is not quick and could cause other performance problems that we wouldn't know about until the system is under full load.

I've looked at the speed test results from over the weekend and I can confirm the issue is not related to the js_defer filter. All the files that are hanging are non-combined small JavaScript files with a .pagespeed extension, so I think the issue actually relates to the rewrite_javascript filter.

I'm going to rule out the beacon issue I mentioned earlier by altering our load balancers so that requests from the same client are all sent to the same backend server, and I hope this improves things. Otherwise I genuinely don't know where to go next, other than optimising within WordPress with Autoptimize.

Shane :)

Longinos

Oct 21, 2019, 11:03:53 AM
to mod-pagespeed-discuss
Hi Shane
I don't make prescriptive statements; I only say what I do on my own site. Feel free to pick up whatever you find useful; each site is a world of its own, and those doing the admin work know what the site needs.

I re-ran the WebPageTest tests on the sites you posted, and the JS files don't take as much time to download as in the test you posted, so I think it is a js_defer-related issue.

About beaconing and PageSpeed caching: I see you are using memcached as an external cache. As far as I know, memcached doesn't work as a cluster, so the assets stored on each memcached server come from different requests; one time an asset is stored on one server and the next time on the other (maybe I'm wrong and memcached does work as a cluster, sharing the stored assets).

Have you tried Redis as a backend cluster cache store? Redis can store bigger objects than memcached and can be configured as a cluster, so both nodes hold the same objects and it doesn't matter which server a request asks for assets.

Shane Marsh

Oct 22, 2019, 6:27:00 AM
to mod-pagespeed-discuss
Hiya Longinos, 

Otto, can you give your thoughts if you are available?

OK, we are making progress: last night's speed tests were MUCH better. Only a single site had a single resource that hung. I have managed to rule out an issue with the js_defer filter.

Memcached is not working entirely as expected. If a resource is generated on one server (using memcached-linked servers), it appears that same resource is NOT automatically available via another server.

Basically, what I think is happening:

1) The client requests a web page and Server A responds with the HTML document.
2) The client's browser requests a resource based on that HTML document, HOWEVER the load balancer diverts the request to Server B.
3) On Server B, the requested pagespeed resource does not exist, so Server B generates it on the fly. This appears to be the source of the lag.

I believe that to fully resolve this, pagespeed would need to have a centralised file system cache to complement memcached. That idea, however, goes against the advice I received in this thread before starting the project: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!searchin/mod-pagespeed-discuss/shanemarsh28|sort:date/mod-pagespeed-discuss/AyW5RPcoi7I/ca9Ak9GoBQAJ

In short, with nginx load balancing (http://nginx.org/en/docs/http/load_balancing.html), least_conn cannot be supported; it must be round robin (the default) or ip_hash.
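A minimal sketch of the ip_hash setup, assuming two hypothetical backend addresses:

```nginx
# ip_hash pins each client IP to one backend, so the server that
# rewrote a .pagespeed. resource is also the one asked to serve it.
upstream pagespeed_backend {
    ip_hash;
    server 10.0.0.11;
    server 10.0.0.12;
}

server {
    listen 80;
    location / {
        proxy_pass http://pagespeed_backend;
    }
}
```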

I feel this finding should be verified by the developers and added to the documentation.

Shane :)

Longinos

Oct 23, 2019, 7:29:50 AM
to mod-pagespeed-discuss
Hi Shane

As I stated in my last post, as far as I know memcached doesn't work as a cluster, so the objects stored on one server are not the same as the objects stored on the other.

Memcached doesn't rely on files; like Redis, it is an in-memory database. Memcached and Redis can be used to replace the file system cache, so you don't need any shared filesystem to use as a cache.
Redis can be configured as a cluster, so when an object is stored on one server it is replicated to the other nodes in the cluster, and a request for an object to any server in the cluster returns the same object.

I think you have two alternatives. Suppose you have four webservers and two PageSpeed cache servers:

1. Split your domains between the servers into two groups: webservers A and B hold the domains www.a.com and www.b.com, and webservers C and D hold www.c.com and www.d.com.
Webservers A and B use memcached server 1, and webservers C and D use memcached server 2. This way, A and B store their assets in memcached server 1, and C and D in memcached server 2.

2. Replace the memcached servers with Redis servers and configure them as a cluster with two nodes. Webservers A, B, C and D keep the same domains.

Shane Marsh

Oct 23, 2019, 10:00:29 AM
to mod-pagespeed-discuss
Hi Longinos, 

Thanks again. I understood your point about using a Redis cluster, although I'm not 100% sure about the need; I'll try to explain.

At present, we have two web servers, set up identically with the same domains, memcached and pagespeed configurations. In the pagespeed configuration, both memcached servers are listed, which was supposed to make sure that both servers could read and write to each server's memcached instance, the idea being that if an optimised resource is needed, both memcached servers can be checked. If I've misunderstood and that is not what's happening, then yes, I agree entirely: your two options above are the only ways forward.

I've not heard before that memcached/Redis could replace the file system cache completely.

Shane :)

Longinos

Oct 23, 2019, 12:34:48 PM
to mod-pagespeed-discuss
Hi Shane
Well....
So you have webservers A and B, and memcached server 1 and server 2. Since the memcached servers don't communicate with each other, you need to define some affinity between domains and memcached servers to avoid what you are seeing: an optimized resource is stored on server 1, but the request for that resource is addressed to server 2, which has to start the optimization process again.
The sites are fully optimized only once both memcached servers hold all the optimized resources; then it doesn't matter which memcached server a request is addressed to. But my bet is that this is never achieved, because the optimized resources can't be created at the same time on both servers; they expire at different times.
With a Redis cluster, the Redis servers have an internal process that replicates the stored objects between cluster nodes, so it doesn't matter which cluster node a request is addressed to. Maybe a request arrives before the object has been replicated, but that is a small window of time.

Memcached has a limit on the size of an object it can store (I don't remember exactly; maybe 1 or 2 MB), so objects bigger than that are stored in the disk cache. Redis doesn't have this limit and can store objects of any size, so the disk cache is not needed. You still have to configure it, because pagespeed requires it to be configured, but the disk cache stays empty, so no disk reads are needed.
Even the metadata cache can be stored in Redis, so all webservers share the same metadata cache.

In the screenshot below (Captura.png) you can see that all the PageSpeed caches are stored in a Redis instance.

Shane Marsh

Oct 24, 2019, 11:06:51 AM
to mod-pagespeed-discuss
Hi Longinos,

Thanks again for your very detailed explanation. I will look into converting from memcached to Redis. It might require a bit of research on my part, only because I haven't set up a Redis cluster before; I've used Redis, but only in a bare-bones setup.

So does that mean that, with everything in memory using Redis, I can reduce DefaultSharedMemoryCacheKB and FileCacheSizeKb to more or less 0? What minimum settings would you recommend?

Each server has about 10 GB of RAM that could be dedicated to Redis. That would make the cache smaller overall; the current setup is 2 GB shared memory, 2 GB memcached and in excess of 100 GB of file system caches.

Shane :)

Longinos

Oct 24, 2019, 2:10:56 PM
to mod-pagespeed-discuss
Hi Shane
Well... you have a very big site with a bunch of domains, and I have not worked at that scale, so any advice I can give is not experience-based, except one: trial and error.
I use nginx, so the config I post here is for nginx; if you use Apache you can translate it:

pagespeed DefaultSharedMemoryCacheKB 0;
pagespeed FileCachePath /var/cache/nginx/pagespeed/;
pagespeed FileCacheSizeKb            1048576;
pagespeed FileCacheCleanIntervalMs   3600000;
pagespeed FileCacheInodeLimit        100000;

Those are the only lines I have to configure the cache. Yes, you can see that I still have the file cache configured, but that's because nginx+pagespeed doesn't start if it is not configured.
It stays empty, though; the only file I have in the cache path is cache.purge. These sizes are the ones I used before switching to Redis, and my site is not as big as yours.

I don't know what your infrastructure is, but I think the ideal is to have a public network interface to be hit by the webservers and a private one dedicated to the replication process.
Read the Redis docs carefully. With Redis you can have a disk backup of the Redis database; this backup can be read at server restart, so you don't lose the optimized resources on every restart.
This backup can be done in two ways: a snapshot in one file, and/or incrementally with small files. The snapshot takes time and server resources, so you need free RAM, because it is a fork of the Redis process. I have never used the incremental one, so I don't know its memory requirements.
Redis must be configured with an LRU cache eviction policy, and you can play with pagespeed RedisTTLSec (the default is -1, no expiry) to set a TTL on the database objects. You can also play with pagespeed RedisDatabaseIndex (default 0) to store different sites in different databases; each Redis instance can have up to 16 databases.

I know this is very general advice, but it's the best I can do based on my own experience with a small site; I won't give advice on things I don't know.
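Putting those directives together, a hedged sketch (the server address is a hypothetical placeholder, and RedisTTLSec/RedisDatabaseIndex availability depends on your PageSpeed build):

```nginx
pagespeed RedisServer "10.0.0.21:6379";   # one node of the Redis cluster
pagespeed RedisTTLSec 2592000;            # 30-day TTL instead of the default -1 (never expire)
pagespeed RedisDatabaseIndex 0;           # up to 16 databases per Redis instance
# The file cache must still be declared even though Redis leaves it empty:
pagespeed FileCachePath /var/cache/nginx/pagespeed/;
```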

Shane Marsh

Oct 25, 2019, 12:40:17 PM
to mod-pagespeed-discuss
Hi Longinos,

OK, great, thank you again for another detailed reply. No need to apologise; I think we are both at the technical limit of what is possible with our knowledge.

This afternoon I have looked into building a Redis cluster, but I have come up against issues: the setup is complex. Redis can support master->master replication only if there is at least one slave. On top of that, Sentinel would be needed to manage failover and the promotion of masters. For me this is an exceptionally steep learning curve, and one I'd like to avoid in a large production environment at the moment. However, after an out-of-the-box brainstorming session, we have come up with another potential way forward.

The basic idea is: 

1) Reduce DefaultSharedMemoryCacheKB to 0, disabling the local shared memory cache.
2) Increase memcached to 5 GB+ on each server and raise the maximum object size to 10 MB. There shouldn't be any single files larger than this; if there are, they deserve to be slow! :)
3) Mount the v3 cache on an NFS mount so both servers can share any resources written to disk at the same time.

Memcached CAN be configured to store entries above 1 MB: memcached -I 10m would allow items up to 10 MB, for example. This is great news.

I realise I am advised against putting the "entire" pagespeed cache directory on NFS, but I see no reason why we can't mount part of it. It appears there is nothing in the /pagespeed/cache/v3 directory other than the optimised CSS/JS/image files that could not be stored in memory (listed in folders by domain). Front-end requests for these should be limited thanks to our edge network and the Cloudflare CDN I've talked about before.

Although this is unorthodox, in theory it MIGHT work, and it's much easier to undo if we find everything breaks! I accept that with this we will get occasions where memcached entries expire from the cache, as you explained, causing resources to be occasionally re-created, but I think we would get that with Redis too. Also, the cache sizes we could achieve with Redis would be smaller: in total, I calculated we would need in excess of 250 GB to cache the entire platform, which could never be done entirely in memory. I also accept that all caches will be lost on server restart; I can live with this, as we do not restart servers that often.
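The three steps above, sketched as config (server addresses, path and sizes are hypothetical placeholders; step 3 assumes the v3/ subdirectory of FileCachePath is the NFS mount):

```nginx
pagespeed DefaultSharedMemoryCacheKB 0;                       # step 1: no local shared-memory cache
pagespeed MemcachedServers "10.0.0.11:11211,10.0.0.12:11211"; # step 2: both enlarged memcached instances
pagespeed FileCachePath "/var/cache/pagespeed/";              # step 3: its v3/ subdir is an NFS mount
```

with each memcached instance started along the lines of memcached -m 5120 -I 10m to allow 10 MB items.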

Thanks for your continued support - have a great weekend! We plan to start implementing this on Monday so I'll come back to you with how I get on.

Shane :) 

Shane Marsh

Oct 29, 2019, 7:31:38 AM
to mod-pagespeed-discuss
Hi Longinos, 

I thought I'd update you on how we are getting along, and as for the unorthodox setup we have now adopted: so far, so good.

I ran strace, and the reads and writes going into the v3 directory are minimal; I found no evidence of polling in this directory. It appears the majority of the optimisations go straight to memcached, which is perfect now that the v3 directory is mounted on NFS.

Last night's speed tests were on the slow side, as not everything had been optimised yet, but I found no sites with hanging resources. This is the first time in a number of weeks that all resources loaded in a timely manner.

So far so good.

Shane :)

Longinos

Oct 29, 2019, 3:06:50 PM
to mod-pagespeed-discuss
Hi Shane

So glad to read this.

The initial issue (the hanging js files) was solved, and the whole site works better than before.

But stay alert. Take a look at the memcached statistics (pagespeed_global_admin/cache#physical_cache), in particular the curr_items/evictions relation.
On one server I see 1230324/0: that server has space to store 1,230,324 objects while evicting 0 keys. But the other shows a relation of 619106/686485, which may indicate that the size of that memcached db is too small.

To be honest, I don't know how pagespeed splits the keys with memcached. I think objects are split between the dbs by a hash key composed of several elements (UA, URL and so on), so an object with a given key is stored in the same db each time the hash key is computed. But for sure, if the db evicts keys to make space for new objects, you lose the optimization, and the process to optimize the evicted key must restart from scratch. So take this into account and watch it; maybe you need to increase the db size.

On the other hand, I see you are using Google Fonts. Take a look at this WP plugin: https://wordpress.org/plugins/host-webfonts-local/ . It makes Google Fonts load from local, so you convert external requests into local ones.

Shane Marsh

Oct 29, 2019, 3:30:31 PM
to mod-pagespeed-discuss
Ahh, you found a typo! The first server had the correct setting of -m 5120 (hence no evictions), but the second server had a much smaller -m 2120. The giveaway was that limit_maxbytes needed to be the same on both servers. I've corrected this now and restarted the memcached service on the second server. Well spotted, thank you. If 10 GB in total isn't enough, I think I could get away with increasing it by another 2 GB, but I'd like to keep some RAM headroom if possible, as the servers are also handling PHP.

I think you are right that pagespeed uses a hashed key to work out which memcached server has the resource, but I don't know for sure either.

Shane :)

Shane Marsh

Oct 30, 2019, 6:39:26 AM
to mod-pagespeed-discuss
Morning!

All the speed tests came back satisfactory this morning; I think this issue is resolved.

So, in summary: if you're running with memcached across multiple servers, the v3 cache directory DOES need to be writable by all running instances.

Just wanted to check one small thing: it seems that all sites are caching content and optimising well, but I have noted in the message history a lot of resources (HTTPCache) with recent failures. I have added MapOriginDomain, and that seems to have helped a lot, but there are still some. Is it normal to have some resources fail?
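For reference, a MapOriginDomain sketch of the kind described here (the domain is a hypothetical placeholder); it tells PageSpeed to fetch a public URL's resources from a local origin instead of going back out through the CDN/edge layer:

```nginx
# Fetch www.example.com resources from localhost rather than
# re-entering the edge/Cloudflare path:
pagespeed MapOriginDomain http://localhost https://www.example.com;
```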

Shane :)

Joshua Marantz

Oct 30, 2019, 8:28:43 AM
to mod-pagespeed-discuss
Hi all -- just chiming in to confirm that PageSpeed uses a hashed key for memcached. The key is hashed from the URL as well as some context-properties -- not exactly the UA (which is high-entropy), but it might include a bit or two about whether (say) an image optimization is targeting webp or the request was made with an HTTP Save-Data header.

Can you paste some of the messages from the message history? It would be good to understand what fetch-failures remain.

I strongly discourage having PageSpeed look at a file-cache directory mounted via NFS. It is not necessary for all PageSpeed servers to see the same caches, though it does help with server-side efficiency, which is why you use memcached. It would be better to use a tmpfs directory for PageSpeed's file-cache.
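The tmpfs suggestion, as a hedged sketch (the path and size are hypothetical; tmpfs contents vanish on reboot, which is acceptable for a rebuildable cache):

```shell
# Mount a RAM-backed tmpfs for PageSpeed's file cache:
sudo mount -t tmpfs -o size=2g tmpfs /var/cache/pagespeed
# Or persist the mount across reboots via /etc/fstab:
#   tmpfs  /var/cache/pagespeed  tmpfs  size=2g  0  0
```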



--
You received this message because you are subscribed to the Google Groups "mod-pagespeed-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mod-pagespeed-di...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/mod-pagespeed-discuss/3be5b56b-bcc0-4928-bfe6-81e7d60e7846%40googlegroups.com.

Shane Marsh

Oct 30, 2019, 8:55:00 AM
to mod-pagespeed-discuss
Hi Joshua, thanks for the clarity on the hashing for memcached.

I agree that having the v3 directory mounted on NFS is not in any way ideal, but eventually I came to the conclusion that we had to mount it. Ideally I'd expect a server to be able to build the optimised resource "on the fly", but sadly this led to unacceptable server-side delays. As mentioned in previous posts, mounting only the v3 directory ensured the NFS file system is not constantly polled, and it also made sure each server kept its own log files. It's unorthodox, but it has solved our problem.

Sure the message history is available here: https://www.salonguru.net/pagespeed_admin/message_history 
** You'll need to hard refresh (CTRL+F5) to get an update as the whole site is cached.

Typically I'm seeing this:
[Wed, 30 Oct 2019 12:48:29 GMT] [Info] [20939] HTTPCache key=https://www.junior-green.com/wp-content/plugins/woocommerce/assets/images/icons/credit-cards/diners.svg fragment=junior-green.com: remembering recent failure for 299 seconds.
[Wed, 30 Oct 2019 12:48:29 GMT] [Info] [20939] HTTPCache key=https://www.junior-green.com/wp-content/plugins/woocommerce/assets/images/icons/loader.svg fragment=junior-green.com: remembering recent failure for 299 seconds.
[Wed, 30 Oct 2019 12:48:29 GMT] [Info] [20939] HTTPCache key=https://www.junior-green.com/wp-content/plugins/woocommerce/assets/images/icons/credit-cards/visa.svg fragment=junior-green.com: remembering recent failure for 299 seconds.
[Wed, 30 Oct 2019 12:48:29 GMT] [Info] [20939] HTTPCache key=https://www.junior-green.com/wp-content/plugins/woocommerce/assets/images/icons/credit-cards/laser.svg fragment=junior-green.com: remembering recent failure for 299 seconds.
[Wed, 30 Oct 2019 12:48:29 GMT] [Info] [20939] HTTPCache key=https://www.junior-green.com/wp-content/plugins/woocommerce/assets/images/icons/credit-cards/mastercard.svg fragment=junior-green.com: remembering recent failure for 299 seconds.
[Wed, 30 Oct 2019 12:48:57 GMT] [Info] [20939] HTTPCache key=https://www.hughcampbellhairgroup.com/wp-content/themes/HughCampbell/images/prev.png fragment=hughcampbellhairgroup.com: remembering recent failure for 299 seconds.
[Wed, 30 Oct 2019 12:48:57 GMT] [Info] [20939] HTTPCache key=https://www.hughcampbellhairgroup.com/wp-content/themes/HughCampbell/images/next.png fragment=hughcampbellhairgroup.com: remembering recent failure for 299 seconds.
[Wed, 30 Oct 2019 12:48:57 GMT] [Info] [20939] Trying to serve rewritten resource in-place: https://www.hughcampbellhairgroup.com/files/2019/08/PMSmartbond-Sept-19-Offer-%E2%82%AC12.jpg
[Wed, 30 Oct 2019 12:48:57 GMT] [Info] [20939] Could not rewrite resource in-place because URL is not in cache: https://www.hughcampbellhairgroup.com/files/2019/08/PMSmartbond-Sept-19-Offer-%E2%82%AC12.jpg
[Wed, 30 Oct 2019 12:48:57 GMT] [Info] [20939] HTTPCache key=https://www.hughcampbellhairgroup.com/wp-content/themes/HughCampbell/images/next-grey.png fragment=hughcampbellhairgroup.com: remembering recent failure for 299 seconds.
[Wed, 30 Oct 2019 12:48:57 GMT] [Info] [20939] HTTPCache key=https://www.hughcampbellhairgroup.com/wp-content/themes/HughCampbell/images/prev-grey.png fragment=hughcampbellhairgroup.com: remembering recent failure for 299 seconds.

Interestingly those files do exist and when tested in browser they don't 404. 

There are quite a few failures for WordPress ajax; these can be ignored, as I will be updating the config to exclude any path that contains /?wc-ajax.
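One way to do that exclusion (the wildcard pattern is my guess at matching those URLs, not tested config):

```nginx
# Skip optimization for WooCommerce ajax endpoints:
pagespeed Disallow "*wc-ajax*";
```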

Shane :)