Nginx status - Reading dangerously large protocol message


Shane Marsh

Jul 25, 2019, 6:57:33 AM
to mod-pagespeed-discuss
Hi, 

I've noticed that when running sudo nginx -t this warning is repeated. I don't appear to be seeing any other issues at the moment - how do I increase the limit?


[libprotobuf WARNING third_party/protobuf/src/src/google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message.  If the message turns out to be larger than 67108864 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
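For reference, the 67108864-byte limit in that warning works out to 64 MiB, and nginx -T (which dumps the fully expanded configuration, every include resolved) gives a rough way to gauge how large the compiled config actually is:

```shell
# The limit from the warning, 67108864 bytes, is 64 MiB:
echo "$((67108864 / 1024 / 1024)) MiB"   # 64 MiB

# nginx -T prints the fully expanded configuration, so piping it
# through wc -c gives a rough upper bound on total config size:
nginx -T 2>/dev/null | wc -c
```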

I should explain that our systems are large and nginx may be occupying 10-15GB of RAM at any given time - I can only assume this is the problem?

Shane :) 

Otto van der Schaaf

Jul 25, 2019, 8:27:54 AM
to mod-pagesp...@googlegroups.com
I suspect the amount of memory isn't the problem.
I think you can safely ignore it unless you experience functional problems.
If you want to bump the limit, you'll have to hack the source:
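For context, the call the warning points at looks roughly like this in the protobuf versions of that era - a fragment only, not a drop-in patch; the place where mod_pagespeed constructs its CodedInputStream would have to be found in the source, and raw_input below is a hypothetical name standing in for that existing stream:

```cpp
#include <google/protobuf/io/coded_stream.h>

// raw_input: the module's existing ZeroCopyInputStream (hypothetical
// name). The two-argument form matches protobuf releases of that era:
// a hard limit plus a threshold at which warnings begin. Newer protobuf
// versions take a single argument.
google::protobuf::io::CodedInputStream input(raw_input);
input.SetTotalBytesLimit(128 << 20, 96 << 20);  // 128 MiB limit, warn from 96 MiB
```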



--
You received this message because you are subscribed to the Google Groups "mod-pagespeed-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mod-pagespeed-di...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/mod-pagespeed-discuss/3f334d30-d3d3-49df-8ddd-3097bcff9791%40googlegroups.com.

Shane Marsh

Jul 25, 2019, 8:34:59 AM
to mod-pagespeed-discuss
Hi Otto, 

Thanks for coming back to me. I will ignore it for now because we do not have any functional issues yet - we are running at about 47-48MB, so it's close but not critical. Would I be correct in thinking that if we did reach this limit (which I believe would cause problems), we would have to modify the source code and recompile? Either that or try to reduce the weight of the configuration files?

Shane :)

Otto van der Schaaf

Jul 25, 2019, 8:40:19 AM
to mod-pagesp...@googlegroups.com
Yes, you'd have to modify the code and recompile if you hit the limit and, importantly, experience problems
because of it that are critical to what you need to get out of the module. The chance of these things happening
in combination seems very low to me.



Shane Marsh

Jul 25, 2019, 12:13:44 PM
to mod-pagespeed-discuss
Great - thanks, Otto, for your advice.

All the best, 
Shane :)

Always-R Marketing

Jul 25, 2019, 12:53:33 PM
to mod-pagesp...@googlegroups.com

I know this is not very helpful, but like you we are running NGINX and mod_pagespeed and we don't see that warning - though we are also not running something that large. Maybe give it a smaller % of RAM to work with. I thought I saw something about that while building the package for install. Or try building the install on a smaller machine.


Shane Marsh

Jul 26, 2019, 7:36:27 AM
to mod-pagespeed-discuss
Hi Steve,

Well, this is the thing: I'm not 100% sure why we are specifically getting these warnings. It might be resource related, as you say, or more likely it might be down to the weight of the configuration files. There are a number of "core" configuration files that are included for all domains, then each site has its own configuration plus SSL certificates, alongside custom configurations that deal with different setups relating to caching and pagespeed.

It is wholly possible that it's the sheer number of configuration files: when Nginx compiles them during startup or reload, we are getting close to a limit. I suspect that if we ever actually hit the limit, Nginx will fail to start. I wouldn't be surprised if the compiled weight of all the config files approached the 64MB or so it's warning about, as Nginx takes a good 60-120 seconds to reload fully, consuming upwards of 10GB of RAM as it does. Once Nginx has fully reloaded, it releases that RAM and frees it up again.

I don't notice any performance issues, but we do have to run a server with plenty of spare RAM so that it can cope with reloads alongside its normal load without crashing. If someone knows more about this, please feel free to correct me.

If my suspicions are correct, my job will be to try to optimise the process a little, or eventually we'll need to break the configuration up across different servers and proxy requests through to a backend. As we run WordPress this will not be easy, but it's food for thought.

Shane :)


On Thursday, 25 July 2019 17:53:33 UTC+1, Steve Godlewski wrote:

I know this is not very helpful but we are like you running NGINX and Mod_paagespeed and we don’t see that, but we also are not running something that large. Maybe giving it a smaller % of ram to work with.  I thought I saw something during building the package package for install.  Or if you build the install on a smaller machine.   

 

From: 'Shane Marsh' via mod-pagespeed-discuss <mod-pagesp...@googlegroups.com>
Sent: Thursday, July 25, 2019 5:58 AM
To: mod-pagespeed-discuss <mod-pagesp...@googlegroups.com>
Subject: Nginx status - Reading dangerously large protocol message

 

Hi, 

 

I've noticed when running sudo nginx -t this is repeated. I don't appear to be seeing any other issues at the moment - how do I increase the limit?

 

 

[libprotobuf WARNING third_party/protobuf/src/src/google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message.  If the message turns out to be larger than 67108864 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.

 

I should explain our systems are large and nginx maybe occupying more than 10-15GB of RAM memory at any given time - I can only assume this is the problem?  

 

Shane :) 

--
You received this message because you are subscribed to the Google Groups "mod-pagespeed-discuss" group.

To unsubscribe from this group and stop receiving emails from it, send an email to mod-pagespeed-discuss+unsub...@googlegroups.com.

Shane Marsh

Jul 31, 2019, 7:33:54 AM
to mod-pagespeed-discuss
Well, this is interesting - we have just increased the shared memory from 7.5GB to 10GB and the warning did not present itself when I checked the syntax.

Shane :)
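For anyone following along, the knob involved here is ngx_pagespeed's DefaultSharedMemoryCacheKB directive, which takes a value in KB at the http level - an illustrative fragment, assuming the default shared-memory metadata cache is what was resized:

```nginx
# http {} level; the value is in KB, so 7.5 GB ≈ 7864320 KB
pagespeed DefaultSharedMemoryCacheKB 7864320;
```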

Shane Marsh

Aug 2, 2019, 7:26:19 AM
to mod-pagespeed-discuss
Just to update you all again: updating the DefaultSharedMemoryCacheKB directive to 10GB was a HUGE mistake.

It seems that if you go over what we already had it set to, 7.5GB, Nginx becomes unstable. Above 7.5GB (or thereabouts), Nginx clears the metadata cache out completely on each reload, forcing the server to optimise the content all over again.

For us this happened this morning while the server was heavily loaded, maxing all cores of our CPU at 100% for more than 20 minutes. Sites were going up and down all over the place. Eek!

If we can't overcome this limitation, I think what this means for us is that we will need to split our Nginx/pagespeed setup across two physical servers and divide the domains 50/50 between the two.

Shane :)
