YannRobert commented
Loggregator (LGR) sometimes drops messages with the following warning:
Log message output too high. We've dropped 100 messages
It occurs when the application is logging a lot of messages in a short time.
That may be appropriate when the application is logging GB/sec for several seconds or minutes. But sometimes the application is just logging 200 lines at a time, then stops or goes back to a moderate logging rate.
It may not be appropriate to just drop those messages, as the log messages being dropped are important.
Why is LGR using a static threshold (100 messages) instead of a dynamic or configurable threshold?
Is it possible to configure it on a per-application basis (running in a public cloud)?
Would it be possible to change LGR so that it truncates the buffer only when the syslog endpoint fails to keep up with the current rate?
Or, would it be possible to enable the flood protection only when the flood lasts for a 'long' time? For instance, if the application is logging 300 lines/second for 10 seconds, only drop after the 3rd or 4th second, so that we get the content of the flood for the first 2 or 3 seconds. It could also send the last buffer when the flood stops, so that we get the 10th second of logs.
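The time-windowed flood protection proposed here can be sketched as follows. This is purely illustrative Python, not Loggregator code; the class name, thresholds, and policy are all made up to show the idea of "only drop once the flood has lasted longer than a grace period":

```python
from collections import deque
import time

class FloodGate:
    """Illustrative sketch (not Loggregator code): allow bursts through,
    and only start dropping once the rate has exceeded max_rate lines/sec
    for longer than grace_secs."""

    def __init__(self, max_rate=100, grace_secs=2.0, clock=time.monotonic):
        self.max_rate = max_rate
        self.grace_secs = grace_secs
        self.clock = clock
        self.window = deque()     # timestamps of messages in the last second
        self.flood_start = None   # when the rate first exceeded max_rate

    def allow(self, now=None):
        """Return True if this message should be forwarded, False to drop."""
        now = self.clock() if now is None else now
        self.window.append(now)
        # keep only the last second of timestamps
        while self.window and now - self.window[0] > 1.0:
            self.window.popleft()
        if len(self.window) <= self.max_rate:
            self.flood_start = None   # rate back to normal: reset the gate
            return True
        if self.flood_start is None:
            self.flood_start = now    # flood just started
        # drop only once the flood has outlasted the grace period
        return now - self.flood_start <= self.grace_secs
```

With a 10 lines/sec threshold and a 2-second grace period, a sustained 20 lines/sec flood would pass through for the first couple of seconds and only then start being dropped, which matches the behaviour requested above.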
ajackson commented
Hi Yann,
Loggregator will output the message you referred to when clients can't keep up with the volume. So if you have a syslog drain or websocket consumer that isn't keeping up with the flow of logs, Loggregator will truncate its buffer and send the client this warning message. So the answer to your question about only truncating the buffer when the syslog endpoint fails to keep up with the current rate is that this is exactly what happens when this message appears.
The buffer size is configurable when deploying Loggregator, via the doppler.maxRetainedLogMessages property that can be added in the BOSH manifest.
This property is set by the operators of the CF instance and can't be set for an individual application. Since the buffer size depends heavily on the resources available to the Loggregator VMs, it's not really appropriate to let users of a public cloud set this property.
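For operators, the property mentioned above might sit in a BOSH manifest roughly like this. Only the property name doppler.maxRetainedLogMessages comes from this thread; the instance-group/job layout and the value shown are illustrative guesses, not a verified manifest:

```yaml
# Illustrative BOSH manifest excerpt -- job and release names are examples.
instance_groups:
- name: doppler
  jobs:
  - name: doppler
    release: loggregator
    properties:
      doppler:
        # Buffer size per consumer; the value here is a guess, chosen to
        # match the "dropped 100 messages" warning quoted in this thread.
        maxRetainedLogMessages: 100
```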
We are exploring ways to throttle log messages from noisy applications based on time scales, as you suggest, but currently we don't have anything in the system that does this.
Hope this helps to explain why you're seeing this output and what Loggregator can and can't currently do.
YannRobert commented
Hi Alex, thank you for the reply.
I am not sure I understand. Could we please discuss this further?
In particular, you are referring to "clients" that can't keep up. But the endpoint is a syslog server (namely Papertrail).
If the syslog server cannot keep up with the flow sent by Cloud Foundry, it should at least receive a few bytes of logs. But here there's nothing but the warning message.
It's like when I send a big text file to a syslog server, like this:
cat big.txt | nc -v -u logs.papertrailapp.com 514
I get at least the beginning of the file.
So I suppose there is a condition that makes LGR not even try to send the logs to the syslog server, for example when the application logs 150-300 lines at once (the threshold differs on the different public clouds I tried).
What exactly is the network condition that makes LGR decide "the syslog server can't keep up, I'm going to truncate the buffer but still send it a warning message"? Is it a connection reset? A connection timeout? A blocking socket write?
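For what it's worth, the general pattern behind such warnings can be sketched as a bounded, non-blocking buffer sitting between the log producer and a slow syslog writer. This is an illustrative Python sketch of that generic pattern, not Loggregator's actual code, so the trigger here is "buffer full", not any specific network condition:

```python
import queue

class DrainBuffer:
    """Illustrative sketch: a fixed-size buffer in front of a slow consumer.
    The producer never blocks; when the buffer is full, messages are dropped
    and counted, and a single warning is emitted on the next flush."""

    def __init__(self, capacity=100):
        self.q = queue.Queue(maxsize=capacity)
        self.dropped = 0

    def publish(self, msg):
        try:
            self.q.put_nowait(msg)   # never block the producer
        except queue.Full:
            self.dropped += 1        # consumer too slow: drop, don't wait

    def flush(self, send):
        """Drain buffered messages to the (possibly slow) endpoint."""
        if self.dropped:
            send("Log message output too high. "
                 f"We've dropped {self.dropped} messages")
            self.dropped = 0
        while not self.q.empty():
            send(self.q.get_nowait())
```

In a sketch like this, a burst larger than the buffer capacity produces exactly the observed behaviour: the overflow is discarded and the endpoint receives a warning plus only part of the burst, regardless of whether the socket itself ever reset or timed out.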
--
You received this message because you are subscribed to the Google Groups "Cloud Foundry Developers" group.
To view this discussion on the web visit https://groups.google.com/a/cloudfoundry.org/d/msgid/vcap-dev/a4771027-c30d-4a5f-8fb8-9a33c84da06a%40cloudfoundry.org.
To unsubscribe from this group and stop receiving emails from it, send an email to vcap-dev+u...@cloudfoundry.org.