A few hundred MB of backlogged logs from syslog -> ES


Bobby M.

Dec 19, 2016, 8:18:30 PM
to Fluentd Google Group
There's been some concern about one of our fluentd collectors that takes in logs over the syslog input plugin and can't forward them to our elasticsearch clusters fast enough.  When we get backlogged, the backlog grows to a few hundred MB.  I'm worried the fluentd process isn't breaking the data down into the buffer_chunk_limit we specified, and that's making the backlog worse.

A) Can I force these backlogged logs to be uploaded to ES to reduce the backlog?
B) Is there a way to monitor this kind of behavior?
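
For context, the relevant part of our config looks roughly like this (the endpoint, buffer path, and secondary values are placeholders from memory, not an exact copy-paste):

<source>
  @type syslog
  port 5140
  bind 0.0.0.0
  protocol_type tcp
  tag syslog
</source>

<match syslog.**>
  @type elasticsearch
  # placeholder endpoint
  host es-endpoint.example.com
  port 9200
  logstash_format true
  buffer_type file
  # placeholder buffer path
  buffer_path /var/log/td-agent/buffer/es
  buffer_chunk_limit 9m
  flush_interval 10s
</match>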

Mr. Fiber

Dec 20, 2016, 8:28:21 AM
to Fluentd Google Group


Bobby M.

Dec 20, 2016, 12:52:02 PM
to Fluentd Google Group
Sadly no.  Using TCP, our logs arrive just fine.  It's the elasticsearch connection that complains of broken pipes and then continues to buffer new logs.


Mr. Fiber

Dec 21, 2016, 12:16:13 AM
to Fluentd Google Group
Hmm... are there no warning / error logs from out_elasticsearch?
Also, since v0.12.31 you can use slow_flush_log_threshold to check flush performance.
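
For example, something like this in your elasticsearch match section (15.0 is just an example value; the default is 20 seconds):

<match syslog.**>
  @type elasticsearch
  # existing host / buffer settings stay as they are
  # emit a warning log when a single buffer flush takes longer than 15 seconds
  slow_flush_log_threshold 15.0
</match>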



Bobby M.

Dec 21, 2016, 2:07:04 PM
to Fluentd Google Group
It appears to get backed up when there are more than one or two chunks waiting to be uploaded.
I get the following in my logs:

2016-12-21 19:03:39 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2016-12-21 19:37:11 +0000 error_class="Elasticsearch::Transport::Transport::Errors::RequestEntityTooLarge" error="[413] {\"Message\":\"Request size exceeded 10485760 bytes\"}" plugin_id="object:3f93769be198"


I know the AWS ES service has a request limit of 10MB, but my `buffer_chunk_limit` is set to 9m.

Mr. Fiber

Dec 21, 2016, 7:08:31 PM
to Fluentd Google Group
Ah, AWS ES, not plain ES...

> `buffer_chunk_limit` is set to 9m

buffer_chunk_limit is not the payload size.
The Elasticsearch client adds bulk metadata to each record, and
the records are converted to JSON, which is larger than the msgpack format used in the buffer.
So a 9m chunk can easily hit the AWS ES limit.
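
Leaving headroom for that overhead should avoid the 413, e.g. something like this (4m is a rough suggestion, not a tested number):

<match syslog.**>
  @type elasticsearch
  # other settings unchanged
  # keep the raw chunk well under the 10MB AWS ES request limit,
  # because bulk metadata and JSON conversion inflate the final payload
  buffer_chunk_limit 4m
</match>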



Bobby M.

Dec 27, 2016, 1:54:45 PM
to Fluentd Google Group
This is a good detail.  I'll reduce my chunk size and continue to monitor it.

Thank you for the clarification that payload size != chunk size.