Temporarily failed to flush the buffer. Data too big


praven john

Aug 30, 2016, 4:46:31 AM
to Fluentd Google Group
Hi,

We are occasionally seeing the warning below in our td-agent logs. We use td-agent to forward logs to Graylog.

2016-08-29 21:13:19 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2016-08-29 21:43:31 +0000 error_class="ArgumentError" error="Data too big (276199 bytes), would create more than 128 chunks!" plugin_id="object:3ff4410aba20"

The corresponding td-agent.conf is:

<match **> # Should match the tag above
  type copy
  <store>
    # Send to Graylog2 host
    type gelf
    host stage-logs-01   # Hostname of the Graylog server
    port 12201           # Port configured for GELF UDP
    flush_interval 5s    # Send to the server every 5 seconds
  </store>
</match>

Is there any way to increase this chunk limit in the config? I'm assuming it isn't coming from the buffer plugin, because the buffer's default queue limit is 64 chunks (and the error complains about 128).

Any help is appreciated.

Mr. Fiber

Aug 30, 2016, 5:46:43 AM
to Fluentd Google Group
From the source code, the 128-chunk limit is hardcoded, so we can't raise it.

Setting a smaller buffer_chunk_limit may help.
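
Something like this, as a sketch (assuming the gelf output is a buffered plugin, so the standard buffer parameters such as buffer_chunk_limit apply; the 100k value is just an illustration):

<match **>
  type copy
  <store>
    type gelf
    host stage-logs-01
    port 12201
    flush_interval 5s
    buffer_chunk_limit 100k  # cap each buffered chunk at 100 KB (illustrative value)
  </store>
</match>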


Masahiro


praven john

Aug 30, 2016, 6:03:24 AM
to Fluentd Google Group
Hi,

Thanks for the quick reply. I'm guessing the default buffer_chunk_limit is 8m. How would reducing this help? Wouldn't that mean even more chunks for the same amount of data?

praven john

Aug 30, 2016, 7:25:23 AM
to Fluentd Google Group
I think I get the idea: a smaller buffer chunk would be flushed and processed sooner.

I looked at https://github.com/Graylog2/gelf-rb/blob/d1b61fcad5d2d6971c583b5066e24ed732636e89/lib/gelf/notifier.rb#L234, and it seems the GELF library uses its own max_chunk_size variable, not buffer_chunk_limit :( .

I couldn't find the configured value, but since our app failed on a message of 276199 bytes and the chunk count exceeded 128, I reverse-engineered that max_chunk_size must be at most roughly 276199 / 128 ≈ 2158 bytes, i.e. about 2 KB. Messages of less than 200 KB worked. Would you know where I might be able to set this?
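
For reference, gelf-rb itself accepts max_chunk_size as the third argument to GELF::Notifier.new; this is a sketch of using the library directly (I don't know yet whether the Fluentd plugin passes this option through, which is what I'm asking):

require 'gelf'

# max_chunk_size is a byte count, or the presets 'WAN' (1420 bytes
# per UDP chunk, gelf-rb's default) and 'LAN' (8154 bytes per chunk).
notifier = GELF::Notifier.new('stage-logs-01', 12201, 'LAN')
notifier.notify!(short_message: 'test message', host: 'app-01')

With the 'WAN' default of 1420 bytes, a 276199-byte message would be split into about 195 chunks, which is over the 128-chunk cap; with 'LAN' it would need only about 34.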

Mr. Fiber

Aug 30, 2016, 8:37:32 AM
to Fluentd Google Group
"buffer_chunk_limit 100k" doesn't work?

