Size of the emitted data exceeds buffer_chunk_limit


Chris Broglie

27 Jun 2016, 18:46:32
to Fluentd Google Group
There are a few other threads discussing this, but they focus on workarounds by adjusting the input or output plugins. What I would like to understand is: what can actually cause this error to occur?

2016-06-27 20:00:29 +0000 [warn]: Size of the emitted data exceeds buffer_chunk_limit.
2016-06-27 20:00:29 +0000 [warn]: This may occur problems in the output plugins ``at this server.``
2016-06-27 20:00:29 +0000 [warn]: To avoid problems, set a smaller number to the buffer_chunk_limit
2016-06-27 20:00:29 +0000 [warn]: in the forward output ``at the log forwarding server.``

I encountered this issue before when using detach_process (https://github.com/fluent/fluentd/issues/930). The root of that problem was that the input forwarder was batching together multiple records before sending them along to the output plugin, so even though the individual records were small, the batched data passed to the output plugin exceeded buffer_chunk_limit.

I'm now running without detach_process and encountered this warning again, so I'm trying to understand how it could occur. Obviously it could happen if any individual record exceeds buffer_chunk_limit, but are there any other conditions that could trigger it?

So far I've just seen it in the logs once on a production system, and have not been able to reproduce it. The relevant bits of my configuration are here:

<source>
  type forward
</source>
<match ztrack.count>
  @type kinesis-aggregation
  region us-west-2
  stream_name xxx
  aws_key_id xxx
  aws_sec_key xxx
  include_time_key false
  include_tag_key false
  buffer_chunk_limit 999k
  buffer_queue_limit 30000
  buffer_queue_full_action drop_oldest_chunk
  flush_interval 10s
  try_flush_interval 0.1
  queued_chunk_flush_interval 0.01
  disable_retry_limit true
  num_threads 50
  buffer_type file
  buffer_path /var/log/fluentd/ztrack.count*.buffer
</match>


Thanks,
-Chris

Mr. Fiber

28 Jun 2016, 11:01:15
to Fluentd Google Group
It seems in_forward received a large chunk from the forwarder.
Fluentd emits received chunks to the output plugin directly when the client is fluentd's out_forward.
So if you want to avoid this warning, you should set a smaller chunk limit on the forwarder side.
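
For example, a minimal sketch of the forwarder side, assuming out_forward on the forwarding server ships to the aggregator running the config above (the hostname, port, and the 256k value are just placeholders, not taken from this setup):

<match ztrack.count>
  @type forward
  # keep forwarder-side chunks well below the aggregator's buffer_chunk_limit (999k above)
  buffer_chunk_limit 256k
  flush_interval 10s
  <server>
    host aggregator.example.com
    port 24224
  </server>
</match>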


Masahiro



Chris Broglie

28 Jun 2016, 12:21:03
to flu...@googlegroups.com
We're not using out_forward; events are coming from a local web application which writes one record at a time.

I can understand exceeding buffer_chunk_limit if the application wrote a record that was larger than that limit, but I'm trying to understand whether that's the only possible explanation.

Thanks,
-Chris


Mr. Fiber

5 Jul 2016, 06:55:18
to Fluentd Google Group
> I can understand exceeding buffer_chunk_limit if the application wrote a record that was larger than that limit

Ah, that is another possible situation.
Are you sending such a large single record to fluentd?

Chris Broglie

6 Jul 2016, 17:44:10
to flu...@googlegroups.com
The data we're sending should be very small, but I can't say conclusively that it didn't happen. I haven't observed the issue other than that one time, and can't reproduce it, so I was wondering if there were any other known scenarios. I guess a single large record may be the most likely cause at this point.