Buffer chunks created larger than the configured chunk_limit_size

chaitra hegde

May 19, 2021, 2:25:55 AM
to Fluentd Google Group
Hi,
  I am using Fluentd 1.11.1. Below are the details of my match section:
      <match test.logging.**>
        @type copy
        <store>
            @type elasticsearch_dynamic
            host elasticsearch.admin-ns.svc.cluster.local
            port 9200
            resurrect_after 5s
            request_timeout 15s
            reconnect_on_error true
            reload_on_failure true
            reload_connections false
            type_name fluentd
            time_key time
            utc_index true
            time_key_exclude_timestamp true
            logstash_format true
            logstash_prefix fluentd-${tag_parts[2]}-${tag_parts[3]}

            ca_file /etc/td-agent/certs/ca.crt
            client_cert /etc/td-agent/certs/tls.crt
            client_key /etc/td-agent/certs/tls.key
            scheme https
            ssl_verify true
            ssl_version TLSv1_2

            <buffer tag, time, namespace, type>
                @type file
                path /var/log/td-agent/elasticsearch-buffer/test.logging.all.all
                flush_mode interval
                flush_interval 30s
                timekey 3600
                retry_forever true
                retry_max_interval 5s
                overflow_action block
                chunk_limit_size 8MB
                total_limit_size 512m
            </buffer>
        </store>
      </match>
    </label> 

Here I have configured chunk_limit_size as 8MB, but I can see that some of the chunks were created with sizes around 200MB and 80MB.
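For reference, this is roughly how I am checking the chunk sizes on disk. The listing below is illustrative rather than my exact output (file names and dates are approximations), but the sizes are what I observe in the buffer path:

      $ ls -lh /var/log/td-agent/elasticsearch-buffer/
      -rw-r--r-- 1 td-agent td-agent 200M May 11 06:30 test.logging.all.all.b5c184dd0f8b5ca3997d921e0d4947c3f.log
      -rw-r--r-- 1 td-agent td-agent  80M May 11 06:31 test.logging.all.all.b5c184de0000000000000000000000000.log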

In the Fluentd logs I see the errors below:
  2021-05-11 06:38:13 +0000 [warn]: #0 suppressed same stacktrace
  2021-05-11 06:38:44 +0000 [warn]: #0 failed to flush the buffer. retry_time=7 next_retry_seconds=2021-05-11 06:38:49 +0000 chunk="5c184dd0f8b5ca3997d921e0d4947c3f" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.admin-ns.svc.cluster.local\", :port=>9200, :scheme=>\"https\"}): [413] "

Since [413] is the HTTP "Payload Too Large" status, it looks like Elasticsearch is rejecting these oversized requests. So is it expected for Fluentd to create such large chunks? If yes, under what conditions can this be observed?