Fluentd JSON logs truncated/split after 16385 characters - how to concatenate?


kishor kotule

May 3, 2021, 2:06:29 AM
to flu...@googlegroups.com
I am very new to Fluentd and need expert support. I have deployed: 
  repository: bitnami/fluentd
  tag: 1.12.1-debian-10-r0

Currently, one of the modules/applications inside my namespaces is configured to generate JSON logs, and I can see the logs in Kibana in JSON format.

But the logs are split/truncated after 16385 characters, so I cannot see the full log trace.
I have tested some of the concat plugins, but they have not given the expected results so far, or maybe I implemented them incorrectly.
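
For reference: 16385 looks suspiciously close to the ~16KB limit at which the Docker json-file driver splits long log lines. One suggestion I came across (I cannot confirm it is the right approach for this chart) is to treat any record whose log field does not end in a newline as a partial line and glue it to the following records, roughly like this (option names taken from the fluent-plugin-concat README as I understand it; only a sketch, I do not have it working):

    <filter kubernetes.**>
      @type concat
      key log
      # a complete json-file log line ends with \n, a split partial does not
      multiline_end_regexp /\n$/
      separator ""
    </filter>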

    fluentd-inputs.conf: |
      # Get the logs from the containers running in the node
      <source>
        @type tail
        path /var/log/containers/*.log
        tag kubernetes.*
        <parse>
          @type json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </parse>
      </source>
      # enrich with kubernetes metadata
      <filter kubernetes.**>
        @type kubernetes_metadata
      </filter>
      <filter kubernetes.**>
        @type parser
        key_name log
        reserve_data true
        <parse>
          @type json
        </parse>
      </filter>
      <filter kubernetes.**>
        @type concat
        key log
        stream_identity_key @timestamp
        #multiline_start_regexp /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d+ .*/
        multiline_start_regexp /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}/
        flush_interval 5
      </filter>

    fluentd-output.conf: |
      <match **>
        @type forward
        # Elasticsearch forward
        <buffer>
          @type file
          path /opt/bitnami/fluentd/logs/buffers/logs.buffer
          total_limit_size 1024MB
          chunk_limit_size 16MB
          flush_mode interval
          retry_type exponential_backoff
          retry_timeout 30m
          retry_max_interval 30
          overflow_action drop_oldest_chunk
          flush_thread_count 2
          flush_interval 5s
        </buffer>
      </match>
      {{- else }}
      # Send the logs to the standard output
      <match **>
        @type stdout
      </match>
      {{- end }}
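
Would it help to temporarily copy the filtered records to stdout, to check whether they are already rejoined before they reach the forward output? Something like this is what I had in mind (filter_stdout ships with Fluentd as far as I know; this is not part of the chart's template):

      <filter kubernetes.**>
        @type stdout
      </filter>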

I am not sure, but one reason could be that some plugins in the Fluentd configuration are already parsing the JSON data, so maybe the new concat plugin has to be used in a different way, or the whole pipeline configured differently?
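
For example, is it correct to move the concat step in front of the @type parser filter, so the split pieces are stitched back together before the JSON inside the log field is parsed? Roughly this ordering is what I mean (untested sketch, reusing the multiline_end_regexp idea from above):

      <filter kubernetes.**>
        @type kubernetes_metadata
      </filter>
      # 1) rejoin split records first
      <filter kubernetes.**>
        @type concat
        key log
        multiline_end_regexp /\n$/
        separator ""
        flush_interval 5
      </filter>
      # 2) only then parse the reassembled JSON in the log field
      <filter kubernetes.**>
        @type parser
        key_name log
        reserve_data true
        <parse>
          @type json
        </parse>
      </filter>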

I have really been struggling with this issue for a few days. Can you please support me?

The full config file is also attached (fluentd-inputs.conf starts from line 111 in the file).

Thanks and stay safe.


--
Regards,

Kishor Kotule
Attachment: fluentd-values.yaml