Getting [warn]: no patterns matched tag kubernetes.var.log.containers


David Truong

Nov 18, 2019, 10:46:52 PM
to Fluentd Google Group
Hi,

I'm not sure why I'm getting many "no patterns matched" warnings, and no logs are being collected.

I'd appreciate it if someone could point out what I'm doing wrong in the config file.

    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/goapps-log.pos
      tag kubernetes.*
      read_from_head false
      format json
      time_format %Y-%m-%dT%H:%M:%SZ
      time_key @time
      @log_level debug
    </source>

    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @type kubernetes_metadata
      @label @NORMAL
    </filter>

    <filter kubernetes.var.log.containers.my-access-api**>
      @type concat
      key log
      multiline_start_regexp /^"handler-api"/
      multiline_end_regexp /"myURL": "api"/
      flush_interval 10
      timeout_label "@NORMAL"
    </filter>

    <label @NORMAL>
      # Apply transformation to the concatenated field, removing the carriage return added by Docker
      <filter kubernetes.var.log.containers.my-access-api**>
        @type record_transformer
        enable_ruby true
        <record>
          logger ${record["log"].gsub(/\r/, '')}
        </record>
        remove_keys log
      </filter>

      <filter kubernetes.var.log.containers.my-access-api**>
        @type parser
        <parse>
          @type json
          json_parser json
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </parse>
        hash_value_field parsed
        replace_invalid_sequence true
        emit_invalid_record_to_error false
        key_name logger
        reserve_data true
      </filter>
 
      <filter kubernetes.**>
        @type parser
        <parse>
          @type json
          json_parser json
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </parse>
        hash_value_field parsed
        replace_invalid_sequence true
        emit_invalid_record_to_error false
        key_name log
        reserve_data true
      </filter>

      <match **>
        @id elasticsearch
        @type elasticsearch
        @log_level debug
        include_tag_key true
        type_name _doc
        host "#{ENV['OUTPUT_HOST']}"
        port "#{ENV['OUTPUT_PORT']}"
        scheme "#{ENV['OUTPUT_SCHEME']}"
        ssl_version "#{ENV['OUTPUT_SSL_VERSION']}"
        logstash_format true
        logstash_prefix "#{ENV['LOGSTASH_PREFIX']}"
        reconnect_on_error true
        reload_on_failure true
        reload_connections false
        <buffer>
          @type file
          path /var/log/fluentd-buffers/kubernetes.system.buffer
          flush_mode interval
          retry_type exponential_backoff
          flush_thread_count 2
          flush_interval 10s
          retry_forever
          retry_max_interval 30
          chunk_limit_size "#{ENV['OUTPUT_BUFFER_CHUNK_LIMIT']}"
          queue_limit_length "#{ENV['OUTPUT_BUFFER_QUEUE_LIMIT']}"
          overflow_action block
        </buffer>
      </match>
      <match kubernetes.var.log.containers.*fluentd*.log>
        @type null
      </match>
    </label>

Thanks.

David Truong

Nov 19, 2019, 2:16:55 PM
to Fluentd Google Group
I was able to resolve the "no patterns matched" warning. The configuration was missing a <match> directive with a relabel to route events into the label.
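
In other words, the events never entered the <label @NORMAL> section, so nothing at the top level matched them. A minimal sketch of the kind of relabel <match> that routes them in (the exact tag pattern is an assumption; adapt it to your setup):

    # Placed after the filters at the top level: redirect all
    # kubernetes.** events into the @NORMAL label, where the
    # remaining filters and the elasticsearch <match> live.
    <match kubernetes.**>
      @type relabel
      @label @NORMAL
    </match>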

Thulasiraman V B

Feb 25, 2020, 5:42:43 AM
to Fluentd Google Group
Hi David, 

Thanks for the complete syntax.

I am working on a similar kind of logging setup at work now. Could you please let me know how you resolved it?

Thanks in advance.