I have all syslog messages in the file output but only 2 messages in ElasticSearch, why?


Stéphane Klein

Jul 18, 2017, 4:37:59 AM
to Fluentd Google Group
Hi,

I have this fluentd config file:

<source>
  @type syslog
  port 5140
  bind 0.0.0.0
  tag journal
</source>


<match **>
  @type copy

  <store>
    @type file
    path /fluentd/log/output
  </store>

  <store>
    @type elasticsearch
    host elasticsearch
    flush_interval 10s
    port 9200
    logstash_format true
    type_name fluentd
    index_name logstash
    include_tag_key true
    tag_key _key
    buffer_chunk_limit 512k
    reload_connections false
    reconnect_on_error true
    max_retry_wait 60
    disable_retry_limit
  </store>
</match>
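
For reference, a quick way to exercise this pipeline end to end is to send a test datagram to the syslog source on port 5140. A minimal sketch, assuming util-linux logger is available (the source listens on UDP by default, as the startup log below confirms):

    # send one test syslog message over UDP to the fluentd syslog input
    logger -n 127.0.0.1 -P 5140 -d "test message for fluentd"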

I don't understand: all syslog messages are in /fluentd/log/output, but I have very few messages in ElasticSearch, only messages like:

July 18th 2017, 10:18:27.000 fluent.info Connection opened to Elasticsearch cluster => {:host=>"elasticsearch", :port=>9200, :scheme=>"http"}
July 18th 2017, 10:18:16.000 fluent.info fluentd worker is now running worker=0


Fluentd output log:

2017-07-18 08:35:32 +0000 [info]: reading config file path="/fluentd/etc/fluent.conf"
2017-07-18 08:35:32 +0000 [debug]: adding store type="file"
2017-07-18 08:35:32 +0000 [debug]: adding store type="elasticsearch"
2017-07-18 08:35:33 +0000 [info]: using configuration file: <ROOT>
  <system>
    log_level debug
  </system>
  <source>
    @type syslog
    port 5140
    bind "0.0.0.0"
    tag "journal"
  </source>
  <match **>
    @type copy
    <store>
      @type "file"
      path "/fluentd/log/output"
      <buffer time>
        path "/fluentd/log/output"
      </buffer>
    </store>
    <store>
      @type "elasticsearch"
      host "elasticsearch"
      flush_interval 10s
      port 9200
      logstash_format true
      type_name "fluentd"
      index_name "logstash"
      include_tag_key true
      tag_key "_key"
      buffer_chunk_limit 512k
      reload_connections false
      reconnect_on_error true
      max_retry_wait 60
      disable_retry_limit
      <buffer tag>
        flush_mode interval
        retry_type exponential_backoff
        flush_interval 10s
        retry_forever
        retry_max_interval 60
        chunk_limit_size 512k
      </buffer>
      <inject>
        tag_key _key
      </inject>
    </store>
  </match>
</ROOT>
2017-07-18 08:35:33 +0000 [info]: starting fluentd-0.14.19 pid=22
2017-07-18 08:35:33 +0000 [info]: spawn command to main:  cmdline=["/usr/bin/ruby2.3", "-Eascii-8bit:ascii-8bit", "/usr/local/bin/fluentd", "-c", "/fluentd/etc/fluent.conf", "-p", "/fluentd/plugins", "--log", "/fluentd/log/fluentd.log", "--under-supervisor"]
2017-07-18 08:35:33 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '1.9.5'
2017-07-18 08:35:33 +0000 [info]: gem 'fluent-plugin-systemd' version '0.2.0'
2017-07-18 08:35:33 +0000 [info]: gem 'fluentd' version '0.14.19'
2017-07-18 08:35:33 +0000 [info]: adding match pattern="**" type="copy"
2017-07-18 08:35:33 +0000 [debug]: #0 adding store type="file"
2017-07-18 08:35:33 +0000 [debug]: #0 adding store type="elasticsearch"
2017-07-18 08:35:33 +0000 [info]: adding source type="syslog"
2017-07-18 08:35:33 +0000 [info]: #0 starting fluentd worker pid=29 ppid=22 worker=0
2017-07-18 08:35:33 +0000 [debug]: #0 buffer started instance=69993113626480 stage_size=144362 queue_size=0
2017-07-18 08:35:33 +0000 [debug]: #0 flush_thread actually running
2017-07-18 08:35:33 +0000 [debug]: #0 enqueue_thread actually running
2017-07-18 08:35:33 +0000 [debug]: #0 buffer started instance=69993143963400 stage_size=0 queue_size=0
2017-07-18 08:35:33 +0000 [info]: #0 listening syslog socket on 0.0.0.0:5140 with udp
2017-07-18 08:35:33 +0000 [debug]: #0 enqueue_thread actually running
2017-07-18 08:35:33 +0000 [debug]: #0 flush_thread actually running
2017-07-18 08:35:33 +0000 [info]: #0 fluentd worker is now running worker=0

I don't understand where my mistake is.

Best regards,
Stéphane

Stéphane Klein

Jul 18, 2017, 5:19:28 AM
to Fluentd Google Group

If I enable log_level trace, all trace messages are emitted to ElasticSearch. I think I only have fluentd's own logs in ES, but no log entries from my sources. I don't understand why.
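
Those fluent.info records show up because <match **> also matches fluentd's own internal events, which are tagged fluent.*. To see what actually reached the cluster, one option is to list the indices and their document counts; a sketch, assuming ElasticSearch is reachable under the hostname used in the config:

    # list all indices with document counts; the logstash-YYYY.MM.DD
    # indices should grow as buffered chunks are flushed every 10s
    curl 'http://elasticsearch:9200/_cat/indices?v'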

Stéphane Klein

Jul 18, 2017, 5:21:55 AM
to Fluentd Google Group


On Tuesday, July 18, 2017 at 10:37:59 UTC+2, Stéphane Klein wrote:

This is my gem list:

# gem list

*** LOCAL GEMS ***

bigdecimal (1.2.8)
cool.io (1.5.0)
did_you_mean (1.0.0)
elasticsearch (5.0.4)
elasticsearch-api (5.0.4)
elasticsearch-transport (5.0.4)
excon (0.57.1)
faraday (0.12.1)
ffi (1.9.18)
fluent-plugin-elasticsearch (1.9.5)
fluent-plugin-systemd (0.2.0)
fluentd (0.14.19)
http_parser.rb (0.6.0)
io-console (0.4.5)
json (2.1.0, 1.8.3)
minitest (5.9.0)
msgpack (1.1.0)
multi_json (1.12.1)
multipart-post (2.0.0)
net-telnet (0.1.1)
oj (2.18.3)
power_assert (0.2.7)
psych (2.1.0)
rake (10.5.0)
rdoc (4.2.1)
serverengine (2.0.5)
sigdump (0.2.4)
strptime (0.1.9)
systemd-journal (1.2.3)
test-unit (3.1.7)
thread_safe (0.3.6)
tzinfo (1.2.3)
tzinfo-data (1.2017.2)
yajl-ruby (1.3.0)
root@7cf8b34163f9:/#

Stéphane Klein

Jul 18, 2017, 12:51:21 PM
to Fluentd Google Group


On Tuesday, July 18, 2017 at 10:37:59 UTC+2, Stéphane Klein wrote:

Fixed with:

    environment:
      - FLUENT_UID=0

Best regards,
Stéphane