configuring fluentd to send container logs to logstash


ctc...@gmail.com

Apr 25, 2017, 10:33:20 AM4/25/17
to Fluentd Google Group
Hi,

I have a docker swarm configured to use fluentd as the default log driver for containers, and I would like to have the container logs sent from fluentd to logstash.
I have tried a few things so far, but I have not found a configuration that works:
My first attempt was to configure fluentd to use the remote_syslog output plugin to send to logstash, with logstash configured to listen for syslog input. With this setup I could see that fluentd was sending the logs, and from the logstash log I could see that logstash was receiving them, but no logs ever appeared in kibana, and there were no errors in the logstash log.
Then I tried what I thought might be a simpler setup: logstash configured for udp and tcp input on port 51415, and fluentd using the forward output plugin to send to that port. With this setup I haven't managed to get fluentd to send the logs at all. Initially I saw "no nodes are available" in the fluentd log; I tried again with heartbeat_type none and flush_interval 0s, and then saw nothing in the fluentd log at all.

Is there a configuration for fluentd and logstash that should work?
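For reference, the logstash side of that second setup looked roughly like this (a sketch; only the port comes from the description above, everything else is an assumption):

input {
    tcp {
        port => 51415
    }
    udp {
        port => 51415
    }
}

Note that fluentd's forward output speaks its msgpack-based forward protocol, so a plain tcp/udp input like this receives undecoded msgpack unless the fluent codec is added to the tcp input.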

Chris

Eduardo Silva

Apr 25, 2017, 11:12:27 AM4/25/17
to flu...@googlegroups.com
Would you please share your Fluentd configuration?

Thanks

--
You received this message because you are subscribed to the Google Groups "Fluentd Google Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to fluentd+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

ctc...@gmail.com

Apr 25, 2017, 11:24:33 AM4/25/17
to Fluentd Google Group
The relevant match rule is
<match pattern>
    @type forward
    <server>
        name logstash
        host 127.0.0.1
        port 51415
        weight 100
    </server>
    heartbeat_type none
    flush_interval 0s
    @log_level debug
</match>
I know that the pattern is matched, because my first try used the same pattern with the remote_syslog plugin, and I could see fluentd sending logs to logstash configured for syslog.

I should add that I'm using td-agent-2.3.4 on CentOS 7.3.1611, and that logstash, elasticsearch and kibana are all running as docker containers in the swarm.

Chris

ctc...@gmail.com

Apr 25, 2017, 11:35:04 AM4/25/17
to Fluentd Google Group
Update: I just recreated the elasticsearch and logstash services in the swarm, and I am now seeing logs in kibana. Not sure why it wasn't working before.

ctc...@gmail.com

Apr 25, 2017, 12:28:08 PM4/25/17
to Fluentd Google Group
Now that I'm getting messages sent to logstash, I tried changing the logstash configuration to use the fluent codec, but logstash just kept giving errors like "<TypeError: can't convert String into Integer>".
Any ideas for a logstash configuration to parse fluentd messages?

Mr. Fiber

Apr 25, 2017, 12:30:14 PM4/25/17
to Fluentd Google Group
I'm not sure, but it may be a logstash plugin issue.
Asking on the Elastic forum would be better.


Masahiro


ctc...@gmail.com

Apr 26, 2017, 6:51:24 AM4/26/17
to Fluentd Google Group
I'm now trying a different approach, using the fluent-plugin-elasticsearch to send the logs directly to elastic search. Is this the right group to ask questions about this plugin?
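A minimal fluent-plugin-elasticsearch match block might look like this (a sketch; the host is a placeholder for wherever elasticsearch is reachable from td-agent, and the pattern is the same one used above):

<match pattern>
    @type elasticsearch
    host elasticsearch.example.com
    port 9200
    logstash_format true
    flush_interval 5s
</match>

logstash_format true makes the plugin write to logstash-style daily indices (logstash-YYYY.MM.DD), which matches kibana's default index pattern.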

Mr. Fiber

Apr 28, 2017, 7:09:22 AM4/28/17
to Fluentd Google Group
Yes, maybe.
In some cases the problem is on the elasticsearch side, e.g. configuration or performance issues.
If the problem is in the elasticsearch plugin, people here can help you.


Bookmarks

Dec 20, 2018, 11:13:26 AM12/20/18
to Fluentd Google Group
You need to use the following setting in your fluentd match block: time_as_integer true. This makes fluentd send timestamps as plain integers rather than its newer EventTime format, which the logstash fluent codec cannot decode (the likely cause of the TypeError above).

Sample

<match **>
  @type forward
  time_as_integer true
  heartbeat_type transport
  transport tls
  tls_cert_path path/to/cert
  tls_verify_hostname true
  <server>
    name my-server
    host 127.0.0.1
    port 1000
    weight 100
  </server>
  flush_interval 5s
</match>
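For completeness, the logstash side that pairs with a forward output like this might be (a sketch; assumes the fluent codec plugin is installed and the port matches the server block above):

input {
  tcp {
    port => 1000
    codec => fluent
  }
}

With time_as_integer true set on the fluentd side, the fluent codec can decode the forwarded events' timestamps.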