I have set up the fluentd / Elasticsearch / Kibana stack to centralize logs from Docker containers. It uses the following source and filter in td-agent.conf:

## built-in forward input
## @see http://docs.fluentd.org/articles/in_forward
<source>
@type forward
port 24224 # optional. 24224 by default
format json
</source>

# Using a filter to add log elements to each event
<filter docker.**>
  type record_transformer
  enable_ruby true
  <record>
    timestamp ${log.split('|')[0].strip}
    level ${log.split('|')[1].strip}
    category ${log.split('|')[2].strip}
    msg ${log.split('|')[3].strip}
  </record>
</filter>

My Docker container runs a Java app that uses Logback and formats log lines with pipe delimiters according to the following:

<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
  <encoder>
    <charset>utf-8</charset>
    <!-- Fluentd/Elasticsearch pattern -->
    <pattern>%date{ISO8601} | %-5.5p | %c | %m%n</pattern>
  </encoder>
</appender>
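For illustration, here is the same split-and-trim logic the record_transformer's Ruby expressions perform, sketched in plain Java (the class and message here are hypothetical, not from my app):

```java
// Sketch: how the filter above carves one pipe-delimited Logback line
// into timestamp, level, category, and msg fields.
public class LogSplitDemo {
    public static void main(String[] args) {
        // A hypothetical line produced by the %date{ISO8601} | %-5.5p | %c | %m%n pattern
        String log = "2016-03-01 12:34:56,789 | INFO  | com.example.OrderService | order created";

        // Split on the literal pipe character, then strip surrounding spaces,
        // mirroring log.split('|')[n].strip in the fluentd filter.
        String[] parts = log.split("\\|");
        String timestamp = parts[0].trim();
        String level = parts[1].trim();
        String category = parts[2].trim();
        String msg = parts[3].trim();

        System.out.println(timestamp + " / " + level + " / " + category + " / " + msg);
    }
}
```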
This comes out quite nicely in Kibana, as I can filter by level and/or category.

The problem is multiline log messages such as stack traces. These are not handled well at all by the fluentd/Elasticsearch combination, and according to the fluentd docs and my own experiments, the 'multiline' format does not work with the in_forward plugin.

Has anyone found a solution to this problem (which I assume is quite common among Java folks)?
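To make the failure mode concrete, here is what a logged exception looks like on the console (class names hypothetical). Only the first line matches the pipe-delimited pattern; Docker forwards every subsequent line as a separate event, so the filter's split('|') finds no delimiters and the level/category/msg fields come out empty or wrong:

```
2016-03-01 12:34:56,789 | ERROR | com.example.OrderService | request failed
java.lang.IllegalStateException: boom
    at com.example.OrderService.process(OrderService.java:42)
    at com.example.OrderService.main(OrderService.java:17)
```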
You received this message because you are subscribed to the Google Groups "Fluentd Google Group" group.