How to index data in Elastic similar to Logstash using fluentd


Arun John Varughese

Nov 5, 2015, 2:02:12 AM
to Fluentd Google Group

Hi,

I am currently testing fluentd with Elasticsearch, and I am aware of the plugin https://github.com/uken/fluent-plugin-elasticsearch
However, I can't see how to get time-based indexes with it.

If I use logstash as my log collector then the indexes in Elastic are created as follows:

green  open   logdata-2015.10.08      5   1    4051135            0      4.9gb          2.4gb
green  open   logdata-2015.10.15      5   1     650371            0    300.4mb        150.3mb
green  open   logdata-2015.10.05      5   1      62252            0     58.7mb         29.3mb


This, I believe, is based on the time/date recorded in the log file.

However, if I use fluentd as the log collector then I only get the following:


green  open   fluentd-2015.11.05      5   1     154329            0     27.9mb         13.9mb


Does anyone know how we can get indexes created with Fluentd just as in Logstash?

Thanks and Regards,

AJV

Mr. Fiber

Nov 5, 2015, 2:59:16 AM
to Fluentd Google Group

--
You received this message because you are subscribed to the Google Groups "Fluentd Google Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to fluentd+u...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Arun John Varughese

Nov 5, 2015, 3:12:59 AM
to Fluentd Google Group
Yes I did, and this is how it looks in my config:

<match squid.**>
  type elasticsearch
  logstash_format true

  #elastic host
  host 10.0.0.7
  port 9200
  type_name squid
  logstash_prefix fluentd

  #buffering
  buffer_type file
  buffer_path /var/log/td-agent/buffer/td
  flush_interval 5m
  buffer_chunk_limit 8m
  buffer_queue_limit 2048
  retry_wait 15s
</match>



Mr. Fiber

Nov 5, 2015, 3:19:15 AM
to Fluentd Google Group
How do you collect the logs?
Could you also paste your source configuration?


Arun John Varughese

Nov 5, 2015, 10:43:04 PM
to Fluentd Google Group

Hi,

This is how it is currently configured:

fluentd forwarder >  fluentd collector  >  Elasticsearch < Kibana



fluentd forwarder (source) config:

<source>
  type tail
  path /var/log/squid/test/*
  pos_file /var/log/td-agent/squid.log.pos
  read_from_head true
  format grok
  grok_pattern %{SQUIDSN}
  custom_pattern_path /etc/td-agent/patterns/
  keep_time_key true
  tag squid.secure
  log_level trace
</source> 

<match squid.**>
  type forward
  send_timeout 60s
  recover_wait 10s
  heartbeat_interval 1s
  phi_threshold 16
  hard_timeout 120s

 #buffering
  buffer_type file
  buffer_path /var/log/td-agent/buffer/td
  buffer_chunk_limit 8m
  buffer_queue_limit 1024
  flush_interval 10s
  retry_wait 20s

  # push data to fluentd collector
  <server>
    name localhost
    host 10.0.0.6
    port 24224

  </server>
  <secondary>
    type file
    path /var/log/td-agent/failed/fail
  </secondary>
  log_level trace
</match>

-------------

fluentd collector config:

<source>
  type forward
  log_level trace
</source>

<match squid.**>
  type elasticsearch
  logstash_format true

  #elastic host
  host 10.0.0.7
  port 9200
  type_name squid
  logstash_prefix fluentd

  #buffering
  buffer_type file
  buffer_path /var/log/td-agent/buffer/td
  flush_interval 5m
  buffer_chunk_limit 8m
  buffer_queue_limit 2048
  retry_wait 15s
</match>

-----------------


Thanks and Regards,
AJV

Mr. Fiber

Nov 5, 2015, 11:03:53 PM
to Fluentd Google Group
grok_pattern %{SQUIDSN}

Does the SQUIDSN pattern have a 'time' field?
Fluentd's regex parser treats the 'time' field as the event time, and
the elasticsearch plugin uses it.


If your SQUIDSN pattern doesn't have 'time', fluentd uses the current time as the event time.
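
As a sketch of that behavior (the regexp, path, and time_format below are made up for illustration; the point is only that a capture group named 'time' becomes the event time):

```
<source>
  type tail
  path /var/log/example/access.log
  # any parser that emits a capture named 'time' sets the event time;
  # the elasticsearch plugin then uses it to pick the daily index name
  format /^(?<time>[^ ]+) (?<message>.*)$/
  time_format %d/%b/%Y:%H:%M:%S %z
  tag example.access
</source>
```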



Arun John Varughese

Nov 6, 2015, 12:07:54 AM
to Fluentd Google Group
Thanks mate.

That explains why this happens.

The grok pattern SQUIDSN has a time field with 'time' as the key name, but it is in epoch time format, e.g. 1438744871.421.

Since I'm a newbie to fluentd, I'm having a hard time converting this to a normal format like YYYY-MM-DD HH:MM:SS, which is another problem I may post about separately.

Thanks for your time on this.

Regards,
AJV

Mr. Fiber

Nov 6, 2015, 12:15:20 AM
to Fluentd Google Group
I see.
I think the grok parser should provide a time_format option; it would be useful for your case.
If you have time, please open an issue on the grok parser plugin repository.
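
If the grok parser did gain such an option, the source configuration might look like this sketch (time_key and time_format here are assumed behavior, not documented options of the plugin at the time):

```
<source>
  type tail
  path /var/log/squid/test/*
  pos_file /var/log/td-agent/squid.log.pos
  format grok
  grok_pattern %{SQUIDSN}
  custom_pattern_path /etc/td-agent/patterns/
  # hypothetical: parse the captured 'time' key as epoch seconds with millis,
  # so events land in the index for the day they were logged
  time_key time
  time_format %s.%L
  tag squid.secure
</source>
```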

