Hi,
I'm new to Fluentd, so excuse me if I ask dumb questions. I have configured a fluentd service as follows:
<match **>
  type copy
  <store>
    @type file
    path /var/log/fluent/al_service_logs
    time_slice_format %Y%m%d
    time_slice_wait 10m
    time_format %Y%m%dT%H%M%S%z
    compress gzip
    utc
  </store>
  <store>
    type elasticsearch
    host elasticsearch.domain.net
    logstash_format true
    include_tag_key true
    tag_key _key
    flush_interval 10s
  </store>
</match>
I'm generating a lot of events and sending them to the fluentd source port. When I look in the /var/log/fluent/ directory, I see a lot of compressed chunk files. So, my questions are:
1. Are those compressed files already processed/flushed and safe to delete, or are they still in the buffer queue?
2. Is there a way to set a maximum number of files to keep in that directory, like "rotate" in logrotate?
3. Following on from the question above, is there a relation between the number of chunk files stored in that directory and the "buffer_queue_limit" option? (See the sketch after this list for how I imagine it would be set.)
4. I'm asking all of this because I'd like to avoid logrotate and instead use fluentd options to manage the fluentd log files. Is that possible, or do I need logrotate or a script to keep just a few files?
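For reference, here is roughly how I imagine the buffer options would be set on the file output. This is only a sketch based on my reading of the docs; the buffer_path location and the limit values are guesses on my part, not something I'm actually running:

<store>
  @type file
  path /var/log/fluent/al_service_logs
  time_slice_format %Y%m%d
  time_slice_wait 10m
  compress gzip
  utc
  # my assumption: these buffer options are what control the queued chunk files
  buffer_type file
  buffer_path /var/log/fluent/buffer/al_service_logs.*.buffer
  buffer_chunk_limit 8m
  buffer_queue_limit 64
</store>

If buffer_queue_limit really does cap the number of chunk files kept on disk, that would answer question 3, but I haven't been able to confirm it.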
Thanks,
Julian