Max Chunk Files

Jhon Doe

Nov 29, 2017, 5:34:30 PM
to Fluentd Google Group
Hi,

I'm new to Fluentd, so excuse me if I ask dumb questions. I have configured a fluentd service as follows:

<match **>
  @type copy
  <store>
    @type file
    path /var/log/fluent/al_service_logs
    time_slice_format %Y%m%d
    time_slice_wait 10m
    time_format %Y%m%dT%H%M%S%z
    compress gzip
    utc
  </store>
  <store>
    @type elasticsearch
    host elasticsearch.domain.net
    logstash_format true
    include_tag_key true
    tag_key _key
    flush_interval 10s
  </store>
</match>


I'm generating a lot of events and sending them to the fluentd source port. When I look in the /var/log/fluent/ directory, I see a lot of compressed chunk files. So, my questions are:

1. Are those compressed files already processed/flushed and safe to delete, or are they still in the buffer queue?
2. Is there a way to set a maximum number of files to keep in that directory, like "rotate" in logrotate?
3. Thinking about the question above, is there a relation between the number of chunk files stored in that directory and the "buffer_queue_limit" option?
4. I'm asking all of the above because I'd like to avoid logrotate and use fluentd options to manage the fluentd logs. Is that possible, or do I need to use logrotate or a script to keep just a few files?


Thanks,
Julian

Mr. Fiber

Nov 29, 2017, 9:56:43 PM
to Fluentd Google Group
1. Are those compressed files already processed/flushed and safe to delete, or are they still in the buffer queue?

They are already flushed. A compressed file is a chunk that out_file has finished writing, so it is no longer in the buffer queue and is safe to delete.

2. Is there a way to set a maximum number of files to keep in that directory, like "rotate" in logrotate?

There is no way via out_file. Fluentd doesn't know which files can be deleted, so using another tool or writing your own script is better.
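
For example, a minimal cron-able cleanup sketch (the directory, the *.gz pattern, and the retention count below are assumptions; adjust them to your setup):

```
#!/bin/sh
# Hypothetical cleanup script: keep only the KEEP newest compressed
# out_file logs and delete everything older. Run it from cron.
LOG_DIR=/var/log/fluent
KEEP=10

# List the .gz files newest first, skip the first $KEEP, remove the rest.
ls -1t "$LOG_DIR"/*.gz 2>/dev/null | tail -n +$((KEEP + 1)) | xargs -r rm --
```

Be careful to match only the flushed, compressed files, not the .b/.q buffer chunks that fluentd is still using.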

3. Is there a relation between the number of chunk files stored in that directory and the "buffer_queue_limit" option?

The total number of files is not related, because buffer_queue_limit doesn't count flushed buffer chunks.
buffer_queue_limit controls only the maximum number of queued chunks.
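So the queue bounds only the unflushed data, roughly buffer_chunk_limit × buffer_queue_limit bytes in total; for example, with the buffer_chunk_limit 1000 and buffer_queue_limit 10 used later in this thread, at most about 10 KB of events can be waiting in the queue at any moment.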

4. Is that possible, or do I need to use logrotate or a script to keep just a few files?

Yes, you need to use other tools or your own script.


Masahiro


Jhon Doe

Nov 29, 2017, 10:37:51 PM
to Fluentd Google Group
Hi,

Thanks for answering. Sorry for this dumb question, but I don't understand something. What does it mean that a buffer chunk was flushed, and how is the buffer queue filled? I mean, chunk files are written serially: when a chunk reaches the buffer_chunk_limit size, a new chunk is created, so buffer_queue_limit is never reached. I used dummer (https://github.com/sonots/dummer) to generate more than 100 chunks in a few seconds and never got an error for the queue limit. If I configure buffer_queue_limit to 10, how can I try to reach that limit?

In addition, does a flushed buffer chunk mean that the events were successfully sent to Elasticsearch, or are those processes totally different?

Thanks,
Julian

Mr. Fiber

Nov 29, 2017, 10:55:35 PM
to Fluentd Google Group
how can I try to reach that limit?

I'm not sure about your setting, but I can reproduce BufferQueueLimitError with the following configuration.

- fluent.conf

```
<source>
  @type tail
  path dummy.log
  format none
  read_lines_limit 5
  tag dummy
</source>

<match dummy>
  @type file
  path log/test
  buffer_queue_limit 10
  buffer_chunk_limit 1000
</match>
```

- dummer.conf

```
configure 'sample' do
  output "dummy.log"
  rate 20000
  message "time:2013-11-25 00:23:52 +0900\tlevel:ERROR\tmethod:POST\turi:/api/v1/people\treqtime:3.1983877060667103"
end
```
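
For reference on why this overflows so quickly: each dummer message above is about 100 bytes, so a 1000-byte chunk fills after fewer than 10 events; at 20,000 events per second, chunks are enqueued far faster than out_file can flush them, and the 10-chunk queue fills almost immediately.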

- log example

```
2017-11-30 12:50:55 +0900 [warn]: emit transaction failed: error_class=Fluent::BufferQueueLimitError error="queue size exceeds limit" location="/Users/repeatedly/dev/fluentd/fluentd/lib/fluent/buffer.rb:204:in `block in emit'" tag="dummy"
  2017-11-30 12:50:55 +0900 [warn]: /Users/repeatedly/dev/fluentd/fluentd/lib/fluent/buffer.rb:204:in `block in emit'
  2017-11-30 12:50:55 +0900 [warn]: /Users/repeatedly/.rbenv/versions/2.4.2/lib/ruby/2.4.0/monitor.rb:214:in `mon_synchronize'
```

- files in directory

```
% ls log
test.20171130.b55f2b28c1c74167b  test.20171130.q55f2b264560da7f5  test.20171130.q55f2b26460b58c53  test.20171130_2.log
test.20171130.q55f2b2644e667925  test.20171130.q55f2b2645865ca10  test.20171130.q55f2b264630fbb95
test.20171130.q55f2b264519f20f2  test.20171130.q55f2b2645af84afa  test.20171130_0.log
test.20171130.q55f2b26453d845e4  test.20171130.q55f2b2645d9695e5  test.20171130_1.log
```
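
Note the naming: the `.b...` file is the chunk currently being written, the `.q...` files are queued chunks waiting to be flushed (these are what count against buffer_queue_limit), and the plain `test.20171130_N.log` files are output that has already been flushed.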

