Fluentd is not flushing the buffer upon restart.


Sai Birada

Aug 8, 2019, 5:09:43 AM
to Fluentd Google Group
I am maintaining file buffers, and I currently have around 100GB of logs buffered in files waiting to be flushed to Elasticsearch. I restarted Fluentd; since I am using file buffers, nothing was flushed at shutdown, and upon restart it ignored the existing buffers instead of picking them up. For the last 3 days the buffered files have been lying untouched and the data is not being shipped to Elasticsearch. Is there a way to get those buffered logs shipped to Elasticsearch?

Sargunan Arujanan

Aug 8, 2019, 10:28:12 AM
to flu...@googlegroups.com
This comes down purely to the capacity of your Elasticsearch cluster. I had the same issue 3 months ago, with many failures. I built a 4-node cluster with 16-core CPUs, 32 GB RAM with a 16 GB heap, and 600 GB of storage, plus some Elasticsearch configuration to handle high traffic, and that solved my issue. I am now sending around 70 to 100 GB of logs daily without any problems. You can mail me personally at xcod...@gmail.com for the Elasticsearch configuration; I will be happy to help.


Sai Birada

Aug 8, 2019, 11:35:59 AM
to Fluentd Google Group
Hi,
Thanks for the quick response.
My issue here is not with Elasticsearch. For the last 3 days both Elasticsearch and Fluentd have been sitting idle with no live data flow, with plenty of resources available to ingest the buffered data. But Fluentd no longer picks up the buffered files, as if it can't see them anymore, and sits idle as if there were no data to ingest.



Markus Bergholz

Aug 8, 2019, 11:40:37 AM
to flu...@googlegroups.com
Have you taken a look at your Fluentd logs or Elasticsearch logs?
When using Elasticsearch on AWS, once it hits 92% Java heap it will silently refuse to accept any new data.
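One way to act on that check: a small Python sketch (my own, not from this thread) that flags hot nodes from the JSON returned by the standard `GET /_nodes/stats/jvm` API. The 92% cutoff mirrors the AWS behaviour described above; adjust it for your own setup.

```python
# Sketch: flag Elasticsearch nodes above a heap threshold, given the
# JSON returned by GET /_nodes/stats/jvm.
def nodes_over_heap_threshold(stats, threshold=92):
    """Return the names of nodes whose heap_used_percent >= threshold."""
    hot = []
    for node in stats.get("nodes", {}).values():
        pct = node.get("jvm", {}).get("mem", {}).get("heap_used_percent", 0)
        if pct >= threshold:
            hot.append(node.get("name", "unknown"))
    return hot

# Trimmed example payload in the shape of a real nodes-stats response:
sample = {
    "nodes": {
        "abc": {"name": "es-node-1", "jvm": {"mem": {"heap_used_percent": 93}}},
        "def": {"name": "es-node-2", "jvm": {"mem": {"heap_used_percent": 70}}},
    }
}
print(nodes_over_heap_threshold(sample))  # -> ['es-node-1']
```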


Sai Birada

Aug 8, 2019, 11:45:46 AM
to Fluentd Google Group
Hi,
I did verify those logs; both Fluentd and Elasticsearch are sitting idle without any load.
This issue is consistently reproducible.
Steps to reproduce:
1) Start a Fluentd server to receive logs, buffer them, and dump them to Elasticsearch.
2) Start a Fluentd client to tail a file and forward it to the server.
3) Append a large amount of data, say 100MB, to the file.
4) Watch the file buffers on the server fill up.
5) Stop or kill the Fluentd server process.
6) Start the Fluentd server process and track the buffered files and the Elasticsearch index counts.
The buffered files remain as they are without being deleted, and the Elasticsearch index is not populated.
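For anyone trying to reproduce this, a minimal server-side config along the lines of step 1 might look like the sketch below. The host, port, tag pattern, and buffer path are illustrative, not the poster's actual settings.

```
# Minimal server-side sketch: receive forwarded logs, buffer to file,
# ship to Elasticsearch. All values here are placeholders.
<source>
  @type forward
  port 24224
</source>

<match app.**>
  @type elasticsearch
  host localhost
  port 9200
  <buffer>
    @type file
    path /var/log/fluent/es-buffer
    flush_mode interval
    flush_interval 10s
  </buffer>
</match>
```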

Markus Bergholz

Aug 8, 2019, 11:49:54 AM
to flu...@googlegroups.com
I'm going to try to reproduce it tomorrow. Which Fluentd version are you using?


Sai Birada

Aug 8, 2019, 11:52:02 AM
to Fluentd Google Group
Hi Markus,
I am using fluentd 1.2.0.

Markus Bergholz

Aug 8, 2019, 11:53:42 AM
to flu...@googlegroups.com
Have you considered updating to 1.6.3? Maybe that already fixes your issue.


Sai Birada

Aug 8, 2019, 11:56:23 AM
to Fluentd Google Group
I will verify with the latest version and see if that fixes the problem.

Sai Birada

Aug 9, 2019, 2:24:56 AM
to Fluentd Google Group
This problem is consistently reproducible with the latest version of Fluentd too.

Giri Babu

Aug 9, 2019, 1:45:53 PM
to flu...@googlegroups.com
Hello Everyone, 

Could you please share documentation or steps for dealing with the Java stack-trace multi-line issue?


When I use the fluent-plugin-detect-exceptions-0.0.12 plugin configuration, Fluentd does not start, and I get the following error in the Fluentd logs:
2019-08-02 12:46:23 +0000 [error]: config error file="/fluentd/etc/fluent.conf" error_class=Fluent::ConfigError error="Unknown output plugin 'detect_exceptions'. Run 'gem search -rd fluent-plugin' to find plugins"
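That error usually means the plugin gem is not installed in the Ruby environment Fluentd actually runs in (a common pitfall with Docker images). A sketch of the install step and a minimal match block follows; the tag names are hypothetical.

```
# The gem must be installed in Fluentd's own Ruby environment, e.g.:
#   fluent-gem install fluent-plugin-detect-exceptions -v 0.0.12
# (inside the container, if Fluentd runs in Docker)
#
# Then a minimal match block; "raw.app.**" is a placeholder tag pattern:
<match raw.app.**>
  @type detect_exceptions
  remove_tag_prefix raw
  languages java
  multiline_flush_interval 1
</match>
```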


Sai Birada

Aug 12, 2019, 6:22:31 AM
to Fluentd Google Group
Any update regarding a way to flush the buffers created by the previous process?



Mr. Fiber

Aug 14, 2019, 6:25:33 AM
to Fluentd Google Group
With the `debug` log level, you can check whether the buffer files are restored or not.

If the restore succeeds, the problem is on the output side.
If the restore fails, the problem is the buffer path or something similar.
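For reference, debug logging can be enabled either in the config or on the command line (standard Fluentd options; the config file name is illustrative):

```
# In the main config file:
<system>
  log_level debug
</system>
```

Alternatively, start the daemon with `fluentd -c fluent.conf -v` (`-vv` for trace) and watch the startup output for chunk-restore messages.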


Sai Birada

Aug 14, 2019, 6:27:51 AM
to Fluentd Google Group
There is no line containing the string "restore" anywhere in the debug logs.

Mr. Fiber

Aug 14, 2019, 8:15:10 AM
to Fluentd Google Group
What is your buffer setting, and what is the path of the remaining buffer files?


Sai Birada

Aug 19, 2019, 12:46:02 AM
to Fluentd Google Group
<buffer>
  @type file
  path /sc/zagent/tmp/buffer
  flush_mode interval
  flush_interval 10s
  delayed_commit_timeout 6000
  chunk_limit_records 1000
  retry_max_times 5
  flush_thread_count 6
  overflow_action block
</buffer>
 
That is my buffer configuration, and the following are some sample buffered chunk files:
-rw-r--r--. 1 root root     89 Aug  1 16:48 buffer.q58f11014132dbbf98d46fe47c08d1190.log.meta
-rw-r--r--. 1 root root 193183 Aug  1 16:48 buffer.q58f11014132dbbf98d46fe47c08d1190.log
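As a side note, in Fluentd v1 the chunk id after `buffer.` starts with `b` for staged chunks or `q` for queued chunks, and each `.log` normally has a companion `.log.meta`. A small Python sketch (my own, for illustration) to inventory a buffer directory:

```python
# Sketch: inventory a Fluentd v1 file-buffer directory. Chunk files are
# named buffer.<b|q><chunk_id>.log: "b" = staged, "q" = queued. A .log
# without its .log.meta companion may point at a chunk Fluentd cannot
# restore.
import os
import re

CHUNK_RE = re.compile(r"^buffer\.([bq])([0-9a-f]+)\.log$")

def scan_buffer_dir(path):
    """Return (chunk_id, state, has_meta) for every chunk file in path."""
    names = set(os.listdir(path))
    chunks = []
    for name in sorted(names):
        m = CHUNK_RE.match(name)
        if m:
            state = "staged" if m.group(1) == "b" else "queued"
            chunks.append((m.group(2), state, name + ".meta" in names))
    return chunks
```

Running this against the directory above would pair `buffer.q58f11014132dbbf98d46fe47c08d1190.log` with its `.meta` file; any chunk reported without metadata is worth examining.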
