Connection opened to Elasticsearch cluster => {:host=>"my_node_ip", :port=>30998, :scheme=>"http"}
failed to write data into buffer by buffer overflow action=:block
```
<match **>
  @id elasticsearch
  @type elasticsearch
  @log_level info
  include_tag_key true
  type_name _doc
  host my_node_ip
  port 30998
  scheme http
  ssl_version TLSv1_2
  logstash_format true
  logstash_prefix logstash
  reconnect_on_error true
  <buffer>
    @type file
    path /var/log/fluentd-buffers/kubernetes.system.buffer
    flush_mode interval
    retry_type exponential_backoff
    flush_thread_count 2
    flush_interval 5s
    retry_forever true
    retry_max_interval 30
    chunk_limit_size 1M
    queue_limit_length 8
    overflow_action block
  </buffer>
</match>
```
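The retry settings above (`retry_type exponential_backoff` with `retry_max_interval 30`) make Fluentd roughly double its wait between retries until it hits the cap. A minimal Python sketch of that schedule; the 1s base wait and factor of 2 are illustrative assumptions, not values read from the plugin:

```python
# Illustrative exponential backoff with a cap, mirroring
# retry_type exponential_backoff / retry_max_interval 30.
# base and factor are assumed values, not taken from Fluentd internals.
def backoff_intervals(retries, base=1.0, factor=2.0, max_interval=30.0):
    """Return the wait (seconds) before each of `retries` retry attempts."""
    return [min(base * factor ** n, max_interval) for n in range(retries)]

print(backoff_intervals(7))  # [1.0, 2.0, 4.0, 8.0, 16.0, 30.0, 30.0]
```

With `retry_forever` set, this schedule simply repeats the 30s cap indefinitely once it is reached.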
Thanks,
su
2019-01-21 09:15:29 +0000 [warn]: [elasticsearch] bad chunk is moved to /tmp/fluentd-buffers/backup/worker0/elasticsearch/57fc4c71326aae28f21d7ca8e9869e5d.log
2019-01-21 09:16:33 +0000 [warn]: [elasticsearch] Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2019-01-21 09:16:35 +0000 [info]: [elasticsearch] Connection opened to Elasticsearch cluster => {:host=>"192.168.23.5", :port=>30998, :scheme=>"http"}
2019-01-21 09:17:28 +0000 [warn]: [elasticsearch] Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2019-01-21 09:17:29 +0000 [info]: [elasticsearch] Connection opened to Elasticsearch cluster => {:host=>"192.168.23.5", :port=>30998, :scheme=>"http"}
2019-01-21 09:17:41 +0000 [warn]: [elasticsearch] Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2019-01-21 09:17:45 +0000 [info]: [elasticsearch] Connection opened to Elasticsearch cluster => {:host=>"192.168.23.5", :port=>30998, :scheme=>"http"}
2019-01-21 09:18:49 +0000 [warn]: [elasticsearch] got unrecoverable error in primary and no secondary error_class=Fluent::Plugin::ElasticsearchOutput::ConnectionFailure error="Could not push logs to Elasticsearch after 2 retries. read timeout reached"
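For the repeated "read timeout reached" warnings above, one common mitigation is to give slow bulk requests more time before the client gives up: fluent-plugin-elasticsearch exposes a `request_timeout` parameter (default 5s). A sketch of the relevant line only, with an illustrative value, not a verified fix:

```
# inside the existing <match **> elasticsearch block:
request_timeout 30s   # default is 5s; 30s here is an illustrative value
```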
I have a similar issue; you can refer to:
https://groups.google.com/forum/#!topic/fluentd/u8PXVwKjIVU
I think improving your Elasticsearch performance (more memory, faster I/O disks, more ES nodes) is the most effective way to resolve this issue.
Tuning the buffer settings in the Fluentd configuration may help sometimes, but it does not fix the root cause (ES is too slow).
Correct me if I am wrong, thanks.
Here is my conf, FYI:
```
<match **>
  @id elasticsearch
  @type elasticsearch
  @log_level info
  type_name fluentd
  include_tag_key true
  host elasticsearch-logging
  port 9200
  logstash_format true
  # v0.12-style buffer params; with a <buffer> section present these are
  # superseded by (or conflict with) the v1 settings below
  flush_interval 1s
  buffer_chunk_limit 1M
  buffer_queue_limit 512
  <buffer>
    @type file
    path /var/log/fluentd-buffers/kubernetes.system.buffer
    flush_mode interval
    retry_type exponential_backoff
    flush_thread_count 2
    flush_interval 2s
    retry_forever true
    retry_max_interval 30
    chunk_limit_size 20M
    queue_limit_length 16
    overflow_action block
  </buffer>
</match>
```
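The main practical difference between the two buffers is total capacity, roughly `chunk_limit_size` × `queue_limit_length`; a bigger buffer absorbs bursts while ES is slow. A quick check of that arithmetic (sizes in MiB):

```python
# Approximate total file-buffer capacity = chunk_limit_size * queue_limit_length
original = 1 * 8     # 1M chunks, queue length 8   -> 8 MiB before overflow
larger   = 20 * 16   # 20M chunks, queue length 16 -> 320 MiB before overflow
print(original, larger)  # 8 320
```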