I'm not sure whether this is the same situation as jc's, but as you can see, the fluentd process suddenly used up the machine's RAM.
Normally fluentd uses about 50 MB of RAM (roughly 0.84% of total RAM).
Below is fluentd's RAM usage, sampled by the minute.
The MEM (RAM) field stays at 0.85% until 2015-08-31 16:04.
I don't yet know why the memory tracking produced no log entries for about 30 minutes (16:05~16:32).
Fluentd consumed a very large amount of memory not only on this day but on other days as well.
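(For context, the per-minute figures come from an external memory tracker, not from fluentd itself; the sketch below shows roughly how that kind of sampling can be done, with an illustrative output path.)

# sketch of a per-minute RSS / %MEM sampler for the fluentd process (path is illustrative)
while sleep 60; do
  echo "$(date '+%F %T') $(ps -o rss=,pmem= -p "$(pgrep -f fluentd | tail -1)")" >> /tmp/fluentd_mem.log
done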
Here is my configuration:
<source>
  type monitor_agent
  bind 0.0.0.0
  port 24220
</source>

<source>
  type tail
  path /home1/logs/offerwall/offerwall.20*.log
  pos_file /home1/.fluentd/offerwall_tail_pos.log
  refresh_interval 5
  tag offerwall.log
  format json
  log_level error
</source>

<match offerwall.log>
  type elasticsearch
  buffer_type file
  buffer_path /home1/.fluentd/buffer/offerwall.*.buffer
  buffer_chunk_limit 32m
  buffer_queue_limit 1024
  logstash_format true
  utc_index true
  flush_interval 3
  disable_retry_limit true
  retry_wait 5
  request_timeout 10
  num_threads 1
  hosts 10.98---------------
  port 9200
  type_name dodol.analytics.real
</match>
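For reference, the buffer settings above allow the file buffer to grow quite large before the queue limit is hit; this is just arithmetic on buffer_chunk_limit and buffer_queue_limit:

# rough upper bound on the on-disk buffer before the queue is considered full:
# buffer_chunk_limit (32 MB) x buffer_queue_limit (1024 chunks)
$ echo $(( 32 * 1024 ))
32768    # MB, i.e. roughly 32 GiB

Since buffer_type is file, those queued chunks live under /home1/.fluentd/buffer on disk, so the buffer queue itself should not be what eats the RAM.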
$ gem list
*** LOCAL GEMS ***
bigdecimal (1.2.6)
elasticsearch (1.0.9)
elasticsearch-api (1.0.9)
elasticsearch-transport (1.0.9)
excon (0.45.3)
faraday (0.9.1)
fluent-plugin-elasticsearch (0.8.0)
fluentd (0.12.8)
http_parser.rb (0.6.0)
io-console (0.4.3)
json (1.8.1)
msgpack (0.5.11)
multi_json (1.11.0)
multipart-post (2.0.0)
psych (2.0.8)
rake (10.4.2)
rdoc (4.2.0)
sigdump (0.2.2)
string-scrub (0.0.5)
thread_safe (0.3.5)
tzinfo (1.2.2)
tzinfo-data (1.2015.4)
yajl-ruby (1.2.1)
Here is fluentd's log from around that time:
2015-08-31 11:16:07 +0900 [info]: plugin/in_tail.rb:477:initialize: following tail of /home1/logs/offerwall/offerwall.2015-08-31.log
2015-08-31 11:16:07 +0900 [info]: plugin/in_tail.rb:477:initialize: following tail of /home1/logs/offerwall/offerwall.2015-08-24.log
2015-08-31 11:16:07 +0900 [info]: plugin/in_tail.rb:477:initialize: following tail of /home1/logs/offerwall/offerwall.2015-08-28.log
2015-08-31 11:16:07 +0900 [info]: plugin/in_tail.rb:477:initialize: following tail of /home1/logs/offerwall/offerwall.2015-08-23.log
2015-08-31 11:16:07 +0900 [info]: plugin/in_tail.rb:477:initialize: following tail of /home1/logs/offerwall/offerwall.2015-08-26.log
2015-08-31 11:16:11 +0900 [info]: plugin/out_elasticsearch.rb:67:client: Connection opened to Elasticsearch cluster => {:host=>"10.9...", :port=>9200, :scheme=>"http"}, {:host=>"10.98....", :port=>9200, :scheme=>"http"}, {:host=>"10.98.1...", :port=>9200, :scheme=>"http"}, {:host=>"10.9...1", :port=>9200, :scheme=>"http"}
2015-08-31 16:04:42 +0900 [info]: plugin/in_tail.rb:477:initialize: following tail of /home1/logs/offerwall/offerwall.20.restore.2nd.log
2015-09-01 00:01:01 +0900 [info]: plugin/in_tail.rb:387:on_rotate: detected rotation of /home1/logs/offerwall/offerwall.2015-08-22.log; waiting 5 seconds
2015-09-01 09:00:04 +0900 [info]: plugin/in_tail.rb:477:initialize: following tail of /home1/logs/offerwall/offerwall.2015-09-01.log
2015-09-01 12:06:29 +0900 [info]: plugin/in_tail.rb:477:initialize: following tail of /home1/logs/offerwall/offerwall.20.restore.3nd.log
I suspect the in_tail or elasticsearch plugin.
At 16:04:42 I ran commands like these:
$ touch offerwall.20.restore.2nd.log
(waited until fluentd started tailing the new file)
$ cat somelargefile >> offerwall.20.restore.2nd.log
(somelargefile is about 200 MB)
Since I restarted the fluentd process after getting the SMS notification (about 24 hours ago), fluentd's RAM usage has not climbed; it stays under 50 MB.
What additional information can I provide?
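For example, I can dump the counters exposed by the monitor_agent source above (buffer queue length, total queued size, retry count, and so on); it serves them as JSON at the default endpoint:

$ curl -s http://localhost:24220/api/plugins.json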
On Thursday, September 3, 2015 at 12:45:29 PM UTC+9, repeatedly wrote: