Hi all,
I posted last week mentioning that I'm using the Elastic Alpha as a standalone ELK server and sending Bro and IDS alerts to it via syslog. This has been working well - however, after a few days of processing logs, logstash stopped working on that server at 8:00 one morning last week. Instead of rebuilding the server, I disabled logstash on it so I could allocate more resources to elasticsearch, and built a new server solely to handle logstash duties. I went through sosetup the same way, enabled only the Elastic stack, and then manually disabled everything except logstash in /etc/nsm/securityonion.conf. I also increased the logstash worker count in /etc/logstash/logstash.yml to match the number of cores on the new system.

This worked well from Friday until today: this morning at 9:00 EST, logstash stopped processing events on the new system with the same errors in /var/log/logstash.log. I restarted logstash and the Docker container with so-elastic-restart and docker start so-logstash, but logstash hits the same error every time it initializes. It looks like logstash is unhappy with a filter in one of the configuration files in /etc/logstash/conf.d, but I haven't modified any of those files since the initial install, so it's concerning and odd that this has happened on two different installs, and that each broke at the top of an hour.
Here are the error lines from /var/log/logstash.log:
[2017-10-02T13:09:27,855][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:6051"}
[2017-10-02T13:09:27,855][INFO ][logstash.inputs.tcp ] Automatically switching from json to json_lines codec {:plugin=>"tcp"}
[2017-10-02T13:09:27,856][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:6052"}
[2017-10-02T13:09:27,856][INFO ][logstash.inputs.tcp ] Automatically switching from json to json_lines codec {:plugin=>"tcp"}
[2017-10-02T13:09:27,856][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:6053"}
[2017-10-02T13:09:28,093][INFO ][logstash.pipeline ] Pipeline main started
[2017-10-02T13:09:28,210][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-10-02T13:09:28,560][ERROR][logstash.pipeline ] Exception in pipelineworker, the pipeline stopped processing new events, please check your filter configuration and restart Logstash. {"exception"=>"undefined method `>' for nil:NilClass", "backtrace"=>["(eval):785646:in `initialize'", "org/jruby/RubyArray.java:1613:in `each'", "(eval):785644:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "(eval):785696:in `initialize'", "org/jruby/RubyArray.java:1613:in `each'", "(eval):785686:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "(eval):52309:in `filter_func'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:383:in `filter_batch'", "org/jruby/RubyProc.java:281:in `call'", "/usr/share/logstash/logstash-core/lib/logstash/util/wrapped_acked_queue.rb:316:in `each'", "org/jruby/RubyHash.java:1342:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/util/wrapped_acked_queue.rb:315:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:382:in `filter_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:363:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:330:in `start_workers'"]}
[2017-10-02T13:09:28,617][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<NoMethodError: undefined method `>' for nil:NilClass>, :backtrace=>["(eval):785646:in `initialize'", "org/jruby/RubyArray.java:1613:in `each'", "(eval):785644:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "(eval):785696:in `initialize'", "org/jruby/RubyArray.java:1613:in `each'", "(eval):785686:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "(eval):52309:in `filter_func'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:383:in `filter_batch'", "org/jruby/RubyProc.java:281:in `call'", "/usr/share/logstash/logstash-core/lib/logstash/util/wrapped_acked_queue.rb:316:in `each'", "org/jruby/RubyHash.java:1342:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/util/wrapped_acked_queue.rb:315:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:382:in `filter_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:363:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:330:in `start_workers'"]}
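From searching around, a NoMethodError on `>' for nil inside (eval) frames seems to point at a compiled pipeline conditional comparing a field that is missing (nil) on some event. A hypothetical example of the kind of filter that could trigger it - this is an illustration I made up, not taken from the actual Security Onion configs:

```
filter {
  # If an event arrives without a "bytes" field, evaluating this comparison
  # against nil can raise "undefined method `>' for nil:NilClass".
  if [bytes] > 1024 {
    mutate { add_tag => ["large_transfer"] }
  }
}
```

That would also fit the behavior here: the pipeline runs fine until an event missing the expected field happens to arrive.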
I checked the so-elastic-start script, and the logstash conf files used by freqserver and domain stats (both of which are disabled) are not present in /etc/logstash/conf.d - so that part is working correctly.
It looks like logstash offers a configuration test option, but I'm not sure how to run it from inside the Docker container. Is this possible, and if so, how would I go about testing the conf files to find out which one is causing the issue?
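Here is my guess at what that check might look like, assuming the container is named so-logstash and the settings/configs are visible inside it at /etc/logstash (I haven't verified those in-container paths):

```shell
# Hypothetical: run logstash's built-in config test inside the running container.
# Container name and in-container paths are assumptions, not verified.
docker exec so-logstash /usr/share/logstash/bin/logstash \
  --path.settings /etc/logstash \
  --config.test_and_exit
```

If the pipeline config is valid this should report "Configuration OK"; otherwise it should name the offending file and line, which is exactly what I'm after.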
sostat-redacted output attached.
Thanks,
James Gordon