Elastic Alpha - logstash pipeline errors


James Gordon

Oct 2, 2017, 10:30:47 AM
to security-onion
Hi all,

I posted last week about using the Elastic Alpha as a standalone ELK server, with Bro and IDS alerts sent to it via syslog. That functionality is now working well. However, after a few days of processing logs, logstash stopped working on the server at 8:00 one morning last week. Rather than rebuild the server, I disabled logstash on that system so I could allocate more resources to elasticsearch, and built a new server solely to handle logstash duties. I ran sosetup the same way, enabled only the Elastic stack, and then manually disabled everything except logstash in /etc/nsm/securityonion.conf. I also increased the logstash worker count in /etc/logstash/logstash.yml to match the number of cores on the new system.

This worked well from Friday until today. At 9:00 EST this morning, logstash stopped processing events on the new system, with the same errors in /var/log/logstash.log. I restarted logstash and the Docker container with so-elastic-restart and docker start so-logstash, but logstash hits the same error on initialization. It looks like logstash is unhappy with a filter in one of the configuration files in /etc/logstash/conf.d. I have not modified any of those files since the initial install, so it's concerning and odd that this has happened on two different installs, each breaking at the top of an hour.

Here are the error lines from /var/log/logstash.log:

[2017-10-02T13:09:27,855][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:6051"}
[2017-10-02T13:09:27,855][INFO ][logstash.inputs.tcp ] Automatically switching from json to json_lines codec {:plugin=>"tcp"}
[2017-10-02T13:09:27,856][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:6052"}
[2017-10-02T13:09:27,856][INFO ][logstash.inputs.tcp ] Automatically switching from json to json_lines codec {:plugin=>"tcp"}
[2017-10-02T13:09:27,856][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:6053"}
[2017-10-02T13:09:28,093][INFO ][logstash.pipeline ] Pipeline main started
[2017-10-02T13:09:28,210][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-10-02T13:09:28,560][ERROR][logstash.pipeline ] Exception in pipelineworker, the pipeline stopped processing new events, please check your filter configuration and restart Logstash. {"exception"=>"undefined method `>' for nil:NilClass", "backtrace"=>["(eval):785646:in `initialize'", "org/jruby/RubyArray.java:1613:in `each'", "(eval):785644:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "(eval):785696:in `initialize'", "org/jruby/RubyArray.java:1613:in `each'", "(eval):785686:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "(eval):52309:in `filter_func'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:383:in `filter_batch'", "org/jruby/RubyProc.java:281:in `call'", "/usr/share/logstash/logstash-core/lib/logstash/util/wrapped_acked_queue.rb:316:in `each'", "org/jruby/RubyHash.java:1342:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/util/wrapped_acked_queue.rb:315:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:382:in `filter_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:363:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:330:in `start_workers'"]}
[2017-10-02T13:09:28,617][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<NoMethodError: undefined method `>' for nil:NilClass>, :backtrace=>["(eval):785646:in `initialize'", "org/jruby/RubyArray.java:1613:in `each'", "(eval):785644:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "(eval):785696:in `initialize'", "org/jruby/RubyArray.java:1613:in `each'", "(eval):785686:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "(eval):52309:in `filter_func'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:383:in `filter_batch'", "org/jruby/RubyProc.java:281:in `call'", "/usr/share/logstash/logstash-core/lib/logstash/util/wrapped_acked_queue.rb:316:in `each'", "org/jruby/RubyHash.java:1342:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/util/wrapped_acked_queue.rb:315:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:382:in `filter_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:363:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:330:in `start_workers'"]}

I checked the so-elastic-start script and confirmed that the logstash conf files used by freqserver and domain stats (both of which are disabled) are not present in /etc/logstash/conf.d, so that part is working correctly.

It looks like logstash does offer configuration-test functionality, but I'm not sure how to run the test from inside a Docker image. Is this possible, and if so, how would I go about testing the conf files to find out which one is causing the issue?
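From the Logstash docs, the config check flag is --config.test_and_exit. My best guess at running it inside the container would be something like the following (untested; the container name and paths are taken from the messages above, so adjust for your install):

```shell
# Untested sketch: run Logstash's config check inside the so-logstash
# container. The /usr/share/logstash path matches the backtrace above.
sudo docker exec -it so-logstash \
  /usr/share/logstash/bin/logstash \
  --path.settings /etc/logstash \
  --config.test_and_exit \
  --path.config /etc/logstash/conf.d/
```

Though I gather a config check like this validates syntax, so it may not reproduce a runtime error like this NoMethodError.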

sostat-redacted output attached.

Thanks,

James Gordon

Wes Lambert

Oct 2, 2017, 12:21:09 PM
to securit...@googlegroups.com
James,

You may want to try adjusting logging settings in /etc/logstash/log4j2.properties and then restart Logstash to see if it helps nail down the particular cause of the issue.


I would start by setting the following loggers to DEBUG:

logstash.filters.grok
logstash.pipeline
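For reference, here's roughly what those entries might look like in /etc/logstash/log4j2.properties (a sketch, not verified against your install; the logger labels "pipeline" and "grok" are arbitrary names I've picked, while the .name values are the two loggers above):

```properties
# Hypothetical log4j2 entries to raise verbosity for the two loggers above.
logger.pipeline.name = logstash.pipeline
logger.pipeline.level = debug
logger.grok.name = logstash.filters.grok
logger.grok.level = debug
```

Restart Logstash after editing for the change to take effect.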


Thanks,
Wes


--
Follow Security Onion on Twitter!
https://twitter.com/securityonion
---
You received this message because you are subscribed to the Google Groups "security-onion" group.
To unsubscribe from this group and stop receiving emails from it, send an email to security-onion+unsubscribe@googlegroups.com.
To post to this group, send email to security-onion@googlegroups.com.
Visit this group at https://groups.google.com/group/security-onion.
For more options, visit https://groups.google.com/d/optout.

James Gordon

Oct 10, 2017, 1:27:07 PM
to security-onion


Wes,

Sorry for the delayed response. I finally got back around to this today. I added the logging options you suggested, along with a few others from the page you linked, but didn't get any additional context in logstash.log.

After doing some more research, I decided to check whether there was a problem in the logstash queue. I commented out the following line in /etc/logstash/logstash.yml:

queue.type: persisted

and logstash started right up and is processing events again. I'm not sure this is the right way to clear out the logstash queue, but it seems to have worked. My understanding is that the queue is now held in memory instead of on disk (hence, not persistent), but for my purposes right now that's OK.

Hopefully this info helps someone else out there. Just my .02, but if it's feasible, implementing a cron job to assess the health of the logstash queue and clear out individual offending events might benefit the SO Elastic project down the road.

Thanks,

James Gordon

Wes

Oct 10, 2017, 3:55:20 PM
to security-onion
James,

You may still want to use the persisted queue rather than disabling it. You can always clear out the queue by removing the contents of /nsm/logstash/queue/main with 'rm' and then restarting Logstash.
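Something along these lines should do it (a sketch only; adjust the container name and path for your environment, and note this discards any events still sitting in the queue):

```shell
# Stop the Logstash container, clear the persisted queue, then restart.
# WARNING: removing the queue files discards any events not yet shipped.
sudo docker stop so-logstash
sudo rm -rf /nsm/logstash/queue/main/*
sudo docker start so-logstash
```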

Thanks,
Wes