Logstash errors after 16.04 update


James Smith

Dec 3, 2020, 7:36:37 AM
to security-onion

We are currently on 16.04 and planning the v2.3 upgrade for the new year, and today carried out the latest 16.04 update across our distributed environment using the standard soup method. The master update appeared to go fine: the new Kibana dashboard looked pretty fast and so-status showed everything in an OK state, so I continued with the updates to the other nodes. After completing those and rebooting, I returned to the master server to find Logstash now showing FAIL, and in logstash.log I can see many of the following errors before it gives up and shuts down:

[2020-12-03T07:28:15,996][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-bro-2020.12.03", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x342c2a26>], :response=>{"index"=>{"_index"=>"logstash-bro-2020.12.03", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"Failed to parse mapping [doc]: mapper [destination_geo.latitude] cannot be changed from type [long] to [half_float]", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [destination_geo.latitude] cannot be changed from type [long] to [half_float]"}}}}}

I noticed the update changed the mapping in /etc/logstash/logstash-template.json, removing some components that refer to destination_geo.latitude. I tried swapping the backed-up version back in and restarting Logstash to see if that made any difference, but it didn't, so I have put the new one back.
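
In case it is useful for comparison, I believe the conflicting field can be inspected on both sides with something like the following (just a sketch: it assumes the default Elasticsearch port on the master and that day's Bro index name, so adjust as needed):

curl 'localhost:9200/logstash-bro-2020.12.03/_mapping/field/destination_geo.latitude?pretty'
curl 'localhost:9200/_template/logstash?pretty'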

During one of the other machine updates I noticed that au.archive.ubuntu.com was not contactable (it seemed to be down temporarily), which caused a bunch of errors, so I re-ran soup on that machine after it came back up. I'm not getting any Logstash errors on that machine or the other node, only on the master. I didn't notice any issues during the master update, but just to be sure, given what I saw on one of the nodes, I re-ran soup on it; it did nothing more than a clean-up.

Any help would be much appreciated.

James

Doug Burks

Dec 3, 2020, 9:38:29 AM
to securit...@googlegroups.com
First, let's double-check that all Docker images were updated successfully on all nodes in your deployment.  To do that, run "sudo docker images" and verify that the output looks like this on all nodes:
REPOSITORY                                TAG                 IMAGE ID            CREATED             SIZE
securityonionsolutions/so-elastalert      latest              e56b7182bdbb        5 weeks ago         498MB
securityonionsolutions/so-freqserver      latest              c6f0250311fa        5 weeks ago         299MB
securityonionsolutions/so-domainstats     latest              c931c719cae3        5 weeks ago         340MB
securityonionsolutions/so-curator         latest              a79dcdc46815        5 weeks ago         309MB
securityonionsolutions/so-kibana          latest              a5e27d6fe838        5 weeks ago         697MB
securityonionsolutions/so-logstash        latest              5407c58133fb        5 weeks ago         734MB
securityonionsolutions/so-elasticsearch   latest              5459f8585f5b        5 weeks ago         628MB
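
If it is easier, something like the following loop should gather that from every node in one pass (the hostnames are only placeholders for your own nodes, and it assumes you can ssh to each of them):

for h in master sensor1 sensor2; do echo "== $h =="; ssh "$h" 'sudo docker images --format "{{.Repository}} {{.ID}}"'; done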

Next, let's double-check that all Ubuntu packages were updated successfully on all nodes in your deployment.  To do that, run the following on all nodes:
dpkg -l |egrep 'securityonion-capme|securityonion-elastic|securityonion-setup|securityonion-sostat|securityonion-web-page'

and check the version numbers against those listed in the blog post:
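
If you prefer a shorter listing, dpkg-query can print just the package names and versions (a similar check to the egrep above, assuming the packages of interest all start with securityonion-):

dpkg-query -W -f='${Package} ${Version}\n' 'securityonion-*'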



--
Doug Burks
Founder and CEO
Security Onion Solutions, LLC

James Smith

Dec 3, 2020, 4:25:13 PM
to security-onion
Thanks Doug,

I have checked across all nodes and can confirm all docker images are the latest as per your list, and all the Ubuntu packages are also the latest versions as per the linked post.

Regards
James 

Doug Burks

Dec 3, 2020, 4:40:33 PM
to securit...@googlegroups.com
You said that Logstash is failing on your master server, so am I correct in assuming that all of your logs are stored on your master server and you don't have storage nodes?

What's the output of the following on your master server?
tail -10 /var/log/logstash/logstash.log

What's the output of the following on your master server (and any storage nodes)?
curl localhost:9200/_cat/templates
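
If the columns are hard to read, the _cat endpoints also accept a verbose flag that adds column headers (purely optional):

curl localhost:9200/_cat/templates?v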


James Smith

Dec 3, 2020, 5:31:37 PM
to security-onion
From my very basic understanding of how it is configured, we have a master and two forward nodes.

Tail output:
[2020-12-03T07:28:25,204][INFO ][filewatch.observingtail  ] QUIT - closing all files and shutting down.
[2020-12-03T07:28:25,205][INFO ][filewatch.observingtail  ] QUIT - closing all files and shutting down.
[2020-12-03T07:28:25,205][INFO ][filewatch.observingtail  ] QUIT - closing all files and shutting down.
[2020-12-03T07:28:25,205][INFO ][filewatch.observingtail  ] QUIT - closing all files and shutting down.
[2020-12-03T07:28:25,205][INFO ][filewatch.observingtail  ] QUIT - closing all files and shutting down.
[2020-12-03T07:28:25,205][INFO ][filewatch.observingtail  ] QUIT - closing all files and shutting down.
[2020-12-03T07:28:25,205][INFO ][filewatch.observingtail  ] QUIT - closing all files and shutting down.
[2020-12-03T07:28:25,215][WARN ][logstash.javapipeline    ] Waiting for input plugin to close {:pipeline_id=>"main", :thread=>"#<Thread:0x3f9edd1e run>"}
[2020-12-03T07:28:35,761][INFO ][logstash.javapipeline    ] Pipeline terminated {"pipeline.id"=>"main"}
[2020-12-03T07:28:36,193][INFO ][logstash.runner          ] Logstash shut down.

curl output:
logstash-*     [logstash-ids-*, logstash-firewall-*, logstash-syslog-*, logstash-bro-*, logstash-import-*, logstash-beats-*] 0 50001
kibana         [.kibana]                                                                                                     0
logstash-ossec [logstash-ossec*]                                                                                             1 50001
logstash       [logstash-ids-*, logstash-firewall-*, logstash-syslog-*, logstash-bro-*, logstash-import-*, logstash-beats-*] 0 50001
logstash-beats [logstash-beats-*]  

Doug Burks

Dec 3, 2020, 5:37:49 PM
to securit...@googlegroups.com
Looks like you may have an old logstash template, so let's try this:
curl -XDELETE localhost:9200/_template/logstash
curl -XDELETE localhost:9200/_template/logstash-*
sudo so-logstash-restart
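
Once Logstash is back up, the earlier checks should confirm that the template was recreated and that events are being indexed again, for example:

curl localhost:9200/_cat/templates
tail -10 /var/log/logstash/logstash.log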

James Smith

Dec 3, 2020, 6:30:07 PM
to security-onion
Looks like that has sorted it:
Status: Elastic stack
  * so-elasticsearch    [  OK  ]
  * so-logstash         [  OK  ]
  * so-kibana           [  OK  ]
  * so-curator          [  OK  ]
  * so-elastalert       [  OK  ]


[2020-12-03T23:03:12,633][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and sincedb collections
[2020-12-03T23:03:12,633][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and sincedb collections
[2020-12-03T23:03:12,633][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and sincedb collections
[2020-12-03T23:03:12,633][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and sincedb collections
[2020-12-03T23:03:12,773][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and sincedb collections
[2020-12-03T23:03:12,777][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and sincedb collections
[2020-12-03T23:03:12,808][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and sincedb collections
[2020-12-03T23:03:12,862][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-12-03T23:03:12,886][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2020-12-03T23:03:13,108][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}


Doug Burks

Dec 3, 2020, 7:20:49 PM
to securit...@googlegroups.com
Excellent! I've updated the blog post in case others experience this issue:

Thanks!

James Smith

Dec 3, 2020, 7:35:46 PM
to securit...@googlegroups.com
Thanks heaps for the help, much appreciated!

James

tjbu...@gmail.com

Dec 10, 2020, 12:13:05 AM
to security-onion
We had a similar problem; the symptom was that the pipe from syslog-ng to Logstash wasn't working and the disk was filling up.
The suggested solution seems to have unblocked it.
Thanks.

mona...@gmail.com

Dec 11, 2020, 10:08:26 AM
to security-onion
Same here; running the commands Doug mentioned fixed the issue.

paulk...@gmail.com

Dec 11, 2020, 10:49:02 AM
to security-onion
This worked for us as well. Thank you!