troubleshoot suricata alert flow

Andrew Huang

Dec 12, 2019, 2:19:08 PM
to security-onion
I'm running in distributed mode. Every day around 7 AM, alerts stop flowing except for lines like the following every few seconds in snort_agent.log. In Kibana, I see the alert count drop from over 150,000 to around 200.

Sending sguild (sock1eeef10) BYEventRcvd sock1f19420 0 7 6384675 so-forward-enp2s0f1 8551882 8551882 {2019-12-10 01:48:52} 2 1 1 {tag: Tagged Packet} {2019-12-10 01:48:52} 2 unknown 169082969 10.20.0.89 169738365 10.30.0.125 6 4 5 0 52 25917 2 0 126 33407 {} {} {} {} {} 135 43312 2191273371 1151876544 8 0 16 256 61116 0 {} {} {}

And before the log was rotated, I saw "Session terminated, terminating shell... ...terminated." Autossh was still running, since it was still pushing the above logs to sguild. Any suggestions on how I can troubleshoot this further?
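One way to see whether those SSH teardowns line up with the 7 AM drop is to pull their timestamps out of the auth log. A minimal sketch, assuming standard syslog timestamps and that the messages land in /var/log/auth.log (both may differ on your sensor):

```shell
# Print the timestamps of SSH session teardowns from an auth log.
# Syslog lines start "Mon DD HH:MM:SS host ...", so the timestamp is
# the first 15 characters of each matching line.
termination_times() {
  grep 'Session terminated' "$1" | cut -c1-15
}
# e.g. on the sensor: termination_times /var/log/auth.log
```

If every teardown clusters around the same minute each morning, that points at a scheduled job rather than a flaky tunnel.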

Wes Lambert

Dec 13, 2019, 7:03:36 AM
to securit...@googlegroups.com
Could this be a daily restart scheduled by the system?

Thanks,
Wes

--
Follow Security Onion on Twitter!
https://twitter.com/securityonion
---
You received this message because you are subscribed to the Google Groups "security-onion" group.
To unsubscribe from this group and stop receiving emails from it, send an email to security-onio...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/security-onion/682a9d73-105a-4186-863e-ed11b4a99e6d%40googlegroups.com.



Andrew Huang

Dec 13, 2019, 9:50:19 AM
to security-onion
This is what's in the cron log around that time. Could nsm_sensor_clean be causing it?

Dec 13 11:58:01 so-forward CRON[14765]: (root) CMD (/usr/sbin/so-netsniff-ng-cron > /dev/null 2>&1)
Dec 13 11:59:01 so-forward CRON[14817]: (root) CMD ( /usr/sbin/so-nsm-watchdog >> /var/log/nsm/watchdog.log 2>&1)
Dec 13 11:59:01 so-forward CRON[14818]: (root) CMD (/usr/sbin/so-netsniff-ng-cron > /dev/null 2>&1)
Dec 13 11:59:01 so-forward CRON[14819]: (root) CMD (find /var/www/so/capme/pcap/*.pcap -mmin +1440 -delete >/dev/null 2>&1)
Dec 13 11:59:01 so-forward CRON[14820]: (root) CMD (/usr/sbin/nsm_sensor_clean -y >> /var/log/nsm/sensor-clean.log 2>&1)
Dec 13 12:00:01 so-forward CRON[16413]: (root) CMD (/usr/sbin/so-bro-cron >> /var/log/nsm/so-bro-cron.log 2>&1)
Dec 13 12:00:01 so-forward CRON[16414]: (root) CMD (/usr/sbin/nsm_sensor_ps-restart --only-sancp-agent >/dev/null)
Dec 13 12:00:01 so-forward CRON[16415]: (root) CMD (/usr/bin/salt-call state.highstate >/dev/null 2>&1)
Dec 13 12:00:01 so-forward CRON[16417]: (root) CMD (/usr/sbin/nsm_sensor_clean -y >> /var/log/nsm/sensor-clean.log 2>&1)
Dec 13 12:00:01 so-forward CRON[16416]: (root) CMD (/usr/sbin/so-netsniff-ng-cron > /dev/null 2>&1)
Dec 13 12:00:01 so-forward CRON[16418]: (root) CMD (/usr/sbin/so-squert-ip2c-5min > /dev/null 2>&1)
Dec 13 12:00:01 so-forward CRON[16419]: (root) CMD (find /var/www/so/capme/pcap/*.pcap -mmin +1440 -delete >/dev/null 2>&1)
Dec 13 12:01:01 so-forward CRON[16952]: (root) CMD (find /var/www/so/capme/pcap/*.pcap -mmin +1440 -delete >/dev/null 2>&1)
Dec 13 12:01:01 so-forward CRON[16953]: (root) CMD (/usr/sbin/so-netsniff-ng-cron > /dev/null 2>&1)
Dec 13 12:01:01 so-forward CRON[16954]: (root) CMD (/usr/sbin/nsm_sensor_ps-restart --only-http-agent >/dev/null)
Dec 13 12:01:01 so-forward CRON[16955]: (root) CMD (/usr/sbin/nsm_sensor_clean -y >> /var/log/nsm/sensor-clean.log 2>&1)
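nsm_sensor_clean only deletes the oldest day of sensor data once disk usage on /nsm crosses its configured threshold, so a quick sanity check is whether you're anywhere near that threshold at 7 AM. A minimal sketch, assuming GNU df and the default 90% threshold (confirm the actual value in /etc/nsm/securityonion.conf on your sensor):

```shell
# Return success if disk usage on a mount point is at or above a
# percentage threshold, i.e. the point where nsm_sensor_clean would
# start purging old data.
over_threshold() {
  # $1 = mount point, $2 = threshold percent
  used=$(df --output=pcent "$1" | tail -n1 | tr -dc '0-9')
  [ "$used" -ge "$2" ]
}
# e.g. over_threshold /nsm 90 && echo "nsm_sensor_clean will be purging"
```

Also worth a look: tail /var/log/nsm/sensor-clean.log to see whether it actually deleted anything around the time of the drop.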


I notice barnyard2 processing might be a bit slow, if I'm reading it right. At the moment, barnyard2.log says it opened spool file snort.unified2.1576233377, but the latest unified2 file on disk is snort.unified2.1576244730, and it's been that way for some time, with about 10 other files between them. The server has plenty of processing power and memory. Is that cause for concern?
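Since the numeric suffix on a unified2 file is its creation time in epoch seconds, you can put a rough number on that lag by comparing the file named in barnyard2.log against the newest one in the spool directory. A sketch, with the log path and spool directory as placeholders for your sensor's actual locations:

```shell
# Rough barnyard2 backlog estimate: seconds between the spool file it
# last opened (per its log) and the newest unified2 file on disk. The
# unified2 suffix is the file's creation epoch, so the difference
# approximates how far behind barnyard2 is.
backlog_seconds() {
  # $1 = barnyard2 log file, $2 = unified2 spool directory
  current=$(grep -o 'snort\.unified2\.[0-9]*' "$1" | tail -n1 | sed 's/.*\.//')
  latest=$(ls "$2"/snort.unified2.* | sed 's/.*\.//' | sort -n | tail -n1)
  echo $((latest - current))
}
# e.g. backlog_seconds /var/log/nsm/so-forward-enp2s0f1/barnyard2-1.log \
#        /nsm/sensor_data/so-forward-enp2s0f1/snort-1
```

With your two timestamps that gap works out to a bit over three hours, which is worth investigating even on a well-resourced box.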

If I restart with so-sensor-restart, I can immediately see alerts being processed in snort_agent.log.

Wes Lambert

Dec 17, 2019, 12:16:27 AM
to securit...@googlegroups.com
Do alerts stop flowing completely after that, or is the issue that you see a drop in alerts just at that time?

Thanks,
Wes

Andrew Huang

Dec 17, 2019, 3:23:52 PM
to security-onion
Alerts still flow, just with a huge drop, and they seem sporadic throughout the day. I think it's something to do with the sguil database, so I'm trying to tune it. Thank you.