Sensor unable to send logs to Master (SecurityOnion Elastic 2 beta)


id1010...@gmail.com

Dec 7, 2017, 4:34:42 PM
to security-onion
I have 1 Master and 4 sensors on my current setup.
3 of the 4 sensors report fine to the Master, but the 4th doesn't.
I've confirmed that the IP address is allowed via soallow on the Master.

I also noticed that the other 3 sensors generated additional entries in ufw:
ex: 179.18.0.1 50000/tcp allow 178.18.0.0/16
ex: 179.18.0.1 50001/tcp allow 178.18.0.0/16
ex: 179.18.0.1 50002/tcp allow 178.18.0.0/16

But when I created the last one, it didn't create a 4th matching entry.

I've run so-status/sostat on the master and the sensor, and everything came back as passing. The sensor is collecting logs in /nsm/bro, and I've verified that all of the ELK stack is running, but for whatever reason the sensor can't send to the master.

I feel like this is a Docker-ism. Are these IPs generated when you run so-setup?

Any ideas?


Wes Lambert

Dec 7, 2017, 10:00:18 PM
to securit...@googlegroups.com
Docker configures IPs for containers within the Docker so-elastic-net network (172.18.0.0/16 by default).
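
If you want to sanity-check what Docker is actually using on a given box, something like this should show the subnet and gateway (assuming the network is named so-elastic-net as above):

sudo docker network inspect so-elastic-net | grep -E 'Subnet|Gateway'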

Regardless, have you checked the sosetup log on the sensor to make sure everything went okay during setup?

It should be located in /var/log/nsm/.
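
For example, something like this should surface any obvious errors or the reverse-port lines (the exact filename and search terms are guesses on my part):

grep -iE 'error|fail|reverse' /var/log/nsm/sosetup*.log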

Thanks,
Wes




id1010...@gmail.com

Dec 8, 2017, 9:51:33 AM
to security-onion
Thanks for the suggestion, Wes. I checked the log on the system, and there appears to be a duplication of the reverse ports generated by sosetup on Sensors 2 and 4 (2 was created first and works; 4, however, doesn't).

Any ideas on how I can resolve this in Docker?

id1010...@gmail.com

Dec 8, 2017, 9:54:11 AM
to security-onion
It may be worth noting that the other 2 sensors have unique ports generated.

This would explain why I don't see the firewall opening up corresponding Docker ports.

Wes

Dec 8, 2017, 10:24:39 AM
to security-onion

Have you tried re-running setup on the sensor to see if that helps?

Thanks,
Wes

id1010...@gmail.com

Dec 8, 2017, 10:49:04 AM
to security-onion
I'm about to run my 3rd try now.

id1010...@gmail.com

Dec 8, 2017, 10:56:27 AM
to security-onion

Re-running sosetup doesn't appear to resolve this issue. I'm reinstalling now.

id1010...@gmail.com

Dec 8, 2017, 1:54:28 PM
to security-onion
The system has been reinstalled; it still isn't allowed to connect to the master...

Wes

Dec 8, 2017, 2:05:38 PM
to security-onion

What is the output of the following, from the master?

curl localhost:9200/_cluster/settings

cat /etc/nsm/crossclustertab

Thanks,
Wes

Jay Hawk

Dec 8, 2017, 5:54:42 PM
to securit...@googlegroups.com
Still getting the same results and this is after a complete reinstall of the sensor.

output of curl localhost:9200/_cluster/settings:
{"persistent":{"search":{"remote":{"box2":{"seeds":["172.18.0.1:50002"]},"box1":{"seeds":["172.18.0.1:50000"]},"box4":{"seeds":["172.18.0.1:50002"]},"box3":{"seeds":["172.18.0.1:50001"]},"box-svr":{"seeds":["127.0.0.1:9300"]}}}},"transient":{}}
output of /etc/nsm/crossclustertab: 

Is there something special I need to do on the master when removing a sensor and rebuilding it? Could it be holding onto the old configs?




Jay Hawk

Dec 8, 2017, 5:57:17 PM
to securit...@googlegroups.com
This reflects the same thing I see when checking /var/log/nsm/sosetup.log: both box2 and box4 hold the same reverse port of 50002.
So this is a Docker-ism, right?


Wes

Dec 8, 2017, 6:07:59 PM
to security-onion

These should not all have the same reverse port. Something during setup/config caused these sensors to both utilize the same reverse port.

What are the contents of:

(Sensor)
/etc/elasticsearch/elasticsearch.yml
/root/.ssh/securityonion_ssh.conf

Is the reverse port defined as 50002 there as well?
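
A quick way to eyeball both at once (just grepping for the port in question):

grep -n 50002 /root/.ssh/securityonion_ssh.conf /etc/elasticsearch/elasticsearch.yml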

Thanks,
Wes

id1010...@gmail.com

Dec 8, 2017, 6:41:36 PM
to security-onion
Yep, all of them say SOSensor4 has a reverse port of 50002

id1010...@gmail.com

Dec 8, 2017, 6:56:21 PM
to security-onion
A tcpdump from the sensor shows the configuration is currently exchanging information over SSH, which from what I understand is the Elasticsearch tunnel, and of course the logs being sent over Sguil are working without issue... It's just the way Elastic is utilizing the Docker network that's giving the issues.

Wes

Dec 8, 2017, 7:04:10 PM
to security-onion

Did you perhaps modify the Docker interface value to be 179.18.0.1?

You could try doing the following:

On the master
-------------

Allow the sensor to reach the master via the appropriate port:

sudo ufw allow proto tcp from 178.18.0.0/16 to 179.18.0.1 port 50003

Change the port for box4 in crossclustertab to 50003

Run the following curl command

curl -XPUT http://localhost:9200/_cluster/settings -H'Content-Type: application/json' -d '{"persistent": {"search": {"remote": {"box2":{"seeds":["172.18.0.1:50002"]},"box1":{"seeds":["172.18.0.1:50000"]},"box4":{"seeds":["172.18.0.1:50003"]},"box3":{"seeds":["172.18.0.1:50001"]},"box-svr":{"seeds":["127.0.0.1:9300"]}}}}}'

On the sensor
-------------

Modify the ssh configuration file to use port 50003
Kill the existing autossh process with port 50002 and run /usr/sbin/so-autossh-start (or reboot the sensor)
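
Something like this should cover the autossh part (a sketch - adjust the pattern if your autossh command line differs):

sudo pkill -f 'autossh.*50002'    # kill the tunnel still bound to the old reverse port
sudo /usr/sbin/so-autossh-start   # restart it so it picks up the new port from the ssh config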

id1010...@gmail.com

Dec 8, 2017, 8:11:30 PM
to security-onion
No luck... I really thought that was going to work too.

Wes

Dec 8, 2017, 8:34:13 PM
to security-onion

What is the output of the following?

Server
--------
netstat -an | grep 50003

Sensor
--------
ps aux | grep autossh | grep -v grep

id1010...@gmail.com

Dec 8, 2017, 9:39:44 PM
to security-onion

The master is listening on 172.18.0.1:50003 and accepting connections from any IP:
netstat -an |grep 50003
tcp 0 0 172.18.0.1:50003 0.0.0.0:* LISTEN
tcp 0 0 172.18.0.1:47866 172.18.0.1:50003 TIME_WAIT


and autossh is running on the sensor
ps -aux |grep autossh
root 24754 0.0 0.0 4360 96 ? Ss 00:43 0:00 /usr/lib/autossh/autossh -M 0 -q -N -o ServerAliveInterval 60 -o ServerAliveCountMax 3 -i /root/.ssh/securityonion -R 172.18.0.1:50003:localhost:9300 on...@x.x.x.x

Wes

Dec 8, 2017, 9:56:05 PM
to security-onion

Forgot about /etc/elasticsearch/elasticsearch.yml on the sensor:

In that file, try modifying the following to be:

transport.publish_host: 172.18.0.1
transport.publish_port: 50003

and then restart Elasticsearch with 'sudo docker stop so-elasticsearch && sudo so-elastic-start' (or just run so-elastic-restart) on the sensor.

You may also have to reboot the sensor afterward for good measure if that doesn't seem to help.
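
Once it comes back up, I'd expect something like the following on the sensor to show the publish_address as 172.18.0.1:50003 if the change took (an assumption about where it surfaces, but that's where I'd look):

curl 'localhost:9200/_nodes/transport?pretty' | grep publish_address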

Thanks,
Wes

id1010...@gmail.com

Dec 8, 2017, 10:35:01 PM
to security-onion

Thanks, Wes. I'd already made those modifications to the yml file and run so-restart before, but I'm rebooting now.

id1010...@gmail.com

Dec 8, 2017, 10:50:22 PM
to security-onion
Restart didn't seem to repair the connection.

It's odd: all the reports (so-status, sostat, etc.) say it's good, and Sguil seems to connect fine (it doesn't use the Elastic connection over port 22 that feeds into the Docker network).

But the sensor doesn't show up in the Kibana dashboard (even though a tcpdump on the sensor shows it's forwarding data to the physical address, just not the virtual Docker address), and the Bro data doesn't seem to be getting forwarded through the Docker network, even though the sensor is pulling Bro data into /nsm/bro/logs...

I assume you guys have tested this ISO with more than 3 sensors in a deployment before? I'm sure it's a Docker thing; I just don't understand the Docker/sosetup process well enough to know what could be going wrong here.

Doug Burks

Dec 9, 2017, 8:39:55 AM
to securit...@googlegroups.com
I was able to duplicate this issue and manually resolve it as follows:

# on sensor
update /root/.ssh/securityonion_ssh.conf with unique REVERSE_PORT (50003)
update /etc/elasticsearch/elasticsearch.yml with matching publish_port (50003)
reboot

# on master
remove incorrect sensor entry from Elasticsearch _cluster/settings and /etc/nsm/crossclustertab
add new sensor entry to Elasticsearch _cluster/settings with unique port (50003)
add ufw rule to allow traffic from 172.18.0.0/16 to 172.18.0.1:50003
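
For reference, the master-side pieces of that might look something like this (a sketch using the names/ports from this thread - box4 and 50003 are assumptions for the affected sensor):

# drop the stale entry (setting seeds to null removes it)
curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{"persistent":{"search":{"remote":{"box4":{"seeds":null}}}}}'

# re-add it with the unique reverse port
curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{"persistent":{"search":{"remote":{"box4":{"seeds":["172.18.0.1:50003"]}}}}}'

# open the firewall for it
sudo ufw allow proto tcp from 172.18.0.0/16 to 172.18.0.1 port 50003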


So there's no inherent limitation of 3 sensors, but I'm still not exactly sure why the fourth sensor chose the wrong port.  My best guess at this point is that it's a race condition with crossclustercheck updating _cluster/settings on the fly.  I may change that section of sosetup-elastic to determine the list of sensor ports from /etc/nsm/crossclustertab instead.




--
Doug Burks

id1010...@gmail.com

Dec 9, 2017, 10:08:32 AM
to security-onion
So the sensor updates without issue. But I've noticed that after deleting or editing the wrong entry in /etc/nsm/crossclustertab, the entry comes back on its own after around 1-3 minutes. So what's feeding these changes?

You mentioned updating Elasticsearch _cluster/settings - I'd assumed you meant running the curl XPUT command Wes mentioned above, but that doesn't seem to work.

How do I
1. Remove the sensor entry from _cluster/settings
2. Update the sensor entry for _cluster/settings

Doug Burks

Dec 9, 2017, 10:13:26 AM
to securit...@googlegroups.com
Replies inline.

On Sat, Dec 9, 2017 at 10:08 AM, <id1010...@gmail.com> wrote:
> So the sensor updates without issue. But I've noticed that after deleting or editing the wrong entry in /etc/nsm/crossclustertab the entry comes back on it's own after a duration of around 1-3 minutes. So what's feeding these changes?

/etc/cron.d/crossclustercheck runs /usr/sbin/so-crossclustercheck
every minute and it updates /etc/nsm/crossclustertab based on what it
finds in _cluster/settings.
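
If you want to see it in action, these should show the schedule and let you trigger an update by hand:

cat /etc/cron.d/crossclustercheck
sudo /usr/sbin/so-crossclustercheck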

> You mentioned updating Elasticsearch _cluster/settings - I'd assumed you meant by running the curl xput command mention above by Wes but that doesn't seem to work.
>
> How do I
> 1. Remove the sensor entry from _cluster/settings

https://github.com/Security-Onion-Solutions/security-onion/wiki/Elasticsearch#removing-a-sensor

> 2. Update the sensor entry for _cluster/settings

same as #1 but replace null with "$DOCKER_INTERFACE:$REVERSE_PORT" as
we do in sosetup-elastic:
https://github.com/dougburks/elastic-test/blob/master/usr/sbin/sosetup-elastic#L2070



--
Doug Burks

Doug Burks

Dec 9, 2017, 10:57:52 AM
to securit...@googlegroups.com
I just committed a new version of sosetup-elastic that determines HIGHEST_REVERSE_PORT using /etc/nsm/crossclustertab instead of _cluster/settings:
https://github.com/dougburks/elastic-test/blob/master/usr/sbin/sosetup-elastic#L2025-L2037

I tested with a brand new master and 4 brand new sensors and it worked fine.


It'd be great if you could try fresh installations of your master and 4 sensors (if you're not already using VM snapshots for your testing, I'd highly recommend it!) with this new version of sosetup-elastic and see if it works for you.

Thanks!
--
Doug Burks

id1010...@gmail.com

Dec 9, 2017, 11:19:38 AM
to security-onion
This is fantastic, Doug. Unfortunately, I won't be able to do a complete re-installation on these systems until Monday morning...

That said, after making the recommended changes it appears that Kibana isn't able to display properly anymore. It keeps on timing out.

Any ideas why that would be?

Doug Burks

Dec 9, 2017, 11:29:50 AM
to securit...@googlegroups.com
Timing out most likely means that elasticsearch on your master server
can't contact one or more of the sensors listed in _cluster/settings.
If _cluster/settings looks correct, then this could be due to:
- missing or incorrect firewall rules
- disconnected or misconfigured autossh tunnels
- down or misconfigured elasticsearch on sensors

At this point, your best bet may be a fresh installation.
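
If you want to check those three things quickly before wiping, something along these lines (mostly the same commands used earlier in this thread) might help:

# on the master: is anything listening on the sensor's reverse port?
sudo netstat -an | grep 50003

# on the sensor: is the autossh tunnel up, and is Elasticsearch running?
ps aux | grep [a]utossh
sudo docker ps | grep so-elasticsearch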

id1010...@gmail.com

Dec 9, 2017, 11:49:40 AM
to security-onion
Ok...
Everything works as it should!

I really appreciate you guys (Doug/Wes) helping me solve these issues.


-J
