I also noticed that the other 3 sensors generated additional entries in ufw
ex: 179.18.0.1 50000/tcp allow 178.18.0.0/16
ex: 179.18.0.1 50001/tcp allow 178.18.0.0/16
ex: 179.18.0.1 50002/tcp allow 178.18.0.0/16
But when I created the last one, it didn't create a 4th matching entry.
I've run so-status/sostat on the master and sensor and everything came back as passing. The sensor is collecting logs in /nsm/bro and I've verified that all of the ELK stack is running, but for whatever reason the sensor can't send to the master.
I feel like this is a Docker-ism; are these IPs generated when you run so-setup?
Any ideas?
--
Follow Security Onion on Twitter!
https://twitter.com/securityonion
---
You received this message because you are subscribed to the Google Groups "security-onion" group.
To unsubscribe from this group and stop receiving emails from it, send an email to security-onion+unsubscribe@googlegroups.com.
To post to this group, send email to security-onion@googlegroups.com.
Visit this group at https://groups.google.com/group/security-onion.
For more options, visit https://groups.google.com/d/optout.
This would explain why I don't see the firewall opening up corresponding docker ports.
Have you tried re-running setup on the sensor to see if that helps?
Thanks,
Wes
Re-running so-setup doesn't appear to resolve this issue. I'm reinstalling now.
What is the output of the following, from the master?
curl localhost:9200/_cluster/settings
cat /etc/nsm/crossclustertab
Thanks,
Wes
Did you perhaps modify the Docker interface value to be 179.18.0.1?
You could try doing the following:
On the master
Allow the sensor to reach the master via the appropriate port:
sudo ufw allow proto tcp from 178.18.0.0/16 to 179.18.0.1 port 50003
Change the port for box4 in crossclustertab to 50003
Run the following curl command
curl -XPUT http://localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{"persistent": {"search": {"remote": {"box2":{"seeds":["172.18.0.1:50002"]},"box1":{"seeds":["172.18.0.1:50000"]},"box4":{"seeds":["172.18.0.1:50003"]},"box3":{"seeds":["172.18.0.1:50001"]},"box-svr":{"seeds":["127.0.0.1:9300"]}}}}}'
On the sensor
-------------
Modify the ssh configuration file to use port 50003
Kill the existing autossh process with port 50002 and run /usr/sbin/so-autossh-start (or reboot the sensor)
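As a side note on the port numbering in the steps above: the existing ufw entries (50000/50001/50002 for the first three sensors, with 50003 suggested for box4) imply each sensor gets reverse-tunnel port 50000 plus its index. A small hedged sketch of that pattern; this is inferred from the entries in this thread, not documented so-setup behavior:

```shell
# Hedged sketch: derive the expected autossh reverse-tunnel port for the
# Nth sensor (0-indexed), based on the 50000/50001/50002 entries above.
# The base port and offset scheme are inferred from this thread.
sensor_index=3          # box4 is the fourth sensor
base_port=50000
port=$((base_port + sensor_index))
echo "$port"            # → 50003
```

That derived port is the one that should appear in the ufw rule, crossclustertab, the curl seeds, and the sensor's ssh tunnel, so all four must agree.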
What is the output of the following?
Server
--------
netstat -an | grep 50003
Sensor
--------
ps aux | grep autossh | grep -v grep
Master is listening on 172.18.0.1:50003, accepting connections from all IPs
netstat -an |grep 50003
tcp 0 0 172.18.0.1:50003 0.0.0.0:* LISTEN
tcp 0 0 172.18.0.1:47866 172.18.0.1:50003 TIME_WAIT
and autossh is running on the sensor
ps -aux |grep autossh
root 24754 0.0 0.0 4360 96 ? Ss 00:43 0:00 /usr/lib/autossh/autossh -M 0 -q -N -o ServerAliveInterval 60 -o ServerAliveCountMax 3 -i /root/.ssh/securityonion -R 172.18.0.1:50003:localhost:9300 on...@x.x.x.x
Forgot about /etc/elasticsearch/elasticsearch.yml on the sensor:
In that file, try modifying the following to be:
transport.publish_host: 172.18.0.1
transport.publish_port: 50003
and then restart Elasticsearch with 'sudo docker stop so-elasticsearch && sudo so-elastic-start' (or just run so-elastic-restart) on the sensor.
You may also have to reboot the sensor afterward for good measure if that doesn't seem to help.
Thanks,
Wes
Thanks Wes, I'd already made those modifications to the yml file and run so-restart before, but I'm rebooting now.
It's odd; all the reports (so-status, sostat, etc.) say it's good. Sguil seems to connect fine (it isn't using the Elastic connection over port 22 that feeds into the Docker network).
But the sensor doesn't show up in the Kibana dashboard (even though a tcpdump on the sensor shows it's forwarding the data to the physical address, just not the virtual Docker address), and the Bro data doesn't seem to be getting forwarded through the Docker network, even though the sensor is pulling Bro data into /nsm/bro/logs...
I assume you guys have tested this ISO with more than 3 sensors in a deployment before? I'm sure it's a Docker thing; I just don't understand the Docker/so-setup process well enough to know what could be going wrong here.
You mentioned updating the Elasticsearch _cluster/settings - I'd assumed you meant running the curl -XPUT command Wes mentioned above, but that doesn't seem to work.
How do I:
1. Remove the sensor entry from _cluster/settings?
2. Update the sensor entry in _cluster/settings?
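(Not the original poster, but for what it's worth: in the Elasticsearch versions that use the search.remote namespace shown earlier in this thread, a remote cluster entry can be removed by PUTting its seeds as null, and updated by PUTting a new seeds list. A hedged sketch; "box4" and the port are just the examples from this thread, and the curl calls are left commented since they require a running master:)

```shell
# Sketch only: remove or update the "box4" remote cluster entry via the
# _cluster/settings API. Setting "seeds" to null deletes the entry;
# setting a new list updates it in place.
REMOVE='{"persistent":{"search":{"remote":{"box4":{"seeds":null}}}}}'
UPDATE='{"persistent":{"search":{"remote":{"box4":{"seeds":["172.18.0.1:50003"]}}}}}'

# 1. Remove the sensor entry:
# curl -XPUT http://localhost:9200/_cluster/settings \
#   -H 'Content-Type: application/json' -d "$REMOVE"

# 2. Update the sensor entry:
# curl -XPUT http://localhost:9200/_cluster/settings \
#   -H 'Content-Type: application/json' -d "$UPDATE"
echo "$REMOVE"
```

Afterward, `curl localhost:9200/_cluster/settings` should show the entry gone (or updated).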
That said, after making the recommended changes, it appears Kibana isn't able to display properly anymore; it keeps timing out.
Any ideas why that would be?
I really appreciate you guys (Doug/Wes) helping me solve these issues.
-J