I saw some topics where the advice was to send logs to the SO Master server for parsing and storage.
It may be a good idea, but if we take into account that the data can grow, we will need to add more ES data nodes on the master site.
Do you have an idea how you would deal with that, I mean joining more ES data nodes to the SO Master ES cluster?
Regards,
Audrius
--
Follow Security Onion on Twitter!
https://twitter.com/securityonion
---
You received this message because you are subscribed to the Google Groups "security-onion" group.
To unsubscribe from this group and stop receiving emails from it, send an email to security-onion+unsubscribe@googlegroups.com.
To post to this group, send email to security-onion@googlegroups.com.
Visit this group at https://groups.google.com/group/security-onion.
For more options, visit https://groups.google.com/d/optout.
Thanks for the ideas!
If I understood you correctly, in that case these additional nodes will be single-node cluster instances too (like a sensor is).
I was also thinking about some additional options.
One of them is to make the master node an ES master node and the additional ES nodes ES data nodes, exposing the required ports so they form a cluster.
In that case all nodes on the master site would form one bigger ES cluster, and you would be able to search them (and all sensor nodes) using Kibana on the SO Master.
But I need to test this out...
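To sketch that idea: the role split would live in elasticsearch.yml on each box. The cluster name, hostnames, and the zen-discovery setting below are my assumptions for illustration, not Security Onion defaults, and the exact keys depend on your ES version.

```yaml
# elasticsearch.yml on the SO Master (hypothetical host "so-master"):
cluster.name: so-master-cluster
node.name: so-master
node.master: true
node.data: false
network.host: 0.0.0.0            # expose 9300 so data nodes can join

# elasticsearch.yml on each extra storage box (e.g. hypothetical "es-data1"):
cluster.name: so-master-cluster  # must match to join the same cluster
node.name: es-data1
node.master: false
node.data: true
discovery.zen.ping.unicast.hosts: ["so-master"]
```

With matching cluster names and port 9300 reachable, the data nodes join the master's cluster and Kibana on the SO Master can query all of them as one cluster.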
Regards,
Audrius
Wow... so many improvements, and you saved my day :)
I will start downloading the new image and begin the deployment.
Actually, you fixed some things I had noticed before, so that's great.
Adding Redis is a good idea, but we should also limit its memory usage. I mean, if something goes wrong with the current ES storage nodes, Logstash will start queuing logs in Redis (in memory), and that queue can grow until it kills the master node.
So in my configuration I add a threshold: once the Redis queue reaches it, Logstash stops pushing new logs instead of letting the queue grow.
In my current production deployment, logs come to Logstash, and Logstash outputs them to Redis.
In the Redis output configuration I add the threshold.
It looks like this, but you need to adjust it accordingly:
output {
  redis {
    host => [ "host1", "host2", "host3" ]
    shuffle_hosts => true
    data_type => "list"
    key => "logstash"
    congestion_interval => 1
    congestion_threshold => 50000000
  }
}
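A complementary guard on the Redis side is to cap Redis memory itself in redis.conf, so a runaway backlog fails fast instead of exhausting the master node's RAM. The 2gb figure below is only an illustrative value, not a recommendation:

```
# redis.conf (illustrative values)
maxmemory 2gb
# With noeviction, write commands (the LPUSH from Logstash) return an
# error once the cap is hit, rather than Redis growing until the OS
# kills the process.
maxmemory-policy noeviction
```

Logstash will then see failed pushes and retry, so this works best combined with a congestion threshold like the one above.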
Regards,
Audrius
TIA,
Steve