Doug --
The bullet "experimental script to migrate data from ELSA to Elastic" -- would this (or future plans) work for moving data from an existing sensor into this new Elastic SO setup? Just curious.
New screenshots look bad-ass ;) I'll be putting this on my 720xd machine next week :-D
-Bob
--
Follow Security Onion on Twitter!
https://twitter.com/securityonion
---
You received this message because you are subscribed to the Google Groups "security-onion" group.
To unsubscribe from this group and stop receiving emails from it, send an email to security-onion+unsubscribe@googlegroups.com.
To post to this group, send email to security-onion@googlegroups.com.
Visit this group at https://groups.google.com/group/security-onion.
For more options, visit https://groups.google.com/d/optout.
Did some testing today and did not run into any issues.
Everything is looking really good.
Your work is just amazing! I just looked at how many visualizations, searches, and dashboards you built... it is really time-consuming work!
Some advice from my point of view:
- Elastic lacks an export-to-CSV feature in saved searches/the Discover tab. It was really annoying for a couple of years, but now it seems there is a workaround. Please take a look at https://github.com/elastic/kibana/issues/1992 and a workaround here: https://github.com/fbaligand/kibana/releases. From my point of view you could add it to your so-kibana Docker container.
- When you click on a hyperlinked IP, you are redirected to the Indicator dashboard, which shows what Elastic knows about your indicator over a 5-year time frame. From my point of view it would be beneficial to change this to 1h, 12h, or even 24h. In any case the analyst can change it if they need a bigger time frame. This will also help ensure you don't kill your Elasticsearch instance.
- For visualizations with table views, I would advise changing the number of entries to 10. The default 5 is OK for pie charts, but for tables you always want more than 5. Imagine looking at user agent strings: you want to get as many as you can. Probably even 20...
- Because the geoip lookup takes a while, I suggest checking whether the IP belongs to a private IP range; if yes, skip the lookup;
- You could add a temporary buffer like Redis to absorb data spikes (testing needed; we use it, but I'm not sure how Redis will behave in this case);
- Log files of Elasticsearch and Logstash could be stored on the system for better troubleshooting;
- We need something to monitor our cluster state and manage indices. Maybe it can be some kind of plugin that works with ES5, or even Marvel, with instructions on how to apply the basic license, which is free. There is also a Docker image for head which should be compatible with ES5 (docker run -p 9100:9100 mobz/elasticsearch-head:5).
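The private-range check suggested above could look roughly like this in a Logstash filter -- a sketch only, assuming a `source_ip` field and the `cidr` filter plugin; Security Onion's actual field names may differ:

```
filter {
  # Tag events whose source address falls in an RFC 1918 range...
  cidr {
    address => [ "%{source_ip}" ]
    network => [ "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16" ]
    add_tag => [ "internal_source" ]
  }
  # ...and only run the (slow) geoip lookup for everything else.
  if "internal_source" not in [tags] {
    geoip {
      source => "source_ip"
    }
  }
}
```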
General opinion regarding scale:
I have worked with Elastic for a long time and I really like it. But despite this we always have some problems with certain products. One of them is Logstash. To parse and enrich our logs we use several 48-core servers, especially when the parsers are complex. So I think it can be a huge pain for high-speed networks. In cases where we need to deal with it on the same machine as the sensor, we try to export Bro logs directly to JSON and then straight into ES, but of course you lose data enrichment and tagging.
For other data, syslog-ng or even Logstash can be used. At least I see it in some big deployments.
I just created a PoC of your setup in one of our networks with a 100 Mbps load. The SO VM has 12 vCPUs, 32GB of RAM, and a 1TB HDD.
The Logstash memory heap was adjusted to 4GB, LS workers to 6, and ES heap to 8GB. It seems it can handle that load, but CPU load is ~90% on average.
Again, really good work! Thanks!
Regards,
Audrius
Or must this be done only on a fresh copy of Security Onion after it has been set up for evaluation mode?
All,
I'm attempting to add Suricata EVE logs into the ELK dashboards
(primarily to bypass barnyard so that IPv6 addresses display properly
in the IDS logs).
I've tried a few variations:
- syslog-ng modifications to read the eve.json log and send it to
the logstash port 5063, which is listening for logs in JSON format
- creating a new 0004_input_suricata_eve.conf file in
/etc/logstash/conf.d:
```
input {
  file {
    path => ["/nsm/sensor_data/elk-onion-eth1/eve.json"]
    codec => json
    # I've tried both "suricata" and "snort" here
    type => "suricata"
  }
}

filter {
  if [type] == "suricata" {
    mutate {
      #add_tag => [ "conf_file_0005"]
    }
  }
}
```
Both methods successfully send logs all the way through to Kibana,
but neither gets Kibana to recognize them as IDS logs
and include them in the dashboards created for the rest of the Onion logs.
I'm sure there's something simple I'm missing here, but I'm not finding it.
Any help would be greatly appreciated!
--
JM
/* If you haven't found something strange during the day, it hasn't
been much of a day.
-- John A. Wheeler */
I think the first thing you need to do is disconnect the SPAN port from your IDS box. Wait and see if, after a while, you can connect to Kibana at all. If not, try restarting the box and connecting to Kibana again in a few minutes (with the mirror port still disconnected).
If that doesn't work, something is wrong with your installation or setup.
In general, 300 Mbps is quite a lot of traffic to process, so I don't think your hardware is capable of handling that load...
Regards,
Audrius
JM.
What type do these logs show up as in Kibana (search in Discover)? The logs used for the IDS alert visualizations use a type of "snort" (type:snort).
Thanks,
Wes
Wes,
I updated the 0004_input_suricata.conf file to tag the logs as type "snort":
```
input {
  file {
    path => ["/nsm/sensor_data/elk-onion-eth1/eve.json"]
    codec => json
    #type => "suricata"
    type => "snort"
  }
}

filter {
  #if [type] == "suricata" {
  if [type] == "snort" {
    mutate {
      #add_tag => [ "conf_file_0005"]
    }
  }
}
```
And after further review, the only EVE logs that Logstash was showing
at all were the Suricata flow and stats events. I visually verified
that the alert events **are** in the same file, though, so that's
even more confusing.
When I filtered out those events, nothing showed up at all.
My understanding of the Logstash pipeline is input -> preprocess ->
output.
Since the Suricata EVE alerts are encoded as JSON, will the
Logstash preprocessors for snort still work?
I was hoping to get a better idea of the layout of Security Onion with the ELK stack.
With ELK in place, have you removed the use of MySQL, or is there still some sort of database interaction present?
Also, how exactly is data fed to Kibana? Once again, does it interact with databases at all, or is this a direct transfer from Elasticsearch? If so, could you point me in the direction of some good documentation for this process?
Thanks,
Brodie
Brodie,
MySQL still exists, just not in regard to ELSA. It is still used for Sguil/Squert.
Data is fed into Elasticsearch as it is received and parsed by Logstash (from network, syslog-ng, etc). That data is then queryable/viewable by Kibana. In regard to the Elastic Stack, to my knowledge, there is no interaction with the MySQL database, other than for authentication purposes (which is handled in conjunction with Apache).
Since we are still in a Technology Preview stance, there is not much documentation present, as it could likely change a great deal as we move forward. However, there is some brief documentation on the Security Onion wiki, and that is where documentation for these applications will reside in the future.
https://github.com/Security-Onion-Solutions/security-onion/wiki
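On Brodie's question about external programs: anything that can speak HTTP can hit Elasticsearch's REST API directly, bypassing Kibana entirely. A minimal sketch in Python -- the host/port, index pattern, and field names here are assumptions based on Logstash defaults, not Security Onion specifics:

```python
import json
import sqlite3

def build_query(log_type, minutes=60):
    """Build an Elasticsearch query body for documents of a given type
    over the last `minutes` minutes (assumes the Logstash default
    fields `type` and `@timestamp`)."""
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"type": log_type}},
                    {"range": {"@timestamp": {"gte": "now-%dm" % minutes}}},
                ]
            }
        },
        "size": 100,
    }

def store_hits(db, hits):
    """Store selected fields from Elasticsearch search hits in a local
    SQLite table -- a stand-in for whatever 'physical database' you use."""
    db.execute(
        "CREATE TABLE IF NOT EXISTS conn_logs (id TEXT PRIMARY KEY, ts TEXT, raw TEXT)"
    )
    for hit in hits:
        db.execute(
            "INSERT OR REPLACE INTO conn_logs VALUES (?, ?, ?)",
            (hit["_id"], hit["_source"].get("@timestamp"), json.dumps(hit["_source"])),
        )
    db.commit()

if __name__ == "__main__":
    # This body would be POSTed to e.g. http://localhost:9200/logstash-*/_search
    # (hypothetical endpoint -- adjust for your own setup).
    print(json.dumps(build_query("bro_conn"), indent=2))
```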
Hope that helps to clarify.
Thanks,
Wes
JM,
I'll have to take a look at the inputs and get back to you.
Thanks,
Wes
Thank you, that did help quite a bit.
Just to make sure I am on the same page: Logstash essentially gets the data that is sniffed and read from the log files (Bro, etc.); once Logstash parses it, it sends it to Elasticsearch (which stores it?), and then Kibana is used to query Elasticsearch for whatever data the user deems necessary and to create visualizations. Is that about accurate?
Would it be possible (if my assumptions above are correct) to query Elasticsearch from a separate program besides Kibana and possibly store that data into a physical database table, or possibly directly into said program?
Thanks,
Brodie
So... I'm a few weeks late, but things seem to be A-OK in this release.
Here is what I did... We were running TP1 on this same machine:
- stop all processes
- apt-get remove --purge <ELK related packages>
- sudo soup
- ran so-setup
- downloaded TP2 script and commented out any pcap playback
- ran the TP2 script... profit
Although ELK fell on its face by the next morning... I changed heap settings to 28g in /etc/nsm/securityonion.conf and restarted. Then I found that Logstash could not keep up, so my co-worker adjusted workers to 16?? and memory?? in the logstash.yml file, and things have been humming along now for a few days.
I like the bigger pie charts as the old ones were way too small.
I'm going to try the script to migrate data from a dev sensor over to a fresh install one day... we just ran out of time.
Thanks to everyone involved.
-Bob
ELASTICSEARCH_HEAP="8g"
LOGSTASH_HEAP="4g"
1501111747.054579 172.18.1.132 - HTTP::BROWSER SophosUpdateLibrary 1 0 - - SDDS/2 SophosUpdateLibrary/1.0 SDDS/2.0 (u="FAVLw6ADAC07O")
[2017-07-26T23:29:07,999][WARN ][logstash.filters.csv ] Error parsing csv {:field=>"message", :source=>"1501111747.054579\t172.18.1.132\t-\tHTTP::BROWSER\tSophosUpdateLibrary\t1\t0\t-\t-\tSDDS/2\tSophosUpdateLibrary/1.0 SDDS/2.0 (u=\"FAVLw6ADAC07O\")", :exception=>#<CSV::MalformedCSVError: Illegal quoting in line 1.>}
1501113855.500932 Cjykrt292flJQSCe51 172.12.2.190 57990 4.2.2.1 53 udp 32487 0.000188 jjj.snprobbx.pbz.m.00.s.sophosxl.net 1 C_INTERNET 16 TXT 0 NOERROR F F T T 0 TXT 139 w l h 20 235079\x09!QV@824NO\x2c=D/?)L:"PY*XE]=/3H&51E"NS!_V`TV:(?W#DW80(V8.*YD0=)T!(S3'\x2c']<XG>GU`6@Y=^'\\;=@ED0P_LFLL;'A"T*_)/_ #837dbc0acf55f370 10.000000 F
This is normal behavior for Logstash; you will always see errors like this.
The main problem here is that some fields are not well formed, and this is not an SO or Logstash problem: the data contains strings that are not compliant with the CSV RFC.
So you will usually see such errors from time to time. But I think these logs will still show up in Kibana; they will not be parsed, but they are still searchable (if they are not dropped somewhere in the config). Just make a visualization on tags and you will see what kinds of errors you have.
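As a small illustration of what's happening (assuming nothing beyond the log line shown above): Bro's logs are tab-separated, and the user-agent field legitimately contains double quotes, which a parser that honors CSV quoting treats as illegal mid-field quoting. A plain tab split, by contrast, never interprets quote characters:

```python
# The http.log line from the error above, with its embedded quotes.
line = ('1501111747.054579\t172.18.1.132\t-\tHTTP::BROWSER\t'
        'SophosUpdateLibrary\t1\t0\t-\t-\tSDDS/2\t'
        'SophosUpdateLibrary/1.0 SDDS/2.0 (u="FAVLw6ADAC07O")')

# Splitting on the tab delimiter recovers every field intact, because no
# quoting rules are applied to the field contents.
fields = line.split('\t')
print(len(fields))   # 11
print(fields[-1])    # the user-agent string, quotes and all
```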
If the persistent queue is filling up, that means Logstash can't handle the load. You can try to increase Logstash workers with the -w flag, but that has its own bottleneck.
From our point of view, the ideal situation is to avoid heavy parsing, so we are now evaluating a solution that adds additional Bro logging and saves the logs in JSON as well, then ships that data to Logstash. Because it already has JSON structure, no parsing needs to be done on the data itself, so we expect a performance boost. For data enrichment you can still use Logstash, but at least you only touch the fields that require processing (like geoip, etc.).
In lab, for now, it works quite well.
To have additional logging we use this bro script https://github.com/J-Gras/add-json.
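The Logstash side of that JSON-from-Bro approach might be sketched like this -- paths and field names are assumptions, not a tested config. With a `json` codec on the input, no grok/csv parsing runs, and the filter section is reduced to enrichment only:

```
input {
  file {
    path => [ "/nsm/bro/logs/current/json/*.log" ]   # hypothetical path
    codec => json
  }
}

filter {
  # Only touch fields that actually need processing, e.g. geoip.
  geoip {
    source => "id.resp_h"   # Bro's responder address field (assumed name)
  }
}
```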
Next week I will try to move this solution to some of our production nodes and try to figure out whether it improves things.
From my experience we use parsing a lot, and a dedicated Logstash node with 48 cores can parse only 15k events/s. We will see how that changes.
Also take into account that even with big storage, Elasticsearch cannot always utilize it all. On SSDs we see that with > 6-8TB of data, the Java heap goes above 75-85%, and shortly after that it can bring your cluster down. So you always need to keep an eye on the Java heap.
The ES people say you need to find the right spot by experimenting on your own data.
A solution here can also be to add additional ES data nodes with Docker on the same node and improve things.
Also again, thanks Doug and Wes for their effort!
Regards,
Audrius
http://blog.securityonion.net/2017/06/towards-elastic-on-security-onion.html
Please let us know what you think.
Thanks in advance for any and all feedback!
--
Doug Burks
I noticed that you do not remove the service field if its value is "-".
Was this on purpose?
My reason for asking is I am trying to query elasticsearch for bro_conn documents that are not matched up with any particular type of connection (i.e. bro_weird, bro_http, etc.)
However, when these documents are indexed into Elasticsearch, the field is analyzed. Therefore, when I query Elasticsearch for a match on "service": "-" I get zero hits, because the analyzer returns zero tokens and thereby does not recognize it as a literal - when I search for it.
Any feedback on the reasoning for keeping the service field even when it is set to "-" would be greatly appreciated.
Thank you,
Brodie
Brodie,
Have you tried the following?
service.keyword:"-"
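For context on why the `.keyword` suffix works: the default Logstash index template maps strings as analyzed `text` plus an un-analyzed `.keyword` sub-field, and exact matches must target the latter. The equivalent query DSL body, as a sketch (the `bro_conn` type filter is just for illustration):

```python
import json

# Kibana query-string equivalent: service.keyword:"-"
query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"type": "bro_conn"}},
                # term queries are not analyzed, so they match the raw
                # "-" value stored in the keyword sub-field.
                {"term": {"service.keyword": "-"}},
            ]
        }
    }
}
print(json.dumps(query))
```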
Thanks,
Wes
Thank you, that was exactly what I was looking for. Couldn't quite remember that one.
Thanks,
Brodie
Where is the log for elasticsearch located on TP2?
I have no elasticsearch folder in /var/log/.
Thank you, I will check that out.
Not sure if you can help with this other issue but I will throw it out there as well.
The reason I asked about the logs is that lately I have been loading up Kibana and it shows STATUS: Red; the UI settings say the elasticsearch plugin is red, and plugin:elasti...@5.4.0 says "no living connections".
Not sure if you know the cause and or the remedy for this situation.
If you have any ideas it would be appreciated.
Thank you,
Brodie
You may want to first try 'sudo so-elastic-restart' to see if it helps to resolve your issue. You may need to wait a couple of minutes afterwards for everything to initialize.
Thanks,
Wes
After it finishes initializing following the restart, it says plugin:elasti...@5.4.0 Unable to connect to Elasticsearch at http://elasticsearch:9200
No problem. I think a full system restart fixed the problem for a little while a few days ago. I may just do that again. I understand you have to set your priorities; I was just wondering if there was a known quick fix.
I will consider moving to TP3. Thanks for all the work you guys are doing.
Thanks,
Brodie