Towards Elastic on Security Onion: Technology Preview 2 (TP2)


Doug Burks

unread,
Jun 2, 2017, 8:29:58 AM6/2/17
to securit...@googlegroups.com
http://blog.securityonion.net/2017/06/towards-elastic-on-security-onion.html

Please let us know what you think.

Thanks in advance for any and all feedback!

--
Doug Burks

wedgeshot

unread,
Jun 2, 2017, 7:51:05 PM6/2/17
to security-onion
On Friday, June 2, 2017 at 8:29:58 AM UTC-4, Doug Burks wrote:
> http://blog.securityonion.net/2017/06/towards-elastic-on-security-onion.html
>
> Please let us know what you think.
>

Doug --

Regarding the bullet "experimental script to migrate data from ELSA to Elastic" -- would this (or future plans) work for moving data from an existing sensor into this new Elastic SO setup? Just curious.

New screenshots look bad-ass ;) I'll be putting this on my 720xd machine next week :-D

-Bob

Thilina Pathirana

unread,
Jun 3, 2017, 5:42:34 AM6/3/17
to securit...@googlegroups.com
Dear Doug,

Thanks for enabling IPv6 in this preview. As I mentioned in a separate thread, I customized ELK to read detections directly from Barnyard and to populate the Kibana visualizations with IPv6 for my Master's project. I bypassed Sguil as it didn't support IPv6. When I connected it to a 1Gbps link, the server required at least 4 cores and 32GB of RAM; otherwise the graphing kept lagging by more than 20 hours. I have had to keep my settings unpublished since my thesis has not yet been evaluated. My results will be released soon and I will publish my changes after that.

So my question is: have you stress-tested the new preview under high loads? I will test the preview on Monday as soon as I get to my lab and will post the results.

As Bob mentioned, +1 on the new Kibana dashboards, which are extremely eye-catching... :)



Thanks
Thilina 


Doug Burks

unread,
Jun 3, 2017, 6:16:57 AM6/3/17
to securit...@googlegroups.com
Hi Bob,

Replies inline.

On Fri, Jun 2, 2017 at 7:51 PM, wedgeshot <wedg...@gmail.com> wrote:
> the bullet "experimental script to migrate data from ELSA to Elastic" --

First, let me re-emphasize *experimental* here. Migration currently
only works for Bro logs and only for Bro 2.5.

> Would this(or future plans) work for moving data from an existing sensor into this new Elastic SO setup? just curious.

/usr/sbin/so-migrate-elsa-data-to-elastic exports data from ELSA to
/nsm/import/ and we configure Logstash to monitor that directory. In
theory, you could run /usr/sbin/so-migrate-elsa-data-to-elastic on an
ELSA box and then scp/rsync the files from /nsm/import/ to a separate
box running the Elastic stack which would then import the files from
/nsm/import/.
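
A minimal sketch of that workflow (the remote hostname and user below are placeholders, and the migration script remains experimental):

```
# on the ELSA box: export Bro logs to /nsm/import/
sudo /usr/sbin/so-migrate-elsa-data-to-elastic

# copy the exported files to the box running the Elastic stack
# (user@elastic-box is a placeholder for your own host)
rsync -av /nsm/import/ user@elastic-box:/nsm/import/

# Logstash on the Elastic box watches /nsm/import/, so it should
# pick the files up from there
```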

> New screenshots look bad-ass ;) I'll be putting this on my 720xd machine next week :-D

Please let us know how it goes. Thanks!


--
Doug Burks

Doug Burks

unread,
Jun 3, 2017, 6:20:34 AM6/3/17
to securit...@googlegroups.com
Hi Thilina,

Replies inline.

On Sat, Jun 3, 2017 at 5:42 AM, Thilina Pathirana <tdk...@gmail.com> wrote:
> Dear Doug,
>
> Thanks for enabling IPv6 on this preview. As I had mentioned in a separate
> thread, I customized ELK to read detections directly from barnyard and to
> populate Kibana illustrations with IPv6 for my Masters project. I bypassed
> sguil as it wasnt supporting IPv6. Anyway when I connects it to a 1Gbps link
> the server required at least 4 cores and 32GB RAM otherwise graphing keeps
> lagging for more that 20hours. Somehow I had to keep the settings
> unpublished as still my thesis is not yet evaluated. My results will be
> released soon and I will publish my changes after that.
>
> So my question is, have you stress tested the new preview with high loads. I
> will test the preview on Monday as soon as I go to my lab and will post the
> results.

We haven't done any stress testing on high loads yet, but we are
definitely aware of the fact that the Elastic stack requires lots of
hardware resources and will require some tweaking and tuning.

> As Bob mentioned, +1 on new kibana dashboard which is extremely eye
> catching... :)

Thanks, glad you like it!

>
>
> Thanks
> Thilina
>
> On Sat, Jun 3, 2017 at 5:21 AM, wedgeshot <wedg...@gmail.com> wrote:
>>
>> On Friday, June 2, 2017 at 8:29:58 AM UTC-4, Doug Burks wrote:
>> >
>> > http://blog.securityonion.net/2017/06/towards-elastic-on-security-onion.html
>> >
>> > Please let us know what you think.
>> >
>>
>> Doug --
>>
>> the bullet "experimental script to migrate data from ELSA to Elastic" --
>> Would this(or future plans) work for moving data from an existing sensor
>> into this new Elastic SO setup? just curious.
>>
>> New screenshots look bad-ass ;) I'll be putting this on my 720xd machine
>> next week :-D
>>
>> -Bob
>>
>>
>>
>> > Thanks in advance for any and all feedback!
>> >
>> > --
>> > Doug Burks
>>
>> --
>> Follow Security Onion on Twitter!
>> https://twitter.com/securityonion
>> ---
>> You received this message because you are subscribed to the Google Groups
>> "security-onion" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to security-onio...@googlegroups.com.
>> To post to this group, send email to securit...@googlegroups.com.
>> Visit this group at https://groups.google.com/group/security-onion.
>> For more options, visit https://groups.google.com/d/optout.
>
>
>
>
> --
> Find me on:
> www.google.com/+ThilinaPathirana
> www.facebook.com/t.d.k.pathirana
> lk.linkedin.com/in/thilinapathirana/
> skype: tdkp123
>
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!
> %%#tdkp#%%
> $$$$$$$$$$$$
>
> --
> Follow Security Onion on Twitter!
> https://twitter.com/securityonion
> ---
> You received this message because you are subscribed to the Google Groups
> "security-onion" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to security-onio...@googlegroups.com.
> To post to this group, send email to securit...@googlegroups.com.
> Visit this group at https://groups.google.com/group/security-onion.
> For more options, visit https://groups.google.com/d/optout.



--
Doug Burks

James Taylor

unread,
Jun 7, 2017, 11:38:48 AM6/7/17
to security-onion

Did some testing today and did not run into any issues.

Everything is looking really good.

Doug Burks

unread,
Jun 7, 2017, 11:46:32 AM6/7/17
to securit...@googlegroups.com
On Wed, Jun 7, 2017 at 11:38 AM, James Taylor <jtay...@gmail.com> wrote:
> Did some testing today did not run into any issues.
>
> Everything is looking really good.

Thanks, James!

--
Doug Burks

wfp...@gmail.com

unread,
Jun 22, 2017, 9:15:23 PM6/22/17
to security-onion
On Friday, June 2, 2017 at 5:29:58 AM UTC-7, Doug Burks wrote:
Hi Doug! I'd like to use tcpreplay to send pcaps to the sniffing interface. How do I clear the sample data that is loaded during the initial setup? Thanks!

Doug Burks

unread,
Jun 22, 2017, 10:27:23 PM6/22/17
to securit...@googlegroups.com
Hi wfpa40,

You could certainly use tcpreplay to send your own pcaps to the
sniffing interface and then just change the Kibana time window to look
at the time range after the original sample logs were written.

If you really do need to remove the sample logs, you could delete the
Elasticsearch index for that day or wipe the entire Elasticsearch database.
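
A rough sketch of deleting a single day's index (the index name below is only illustrative; list the indices first to see the date-based names actually in use):

```
# list the Elasticsearch indices to find the one covering the sample data
curl -s localhost:9200/_cat/indices?v

# delete just that day's index (name shown here is an example)
curl -XDELETE localhost:9200/logstash-2017.06.22
```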

Alternatively, you could always build a new Security Onion test VM
but, before running the Elastic script, remove the tcpreplay lines
that create the sample logs:
https://github.com/Security-Onion-Solutions/elastic-test/blob/master/securityonion_elsa2elastic.sh#L128
https://github.com/Security-Onion-Solutions/elastic-test/blob/master/securityonion_elsa2elastic.sh#L390



--
Doug Burks

wfp...@gmail.com

unread,
Jun 23, 2017, 8:54:33 PM6/23/17
to security-onion
Thank you sir! It's working great now!

Tung Nguyen

unread,
Jun 26, 2017, 6:07:32 AM6/26/17
to security-onion
Doug --

If the setup is not all-in-one (AIO) but has separate management and sensor boxes, do I just run the upgrade script on both? In which order? Or does it not support upgrading from this kind of setup? -- Thanks, TN

Doug Burks

unread,
Jun 26, 2017, 6:08:57 AM6/26/17
to securit...@googlegroups.com
Hi Tung,

From the blog post:

"This script is only designed for standalone boxes and does NOT
support distributed deployments."


--
Doug Burks

Audrius J

unread,
Jun 27, 2017, 6:00:10 AM6/27/17
to security-onion
Hi Doug,

Your work is just amazing! I just looked at how many visualizations, searches, and dashboards you created... it is really time-consuming work!

Some advice from my point of view:
- Kibana lacks a CSV export feature in saved searches/the Discover tab. It has been really annoying for a couple of years, but now there seems to be a workaround. Please take a look at https://github.com/elastic/kibana/issues/1992 and the workaround here: https://github.com/fbaligand/kibana/releases. From my point of view you could add it to your so-kibana Docker container.
- When you click on a hyperlinked IP, you are redirected to the Indicator dashboard, which shows what Elastic knows about your indicator over a 5-year time frame. From my point of view it would be better to change that to 1h, 12h, or even 24h. Analysts can always widen the time frame if they need to, and this also helps ensure you don't kill your Elasticsearch instance.
- For visualizations with table views, I would advise changing the number of entries to 10. The default of 5 is OK for pie charts, but for tables you always want more than 5. Imagine looking at user agent strings: you want to see as many as you can, probably even 20.
- Because GeoIP lookups take a while, I suggest checking whether the IP belongs to a private range and, if so, skipping the lookup (see the sketch after this list);
- You could add a temporary buffer like Redis to absorb data spikes (testing needed; we use it, but we are not sure how Redis would behave in this case);
- The Elasticsearch and Logstash log files could be stored on the system for better troubleshooting;
- We need something to monitor cluster state and manage indices. Maybe it can be a plugin that works with ES 5, or even Marvel, with instructions on how to apply the basic license, which is free. There is also a Docker image for elasticsearch-head that should be compatible with ES 5 (docker run -p 9100:9100 mobz/elasticsearch-head:5).
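
A minimal Logstash sketch of the private-range GeoIP skip mentioned above (the field name destination_ip is a placeholder for whatever field your configuration actually enriches):

```
filter {
  # only run the GeoIP lookup for addresses outside the RFC1918 ranges
  # ("destination_ip" is illustrative, not necessarily SO's field name)
  if [destination_ip] !~ /^(10\.|192\.168\.|172\.(1[6-9]|2[0-9]|3[01])\.)/ {
    geoip {
      source => "destination_ip"
    }
  }
}
```

The logstash-filter-cidr plugin is another way to express the same check.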


General opinion regarding scale:
I have worked with Elastic for a long time and I really like it, but we still run into problems with some of the products. One of them is Logstash: to parse and enrich our logs we use several 48-core servers, especially where the parsers are complex. So I think it can be a huge pain for high-speed networks. In cases where we need to do this on the same machine as the sensor, we try to export Bro logs directly to JSON and then straight into ES, but of course you then lose data enrichment and tagging.
For other data, syslog-ng or even Logstash can be used; at least I see that in some big deployments.

I just created a PoC of your setup on one of our networks with a 100Mbps load. The SO VM has 12 vCPUs, 32GB of RAM, and a 1TB HDD.
The Logstash heap was adjusted to 4GB, Logstash workers to 6, and the ES heap to 8GB. It seems it can handle that load, but the CPU load is ~90% on average.

Again, really good work! Thanks!

Regards,
Audrius

Doug Burks

unread,
Jun 27, 2017, 8:48:30 AM6/27/17
to securit...@googlegroups.com
Hi Audrius,

This is really great feedback, thanks so much! Further replies inline.

On Tue, Jun 27, 2017 at 6:00 AM, Audrius J <aud...@gmail.com> wrote:
> Hi Doug,
>
> Your work is just amazing! I just looked around how much visualization, searches, dashboards you did... it is really time consuming work!

I can't take credit, Wes Lambert did most of the dashboard work in this release.

> Some advices from my point of view:
> - Elastic lacks export to csv feature in saved searches/discovery tab. It was really annoying for couple of years. But now it seems that is a workaround for this. Please take a look at https://github.com/elastic/kibana/issues/1992 and a workaround here https://github.com/fbaligand/kibana/releases. From my point of view you can add it to your Docker container so-kibana.

Looks like CSV export is coming in 6.0:
https://www.elastic.co/blog/kibana-6-0-0-alpha2-released

Perhaps we just wait for that?

> - Then you click on hyperlinked IP, you are redirected to Indicator dashboard, which show what elastic knows about your indicator for 5years time frame. From my point of view it is beneficial to change it to 1h or 12h or even 24h. In all cases analyst can change it if they needs bigger time frame. Also this wills assure you that you will not kill your elastic search instance.

Agreed, added to:
https://github.com/Security-Onion-Solutions/security-onion/issues/1095

> - Visualizations, with tables view, I would advise to change the number entries to 10. The default 5 is ok for pie charts, but if we look at the tables, you always want to have more when 5. Let's imagine if you look at user agent strings, you want to get as much as you can. Probably even 20...

Yes, this was higher in our last release, we'll increase it for the
next release. Added to:
https://github.com/Security-Onion-Solutions/security-onion/issues/1095

> - Because geoip lookup takes a while, I suggest you to check if IP does not belong to a private IP ranges, if yes - skip lookup;

Added to:
https://github.com/Security-Onion-Solutions/security-onion/issues/1095

> - You can add temporary buffer like redis, to avoid data spikes (test needed. We use it, but not sure how redis will behave in this case.)

We've enabled persistent queues in Logstash:
https://www.elastic.co/guide/en/logstash/current/persistent-queues.html

Perhaps we can get by without redis?
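
For reference, the persistent queue is controlled by logstash.yml settings along these lines (values illustrative; the actual SO settings may differ):

```
# logstash.yml (sketch)
queue.type: persisted   # default is "memory"
queue.max_bytes: 1gb    # cap on the on-disk queue before back-pressure kicks in
```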

> - Log files of elastic, logstash could be stored on the system for better troubleshooting;

Added to:
https://github.com/Security-Onion-Solutions/security-onion/issues/1095

> - We need something to monitor our cluster state and manage indices. Maybe it can be some kind of plugin which works with ES5 or even Marvel, with instruction how to apply the basic license, which is free. Also it is docker image for head, which should be compatible with ES5 (docker run -p 9100:9100 mobz/elasticsearch-head:5).

Will take a look.

> General opinion regarding scale:
> I work with elastic for a long time and I really like it. But despite this we have always some problems with some products. One of them is logstash. To parse and enrich our logs we use several 48 cores servers, especially if your parsers are complex. So I think that it can be a huge pain for high speed networks. In cases, when we need to deal with it on the same machine as sensor, we try to export bro logs directly to json and then directly to ES, but of course you will be lacking data enrichment and tagging.
> For other data, syslog-ng or even logstash can be used. At least I see it some big deployments.
>
> I just create a PoC of your setup in one of our network with 100Mbps load. The SO VM has 12 vCPU, 32GB of RAM and 1TB HDD.
> The memory heap of LS was adjusted to 4GB, LS workers to 6, ES HEAP to 8GB. And it seems that it can handle that load. But CPU load is ~90% in average.
>
> Again, really good work! Thanks!

Thanks again for the detailed feedback! I really appreciate it!

--
Doug Burks

Ian

unread,
Jun 27, 2017, 2:15:40 PM6/27/17
to security-onion
If I clone our production Security Onion server, can I run this tech preview script on the clone? I assume I can just omit the test pcap stuff from the script?

Or must this be done only on a fresh copy of Security Onion after it has been set up for evaluation mode?

Doug Burks

unread,
Jun 27, 2017, 2:23:01 PM6/27/17
to securit...@googlegroups.com
Hi Ian,

Replies inline.

On Tue, Jun 27, 2017 at 2:15 PM, Ian <zest...@gmail.com> wrote:
> If I clone our production security onion server, can I run this tech preview script on the clone?

Assuming your clone is a fully updated standalone box, the tech
preview script *should* work, but we don't guarantee or support it.
Please double-check all the warnings and disclaimers.

> I assume I can omit the test pcap stuff from the script instead?

Yes.

--
Doug Burks

Thilina Pathirana

unread,
Jun 28, 2017, 5:23:49 AM6/28/17
to securit...@googlegroups.com
Hello Doug,

I am getting a connection error in Kibana: "Unable to connect to Elasticsearch at http://elasticsearch:9200." I tried restarting with sudo /usr/sbin/so-elastic-restart but it didn't help.

Please advise...

Thanks
Thilina


Doug Burks

unread,
Jun 28, 2017, 6:12:11 AM6/28/17
to securit...@googlegroups.com
On Wed, Jun 28, 2017 at 5:23 AM, Thilina Pathirana <tdk...@gmail.com> wrote:
> Hello Doug,
>
> I am getting a connection error in Kibana as " Unable to connect to
> Elasticsearch at http://elasticsearch:9200. ". Tried restarting using sudo
> /usr/sbin/so-elastic-restart but didnt worked out.
>
> Please advice me...

Hi Thilina,

What was the output of /usr/sbin/so-elastic-restart?

Do you happen to have the output of securityonion_elsa2elastic.sh?

What are the specs of your box?




--
Doug Burks

Thilina Pathirana

unread,
Jun 28, 2017, 7:40:14 AM6/28/17
to securit...@googlegroups.com
Hi Doug,

restart says,

Stopping containers:
so-logstash
so-kibana
so-elasticsearch

Removing existing containers:
so-elasticsearch
so-logstash
so-kibana

Starting new containers:
so-elasticsearch: 9a527d0c07a38d53d914bc4adc7126164bd6b38e22c86a228994da2ba16aa649
so-logstash: 84c977ce5616f942f72fba2132ab377271f770ad20461e265835b6d85b58906d
so-kibana: 801915e6e6f45d530eefeefb6840995c1912a08bf8ed363f863e30217d442848

When I first ran securityonion_elsa2elastic.sh it installed ELK as it should, but after restarting the server that error appeared. Sorry, I do not have the original output, but as far as I remember it didn't show any errors.
If I run the script now it says "ELSA in not enabled! Exiting!"

My specs are:

 Intel i5-2400 @ 3.1Ghz
 RAM 12GB







Doug Burks

unread,
Jun 28, 2017, 7:42:02 AM6/28/17
to securit...@googlegroups.com
On Wed, Jun 28, 2017 at 7:40 AM, Thilina Pathirana <tdk...@gmail.com> wrote:
> Hi Doug,
>
> restart says,
>
> Stopping containers:
> so-logstash
> so-kibana
> so-elasticsearch
>
> Removing existing containers:
> so-elasticsearch
> so-logstash
> so-kibana
>
> Starting new containers:
> so-elasticsearch:
> 9a527d0c07a38d53d914bc4adc7126164bd6b38e22c86a228994da2ba16aa649
> so-logstash:
> 84c977ce5616f942f72fba2132ab377271f770ad20461e265835b6d85b58906d
> so-kibana: 801915e6e6f45d530eefeefb6840995c1912a08bf8ed363f863e30217d442848
>
> When I first run securityonion_elsa2elastic.sh it installed ELK as it should
> be. but after restarting the server that error was coming. Sorry I do not
> have the original output. But as remembered it didnt showed any errors.
> If I run the script now it says "ELSA in not enabled! Exiting!"
>
> My specs are:
>
> Intel i5-2400 @ 3.1Ghz
> RAM 12GB

What is the output of the following?

sudo docker ps


--
Doug Burks

Thilina Pathirana

unread,
Jun 28, 2017, 7:51:18 AM6/28/17
to securit...@googlegroups.com
it is,

CONTAINER ID        IMAGE                                     COMMAND                  CREATED             STATUS              PORTS                                                  NAMES
801915e6e6f4        securityonionsolutions/so-kibana          "/bin/sh -c /usr/l..."   26 minutes ago      Up 26 minutes       0.0.0.0:5601->5601/tcp                                 so-kibana
84c977ce5616        securityonionsolutions/so-logstash        "/usr/local/bin/do..."   26 minutes ago      Up 26 minutes       5044/tcp, 9600/tcp, 0.0.0.0:6050-6053->6050-6053/tcp   so-logstash
9a527d0c07a3        securityonionsolutions/so-elasticsearch   "/bin/bash bin/es-..."   26 minutes ago      Up 26 minutes       0.0.0.0:9200->9200/tcp, 9300/tcp                       so-elasticsearch


Thanks




Doug Burks

unread,
Jun 28, 2017, 7:54:31 AM6/28/17
to securit...@googlegroups.com
That looks fine. Have you tried logging into Kibana since restarting
26 minutes ago?



--
Doug Burks

Thilina Pathirana

unread,
Jun 28, 2017, 7:58:24 AM6/28/17
to securit...@googlegroups.com
Yes, it's still the same. I think that because my network traffic is around 300Mbps it takes time to process. Anyway, I will keep this server up and running through the night and update you tomorrow.


Thanks
Thilina 

Doug Burks

unread,
Jun 28, 2017, 8:01:57 AM6/28/17
to securit...@googlegroups.com
On Wed, Jun 28, 2017 at 7:58 AM, Thilina Pathirana <tdk...@gmail.com> wrote:
> yes still the same, I think, as my network traffic is around 300Mbps it
> takes time to process, anyway I will keep this server up and running in the
> night and update you tomorrow

If you're monitoring 300Mbps of traffic, you may want to look at
Audrius's tuning advice earlier in this thread.



--
Doug Burks

Jon Mark Allen

unread,
Jun 28, 2017, 11:22:54 PM6/28/17
to security-onion

All,

I'm attempting to add Suricata EVE logs into the ELK dashboards
(primarily to bypass barnyard so that IPv6 addresses display properly
in the IDS logs).

I've tried a few variations:

- syslog-ng modifications to read the eve.json log and send it to
the logstash port 5063, which is listening for logs in JSON format
- creating a new 0004_input_suricata_eve.conf file in
/etc/logstash/conf.d:

```
input {
  file {
    path => ["/nsm/sensor_data/elk-onion-eth1/eve.json"]
    codec => json
    # I've tried both "suricata" and "snort" here
    type => "suricata"
  }
}
filter {
  if [type] == "suricata" {
    mutate {
      #add_tag => [ "conf_file_0005"]
    }
  }
}
```

Both methods successfully send logs all the way through to Kibana,
but with neither method does Kibana recognize them as IDS logs
and insert them into the dashboards created for the rest of the Onion logs.

I'm sure there's something simple I'm missing here, but I'm not finding it.

Any help would be greatly appreciated!

--
JM

/* If you haven't found something strange during the day, it hasn't
been much of a day.
-- John A. Wheeler */

Thilina Pathirana

unread,
Jun 29, 2017, 2:14:12 AM6/29/17
to securit...@googlegroups.com
Thanks Doug, I will refer to Audrius's mail. The issue is still the same, so I will change the hardware and try again.


BR
Thilina





Audrius J

unread,
Jun 29, 2017, 7:31:35 AM6/29/17
to security-onion
Hi Thilina,

I think the first thing you need to do is disconnect the SPAN port from your IDS box. Wait and see whether, after a while, you can connect to Kibana at all. If not, try restarting the box and connecting to Kibana again a few minutes later (with the mirror port still disconnected).
If that still fails, something is wrong with your installation or setup.
In general, 300Mbps is quite a lot of traffic to process, so I don't think your hardware is capable of handling that load...

Regards,
Audrius

Wes

unread,
Jun 29, 2017, 8:28:14 AM6/29/17
to security-onion

JM.

What type do these logs show up as in Kibana (search in Discover)? The logs used for the IDS alert visualizations use a type of "snort" (type:snort).

Thanks,
Wes

Thilina Pathirana

unread,
Jun 29, 2017, 8:33:27 AM6/29/17
to securit...@googlegroups.com
Thanks Audrius, yes I did disconnect it and checked, but the system still didn't behave well. Anyway, I am now looking for a higher-capacity server. Once I get it I will reinstall and let you all know...


Thanks

Thilina


Jon Mark Allen

unread,
Jun 29, 2017, 12:20:38 PM6/29/17
to security-onion
On Thursday, June 29, 2017 at 7:28:14 AM UTC-5, Wes wrote:
> What type do these logs show up as in Kibana (search in Discover)? The logs
> used for the IDS alert visualizations use a type of "snort" (type:snort).
>
> Thanks, Wes

Wes,

I updated the 0004_input_suricata.conf file to tag the logs as type:
"snort"

```
input {
  file {
    path => ["/nsm/sensor_data/elk-onion-eth1/eve.json"]
    codec => json

    #type => "suricata"
    type => "snort"
  }
}
filter {
  #if [type] == "suricata" {
  if [type] == "snort" {
    mutate {
      #add_tag => [ "conf_file_0005"]
    }
  }
}
```

And after further review, the only EVE logs that logstash was showing
at all were the Suricata flow and stats events. I visually verified
that the alert events **are** in the same file, though, so that's
even more confusing.

When I filtered out those events, nothing showed up at all.

My understanding of the logstash process is input -> preprocess -> output.

Since the encoding of the Suricata EVE alerts is json, will the
logstash preprocessors for snort still work?

Brodie Mather

unread,
Jul 3, 2017, 11:31:31 AM7/3/17
to security-onion
Doug,

I was hoping to get a better idea of the layout of Security Onion with the ELK stack in place.

With ELK in place, have you removed the use of MySQL, or is there still some sort of database interaction present?

Also, how exactly is data fed to Kibana? Again, does it interact with databases at all, or is this a direct information transfer from Elasticsearch? If so, could you point me in the direction of some good documentation for this process?

Thanks,
Brodie

Wes

unread,
Jul 3, 2017, 11:48:15 AM7/3/17
to security-onion

Brodie,

MySQL still exists, just not in regard to ELSA. It is still used for Sguil/Squert.

Data is fed into Elasticsearch as it is received and parsed by Logstash (from network, syslog-ng, etc). That data is then queryable/viewable by Kibana. In regard to the Elastic Stack, to my knowledge, there is no interaction with the MySQL database, other than for authentication purposes (which is handled in conjunction with Apache).
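
For example, anything that can speak HTTP can query Elasticsearch directly on port 9200 (the index pattern and type in this sketch are only illustrative):

```
# query Elasticsearch directly, bypassing Kibana entirely
# (logstash-* is the usual Logstash index pattern; adjust to your setup)
curl -s 'localhost:9200/logstash-*/_search?q=type:bro_conn&size=10&pretty'
```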

Since we are still in a Technology Preview stance, there is not much documentation present, as it could likely change a great deal as we move forward. However, there is some brief documentation on the Security Onion wiki, and that is where documentation for these applications will reside in the future.

https://github.com/Security-Onion-Solutions/security-onion/wiki

Hope that helps to clarify.

Thanks,
Wes

Wes

unread,
Jul 3, 2017, 11:50:07 AM7/3/17
to security-onion

JM,

I'll have to take a look at the inputs and get back to you.

Thanks,
Wes

Brodie Mather

unread,
Jul 3, 2017, 12:29:35 PM7/3/17
to security-onion
Wes,

Thank you that did help quite a bit.

Just to make sure I am on the same page: Logstash essentially ingests what is sniffed and what is written to the log files (Bro, etc.); once Logstash parses it, it sends it to Elasticsearch (which stores it?), and then Kibana is used to query Elasticsearch for whatever data the user deems necessary and to build visualizations through its interface. Is that about accurate?

Would it be possible (if my assumptions above are correct) to query Elasticsearch from a separate program besides Kibana and store that data in a physical database table, or pull it directly into said program?

Thanks,
Brodie

wedgeshot

unread,
Jul 20, 2017, 11:34:40 PM7/20/17
to security-onion
On Friday, June 2, 2017 at 8:29:58 AM UTC-4, Doug Burks wrote:
> http://blog.securityonion.net/2017/06/towards-elastic-on-security-onion.html
>
> Please let us know what you think.
>
> Thanks in advance for any and all feedback!
>
> --
> Doug Burks


So.... I'm a few weeks late but things seem to be AOK in this release.

Here is what I did... We were running TP1 on this same machine.

- stop all processes
- apt-get remove --purge <ELK related packages>
- sudo soup
- ran so-setup
- downloaded TP2 script and commented out any pcap playback
- ran the TP2 script... profit

Although ELK fell on its face by the next morning... I changed the heap settings to 28g in /etc/nsm/securityonion.conf and restarted. Then I found that Logstash could not keep up, so my co-worker adjusted the workers to 16(?) and the memory(?) in the logstash.yml file, and things have been humming along for a few days now.
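
For reference, the settings involved live in roughly these two places (the heap variable names are the ones used in /etc/nsm/securityonion.conf; the logstash.yml worker key is the stock Logstash 5.x name, and all values here are illustrative):

```
# /etc/nsm/securityonion.conf -- heap sizes for the Elastic containers
ELASTICSEARCH_HEAP="28g"
LOGSTASH_HEAP="4g"

# logstash.yml -- worker count (key name from stock Logstash 5.x)
pipeline.workers: 16
```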

I like the bigger pie charts as the old ones were way too small.

I'm going to try the script to migrate data from a dev sensor over to a fresh install one day... we just ran out of time.


Thanks to everyone involved.

-Bob

Doug Burks

unread,
Jul 21, 2017, 6:08:28 AM7/21/17
to securit...@googlegroups.com
Thanks for the feedback, Bob!



--
Doug Burks

Kevin Branch

unread,
Jul 25, 2017, 8:18:44 PM7/25/17
to securit...@googlegroups.com
I'm late to the party, but I've now set up TP2 on a couple of servers.  I'm really impressed!  One TP2 server is monitoring a very busy mirror port, and Elastic put up with thousands of events per second even with all default settings for a while.  The dashboards were uber cool, and really were showing my own data, not just the imported/replayed stuff.

Then all new events ceased appearing in Kibana.  Tcpdump on the TP2 server shows Logstash replying with zero-window TCP acks to syslog-ng's attempts to push records via port 6050, so it's clear the pipeline is jammed.  Doing a so-elastic-restart does not fix this, nor does so-elastic-reset.   Rebooting the TP2 server itself also does not help.  Making these updates to securityonion.conf and restarting also did not help:
ELASTICSEARCH_HEAP="8g"
LOGSTASH_HEAP="4g"

/nsm is only around 50% full.

It appears "docker logs so-logstash" is just showing a few warnings like this per minute, no real errors.

[2017-07-26T00:07:31,161][WARN ][logstash.filters.grok    ] Timeout executing grok '(?<timestamp>(.*?))\t(?<id>(.*?))\t(?<certificate_version>(.*?))\t(?<certificate_serial>(.*?))\t(?<certificate_subject>(.*?))\t(?<certificate_issuer>(.*?))\t(?<certificate_not_valid_before>(.*?))\t(?<certificate_not_valid_after>(.*?))\t(?<certificate_key_algorithm>(.*?))\t(?<certificate_signing_algorithm>(.*))\t(?<certificate_key_type>(.*))\t(?<certificate_key_length>(.*))\t(?<certificate_exponent>(.*))\t(?<certificate_curve>(.*))\t(?<san_dns>(.*))\t(?<san_uri>(.*))\t(?<san_email>(.*))\t(?<san_ip>(.*))\t(?<basic_constraints_ca>(.*))\t(?<basic_constraints_path_length>(.*))' against field 'message' with value 'Value too large to output (8192 bytes)! First 255 chars are: 1500998744.916204        Fl79jJ3Hbzb9LVkF4       3       68AAFBB69AA529E4D362DF3E     CN=ssl003.qwqw.net,O=Instart Logic\\, Inc.,L=Mountain View,ST=California,C=US  CN=x.y.z        1499345138.000000       1521991866.000000       rsaEncryption   sha256WithRSAEncryption rsa     2048    65537   '!
 
Also, "docker logs so-elasticsearch" shows no errors, mostly just this warning about once per second.

[2017-07-26T00:12:50,251][WARN ][o.e.d.i.m.MapperService  ] [unmapped_type:string] should be replaced with [unmapped_type:keyword]

Any ideas where I might look next?  I'm already getting comfortable with Elastic but am pretty new to the Docker environment and am unsure what that brings to the mix as far as how to debug a stuck pipeline.

Thanks,
Kevin

Doug Burks

unread,
Jul 26, 2017, 6:11:43 AM7/26/17
to securit...@googlegroups.com
Hi Kevin,

Thanks for the feedback!

Did you have old Bro logs in ELSA that were imported into Elastic? If
so, were they perhaps from an older version of Bro with a different
format?

If you don't care about the data (and you shouldn't since this is just
a technology preview), you might try stopping the logstash container,
purging the queue in /nsm/logstash/queue/, and then restarting the
logstash container.
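
A minimal sketch of those steps (using docker stop/start on the so-logstash container and the queue path mentioned above):

```
# stop the Logstash container, purge the persistent queue, then start it again
sudo docker stop so-logstash
sudo rm -rf /nsm/logstash/queue/*
sudo docker start so-logstash
```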

Kevin Branch

unread,
Jul 26, 2017, 6:45:17 PM7/26/17
to securit...@googlegroups.com
Thanks Doug, that help was spot on.  Purging the logstash queue as you described got things flowing again.  I had almost 1 GB of data backed up in there.
I'll watch and see if it stays up longer this time, now that I've substantially upped heap memory for logstash and elasticsearch, as well as added a BPF to tune out excessive traffic.

This box was fresh installed from the latest SO ISO image and then TP2 was installed.  There would have been Bro logs built up in between, but nothing from before the SO reinstall.

The server is purely a dev box and I expect to purge data on it repeatedly.

Kevin

Kevin Branch

unread,
Jul 26, 2017, 8:24:29 PM7/26/17
to securit...@googlegroups.com
I was seeing a number of these CSV parsing failures in the logstash log before data stopped reaching Elasticsearch. Could it be that the Logstash CSV parsing failures are causing the persistent queue to back up and stop the pipeline?  It appears that if even a single event in a given persistent queue page does not get acked, for example due to a CSV parsing failure, the queue page will not be deleted.

The CSV parsing errors seem to be tripped by the presence of double quotes in a log record.  For example, this log record in /nsm/bro/spool/bro/software.log:
1501111747.054579       172.18.1.132    -       HTTP::BROWSER   SophosUpdateLibrary     1       0       -       -       SDDS/2  SophosUpdateLibrary/1.0 SDDS/2.0 (u="FAVLw6ADAC07O")
tripped this Logstash CSV parsing error:
[2017-07-26T23:29:07,999][WARN ][logstash.filters.csv     ] Error parsing csv {:field=>"message", :source=>"1501111747.054579\t172.18.1.132\t-\tHTTP::BROWSER\tSophosUpdateLibrary\t1\t0\t-\t-\tSDDS/2\tSophosUpdateLibrary/1.0 SDDS/2.0 (u=\"FAVLw6ADAC07O\")", :exception=>#<CSV::MalformedCSVError: Illegal quoting in line 1.>}

Here is another CSV-breaking record from bro's dns.log, because it is about a DNS TXT record containing some quotes:
1501113855.500932       Cjykrt292flJQSCe51      172.12.2.190    57990   4.2.2.1 53      udp     32487   0.000188        jjj.snprobbx.pbz.m.00.s.sophosxl.net    1       C_INTERNET      16      TXT     0       NOERROR F       F       T       T       0       TXT 139 w l h 20 235079\x09!QV@824NO\x2c=D/?)L:"PY*XE]=/3H&51E"NS!_V`TV:(?W#DW80(V8.*YD0=)T!(S3'\x2c']<XG>GU`6@Y=^'\\;=@ED0P_LFLL;'A"T*_)/_ #837dbc0acf55f370        10.000000       F

I also see the same issue with bro_syslog records in which the syslog message contains double quotes.

It rather appears that something needs to happen pre-csv in Logstash to escape any existing double quotes in a way that will be recognized by the csv filter.

Kevin





Audrius J

unread,
Jul 27, 2017, 3:21:57 AM7/27/17
to security-onion
Kevin,

This is normal Logstash behavior; you will always see errors like this.
The main problem here is that some fields are not well formed, and that is not an SO or Logstash problem: the data contains strings that are not compliant with the CSV RFC.
So you will usually see such errors from time to time. But I think you will still see these logs in Kibana; they will not be parsed, but they are still searchable (if they are not dropped somewhere in the config). Just make a visualization on the tags and you will see what kinds of errors you have.

If the persistent queue is filling up, that means Logstash can't handle the load. You can try to increase the number of Logstash workers with the -w flag, but that has its own bottleneck.
From our point of view, the ideal situation is to avoid heavy parsing, so we are now evaluating a solution that adds additional Bro logging and also saves the logs in JSON, then ships that data to Logstash. Because it already has JSON structure, no parsing should be needed on the data itself, so we expect a boost in performance. For data enrichment you can still use Logstash, but at least you only touch the fields that actually require processing (like GeoIP, etc.).
In the lab, for now, it works quite well.
To have additional logging we use this bro script https://github.com/J-Gras/add-json.
Next week I will try to move this solution to some of our production nodes and figure out whether it improves things.

In my experience, we use parsing heavily, and a dedicated 48-core Logstash node can parse only about 15k events/s. We will see how that changes.

Also take into account that even though you have big storage, Elastic cannot always utilize it all. On SSDs we see that if we have more than 6-8TB of data, the Java heap goes above 75-85% and can quickly take your cluster down. So you always need to keep an eye on the Java heap.
The ES people say you need to find the right spot by experimenting on your own data.
A solution here can also be to add additional ES data nodes with Docker on the same machine to improve things.
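
One quick way to keep an eye on that heap is Elasticsearch's _cat API (shown here against the local instance on port 9200):

```
# per-node heap usage at a glance
curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max'
```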

Again, thanks to Doug and Wes for their effort!

Regards,
Audrius

Kevin Branch

unread,
Jul 27, 2017, 4:50:31 AM7/27/17
to securit...@googlegroups.com
Hi Audrius,

Thanks for your comments.  I appreciate your idea of wanting to unburden logstash from unnecessary parsing.  Since bro already knows about the separate fields before it writes them to a text log file, it makes sense that it would more efficiently write that text log in json format representing those fields, compared to logstash having to parse the text back into separate fields again.  The more we can hand json data over to logstash with fields already separated out, the better.   Do let us know whether or not your experiment with the json-logging bro script has a significant performance benefit for you.

I'm looking at the same issue with logstash having to do all the parsing of the diverse OS logs arriving from OSSEC agents, vs using the Wazuh fork of OSSEC to send these logs that have already been parsed out by Wazuh, over to logstash in json form with the fields already broken out.  Then logstash only needs to do some enrichment and some supplemental parsing rather than the heavy lifting.  This saves on the parsing load and saves on me having to recreate in logstash all the field decoding already being done by Wazuh anyway.  I plan to experiment with this approach in the coming week.

About the csv errors, is it inherently illegal for a double quote to appear inside a csv field, if that double quote is properly escaped?  I was thinking we just needed to make sure a logstash filter always escapes any double quotes in the field to be parsed, before invoking the csv parser.
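
A rough Logstash sketch of that kind of pre-csv cleanup (hypothetical, not the shipped SO config, and cruder than true escaping: it simply strips the embedded double quotes before the csv filter runs; the type and field names are placeholders):

```
filter {
  if [type] == "bro_software" {
    # drop embedded double quotes so the csv filter's quote handling
    # never triggers (lossy workaround; type/field names are illustrative)
    mutate {
      gsub => [ "message", '"', "" ]
    }
    # ... the existing csv { } filter for this log type would follow here ...
  }
}
```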

Kevin


Kevin Branch

unread,
Jul 27, 2017, 11:47:20 AM7/27/17
to securit...@googlegroups.com
With TP2, the "Log Count Over Time" visualization in each of the dashboards seems to have a funny problem with the x-axis label neither corresponding to the particular time window I choose, like last 24 hours, nor to time intervals of the visual, like each bar representing 5 minutes.  
Also, specifically "DNS - Log Count Over Time" appears to be broken.  The x-axis just says "_all" and there appears to be no date histogram configured in that visualization at all.

Kevin


Kevin Branch

unread,
Jul 27, 2017, 12:31:35 PM7/27/17
to securit...@googlegroups.com
This time TP2 lasted all night before jamming up.  It is definitely the persistent queue getting backed up: 

root@nsm-dev:~# ls -alh  /nsm/logstash/queue/main/
total 792M
drwxr-xr-x 2 branch branch 4.0K Jul 27 15:53 .
drwxr-xr-x 3 branch branch 4.0K Jul 26 22:26 ..
-rw-r--r-- 1 branch branch   34 Jul 27 15:54 checkpoint.15
-rw-r--r-- 1 branch branch   34 Jul 27 15:02 checkpoint.16
-rw-r--r-- 1 branch branch   34 Jul 27 15:45 checkpoint.17
-rw-r--r-- 1 branch branch   34 Jul 27 15:54 checkpoint.head
-rw-r--r-- 1 branch branch    0 Jul 26 22:26 .lock
-rw-r--r-- 1 branch branch 250M Jul 27 14:20 page.15
-rw-r--r-- 1 branch branch 250M Jul 27 15:02 page.16
-rw-r--r-- 1 branch branch 250M Jul 27 15:45 page.17
-rw-r--r-- 1 branch branch 250M Jul 27 15:54 page.18

I'd say this is not a case of Logstash falling behind because of resource exhaustion.  I see plenty of free CPU, disk I/O capacity, and unused Logstash heap memory.   I really suspect log parsing failures are causing persistent queue pages to stick around and stack up until the 1GB queue space limit is reached.  See:

I tried to toy with Logstash's Dead Letter Queues feature but found that it is only supported by Logstash 5.5+.  I figure TP3 will include 5.5, so I'll wait till then to explore that avenue further.

For now I'm switching Logstash over to the default memory queuing to see if I can keep Logstash going that way, albeit at the expense of losing some records here and there.

Kevin



Wes

unread,
Jul 27, 2017, 3:05:14 PM7/27/17
to security-onion
On Thursday, July 27, 2017 at 11:47:20 AM UTC-4, Kevin Branch wrote:
> With TP2, the "Log Count Over Time" visualization in each of the dashboards seems to have a funny problem with the x-axis label neither corresponding to the particular time window I choose, like last 24 hours, nor to time intervals of the visual, like each bar representing 5 minutes.  
> Also, specifically "DNS - Log Count Over Time" appears to be broken.  The x-axis just says "_all" and there appears to be no date histogram configured in that visualization at all.

Kevin,

Thanks for the heads up!

I've confirmed the issue of the DNS Log Count Over Time visualization not being correctly formatted with the date histogram in TP2. However, with the other Log Count Over Time visualizations, I am not able to duplicate the time range issue you have described. At any rate, these should not be an issue when going to TP3, which should be released soon. For now, you should be able to manually modify the DNS Log Count Over Time visualization.

Thanks,
Wes

Doug Burks

unread,
Jul 28, 2017, 4:31:27 PM7/28/17
to securit...@googlegroups.com

Brodie Mather

unread,
Aug 3, 2017, 5:26:57 PM8/3/17
to security-onion
Still running TP2. When you process the bro_conn logs in Logstash (and ultimately move them into Elasticsearch), you remove a lot of fields if they have a value of "-".

I noticed that you do not remove the service field if its value is "-".
Was this on purpose?
My reason for asking is that I am trying to query Elasticsearch for bro_conn documents that are not matched up with any particular type of connection (i.e. bro_weird, bro_http, etc.).

However, when these documents are indexed into Elasticsearch they are analyzed. Therefore, when I query Elasticsearch for a match on "service": "-" I get zero hits, because the analyzer returns zero tokens and thereby does not recognize it as a literal - when I am searching for it.

Any feedback on the reasoning for keeping the service field even when it is set to "-" would be greatly appreciated.

Thank you,
Brodie

Wes

unread,
Aug 3, 2017, 6:02:33 PM8/3/17
to security-onion

Brodie,

Have you tried the following?

service.keyword:"-"
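
For reference, the equivalent term query against the Elasticsearch API directly uses the non-analyzed .keyword sub-field (the logstash-* index pattern here is assumed):

```
curl -s -XGET 'localhost:9200/logstash-*/_search?pretty' -H 'Content-Type: application/json' -d '
{
  "query": { "term": { "service.keyword": "-" } },
  "size": 5
}'
```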

Thanks,
Wes

Brodie Mather

unread,
Aug 4, 2017, 10:54:17 AM8/4/17
to security-onion
Wes,

Thank you, that was exactly what I was looking for. Couldn't quite remember that one.

Thanks,
Brodie

Brodie Mather

unread,
Aug 10, 2017, 3:22:36 PM8/10/17
to security-onion
Where is the log for elasticsearch located on TP2?

Wes Lambert

unread,
Aug 10, 2017, 3:41:00 PM8/10/17
to securit...@googlegroups.com
Brodie,

Try looking in /var/log/elasticsearch/

Thanks,
Wes


Brodie Mather

unread,
Aug 10, 2017, 5:29:21 PM8/10/17
to security-onion
I have no elasticsearch folder in /var/log/.

Wes Lambert

unread,
Aug 10, 2017, 5:42:42 PM8/10/17
to securit...@googlegroups.com
My apologies, Brodie.

Elasticsearch didn't log to a file until June 5th, 2017.

TP2 was released on June 2nd, 2017.

You can still view the logs for Elasticsearch by doing:

sudo docker logs -f so-elasticsearch


Thanks,
Wes

Brodie Mather

unread,
Aug 10, 2017, 5:55:00 PM8/10/17
to security-onion
Wes,

Thank you, I will check that out.

Not sure if you can help with this other issue but I will throw it out there as well.
The reason I asked about the logs is that lately, when I load up Kibana, it shows STATUS: Red, the UI settings say the elasticsearch plugin is red, and plugin:elasti...@5.4.0 says no living connections.

Not sure if you know the cause and or the remedy for this situation.

If you have any ideas it would be appreciated.

Thank you,

Brodie

Wes Lambert

unread,
Aug 10, 2017, 5:56:47 PM8/10/17
to securit...@googlegroups.com
You may want to first try 'sudo so-elastic-restart' to see if it helps to resolve your issue.

You may need to wait for a couple of minutes afterwards for everything to initialize.

Thanks,
Wes



Wes Lambert

unread,
Aug 10, 2017, 5:57:20 PM8/10/17
to securit...@googlegroups.com
You can also check the status of containers with:

sudo docker ps

or

sudo so-elastic-status 

Thanks,
Wes


Brodie Mather

unread,
Aug 10, 2017, 6:10:43 PM8/10/17
to security-onion
Wes,

After it finishes initializing after the restart, it says plugin:elasti...@5.4.0 Unable to connect to Elasticsearch at http://elasticsearch:9200

Doug Burks

unread,
Aug 10, 2017, 6:14:16 PM8/10/17
to securit...@googlegroups.com
Hi Brodie,

Since this is just a "technology preview" (and an older one at that),
we're not able to devote lots of time to troubleshooting. At this
point, I'd recommend wiping and installing Technology Preview 3:
http://blog.securityonion.net/2017/07/towards-elastic-on-security-onion.html
--
Doug Burks

Brodie Mather

unread,
Aug 10, 2017, 6:22:50 PM8/10/17
to security-onion
Doug,

No problem. I think a full system restart fixed the problem for a little while a few days ago. I may just do that again. I understand you have to set your priorities; I was just wondering if there was any known quick fix.
I will consider moving to TP3. Thanks for all the work you guys are doing.

Thanks,
Brodie
