logstash custom filters help


Konrad W

Apr 16, 2018, 10:39:52 AM
to security-onion
Hello,

I am trying to set up a custom Logstash filter to parse Juniper SRX logs. Without making any changes, I see messages being parsed as follows:

{
  "_index": "xxxxx:logstash-syslog-2018.04.16",
  "_type": "doc",
  "_id": "TIbXzmIBbGlsOh53NQat",
  "_version": 1,
  "_score": null,
  "_source": {
    "port": 37766,
    "@version": "1",
    "syslog-host_from": "X.X.X.X",
    "syslog-sourceip": "X.X.X.X",
    "event_type": "RT_FLOW",
    "syslog-facility": "user",
    "syslog-host": "X.X.X.X",
    "tags": [
      "syslogng",
      "syslog"
    ],
    "syslog-legacy_msghdr": "RT_FLOW: ",
    "@timestamp": "2018-04-16T14:23:53.153Z",
    "logstash_time": 0.0016071796417236328,
    "host": "gateway",
    "syslog-tags": ".source.s_network",
    "message": "RT_FLOW_SESSION_CLOSE: session closed idle Timeout: X.X.X.X/53946->X.X.X.X/443 None X.X.X.X/4042->X.X.X.X/443 source-nat-rule None 17 inside-to-untrust inside untrust 154354 8(2658) 8(2020) 74 UNKNOWN UNKNOWN N/A(N/A) vlan.2 UNKNOW",
    "syslog-priority": "info"
  },
  "fields": {
    "@timestamp": [
      "2018-04-16T14:23:53.153Z"
    ]
  },
  "highlight": {
    "event_type": [
      "@kibana-highlighted-field@RT_FLOW@/kibana-highlighted-field@"
    ]
  },
  "sort": [
    1523888633153
  ]
}


I started by creating a filter that initially just adds a "firewall" tag and sets the "type" field to "juniper_srx". I placed the file "1005_preprocess_junipersrx.conf" in /etc/logstash/custom and restarted Logstash, but I do not see the tag applied nor the type field changed in the logs. I do see that the file was copied over to /etc/logstash/conf.d/.

Contents of the 1005_preprocess_junipersrx.conf file:

filter {
  if "syslog" in [tags] {
    if [event_type] == "RT_FLOW" {
      mutate {
        add_field => { "type" => "juniper_srx" }
        add_tag => [ "firewall" ]
      }
    }
  }
}

Thank you in advance for your help

Konrad

Wes Lambert

Apr 16, 2018, 11:33:25 AM
to securit...@googlegroups.com
Have you tried checking for errors in /var/log/logstash/logstash.log?

Thanks,
Wes




--
Follow Security Onion on Twitter!
https://twitter.com/securityonion
---
You received this message because you are subscribed to the Google Groups "security-onion" group.
To unsubscribe from this group and stop receiving emails from it, send an email to security-onion+unsubscribe@googlegroups.com.
To post to this group, send email to security-onion@googlegroups.com.
Visit this group at https://groups.google.com/group/security-onion.
For more options, visit https://groups.google.com/d/optout.




Konrad W

Apr 16, 2018, 12:10:56 PM
to security-onion
Hey Wes,

Yes, I did, and the only errors I see since restarting Logstash are related to domainstats (time-outs).

I noticed some warnings like the one below before the Logstash restart, but I do not see any recent ones (after restarting Logstash).

[2018-04-16T14:18:18,936][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}

Thanks,

Konrad

Wes Lambert

Apr 16, 2018, 1:27:15 PM
to securit...@googlegroups.com
You may try something like the following instead of the previous filter to see if it helps:

filter {
  if "syslog" in [tags] {
    if [message] =~ "RT_FLOW" {
      mutate {
        replace => { "event_type" => "juniper_srx" }
        add_tag => [ "firewall" ]
      }
    }
  }
}

Thanks,
Wes


Konrad W

Apr 16, 2018, 2:14:44 PM
to security-onion
Hey Wes,

So I see the "firewall" tag applied now, but the "event_type" field value was not changed to "juniper_srx" for some reason.

Thanks

Konrad

{
  "_index": "xxxxx:logstash-syslog-2018.04.16",
  "_type": "doc",
  "_id": "Aoeiz2IBbGlsOh53RoYi",
  "_version": 1,
  "_score": null,
  "_source": {
    "syslog-facility": "user",
    "syslog-host_from": "x.x.x.x",
    "message": "RT_FLOW_SESSION_CLOSE: session closed idle Timeout: x.x.x.x/56841->x.x.x.x/53 junos-dns-udp x.x.x.x/2672->x.x.x.x/53 inside None 17 inside-to-untrust inside untrust 14662 2(142) 2(261) 3 UNKNOWN UNKNOWN N/A(N/A) vlan.207 UNKNOWN",
    "syslog-sourceip": "x.x.x.x",
    "logstash_time": 0.001325845718383789,
    "syslog-tags": ".source.s_network",
    "tags": [
      "syslogng",
      "syslog",
      "firewall"
    ],
    "@timestamp": "2018-04-16T18:05:41.177Z",
    "syslog-legacy_msghdr": "RT_FLOW: ",
    "@version": "1",
    "host": "gateway",
    "port": 51854,
    "syslog-priority": "info",
    "event_type": "RT_FLOW",
    "syslog-host": "x.x.x.x"
  },
  "fields": {
    "@timestamp": [
      "2018-04-16T18:05:41.177Z"
    ]
  },
  "highlight": {
    "event_type": [
      "@kibana-highlighted-field@RT_FLOW@/kibana-highlighted-field@"
    ],
    "syslog-legacy_msghdr": [
      "@kibana-highlighted-field@RT_FLOW@/kibana-highlighted-field@:"
    ],
    "event_type.keyword": [
      "@kibana-highlighted-field@RT_FLOW@/kibana-highlighted-field@"
    ]
  },
  "sort": [
    1523901941177
  ]
}

Wes Lambert

Apr 16, 2018, 2:28:49 PM
to securit...@googlegroups.com
Actually, try changing 'event_type' back to 'type' in the replace -- I forgot it gets changed later in the pipeline.
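With that change, the suggested filter would look something like this (same structure as before, only the replace target differs):

filter {
  if "syslog" in [tags] {
    if [message] =~ "RT_FLOW" {
      mutate {
        replace => { "type" => "juniper_srx" }
        add_tag => [ "firewall" ]
      }
    }
  }
}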

Thanks,
Wes


Konrad W

Apr 16, 2018, 3:23:50 PM
to security-onion
Thank you for your help Wes. That did it.

I was actually able to go back to my original filter, change "event_type" to "type", and use "replace" instead of "add_field" in the mutate. I think those were my initial mistakes. This way I can extend it for different syslog types in the future with more "if" statements. Below is the final working filter:

filter {
  if "syslog" in [tags] {
    if [type] == "RT_FLOW" {
      mutate {
        replace => { "type" => "juniper_srx" }
        add_tag => [ "firewall" ]
      }
    }
  }
}

I will continue with parsing the "message" itself next. Will see how that goes :)
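For anyone following along, a grok-based sketch of that next step might look something like the following. The pattern and field names (close_reason, source_ip, etc.) are illustrative guesses against the RT_FLOW_SESSION_CLOSE message shown earlier, not a tested parser:

filter {
  if [type] == "juniper_srx" {
    grok {
      # Hypothetical pattern for the start of an RT_FLOW_SESSION_CLOSE message;
      # the field names here are made up for illustration.
      match => { "message" => "RT_FLOW_SESSION_CLOSE: session closed %{DATA:close_reason}: %{IP:source_ip}/%{INT:source_port}->%{IP:destination_ip}/%{INT:destination_port}" }
    }
  }
}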

Quick question: if I use grok patterns and define field names that do not exist, will they get added automatically, or do I need to add them first?

Wes Lambert

Apr 16, 2018, 5:21:08 PM
to securit...@googlegroups.com
Konrad,

You can name them whatever you wish during pattern matching without having to add the fields beforehand.  Keep in mind, however, that if you are adding to an existing index, you will need to add a new template, or modify the existing template in /etc/logstash/custom/logstash-template.json, so that the fields can be correctly mapped and cached in Kibana.  If you do not, the fields will show up in Kibana, but they will not be indexed in ES.  This may seem like a bit of extra work, but it helps prevent field explosion within the main indices.
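As an illustration (not the exact Security Onion template structure -- the field names and types here are examples, and the layout should follow what is already in logstash-template.json), a field mapping entry might look roughly like:

"mappings": {
  "doc": {
    "properties": {
      "source_ip": { "type": "ip" },
      "source_port": { "type": "integer" }
    }
  }
}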

Thanks,
Wes


Konrad W

Apr 16, 2018, 10:40:25 PM
to security-onion
Hi Wes,

So, for example, if I want to use the existing logstash-firewall-* index that is defined in logstash-template.json, what I need to do is copy the file to the /etc/logstash/custom/ folder, add the new fields to it, and restart the service, and it will replace the original file with this one...correct?

There are some fields I would like to re-use from the existing template (source_ip, destination_ip, etc.) for normalization purposes, so that if I search for a specific source_ip, for example, I get logs related to that IP from various sources (Bro, firewall, etc.).

What happens if that template file ever gets changed/updated as part of Security Onion updates, and my custom file then overwrites it again? I guess that could be an issue, no?

Also, what if I define a new template/index instead of modifying the existing one -- can I have the same field names defined there as in the existing template (logstash-template.json), e.g. source_ip, destination_ip, etc.?

Thank you again for helping me out. I am a bit of a noob here with ELK.

Konrad

Wes Lambert

Apr 17, 2018, 8:16:24 AM
to securit...@googlegroups.com
Konrad,

If there are updates to the default template, then you will need to manually merge those into your custom template, as your file will overwrite the default template.  If you define a new template, you can have the same field names in there -- you will just need to make sure the data types match if the same field is used across different indices.
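For example (an illustrative sketch, not pulled from the actual templates): if the default template maps source_ip as the "ip" type, the new template should declare it identically, e.g.:

"source_ip": { "type": "ip" },
"destination_ip": { "type": "ip" }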

Thanks,
Wes


Konrad W

Apr 17, 2018, 9:01:02 AM
to security-onion
Thank you very much again for all your help! I will create a new template and index for this log source and tie it all together in the output file.

Konrad

Konrad W

Apr 20, 2018, 8:38:18 AM
to security-onion
Hey Wes,

I created a template for the Juniper SRX. I basically copied logstash-template.json to the custom folder, modified it by removing the fields not required and adding the ones I need, changed the index pattern, etc. I also created a new output file and referenced the new template. I see the template is copied over to the /etc/logstash folder.

When I restart Logstash, I get the error below:

output {
  elasticsearch {
    # This setting must be a path
    # File does not exist or cannot be opened /juniper_srx-template.json
    template => "/juniper_srx-template.json"
    ...
  }
}

Below is the output file:

output {
  if "juniper_srx" in [tags] and "test_data" not in [tags] {
    # stdout { codec => rubydebug }
    elasticsearch {
      hosts => elasticsearch
      index => "logstash-juniper_srx-%{+YYYY.MM.dd}"
      template_name => "logstash-juniper_srx"
      template => "/juniper_srx-template.json"
      template_overwrite => true
    }
  }
}


Here is the beginning of the template file:

more ../juniper_srx-template.json
{
  "index_patterns": ["logstash-juniper_srx-*"],
  "version": 50001,
  "order": 0,
  "settings": {
    "number_of_replicas": 0,
    "number_of_shards": 1,
    "index.refresh_interval": "30s"
  },

Would you know what I am missing here?

Thanks

Konrad

Wes Lambert

Apr 20, 2018, 11:52:16 AM
to securit...@googlegroups.com
Konrad,

Please try putting the following in /etc/nsm/securityonion.conf:

LOGSTASH_OPTIONS="--volume /etc/logstash/juniper_srx-template.json:/juniper_srx-template.json:ro"

Then restart Logstash.

Thanks,
Wes


Konrad W

Apr 20, 2018, 4:15:24 PM
to security-onion

That did it...thank you again!

Konrad
