Filebeat


senatorh...@gmail.com

Nov 1, 2018, 3:09:39 PM
to security-onion
Is anyone willing to share a Logstash .conf file for Filebeat?

I would like to start by parsing a simple CSV logfile. I don't even require headers to be assigned.

Right now I just get the entire line in the message field.

I also need to understand how to include only logs with a specific tag (set in the client's filebeat.yml file).

Any help getting started would be highly appreciated.

Wes Lambert

Nov 1, 2018, 5:50:35 PM
to securit...@googlegroups.com
It's a little difficult to determine what exactly you want to parse.  Could you provide an example log?

Thanks,
Wes

--
Follow Security Onion on Twitter!
https://twitter.com/securityonion
---
You received this message because you are subscribed to the Google Groups "security-onion" group.
To unsubscribe from this group and stop receiving emails from it, send an email to security-onio...@googlegroups.com.
To post to this group, send email to securit...@googlegroups.com.
Visit this group at https://groups.google.com/group/security-onion.
For more options, visit https://groups.google.com/d/optout.



senatorh...@gmail.com

Nov 2, 2018, 5:01:19 AM
to security-onion
It is a CSV file from the Windows NPS system. This is my current config file, which I have copied to the custom directory.

filter {
  if [type] == "log" and [beat.hostname] == "SERVER.DOMAIN.LOCAL" {
    csv {
      columns => ["ServerName","Service","Date","Time","PacketType","UserName","FQDN","CalledStaID","CallingStaID","CallbackNumber","FramedIPAddress","NASIdent","NASIP","NASPort","ClientVendor","ClientIP","ClientFriendlyName","EventTimeStamp","PortLimit","NASPortType","ConnectionInfo","FramedProtocol","ServiceType","AuthenticationType","PolicyName","Reason-Code","Class","SessionTimeOut","IdleTimeOut","TerminationAction","EAPFriendlyName","AcctStatusType","AcctDelayTime","AcctInputOctets","AcctOutputOctets","Acct-Session-Id","Acct-Authentic","Acct-Session-Time","Acct-Input-Packets","Acct-Output-Packets","Acct-Terminate-Cause","Acct-Multi-Ssn-ID","Acct-Link-Count","Acct-Interim-Interval","Tunnel-Type","Tunnel-Medium-Type","Tunnel-Client-Endpt","Tunnel-Server-Endpt","Acct-Tunnel-Conn","Tunnel-Pvt-Group-ID","Tunnel-Assignment-ID","Tunnel-Preference","MS-Acct-Auth-Type","MS-Acct-EAP-Type","MS-RAS-Version","MS-RAS-Vendor","MS-CHAP-Error","MS-CHAP-Domain","MS-MPPE-Encryption-Types","MS-MPPE-Encryption-Policy","Proxy-Policy-Name","Provider-Type","Provider-Name","Remote-Server-Address","MS-RAS-Client-Name","MS-RAS-Client-Version"]
    }
  }
}

Wes Lambert

Nov 2, 2018, 7:42:40 AM
to securit...@googlegroups.com
Instead of:

if [type] == "log" and [beat.hostname] == "SERVER.DOMAIN.LOCAL"

have you considered setting a tag in filebeat.yml and filtering on that tag? I don't think the type is actually "log" at that point; I think you are misunderstanding the "type" setting in filebeat.yml.


ex. filebeat.yml:

tags: ["mytag"]
  

Ex. Logstash config:

if "mytag" in [tags] 

You should also be able to add fields (https://www.elastic.co/guide/en/beats/filebeat/6.0/include-fields.html), such as event_type, in filebeat.yml and filter on that with something like:

if [event_type] == "myeventtype" 
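Putting both suggestions together, a minimal sketch might look like the following. The tag, field value, and log path are placeholders, and depending on your Filebeat 6.x version the top-level section may be `filebeat.prospectors` instead of `filebeat.inputs`:

```
# filebeat.yml (client side) -- path and tag are placeholders
filebeat.inputs:
  - type: log
    paths:
      - "C:/nps/logs/*.log"
    tags: ["mytag"]
    fields:
      event_type: "myeventtype"
    # without fields_under_root, the field lands at [fields][event_type]
    fields_under_root: true

# Logstash filter -- match on the tag or on the added field
filter {
  if "mytag" in [tags] or [event_type] == "myeventtype" {
    csv {
      columns => ["ServerName","Service","Date","Time"]   # trimmed; use your full list
    }
  }
}
```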

Thanks,
Wes

 


senatorh...@gmail.com

Nov 2, 2018, 8:21:25 AM
to security-onion
I tried filtering on tags instead of type. To me it doesn't really matter; I just need to separate my different source machines.

But all of my CSV data still ends up in the message field when looking in Kibana. Obviously I am missing something. But what?

Wes Lambert

Nov 2, 2018, 8:47:17 AM
to securit...@googlegroups.com
You may want to try adding a tag with mutate to check if your filter is actually matching:

ex.

mutate {
  add_tag => ["matches_filter"]
}
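For instance, dropped into the conditional from earlier (columns trimmed here), the extra tag makes it easy to see in Kibana whether events actually entered the block:

```
filter {
  if "mytag" in [tags] {
    csv {
      columns => ["ServerName","Service","Date","Time"]   # trimmed; use your full list
    }
    # debug marker: only set when the conditional matched
    mutate {
      add_tag => ["matches_filter"]
    }
  }
}
```

If events show up in Kibana without the matches_filter tag, the conditional never matched and the problem is upstream of the csv filter.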



senatorh...@gmail.com

Nov 2, 2018, 10:33:40 AM
to security-onion
Filtering on tags appears to have resolved my issue. Great & thanks a lot.
Now I just need to learn about the indexing of the new fields inside of elastic.

senatorh...@gmail.com

Nov 2, 2018, 2:54:33 PM
to security-onion
Seeing the separated fields in Kibana now, but it appears they are not properly initialized.

"No cached mapping for this field"

I tried refreshing the fields from the Management > Index page.

Do I need to create a custom JSON file too? Instructions appreciated :-)

Wes Lambert

Nov 2, 2018, 7:00:01 PM
to securit...@googlegroups.com
Unless you are populating fields to a custom index, you'll need to have those fields mapped via a mapping template, and put it in /etc/logstash/custom (as well as specify it as a bind mount in /etc/nsm/securityonion.conf in LOGSTASH_OPTIONS).  You can find an example in /etc/logstash/logstash-template.json.
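For reference, here is a heavily stripped-down sketch of what such a template might contain. The index pattern and document type below are assumptions, and the two field names are just examples pulled from the columns list earlier in the thread -- use /etc/logstash/logstash-template.json as the authoritative starting point:

```
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "doc": {
      "properties": {
        "UserName": { "type": "keyword" },
        "ClientIP": { "type": "ip" }
      }
    }
  }
}
```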


Thanks,
Wes




senatorh...@gmail.com

Nov 3, 2018, 6:32:32 AM
to security-onion
I found that I get searchable fields in Kibana if I just reuse field names which are already present in the indexes. That is not the best solution. I know you kind of already tried explaining, but I didn't quite get it. It is a bit new to me.

Can I simply define new field names and their types directly in beats-template.json? I really don't want too much complexity before I understand the basics.

Another thing:
What is the purpose of having three files (input/filter/output) for Logstash parsing? Is it mandatory or can I consolidate everything into one file?

Wes Lambert

Nov 3, 2018, 7:16:45 AM
to securit...@googlegroups.com
You'll actually want to re-use fields as much as possible to prevent field explosion.  That was the primary reason we chose to explicitly map field data types for the core indices -- to prevent folks from having various issues with dynamic mapping/conflicts/etc, and prevent the auto-generation of thousands and thousands of fields.  I know it may not be ideal at the moment to have to add things in various places, but it is a safeguard that we feel needs to be in place for the time being.

You can certainly create a new index if you like, and decide if you would like to map the fields explicitly there, or leave it to be auto-typed.

Keep in mind, if there are field data type conflicts (fields with the same name and different data types) across different indices in ES, then you will not be able to use them for search/aggregation in Kibana, and it will complain of the conflicts.  At that point you can either re-index with the correct data type for the field(s) defined, or delete the offending indices.

Instead of directly editing beats-template.json, we recommend that you copy the file to /etc/logstash/custom and modify it there, as it may get updated in the future, and your changes would be overwritten. By placing it in Logstash custom, it will replace the one in /etc/logstash, even if the one managed by Security Onion changes.
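In other words, the workflow is roughly the following. The LOGSTASH_OPTIONS mount syntax here is an assumption -- check your version's documentation for the exact form:

```
# Copy the managed template so upgrades don't clobber your edits
sudo cp /etc/logstash/beats-template.json /etc/logstash/custom/beats-template.json

# Edit your copy in /etc/logstash/custom, then expose it to the
# Logstash container via /etc/nsm/securityonion.conf, e.g.:
# LOGSTASH_OPTIONS="--volume=/etc/logstash/custom/beats-template.json:/beats-template.json:ro"
```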

Our Logstash pipeline config is largely based on Justin Henderson's work, which you can find here:


The purpose of having input/preprocess/output files is so that, at various points in the pipeline, we can add or modify things as we wish, based on certain conditions, for multiple types of events.  I think this separation allows us to be more granular with how we want to process events.
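As a concrete illustration (the file names here are made up -- Security Onion's actual conf.d files are numbered differently), Logstash loads every .conf file in its config directory in lexical order and treats them as one concatenated pipeline:

```
# /etc/logstash/conf.d/0001-input.conf
input  { beats { port => 5044 } }

# /etc/logstash/conf.d/5000-filter-nps.conf
filter { if "mytag" in [tags] { csv { columns => ["ServerName","Service"] } } }

# /etc/logstash/conf.d/9999-output.conf
output { elasticsearch { hosts => ["localhost:9200"] } }
```

So the split into separate files is a convention for ordering and maintainability rather than a Logstash requirement; a single file containing all three sections behaves the same way.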

Thanks,
Wes

 






senatorh...@gmail.com

Nov 3, 2018, 12:05:40 PM
to security-onion
Awesome. Thank you. I feel so much smarter now :-)