Dashboards & Data


λ

Jul 28, 2021, 2:07:12 AM
to Wazuh mailing list
Good day!

I am busy evaluating Wazuh for use in our organization and I have to say I am very impressed with it.

Unfortunately, I know management is going to ask me the following question:
  Wazuh's dashboards cover all of the security-related events and assessments.  Is there documentation available that can be distributed explaining each section (such as MITRE ATT&CK or PCI DSS) and how to interpret the data there?  It would help if I could hand this to management, as they would not know what they are looking at and would probably write it off as confusing, unless I educate them first.

Secondly, I have looked at the additional (default) dashboards in Kibana, like Netflow, ASA Firewall, User Activity, SSH login attempts, etc.  As an example, I built a demo system and set up the Wazuh agent on an IIS server.

Both the Wazuh agent and Filebeat can collect IIS logs and forward them to the server:

<agent_config>
  <!-- Shared agent configuration here -->
  <localfile>
    <log_format>iis</log_format>
    <location>C:\inetpub\logs\LogFiles\W3SVC1\*.log</location>
    <location>C:\inetpub\logs\LogFiles\W3SVC2\*.log</location>
  </localfile>
  <localfile>
    <log_format>iis</log_format>
    <location>C:\Windows\System32\LogFiles\HTTPERR\*.log</location>
  </localfile>
</agent_config>

and

- module: iis
  # Access logs
  access:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ['C:/inetpub/logs/LogFiles/*/*.log']

  # Error logs
  error:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ['C:/Windows/System32/LogFiles/HTTPERR/*.log']

Which is the more correct configuration: having the Wazuh agent collect the IIS logs, Filebeat, or both?

The end goal would be to collect logs from all over to monitor system health and network traffic, and to watch a few important applications for errors and problems.

Thanks!

Federico Rodriguez

Jul 28, 2021, 12:57:22 PM
to Wazuh mailing list
Hi Lambda!

There's online documentation describing Wazuh capabilities and compliance that may prove useful:

And of course the Wazuh training services can also be checked:

What Wazuh course offers:


As for reading IIS logs:
If you want Wazuh to interpret the logs and show IIS alerts, you should set up the Wazuh agent configuration to do so. If you just want to send the logs to Elasticsearch, you can do that with Filebeat alone.
Beware that if you use only Filebeat, Wazuh won't be able to display any alerts, and you will need to build your own visualizations and dashboards in Kibana.

Hope it helps, if you need more assistance don't hesitate to write back!

λ

Jul 29, 2021, 2:58:04 AM
to Wazuh mailing list
Hi Federico,

Thanks for the info.  The documentation is exactly what I needed.

As for the log collection, it is a bit more complex than just IIS.
First, let's start with Filebeat and Kibana.

I noticed that filebeat has a series of pre-built dashboards for Kibana.  I'll list a few here as an example:
Filebeat-iis
Filebeat-netflow-overview
Filebeat-new-users-and-group
Filebeat-Iptables-Overview

I understand that these dashboards are built specifically for use with Filebeat.  Is there an appropriate way of getting the required data through to these dashboards via the Wazuh agent, or do I have to set up Filebeat on each device separately?
Secondly, I would love to collect logs from the various applications that we use, to monitor for application errors and send alerts in the event that a serious problem occurs.  I have done something similar in an old version of OSSEC to parse syslog entries and email me when something was afoot on the system.  This time around I would like to monitor application logs on both Linux and Windows based systems.

I can set up the appropriate decoders and rules on Wazuh as I had done in OSSEC years ago.
Will the agent then be able to parse these log files or must I also create a custom <log_format> config for the agent separately?

It feels a bit silly to have two different applications collecting logs on a system if we could have one do everything.

Federico Rodriguez

Jul 30, 2021, 12:43:37 PM
to Wazuh mailing list
Hi Lambda, sorry for the late response.
About using the pre-built Filebeat dashboards, I'm not sure about the proper setup. I will ask my teammates and get back to you as soon as I have news.

Speaking of reusing old OSSEC rules and decoders: if you are using syslog format, you should be able to reuse your old custom rules and decoders.
Keep in mind Wazuh uses a predecoder according to the log format to process logs; if the format is unknown, it won't be able to parse them.
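
For example, a minimal custom decoder pair in the classic OSSEC style still works in Wazuh (names and fields here are purely illustrative):

    <decoder name="myapp">
      <program_name>myapp</program_name>
    </decoder>

    <decoder name="myapp-error">
      <parent>myapp</parent>
      <regex>^ERROR code=(\d+) user=(\S+)</regex>
      <order>id, user</order>
    </decoder>

The syslog predecoder extracts the timestamp, hostname and program name first, so the child decoder only has to match the remaining message.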

λ

Aug 2, 2021, 3:07:39 AM
to Wazuh mailing list
Thanks again for all of the help, I appreciate it a lot.

It is more than just the Filebeat dashboards; I'd also like to add my own to display relevant info for some systems that we run.
I know the easy route would be to just use Filebeat to send the logs to the server or, where available, use syslog to forward logs to the server.

The ideal would be to create a 'decoder' that the Wazuh agent can use to parse the application log files and then forward that info to the Wazuh server.
Then I need to build a few dashboards in Kibana for these apps.  It adds maintenance and system load if we run two different processes monitoring log files, and I would like to avoid that if possible.

"Keep in mind Wazuh uses a predecoder according to the log format to process logs" ~ is this server side or the agent itself?

Franco Charriol

Aug 2, 2021, 4:22:36 PM
to Wazuh mailing list
Hello λ,

I'm trying to follow what you need but I'm a little lost here. Is your question whether to use Filebeat or Wazuh syslog collection to send the IIS logs to ES, while still using the Filebeat pre-defined dashboards?

In the meanwhile,
If you choose to use Wazuh Syslog here are some links that you could find useful:
- How log collection works between Wazuh agents and the Wazuh server.
- Here is a blog about monitoring network devices.
- Here is an interesting thread about how to monitor IIS with Wazuh.


Regarding the Filebeat dashboards: all dashboards are saved objects in ES, and you can customize them to use the fields or visualizations that you need. So using these pre-defined Filebeat dashboards doesn't mean using only Filebeat for data ingest.
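
For reference, saved objects can also be exported and re-imported through the standard Kibana saved objects API; a sketch (host and credentials are illustrative, adjust to your setup):

    # Export all dashboards (plus their referenced objects) as ndjson
    curl -k -u admin:admin -X POST "https://localhost:5601/api/saved_objects/_export" \
      -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
      -d '{"type": "dashboard", "includeReferencesDeep": true}' \
      -o dashboards.ndjson

    # Re-import after editing the ndjson
    curl -k -u admin:admin -X POST "https://localhost:5601/api/saved_objects/_import?overwrite=true" \
      -H 'kbn-xsrf: true' --form file=@dashboards.ndjson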


In any case, please let me know what I am missing or if this was useful.
Best!

λ

Aug 3, 2021, 4:07:16 AM
to Wazuh mailing list

Hi Franco,

Maybe if I draw what I am asking it will be easier.  It's a two-part question, so let's start with part 1:
[Attachment: Log Collection.jpg]

I have a monitored host with applications and services running that include things like:
  - IIS
  - MSSQL
  - Oracle DBMS
  - Custom Applications

I would like to know if the Agent can monitor a custom log file (a), process it via the Logcollector (b) and forward those logs to Wazuh. I assume that the agent itself will want to do some pre-decoding of the log files before it is sent on to the server?
I draw this conclusion based on the agent config, where you have to specify the format of the file you want to monitor, for example:
<localfile>
  <log_format>iis</log_format>
  <location>C:\inetpub\logs\LogFiles\W3SVC1\*.log</location>
  <location>C:\inetpub\logs\LogFiles\W3SVC2\*.log</location>
</localfile>
 
If I can get that part sorted, then I can create custom dashboards for my services and applications to monitor them and also alert me when there are problems on the system, which would be absolutely terrific.

The second question is that Filebeat has default dashboards that you can import into Kibana, such as:
[Attachment: Filebeat Dashboards.png]

I know management will look at this and say "Why can't we use some of these dashboards? Look, IIS is already there, just turn it on."
I have tested this by setting up both the agent and Filebeat on a host:

[Attachment: Wazuh Filebeat.jpg]

I don't think anybody would easily accept the idea of setting up two different monitoring applications, Filebeat and the Wazuh agent, on each of our hosts.
Is there a way to simplify the setup so that the agent alone will collect all of the logs required to drive the dashboards, such as the [Filebeat IIS] Access and error logs ECS?

When I look at this layout in the documentation, it would seem so:

[Attachment: Wazuh Integration.jpg]
I would assume that if I can collect the required logs, I can get them to Wazuh via the agent and then output them to the raw data log.  From there, Filebeat can pick through the logs and populate the data required for the default dashboards.

I hope this makes a bit more sense?

Kind Regards

Franco Charriol

Aug 3, 2021, 10:18:07 AM
to Wazuh mailing list
Hi! Wow, thanks very much for the super clarifying answer!


| I would like to know if the Agent can monitor a custom log file (a), process it via the Logcollector (b) and forward those logs to Wazuh. I assume that the agent itself will want to do some pre-decoding of the log files before it is sent on to the server?
The logcollector just sends the logs in their pure format without pre-decoding them; it uses log_format to indicate the format in which the server should read the logs from this source.
Please check the values that you can set for log_format here. As the documentation says, "For most of the text log files that only have one entry per line, syslog may be used."
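
So for a plain one-entry-per-line application log, a sketch of the agent-side block would be (the path is illustrative):

    <localfile>
      <log_format>syslog</log_format>
      <location>C:\MyApp\logs\app.log</location>
    </localfile>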

After this, you should create the rules and decoders that you need to generate the alerts that will be sent to Elasticsearch, into the same indices that Wazuh normally uses (wazuh-alerts-*).
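
As a rough sketch, assuming you have loaded a custom decoder named myapp (IDs 100000-120000 are reserved for user rules):

    <!-- /var/ossec/etc/rules/local_rules.xml -->
    <group name="myapp,">
      <rule id="100100" level="7">
        <decoded_as>myapp</decoded_as>
        <match>ERROR</match>
        <description>MyApp: application error detected</description>
      </rule>
    </group>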

| Is there a way to simplify the setup so that the Agent alone will collect all of the logs required to drive the Dashboards, such as the [Filebeat IIS] Access and error logs ECS?
If you only need the pre-defined dashboards of the Filebeat modules for Elastic, I recommend replicating the dashboards manually in Kibana with the wazuh-alerts-* index pattern and the new fields;
the UI should make the task easier when you compare against the fields that the original dashboards use.

Another option, as I mentioned in the previous answer, is to import just the dashboards as saved objects and customize them to "read" the fields from the Wazuh indices.
You would have to change the index pattern for each visualization in the ndjson (the file that results from exporting a saved object), and also change the fields.
For example, the Filebeat IIS module uses these fields in its dashboard; if you send the same logs through Wazuh you would use data.iis.access.sub_status instead of iis.access.sub_status.
I'm not sure if you can change the fields from the UI without errors; if not, you would have to change them in the ndjson before importing it,
and modifying the ndjson is an annoying task.
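
Roughly, the kind of edit involved inside the exported ndjson looks like this (simplified to the relevant fragments):

    # before: visualization bound to the Filebeat index pattern and module fields
    "index": "filebeat-*"  ...  "field": "iis.access.sub_status"
    # after: rebound to the Wazuh indices and the nested alert fields
    "index": "wazuh-alerts-*"  ...  "field": "data.iis.access.sub_status"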

[Attachment: saved objects - dashboards.png]

I hope this answer helps you make a decision. In any case, if this doesn't clarify things or you have any other questions, please let me know.

Best!



λ

Aug 3, 2021, 11:08:08 AM
to Wazuh mailing list
Excellent, glad to see that we are on the same page now.
You have managed to answer all of my queries. I am going to set up a test to see if I can get it all working, thank you very much!

λ

Aug 12, 2021, 7:09:23 AM
to Wazuh mailing list
Good day,

Work has kept me super busy on other tasks and I've only recently gotten back to investigating this topic.
To be honest, I am rather lost as to the precise steps I need to take in order to get data from a server to Wazuh, through to Kibana, and build a dashboard.

Let's take IIS as an example.

I have configured an agent to monitor IIS:

<localfile>
      <log_format>iis</log_format>
      <location>C:\inetpub\logs\LogFiles\W3SVC1\*.log</location>
      <location>C:\inetpub\logs\LogFiles\W3SVC2\*.log</location>
</localfile>

IIS decoders are already built into the system: 

<decoder name="web-accesslog-iis-default">
  <parent>windows-date-format</parent>
  <type>web-log</type>
  <use_own_name>true</use_own_name>
  <prematch offset="after_parent">^\S+ GET |^\S+ POST</prematch>
  <regex offset="after_parent">^\S+ (\w+) (\S+ \S+) (\S+) \S+ (\S+) (\S+) \.*(\d\d\d) </regex>
  <order>action, url, srcport, srcip, user_agent, id</order>
</decoder>

So from here I can expect the system to decode the information, which is fine.
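
For example, a typical W3C access-log line like this one (illustrative values) satisfies that prematch and regex:

    2021-08-12 10:15:01 192.168.1.10 GET /index.html - 80 - 10.0.0.5 Mozilla/5.0 200 0 0 15

The windows-date-format parent consumes the leading date and time, and the regex then maps GET to action, "/index.html -" to url, 80 to srcport, 10.0.0.5 to srcip, Mozilla/5.0 to user_agent and 200 to id.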

According to this diagram from the documentation:

[Attachment: Wazuh Integration.jpg]
So data has to be made available to Elasticsearch via Security Alerts or Raw Data Events.
I assume that Raw Data Events would be the 'log all' option one can turn on. Is Elasticsearch configured by default to monitor that file?

Elasticsearch will index all of the data and it'll be available in Kibana.
So I can view everything under "Discover" in Kibana:
[Attachment: wazuh_kibana_discover.png]

This'll quickly show me what data has been pulled in, and I can use that to build a dashboard.
Using the IIS example, I can either modify the default Filebeat-IIS template to use the right data indexes, or rebuild the dashboard manually to my liking.

Does this sound about right?

Franco Charriol

Aug 13, 2021, 9:55:02 AM
to Wazuh mailing list
Hi! This sounds excellent!
Please let me know how you get on with the template.

Best!

λ

Sep 1, 2021, 3:59:39 AM
to Wazuh mailing list
Good day,

Just a follow-up question: I am trying to get Filebeat to read the archives log file.
I have enabled <logall_json>yes</logall_json> on the ossec.conf.
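
i.e., in the <global> section of the manager's /var/ossec/etc/ossec.conf:

    <ossec_config>
      <global>
        <logall_json>yes</logall_json>
      </global>
    </ossec_config>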

I have also enabled the following setting in /etc/filebeat/filebeat.yml:

filebeat.modules:
  - module: wazuh
    alerts:
      enabled: true
    archives:
      enabled: true

but when I check whether Filebeat is monitoring the file via lsof, I only get:

lsof archives.log
COMMAND    PID  USER   FD   TYPE DEVICE SIZE/OFF    NODE NAME
wazuh-ana 1650 ossec   10w   REG    8,5 97383911 8267114 archives.log

When researching this, I found that in the past you had to manually add paths for the archives log file in filebeat.yml:


- input_type: log
  paths:
    - "/var/ossec/logs/archives/archives.json"
  fields:
    event_type: archives
  fields_under_root: true
  document_type: json
  json.message_key: log
  json.keys_under_root: true
  json.overwrite_keys: true

I assume this is legacy and was replaced with the simple config option of enabling archives.

Some help would be greatly appreciated.

Franco Charriol

Sep 2, 2021, 2:48:33 PM
to Wazuh mailing list
Hi λ,
Please let me try to reproduce it. What versions of Wazuh and ES are you using now?

Franco Charriol

Sep 2, 2021, 3:15:35 PM
to Wazuh mailing list
Well, turning on <logall_json> and enabling the Filebeat module should be enough.

Do you have wazuh-archives* indices in Kibana? You can check this by running the following request in Kibana Dev Tools:
`GET _cat/indices/wazuh-archives*`

Also, by adding a new index pattern `wazuh-archives*` you will be able to explore these indices in Kibana Discover.
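
(If you prefer the API over the UI, something along these lines should work; credentials are illustrative:)

    curl -k -u admin:admin -X POST "https://localhost:5601/api/saved_objects/index-pattern" \
      -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
      -d '{"attributes": {"title": "wazuh-archives*", "timeFieldName": "timestamp"}}'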

In case you don't have the corresponding index, you can verify whether Wazuh is filling archives.json with data:
`tail -n 20 /var/ossec/logs/archives/archives.json`
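
If it is being filled, you should see one JSON event per line, along the lines of (trimmed, illustrative values):

    {"timestamp":"2021-09-02T18:00:01.000+0200","agent":{"id":"001","name":"iis-host"},"location":"C:\\inetpub\\logs\\LogFiles\\W3SVC1\\u_ex210902.log","full_log":"2021-09-02 15:59:58 192.168.1.10 GET /index.html - 80 - 10.0.0.5 Mozilla/5.0 200 0 0 15"}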

λ

Sep 2, 2021, 6:04:55 PM
to Wazuh mailing list
Hi!

Yes. I can see the logs in archives - no problem.
The index does not exist.
[Attachment: wazuh1.png]

Using the UI to add an index pattern, I get the following:

[Attachment: wazuh2.png]

λ

Sep 2, 2021, 6:09:54 PM
to Wazuh mailing list
I recently updated all of the components as part of the 4.1 -> 4.2 upgrade, following the upgrade guide.
Looking via apt list --installed:

elasticsearch-oss/stable,now 7.10.2 amd64 [installed]
opendistroforelasticsearch-kibana/stable,now 1.13.2 amd64 [installed]
opendistroforelasticsearch/stable,now 1.13.2-1 amd64 [installed]

Franco Charriol

Sep 3, 2021, 12:32:26 PM
to Wazuh mailing list
You can try running `filebeat setup --pipelines` to force the pipelines to be reloaded, then restart Filebeat.
Also, what is the result of running `filebeat test output`?

Herman Rossouw

Sep 3, 2021, 2:09:26 PM
to Franco Charriol, Wazuh mailing list
Hi!

Here we go:

root@esfseim:/var/ossec/logs/archives# filebeat test output
elasticsearch: https://127.0.0.1:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 127.0.0.1
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.3
    dial up... OK
  talk to server... OK
  version: 7.10.2


Franco Charriol

Sep 3, 2021, 2:34:59 PM
to Wazuh mailing list
That looks weird; Filebeat is pushing only the alerts to ES, right?
Did you find any related entries in the Filebeat log?
`cat /var/log/filebeat/filebeat | grep -E "ERROR|WARN|archives"`

Herman Rossouw

Sep 3, 2021, 4:36:15 PM
to Franco Charriol, Wazuh mailing list
Hi,

I am not seeing anything coming in on the Filebeat side.
In fact, that log file only contains the following:

2021-09-03T20:08:47.966+0200    INFO    instance/beat.go:645    Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2021-09-03T20:08:47.967+0200    INFO    instance/beat.go:653    Beat ID: 569dd9ff-0124-4049-8d3e-5dcc259a6270
2021-09-03T20:08:47.967+0200    INFO    [index-management]      idxmgmt/std.go:184      Set output.elasticsearch.index to 'filebeat-7.10.2' as ILM is enabled.
2021-09-03T20:08:47.968+0200    INFO    eslegclient/connection.go:99    elasticsearch url: https://127.0.0.1:9200
2021-09-03T20:08:48.027+0200    INFO    [esclientleg]   eslegclient/connection.go:314   Attempting to connect to Elasticsearch version 7.10.2

λ

Sep 7, 2021, 3:46:14 AM
to Wazuh mailing list
Good day,

I suspect I have found the problem.

According to the config in: /usr/share/filebeat/module/wazuh/archives/manifest.yml

var:
  - name: paths
    default:
      - /var/ossec/logs/archives/archives.json
  - name: index_prefix
    default: wazuh-archives-4.x-

input: config/archives.yml

ingest_pipeline: ingest/pipeline.json

the default path for archives points to the JSON file and not the old log file.  My current config has the wrong option enabled:

    <logall>yes</logall>
    <logall_json>no</logall_json>

so I have to enable <logall_json> instead of <logall>.
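
A sketch of the corrected settings (<logall> can stay enabled as well if the plain-text archives.log is still wanted):

    <logall>no</logall>
    <logall_json>yes</logall_json>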

Franco Charriol

Sep 7, 2021, 8:00:16 AM
to Wazuh mailing list
Oh, I assumed that was the case because of your comment:

| I have enabled <logall_json>yes</logall_json> on the ossec.conf
And I think I mentioned it too. But anyway, can you now see the new wazuh-archives* indices in Kibana?
 

Herman Rossouw

Sep 7, 2021, 8:22:14 AM
to Franco Charriol, Wazuh mailing list
I had to create the index pattern wazuh-archives*, but yes, I can see it now, thanks.

Franco Charriol

Sep 7, 2021, 8:26:45 AM
to Wazuh mailing list
Glad the issue has been resolved.

If you have any further questions, please do not hesitate to contact us.
Best!