Logall option - Catchall rule to send everything to ELK


Alexis C

Jun 12, 2019, 6:38:03 AM
to Wazuh mailing list
Hi team, 

I’m currently using the Wazuh agent to forward logs to the manager. 

However, some agents forward too many logs to the manager, and thus to ELK, and I would like to filter out some of the noise directly on the agent side and/or on the manager.

As of today, I'm forwarding all the logs from archives.json (using the logall option), as I need the history in case of an incident.
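For reference, this is roughly how I have it enabled in the manager's ossec.conf (a minimal sketch; adjust to your own setup):

```xml
<ossec_config>
  <global>
    <!-- Write every received event to the archives files -->
    <logall>yes</logall>
    <!-- Also write the archives in JSON format (archives.json) -->
    <logall_json>yes</logall_json>
  </global>
</ossec_config>
```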

Based on that, I have multiple questions:

1) What is currently the best way to filter out noise coming from endpoints when using the Wazuh agent? 
          - It seems we cannot filter on the agent itself, but how can we do so on the manager when using archives.json? Are there better solutions than Filebeat or Logstash for this purpose?

2) Using the logall option doesn't allow me to do any filtering (using rules) on the manager. So I wanted to create a catch-all rule. However, we cannot send alerts of level "0", only levels "1" to "16" ... :(. Why is that? Is there a way to bypass it?

3) I tried modifying rules with a level of 0, setting them to 1 so that their events get forwarded to the alerts.json file. But of course, this creates many issues afterward (rule ordering, etc.). What would be the best way to do this?

4) Another downside of using the logall option is that it costs a lot of money for nothing. Indeed, if we want to enrich the logs with rules (rule groups, info, level, severity, etc.) to have better control over them, then we end up with many duplicates between alerts.json and archives.json! How do you deal with that?


I would be very glad to hear ideas/tips/an overview of how you forward all your logs to ELK.


Thank you :)
Alex

Daniel Escalona

Jun 12, 2019, 1:04:28 PM
to Wazuh mailing list

Hi Alexis!

I will try to help you as best I can, but I am not completely sure I understand your requirements. So, if any of my responses are unclear, please let me know.

Firstly, what kind of events do you consider noisy or not relevant in case of an incident?
If you are referring to a specific kind of event, it would be easier to filter them out.

Relating to your questions :

1) Depending on the type of log you want to filter, we can look at different ways to solve it. Filebeat and Logstash are good options as well, but we can also look for a solution at a lower level.

Regarding filtering events on the agent side, there are different possibilities depending on the type of logs you want to silence.

For example :

- For Wazuh modules, the only way to silence their events is to disable specific modules such as Syscollector or SCA. Here is the related documentation:


https://documentation.wazuh.com/3.x/user-manual/reference/ossec-conf/wodle-syscollector.html
https://documentation.wazuh.com/3.x/user-manual/reference/ossec-conf/sca.html
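For instance, disabling Syscollector in an agent's ossec.conf would look roughly like this (a sketch; other module options omitted):

```xml
<wodle name="syscollector">
  <!-- Stop the module so it generates no inventory events -->
  <disabled>yes</disabled>
</wodle>
```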

- When collecting Windows logs from the EventChannel, you can use the `<query>` option to filter noisy Windows events on the agent log collector.

https://documentation.wazuh.com/3.x/user-manual/capabilities/log-data-collection/how-to-collect-wlogs.html#filtering-events-from-windows-event-channel-with-queries
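A sketch of such a filter in the agent's ossec.conf (the EventID values here are placeholders; pick the IDs that are noisy in your environment):

```xml
<localfile>
  <location>Security</location>
  <log_format>eventchannel</log_format>
  <!-- XPath query: drop events 5156 and 5157, forward everything else -->
  <query>Event/System[EventID != 5156 and EventID != 5157]</query>
</localfile>
```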

- For other modules like Syscheck, there is an `ignore` option that filters out events for specific files or directories.

https://documentation.wazuh.com/3.x/user-manual/reference/ossec-conf/syscheck.html#ignore
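For example (the paths and patterns below are illustrative):

```xml
<syscheck>
  <!-- Skip FIM events for this exact path -->
  <ignore>/etc/mtab</ignore>
  <!-- sregex type allows simple pattern matching on file names -->
  <ignore type="sregex">.log$|.swp$</ignore>
</syscheck>
```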

2) By design, level 0 is used to mute alerts, so rules with that level will never be written to the alerts file.


For the other two questions: every generated alert is duplicated in the archives files.
So, the more alerts your manager fires (e.g. after changing level 0 to 1), the more duplicated events will appear if you monitor both files. Our documentation already notes this:

"Alerts will be duplicated if you use both of these files. Also, note that both files receive fully decoded event data." - https://documentation.wazuh.com/3.x/getting-started/architecture.html


It would be very useful to have more detailed information about your needs. It is important for us to understand the need to collect all generated events for security reasons while avoiding noisy events at the same time.

I hope I have been helpful.
We are at your disposal.

Best regards,
Daniel


Alexis C

Jun 19, 2019, 7:29:05 AM
to Wazuh mailing list
Hi Daniel,

Sorry for the delay in responding and thank you for your answer.

I will try to explain my issue in a bit more detail.

So here, I'm only speaking about Syslog events (logs) and not SCA, Syscheck, etc.

For compliance reasons, everything has to be sent to ELK unless I specifically blacklist it. So by default we send everything and then blacklist, not the opposite: we don't whitelist, we blacklist.

For the moment, to forward syslog events from machines, I'm using Wazuh as such:
Wazuh-Agent (will collect logs using log files) --> Wazuh-Manager --> Filebeat (installed on the wazuh manager machine) --> Logstash --> ELK

The issue with this setup is that I cannot filter any logs on the endpoint itself (the Wazuh agent cannot filter specific events out), so everything collected is sent to the manager.

Of course, I can filter events out on the Wazuh manager using rules, which works as expected. But because of the compliance reasons mentioned above, I need to keep every log for a certain period of time. So far, the only way I have found to keep everything is the logall option on the manager side.

However, when relying on the logall option alone, we don't benefit from Wazuh's rule capabilities, so the logs are not categorised as they should be.

For instance, "successful authentication" events are very verbose. I want to categorize them in ELK (by creating a specific rule) to be able to search for them afterward or build a detection on them. But because I have the logall option enabled, each event gets duplicated in ELK (alerts.json + archives.json). 
I understand why it is duplicated and the reason behind the logall option (archives.json). However, I am getting too many duplicates and storage suffers because of that.

So the real question is: what is the best way to send everything to ELK while also having the possibility to use Wazuh rules to categorise the logs?

I was thinking of turning off the logall option and using a catch-all rule in Wazuh. That would let me use the power of Wazuh rules while still logging everything that has not previously triggered a rule.
But because level 0 events are not written to alerts.json, it seems that a catch-all rule is not possible.

How would you guys deal with this type of scenario?

Hope it is clear now :)

Thanks,
Alexis

Daniel Escalona

Jun 24, 2019, 10:27:45 AM
to Wazuh mailing list
Hi Alexis!
Thank you for the explanation.

In this case, you should update the rules you need, changing level="0" to any level from 1 to 16.
Proceed as follows:
First, copy the rule(s) you care about into the local_rules.xml file located in /var/ossec/etc/rules/. Then, for each copied rule, adjust these options: set the desired level, add overwrite="yes", and remove the noalert attribute if present.
Finally, restart wazuh-manager and you will start receiving these alerts.

Here is an example. The following rule is stored in the file 0095-sshd_rules.xml:

<rule id="5700" level="0" noalert="1">
  <decoded_as>sshd</decoded_as>
  <description>SSHD messages grouped.</description>
</rule>

Now, we duplicate it in local_rules.xml as follows:

<rule id="5700" level="4" overwrite="yes">
  <decoded_as>sshd</decoded_as>
  <description>SSHD messages grouped.</description>
</rule>

The resulting output in alerts.log and alerts.json, respectively:
** Alert 1561380100.34052: - local,syslog,sshd,
2019 Jun 24 12:41:40 wazuhManager->/var/log/auth.log
Rule: 5700 (level 4) -> 'SSHD messages grouped.'
Jun 24 12:41:40 wazuhManager sshd[25809]: Disconnected from 10.0.2.2 port 36834


{"timestamp":"2019-06-24T12:41:40.916+0000","rule":{"level":4,"description":"SSHD messages grouped.","id":"5700","firedtimes":1,"mail":false,"groups":["local","syslog","sshd"]},"agent":{"id":"000","name":"wazuhManager"},"manager":{"name":"wazuhManager"},"id":"1561380100.34052","full_log":"Jun 24 12:41:40 wazuhManager sshd[25809]: Disconnected from 10.0.2.2 port 36834","predecoder":{"program_name":"sshd","timestamp":"Jun 24 12:41:40","hostname":"wazuhManager"},"decoder":{"name":"sshd"},"location":"/var/log/auth.log"}

I hope I have been helpful.
If you have more questions, don't hesitate to contact us.

Best regards,
Daniel & Wazuh Team.