Wazuh Splunk Ingestion


Niall McDowell

Nov 5, 2020, 10:24:07 AM
to Wazuh mailing list

Hey,
currently we have Wazuh logging to a Splunk instance with two different sourcetypes. One looks to be the decoder phase (ossec) while the other is the alerting phase (wazuh). Given our high daily Splunk ingestion from Wazuh, is logging the decoder phase necessary? In other words, does logging of the alerting phase rely on logging of the decoder phase?

Thanks

Yana Zaeva

Nov 9, 2020, 11:09:29 AM
to Wazuh mailing list
Hi Niall, 

Well, as you said, in order to create alerts for events we use two phases: decoding and alerting. For the decoding part, we use decoders, which you can find under /var/ossec/ruleset/decoders. Their main goal is to parse the event, creating a separate field for each piece of information. Here you can see how this information is parsed in Splunk:
 
[Screenshot: default parsed fields as shown in Splunk (7.3.0_fields_defaultselected-compressor.png)]

If you disable the decoders, you will not see these parsed fields. Splunk may still extract some of them on its own, such as the timestamp, but it will not parse them all.

Regarding the second phase, alerting, we use rules. You can find all the default rules under /var/ossec/ruleset/rules/. Some of these rules are written independently of decoders, like this one:


<rule id="1002" level="2">
  <match>$BAD_WORDS</match>
  <description>Unknown problem somewhere in the system.</description>
  <group>gpg13_4.3,</group>
</rule>

but many of them rely on already parsed fields, like the following rule:

<rule id="999303" level="3">
  <decoded_as>test_expr_negation</decoded_as>
  <match>test_dstip</match>
  <dstip negate="yes">192.168.0.15</dstip>
  <description>Testing enabled dstip field negation</description>
</rule>

We can see here that this rule relies on the event being decoded by test_expr_negation and on a parsed field called dstip (with the negation, it matches only when dstip is not 192.168.0.15). If the decoder is disabled, this rule will never trigger, and consequently no alert will be received by Splunk.

In a nutshell, disabling the decoders has two disadvantages: you will probably have to overwrite some of the default rules so they no longer rely on decoders and parsed fields (if you do not modify them, most events will never generate alerts), and your events' information will not be parsed.

In case you still want to disable them, use the tag <decoder_exclude>decoder's name</decoder_exclude> in the ossec.conf file. You can find more information about this here: https://documentation.wazuh.com/4.0/user-manual/reference/ossec-conf/ruleset.html#decoder-exclude
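For reference, the documentation linked above places that tag inside the <ruleset> block of ossec.conf, and it takes the path of the decoder file to skip. A minimal sketch (the excluded file name below is only an example):

```xml
<ossec_config>
  <ruleset>
    <!-- Skip a stock decoder file so its events are no longer parsed. -->
    <!-- The file name here is an example; use the path of the decoder -->
    <!-- you actually want to exclude. -->
    <decoder_exclude>ruleset/decoders/0025-apache_decoders.xml</decoder_exclude>
  </ruleset>
</ossec_config>
```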

In order to modify the rules accordingly, check this link, where you can find further information about it: https://documentation.wazuh.com/4.0/user-manual/ruleset/ruleset-xml-syntax/rules.html
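As a sketch of what "modifying the rules" can look like: the documented way to change a stock rule without editing the shipped ruleset is to redefine it in /var/ossec/etc/rules/local_rules.xml using the same rule id with overwrite="yes". Reusing the id of the example rule above, and assuming you wanted it to stop depending on decoded fields:

```xml
<group name="local,">
  <!-- Redefine stock rule 999303 so it matches on the raw log text only, -->
  <!-- without <decoded_as> or the parsed dstip field. -->
  <rule id="999303" level="3" overwrite="yes">
    <match>test_dstip</match>
    <description>Testing dstip match without relying on decoded fields</description>
  </rule>
</group>
```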

Also, you can find all default rules and decoders here:

Hope this was helpful. Let me know if you have any doubts.

Regards,
Yana.

Niall McDowell

Nov 10, 2020, 8:57:16 AM
to Wazuh mailing list
Hi Yana, thanks for the reply,

What would be the disadvantages of disabling the alerting phase in Splunk?
Would it be possible to use regular expressions to extract the data from the decoding phase to create alerts?

Yana Zaeva

Nov 10, 2020, 2:14:07 PM
to Wazuh mailing list
Hi Niall,

First of all, take into account that decoding and alerting are complementary phases, and it would be quite difficult to implement them separately. Once an event arrives at the Wazuh manager, it is parsed by a decoder; then it is checked against the rules, and only if a rule is triggered is the resulting alert sent to Splunk.

Decoding and applying rules do not generate extra alerting: the alerts you are receiving correspond only to the events taking place on your monitored hosts and devices.

Regarding the regular-expressions question, take into account that only events that trigger a rule are sent to Splunk; events that do not match any rule are not forwarded. To send them anyway, you would have to change the file from which the Splunk forwarder picks up events, which by default is /var/ossec/logs/alerts/alerts.json, and modify some of the Splunk forwarder's configuration as well.
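If you do go down the regular-expression route on the Wazuh side instead, custom decoders are the usual place to do regex extraction. A minimal sketch for /var/ossec/etc/decoders/local_decoder.xml (the program name, log format, and field names here are all hypothetical):

```xml
<!-- Match events whose syslog program name is "myapp" (example name). -->
<decoder name="myapp">
  <program_name>myapp</program_name>
</decoder>

<!-- Extract fields from a hypothetical line such as: -->
<!--   "login failed from 10.0.0.5 user admin" -->
<decoder name="myapp-fields">
  <parent>myapp</parent>
  <regex>from (\d+.\d+.\d+.\d+) user (\S+)</regex>
  <order>srcip, user</order>
</decoder>
```

Rules can then match on the extracted srcip and user fields, just like the dstip example earlier in the thread.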
 
Please let me know exactly what you are trying to achieve, because if the goal is to lower the ingestion of alerts into Splunk, you can instead decrease the level of (or tune out) the alerts that are not of interest to you.
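As an example of that last point: the <alerts> block of ossec.conf sets the minimum severity level that gets written to the alerts files the forwarder reads, so raising it directly reduces Splunk ingestion. A sketch (the level shown is just an example):

```xml
<ossec_config>
  <alerts>
    <!-- Only alerts of level 7 or higher are written to alerts.json / -->
    <!-- alerts.log; 7 is an example value, the default minimum is 3. -->
    <log_alert_level>7</log_alert_level>
  </alerts>
</ossec_config>
```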

Waiting for your reply,
Yana.

Niall McDowell

Nov 11, 2020, 5:24:25 AM
to Yana Zaeva, Wazuh mailing list
Hi Yana, the main goal here is to reduce the ingestion of logs into Splunk from Wazuh.
As said before, we currently have both the decoding phase and the alerting phase logging.

Thanks


Yana Zaeva

Nov 11, 2020, 9:58:21 AM
to Wazuh mailing list
Hi Niall,

Both the decoding and alerting phases are complementary: the alerting phase relies on the fields produced by the decoding phase.

Once an event is received by the manager, the decoder starts parsing its information (the decoding phase, which does not generate any additional logs while being performed). Once the event is parsed, it is not yet injected into Splunk, so there is no log ingestion in this phase. When the decoding phase is finished, the parsed event goes through the rules (the alerting phase). If the event does not match any rule, no alert is generated, so it is not injected into Splunk either. Only when an event matches a rule does the alerting phase produce an alert that is actually sent to Splunk.

In a nutshell, there is no log ingestion into Splunk after the decoding phase; ingestion happens only after the alerting phase, and only when an event matches a rule and an alert is generated. That alert is what is sent to Splunk.

We could also say that decoding is the first phase; if an event matches a decoder, it passes to the second phase, alerting. In this second phase, if the parsed event matches a rule, an alert is generated, and only then do we reach the third phase, which is the ingestion of that alert into Splunk.

Lastly, you can check the Splunk forwarder configuration in /opt/splunkforwarder/etc/system/local/inputs.conf, which contains this by default:

[monitor:///var/ossec/logs/alerts/alerts.json]
disabled = 0
host = MANAGER_HOSTNAME
index = wazuh
sourcetype = wazuh

This is where you can confirm that information is picked up only from the alerts.json file, where events are stored after matching both a decoder and a rule. Events are stored there only if they pass both the decoding and alerting phases; an event that passes only one of them is neither stored there nor injected into Splunk.


Let me know if you have any questions.

Regards,
Yana.