Custom Decoders and Rules

Dinie Rosli

Oct 18, 2022, 3:36:13 AM
to Wazuh mailing list
Hi,

Do I add the decoders and rules on the Wazuh manager in the following respective paths: /var/ossec/etc/decoders and /var/ossec/etc/rules? Or do I need to do it on the agents that need the specific decoders/rules?

If, let's say, I need a Jenkins decoder and rules, I can grab them from the Wazuh GitHub. However, it is suggested not to use local_decoder.xml/local_rules.xml and to create a new .xml file instead. Does it matter what name I give the decoder/rules file, as long as it is unique, or does it need a certain naming convention (I figure not, but just double checking)?

Jonathan Martín Valera

Oct 18, 2022, 4:18:53 AM
to Wazuh mailing list

Hi,

The decoders and rules are only needed on the wazuh-manager. Those files are used for event analysis, and the component in charge of that is the wazuh-manager; the wazuh-agents are only in charge of collecting events and sending them to the wazuh-manager.

Regarding where to create those decoders and rules, as you say, you have to place the rules in /var/ossec/etc/rules/<rules_file_name>.xml and the decoders in /var/ossec/etc/decoders/<decoders_file_name>.xml. This is so that the added files are not deleted during a wazuh-manager upgrade. The default wazuh-manager decoders and rules are located in a different path, /var/ossec/ruleset/, and they can be modified during an upgrade.

As for the names of these files, in principle they do not matter: all files contained in the directories /var/ossec/etc/decoders/ and /var/ossec/etc/rules/ will be processed (this is specified in the ossec.conf file).
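
For reference, the relevant part of the wazuh-manager's /var/ossec/etc/ossec.conf on a default installation looks roughly like the following (a simplified sketch; the exact contents may differ between versions):

<ossec_config>
  <ruleset>
    <!-- Default decoders and rules (may be overwritten on upgrade) -->
    <decoder_dir>ruleset/decoders</decoder_dir>
    <rule_dir>ruleset/rules</rule_dir>

    <!-- User-created decoders and rules (kept across upgrades) -->
    <decoder_dir>etc/decoders</decoder_dir>
    <rule_dir>etc/rules</rule_dir>
  </ruleset>
</ossec_config>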

My recommendation is: if you are going to create a single decoder and/or rule, do it in the existing local_rules.xml or local_decoder.xml file, but if you are going to create a complete group, as in your case, create a new file, such as jenkins_rules.xml, and add all the custom ones there. Of course, these files must be valid XML and follow the decoder and/or rules syntax.

I hope you find this information helpful.

Best regards.

Dinie Rosli

Oct 18, 2022, 4:29:47 AM
to Wazuh mailing list
Hi Jonathan,

Thank you so much, that is very helpful. But now I have come across another issue: after setting up the Jenkins decoders and rules, they still do not show up in the Wazuh dashboard. I have edited the agent.conf on the manager side to fetch jenkins.log, but it does not appear in the Wazuh dashboard. The agent_config name points to the proper registered agent name. Is this a log format issue? If so, what is the appropriate log format, and is there documentation for it that I can look through?

<agent_config name="My-Name">
    <localfile>
        <location>/var/log/apt/term.log</location>
        <log_format>syslog</log_format>
    </localfile>
   
    <localfile>
        <location>/var/log/syslog</location>
        <log_format>syslog</log_format>
    </localfile>
   
    <localfile>
        <location>/var/log/auth.log</location>
        <log_format>syslog</log_format>
    </localfile>
   
    <localfile>
        <location>/var/log/jenkins/jenkins.log</location>
        <log_format></log_format>
    </localfile>
</agent_config>

Dinie Rosli

Oct 18, 2022, 4:33:48 AM
to Wazuh mailing list
I meant this is the proper agent.conf, but it is still not showing Jenkins logs.

<agent_config name="My-Name">
    <localfile>
        <location>/var/log/apt/term.log</location>
        <log_format>syslog</log_format>
    </localfile>
   
    <localfile>
        <location>/var/log/syslog</location>
        <log_format>syslog</log_format>
    </localfile>
   
    <localfile>
        <location>/var/log/auth.log</location>
        <log_format>syslog</log_format>
    </localfile>
   
    <localfile>
        <location>/var/log/jenkins/jenkins.log</location>
        <log_format>syslog</log_format>
    </localfile>
</agent_config>

Jonathan Martín Valera

Oct 18, 2022, 6:49:29 AM
to Wazuh mailing list

Hi,

There are several things to keep in mind from the time you monitor a file until the alerts appear on the wazuh-dashboard.

The wazuh-dashboard shows all the security alerts generated by the wazuh-manager. A security alert is an event that has been received by the wazuh-manager, decoded, and matched with a level 3 or higher rule. It is important to understand that not all events generate alerts, and those that do not will not appear in the wazuh-dashboard; whether an alert is generated depends on the ruleset. To summarize, only alerts appear in the wazuh-dashboard, and these alerts are generated according to the rules and decoders configured for the received events.
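
For context, the minimum rule level that turns a matched event into a logged alert is configurable on the wazuh-manager; a default installation usually has something like this in ossec.conf (the values shown are the usual defaults, check your own configuration):

<ossec_config>
  <alerts>
    <log_alert_level>3</log_alert_level>
    <email_alert_level>12</email_alert_level>
  </alerts>
</ossec_config>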

Now, you have to check where in this process the configuration is missing or misunderstood. Let's do a walkthrough from the lowest level to the highest. To do so, I would ask myself the following questions, in this order:

  • (1) Is the wazuh-agent monitoring the log file?
  • (2) Is the wazuh-agent sending the events to the wazuh-manager?
  • (3) Is the wazuh-manager receiving events from the wazuh-agent?
  • (4) Is the wazuh-manager generating an alert for the desired use case from the received event?
  • (5) Is the alert being sent and indexed correctly in the wazuh-indexer?

You will have to find out where the problem is. Here are some recommendations and tips to check each one of them, although I suspect what is probably missing is the necessary decoders and rules to generate the alerts for your desired cases (point 4).

1. Is the wazuh-agent monitoring the log file?

To monitor the log file, it is necessary to configure a <localfile> block. In this case, I see that you have done it through the centralized configuration (agent.conf), which is managed from the wazuh-manager.

<localfile>
    <location>/var/log/jenkins/jenkins.log</location>
    <log_format>syslog</log_format>
</localfile>

The previous block is correct; now you have to check whether the wazuh-agent has received it. To do this, go to the agent.conf on the wazuh-agent and check that it contains this configuration.
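
For example, the centralized configuration pushed from the wazuh-manager ends up in the shared folder on the wazuh-agent, so a quick check (assuming a default Linux installation) could be:

# On the wazuh-agent: confirm the pushed configuration contains the Jenkins block
grep -B 1 -A 2 "jenkins" /var/ossec/etc/shared/agent.conf

# On the wazuh-manager: validate the agent.conf syntax
/var/ossec/bin/verify-agent-conf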

Also, you can check the ossec.log of the wazuh-agent to see if you have a log entry like the following (UNIX log syntax):

2022/09/26 08:09:10 wazuh-logcollector: INFO: (1950): Analyzing file: '/var/log/jenkins/jenkins.log'.

If yes, it means that it is monitoring correctly.

2. Is the wazuh-agent sending the events to the wazuh-manager?

Check that the wazuh-agent is registered and connected correctly to the wazuh-manager. To do this, you can run the following command in the wazuh-manager and check if that agent appears in the list of registered agents and is in an active state.

/var/ossec/bin/agent_control -l

Also, you can check if in the ossec.log of the wazuh-agent there is a log indicating that it is connected to the wazuh-manager. For example:

2022/10/18 10:27:39 wazuh-agentd: INFO: (4102): Connected to the server (172.16.1.50:1514/tcp).

3. Is the wazuh-manager receiving events from the wazuh-agent?

To check the events received by the wazuh-manager, you can enable event logging and check whether the events collected by the wazuh-agent appear in the archives file. To do this, edit the /var/ossec/etc/ossec.conf file of the wazuh-manager and activate:

<logall_json>yes</logall_json>
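
In case it helps, this option goes inside the <global> section of the wazuh-manager's ossec.conf, roughly like this:

<ossec_config>
  <global>
    <!-- other global options -->
    <logall_json>yes</logall_json>
  </global>
</ossec_config>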

Then restart to apply the changes

systemctl restart wazuh-manager

Now, generate events in the Jenkins log (on the wazuh-agent side), to see if these events are recorded in the /var/ossec/logs/archives/archives.json file on the wazuh-manager side. In this file, a line will be generated for each event received.
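
For example, you can follow the archives file while generating new Jenkins log entries (a minimal sketch):

tail -f /var/ossec/logs/archives/archives.json | grep jenkins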

If it appears, we can affirm that the wazuh-manager is correctly receiving the wazuh-agent events. In case it does not, we have to check the previous steps.

Note: Remember to set <logall_json> back to no and restart the wazuh-manager to avoid unnecessary disk usage (when you finish debugging).

4. Is the wazuh-manager generating an alert for the desired use case from the received event?

Once we know that the wazuh-manager has received the event, we must check whether the alert is being generated. To do this, just check whether, after receiving the event, an alert appears in the file /var/ossec/logs/alerts/alerts.json. One line is written for each generated alert.
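
A quick way to check is to filter the alerts file by the monitored log path (the location field, as it appears in your events), for example:

grep '"location":"/var/log/jenkins/jenkins.log"' /var/ossec/logs/alerts/alerts.json | tail

# or, if jq happens to be installed (optional, just for readability):
jq 'select(.location == "/var/log/jenkins/jenkins.log")' /var/ossec/logs/alerts/alerts.json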

Check if the related alerts are being generated. If not, then you will probably have to create new decoders and rules for the cases you want.

5. Is the alert being sent and indexed correctly in the wazuh-indexer?

Once the alert is generated and stored in the /var/ossec/logs/alerts/alerts.json file, the Filebeat component is in charge of sending it to the wazuh-indexer where it is stored and consulted to be shown in the wazuh-dashboard.

First, you should check in the wazuh-indexer alert indices whether the desired alerts have been indexed. If not, you should check that Filebeat is correctly configured and connected to the wazuh-indexer, and that the status of the services is correct …
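
As a sketch (adjust the host and credentials to your deployment), you could list the alert indices on the wazuh-indexer and test the Filebeat output from the wazuh-manager:

# List the wazuh alert indices (example host and placeholder credentials)
curl -k -u <indexer_user>:<indexer_password> "https://localhost:9200/_cat/indices/wazuh-alerts-*?v"

# Check that Filebeat can reach the wazuh-indexer
filebeat test output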

An important thing is that all the components are accessible through the network, and that there is no firewall blocking the communication through the communication ports in each of them.

Point 5 requires more specific debugging, which I will cover later if this turns out to be where the problem is.



Try everything I have indicated and let me know the results of each step, so we can identify where the problem is and decide how to proceed.

Regards.

Dinie Rosli

Oct 18, 2022, 7:16:22 AM
to Wazuh mailing list
Hi Jonathan,  
First off, I'd like to thank you so much for such a comprehensive answer and guidance. I really appreciate your time on this.

Now, let's go through your points.
1.  Is the wazuh-agent monitoring the log file?

Yes, I've checked the agent.conf on the target agent and found that the block is there, and looking at the logs, I can see this as well:
2022/10/18 17:58:27 wazuh-logcollector: INFO: (1950): Analyzing file: '/var/log/jenkins/jenkins.log'.

2. Is the wazuh-agent sending the events to the wazuh-manager?

Yes, the wazuh-agent is in an active state on the wazuh-manager. I can also see this because there are security alerts on the Wazuh dashboard from this agent for other sources (such as syslog, PAM: Login session opened, etc.).

3. The wazuh-manager is receiving the events from the wazuh-agent

Seems like this is working as well, as can be seen below.

2022 Oct 18 11:08:25 (My-Name) any->/var/log/jenkins/jenkins.log 2022-10-18 11:08:24.498+0000 [id=145]    WARNING hudson.security.csrf.CrumbFilter#doFilter: No valid crumb was included in request for /job/ISO_CLOUD_WATCH_AUTOMATION/configSubmit by xxx. Returning 403.
2022 Oct 18 11:08:25 ( My-Name  ) any->/var/log/jenkins/jenkins.log 2022-10-18 11:08:24.302+0000 [id=140]    WARNING hudson.security.csrf.CrumbFilter#doFilter: No valid crumb was included in request for /job/ISO_CLOUD_WATCH_AUTOMATION/descriptorByName/org.jenkinsci.plugins.workflow.cps.CpsFlowDefinition/checkScriptCompile by xxx. Returning 403.
2022 Oct 18 11:08:25 ( My-Name  ) any->/var/log/jenkins/jenkins.log 2022-10-18 11:08:24.498+0000 [id=145]    WARNING hudson.security.csrf.CrumbFilter#doFilter: Found invalid crumb 5627665c5c8fa2a5ecc22f6cf21d5cfd14b9d886282cb0d1d87ae0314fb46da9. If you are calling this URL with a script, please use the API Token instead. More information: https://www.jenkins.io/redirect/crumb-cannot-be-used-for-script
2022 Oct 18 11:08:29 ( My-Name  ) any->/var/log/jenkins/jenkins.log 2022-10-18 11:08:28.419+0000 [id=140]    WARNING hudson.security.csrf.CrumbFilter#doFilter: Found invalid crumb 5627665c5c8fa2a5ecc22f6cf21d5cfd14b9d886282cb0d1d87ae0314fb46da9. If you are calling this URL with a script, please use the API Token instead. More information: https://www.jenkins.io/redirect/crumb-cannot-be-used-for-script
2022 Oct 18 11:08:29 ( My-Name  ) any->/var/log/jenkins/jenkins.log 2022-10-18 11:08:28.421+0000 [id=140]    WARNING hudson.security.csrf.CrumbFilter#doFilter: No valid crumb was included in request for /job/ISO_CLOUD_WATCH_AUTOMATION/configSubmit by xxx. Returning 403.

4. Is the wazuh-manager generating an alert for the desired use case from the received event?

Based on my understanding of this question, yes, it is receiving alerts, as can be seen in my answer to question 3.

5. Is the alert being sent and indexed correctly in the wazuh-indexer?

Filebeat is running fine, as all other alerts from other agents are being received by the wazuh-indexer and shown on the dashboard.

Jonathan Martín Valera

Oct 18, 2022, 9:46:41 AM
to Wazuh mailing list

Hi,

Note that points (3) and (4) are not the same.

(3) checks that the wazuh-manager is receiving the events, and (4) checks that these events are generating alerts. The condition for an event to generate an alert is that the event is decoded and matched with a level 3 or higher rule (from the ruleset).

What I am asking you to check in this case is: given that the wazuh-manager has received the event (you have verified this in the file /var/ossec/logs/archives/archives.json), now check whether it has generated an alert corresponding to that event. For that, you have to do the same check but in the other file, /var/ossec/logs/alerts/alerts.json (note that it is not the same file: archives.json is for events and alerts.json is for alerts).

If the alerts are being stored in this file, then we continue debugging with the following steps; if they are not, we must review the syntax of the event and the decoder and rule you are using to generate that alert. To move forward, in case the alert is not being generated, share the raw log of the event (you can find it in the full_log field of the event in the /var/ossec/logs/archives/archives.json file), as well as the decoders and rules you are using to generate the alert. Finally, describe the condition that the event log has to fulfill for an alert to be generated.

Dinie Rosli

Oct 18, 2022, 10:40:42 AM
to Wazuh mailing list
Hi Jonathan,

It seems this is the issue. I can't find any alerts related to jenkins.log or /var/log/jenkins/jenkins.log. Below is the raw log from the archives.json file. However, do note that I ended up not creating any new decoders or rules, since I realized the Jenkins decoders and rules I want already exist in /var/ossec/ruleset/decoders and /var/ossec/ruleset/rules.

{"timestamp":"2022-10-18T14:22:32.564+0000","agent":{"id":"015","name":"My-Name","ip":"xxx.xx.x.xx"},"manager":{"name":"ip-xx-xx-x-xxx.ap-southeast-1.compute.internal"},"id":"1666102952.534920804","full_log":"2022-10-18 14:22:32.461+0000 [id=416]\tINFO\thudson.model.AsyncPeriodicWork#lambda$doRun$1: Started Periodic background build discarder","decoder":{},"location":"/var/log/jenkins/jenkins.log"}
{"timestamp":"2022-10-18T14:22:32.607+0000","agent":{"id":"015","name":"My-Name","ip":"xxx.xx.x.xx"},"manager":{"name":"ip-xx-xx-x-xxx.ap-southeast-1.compute.internal"},"id":"1666102952.534920804","full_log":"2022-10-18 14:22:32.471+0000 [id=416]\tINFO\thudson.model.AsyncPeriodicWork#lambda$doRun$1: Finished Periodic background build discarder. 8 ms","decoder":{},"location":"/var/log/jenkins/jenkins.log"}

For the condition, I just want every event that happens in jenkins.log to show up on the dashboard. I even set the log alert level to 1 so that effectively all events show up. Below is my agent.conf again, just to show the format I used.

<agent_config name="My-Name">
    <localfile>
        <location>/var/log/apt/term.log</location>
        <log_format>syslog</log_format>
    </localfile>
    <localfile>
        <location>/var/log/syslog</location>
        <log_format>syslog</log_format>
    </localfile>
    <localfile>
        <location>/var/log/auth.log</location>
        <log_format>syslog</log_format>
    </localfile>
    <localfile>
        <location>/var/log/jenkins/jenkins.log</location>
        <log_format>syslog</log_format>
    </localfile>
</agent_config>

It seems the other three (term.log, syslog, auth.log) all work fine; only Jenkins does not.

Dinie Rosli

Oct 18, 2022, 9:06:40 PM
to Wazuh mailing list
These are the logs that I want to parse and show up as events on the dashboard.

2022-10-18 13:14:41.613+0000 [id=323]   INFO    hudson.model.AsyncPeriodicWork#lambda$doRun$1: Finished EC2 alive agents monitor. 0 ms
2022-10-18 13:22:32.461+0000 [id=326]   INFO    hudson.model.AsyncPeriodicWork#lambda$doRun$1: Started Periodic background build discarder
2022-10-18 13:22:32.463+0000 [id=326]   INFO    hudson.model.AsyncPeriodicWork#lambda$doRun$1: Finished Periodic background build discarder. 1 ms
2022-10-18 13:24:41.612+0000 [id=327]   INFO    hudson.model.AsyncPeriodicWork#lambda$doRun$1: Started EC2 alive agents monitor
2022-10-18 13:24:41.613+0000 [id=327]   INFO    hudson.model.AsyncPeriodicWork#lambda$doRun$1: Finished EC2 alive agents monitor. 0 ms
2022-10-18 13:34:41.612+0000 [id=337]   INFO    hudson.model.AsyncPeriodicWork#lambda$doRun$1: Started EC2 alive agents monitor
2022-10-18 13:34:41.613+0000 [id=337]   INFO    hudson.model.AsyncPeriodicWork#lambda$doRun$1: Finished EC2 alive agents monitor. 0 ms
2022-10-18 13:44:41.612+0000 [id=340]   INFO    hudson.model.AsyncPeriodicWork#lambda$doRun$1: Started EC2 alive agents monitor
2022-10-18 13:44:41.613+0000 [id=340]   INFO    hudson.model.AsyncPeriodicWork#lambda$doRun$1: Finished EC2 alive agents monitor. 0 ms
2022-10-18 13:54:41.612+0000 [id=343]   INFO    hudson.model.AsyncPeriodicWork#lambda$doRun$1: Started EC2 alive agents monitor
2022-10-18 13:54:41.613+0000 [id=343]   INFO    hudson.model.AsyncPeriodicWork#lambda$doRun$1: Finished EC2 alive agents monitor. 0 ms
2022-10-18 14:01:41.811+0000 [id=18]    WARNING hudson.security.csrf.CrumbFilter#doFilter: Found invalid crumb 84ee9d5f604c5d912fb851187ae982340df793fde68cef231dbaafd1cac3b290. If you are calling this URL with a script, please use the API Token instead. More information: https://www.jenkins.io/redirect/crumb-cannot-be-used-for-script
2022-10-18 14:01:41.812+0000 [id=18]    WARNING hudson.security.csrf.CrumbFilter#doFilter: No valid crumb was included in request for /job/ISO_CLOUD_WATCH_AUTOMATION/configSubmit by xxx. Returning 403.
2022-10-18 14:04:41.612+0000 [id=410]   INFO    hudson.model.AsyncPeriodicWork#lambda$doRun$1: Started EC2 alive agents monitor
2022-10-18 14:04:41.613+0000 [id=410]   INFO    hudson.model.AsyncPeriodicWork#lambda$doRun$1: Finished EC2 alive agents monitor. 0 ms

Jonathan Martín Valera

Oct 19, 2022, 4:10:42 AM
to Wazuh mailing list

Hi,

The way to test whether a log would match a decoder and rule is to use the /var/ossec/bin/wazuh-logtest tool.

# /var/ossec/bin/wazuh-logtest

Starting wazuh-logtest v4.3.8
Type one log per line

2022-10-18 13:22:32.463+0000 [id=326]   INFO    hudson.model.AsyncPeriodicWork#lambda$doRun$1: Finished Periodic background build discarder. 1 ms

**Phase 1: Completed pre-decoding.
    full event: '2022-10-18 13:22:32.463+0000 [id=326]   INFO    hudson.model.AsyncPeriodicWork#lambda$doRun$1: Finished Periodic background build discarder. 1 ms'

**Phase 2: Completed decoding.
    No decoder matched.

In this case, notice that the event log 2022-10-18 13:22:32.463+0000 [id=326] INFO hudson.model.AsyncPeriodicWork#lambda$doRun$1: Finished Periodic background build discarder. 1 ms is not decoded or matched by any rule, so it will never generate an alert.

If we look at the decoders that Wazuh has by default for Jenkins (see https://github.com/wazuh/wazuh/blob/v4.3.9/ruleset/decoders/0415-jenkins_decoders.xml), we observe that the expected format is different, both in the date and in the field order (see the example comments in that file).

For that reason, it is necessary to add custom decoders and rules for the cases you want. I will give you an example of how they would be created, using the event log mentioned above.

First, I am going to look for a pattern in the log that allows me to identify that the log belongs to Jenkins. As I see it, I can use the date format and the word hudson.

I create and add the following decoder, for example to the file /var/ossec/etc/decoders/local_decoder.xml:

<decoder name="custom_jenkins">
  <prematch type="pcre2">\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.\d+\+\d+ \[id=\d+\]\s+\w+\s+hudson.\.*</prematch>
  <regex>id=(\d+)]\s+(\w+)\s+(hudson.\.*): (\.*)</regex>
  <order>_id, level, hudson, log_description</order>
</decoder>

Next, I add the following generic rule to /var/ossec/etc/rules/local_rules.xml, which will generate a level 3 alert when a log is decoded by our “custom_jenkins” decoder:

<group name="custom_jenkins,">
  <rule id="100051" level="3">
    <decoded_as>custom_jenkins</decoded_as>
    <description>Security event from Jenkins log: $(log_description)</description>
  </rule>
</group>

I check with the /var/ossec/bin/wazuh-logtest tool that the log is now decoded and matched correctly.

# /var/ossec/bin/wazuh-logtest
Starting wazuh-logtest v4.3.8
Type one log per line

2022-10-18 13:22:32.463+0000 [id=326]   INFO    hudson.model.AsyncPeriodicWork#lambda$doRun$1: Finished Periodic background build discarder. 1 ms

**Phase 1: Completed pre-decoding.
    full event: '2022-10-18 13:22:32.463+0000 [id=326]   INFO    hudson.model.AsyncPeriodicWork#lambda$doRun$1: Finished Periodic background build discarder. 1 ms' 
**Phase 2: Completed decoding.
    name: 'custom_jenkins'
    _id: '326'
    hudson: 'hudson.model.AsyncPeriodicWork#lambda$doRun$1'
    level: 'INFO'
    log_description: 'Finished Periodic background build discarder. 1 ms'

**Phase 3: Completed filtering (rules).
    id: '100051'
    level: '3'
    description: 'Security event from Jenkins log: Finished Periodic background build discarder. 1 ms'
    groups: '['custom_jenkins']'
    firedtimes: '1'
    mail: 'False'
**Alert to be generated.

As you can see, this log is already decoded and would generate a level 3 alert. You can edit the decoder and rule contents as needed.

Finally, it is necessary to restart the wazuh-manager to apply the changes in the decoders and rules in the analysis engine.

systemctl restart wazuh-manager
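
After the restart, you can optionally confirm that the new decoder and rule files were loaded without syntax errors by checking the wazuh-manager log, for example:

grep -iE "error|critical" /var/ossec/logs/ossec.log | tail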

And that would be all; from now on, alerts of this type would be generated in alerts.json and the flow would continue forward until they are displayed on the dashboard.

I am going to recommend some references for rules and decoders:
   • Creating decoders and rules from scratch: https://wazuh.com/blog/creating-decoders-and-rules-from-scratch/
   • Sibling decoders: flexible extraction of information: https://wazuh.com/blog/sibling-decoders-flexible-extraction-of-information/
   • Custom rules and decoders: https://documentation.wazuh.com/current/user-manual/ruleset/custom.html
   • Testing decoders and rules: https://documentation.wazuh.com/current/user-manual/ruleset/testing.html

Try the above and let us know the results.

Regards.

Dinie Rosli

Oct 19, 2022, 6:38:26 PM
to Jonathan Martín Valera, Wazuh mailing list
Hi Jonathan,

Thank you so much! This works perfectly, and that wazuh-logtest tool is such a lifesaver. I've tried a few other custom decoders to play around with, and they work!
I'll check every single link you've posted, since I need to learn a few more things about custom decoders and rules, specifically the <prematch> and <order> parts of the decoders. Again, thanks a lot for your help on this.
