Wazuh 4.12 doesn't generate Vulnerability Events


MaP

Nov 10, 2025, 7:42:46 AM
to Wazuh | Mailing List

Hello everyone,

Our Wazuh cluster doesn't seem to be generating any vulnerability events. This is the case regardless of which server in the cluster is running.
The vulnerability dashboard displays vulnerabilities and the inventory list is also fully populated, but no events are being generated.


What I've done so far:

  • Checked ossec.conf (it seems to be OK and is identical on all cluster members):

          <vulnerability-detection>
            <enabled>yes</enabled>
            <index-status>yes</index-status>
            <feed-update-interval>2h</feed-update-interval>
            <offline-url>https://path_to_our_updatefilepath.zip</offline-url>
          </vulnerability-detection>

  • Checked the ossec.log for errors related to vulnerability detection. The relevant entries look like this:

             ................
             wazuh-modulesd:vulnerability-scanner: INFO: Initiating update feed process.
             wazuh-modulesd:vulnerability-scanner: INFO: Triggered a re-scan after content update.
             wazuh-modulesd:vulnerability-scanner: INFO: Feed update process completed.
             ..........

           These lines always appear when we receive a new update to the CVE database.

  • Next, we checked whether the vulnerability information was arriving on the manager in archives.json, which it is. Something like this is logged:
          "full_log": "{\"vulnerability\": \"assigner\":\"microsoft\",\"classification\":\"CVSS\",\"cve\":\"CVE-2..................



What is never generated is an alert with a rule level. Unfortunately, I have no further ideas on how to proceed.
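A quick way to quantify this is to compare how many vulnerability-detector events reach the pre-rule archives versus how many are promoted to alerts. A minimal sketch, assuming the default Wazuh log paths:

```shell
# Count vulnerability-detector events in a Wazuh JSON log file.
# Assumes the default log layout; prints 0 if the file is missing.
count_vd_events() {
    [ -r "$1" ] || { echo 0; return; }
    grep -c '"location":"vulnerability-detector"' "$1" || :
}

echo "events in archives: $(count_vd_events /var/ossec/logs/archives/archives.json)"
echo "alerts generated:   $(count_vd_events /var/ossec/logs/alerts/alerts.json)"
```

If the first number is large and the second stays at zero, the events are reaching the analysis engine but no rule ever fires on them.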

 Best regards

MaP

Gabriel Emanuel Valenzuela

Nov 10, 2025, 8:46:03 AM
to Wazuh | Mailing List

The Vulnerability Detection (VD) module generates alerts when new vulnerabilities are found or existing ones are resolved due to package installation, removal, or upgrade. However, not every detected change leads to an alert; generation depends on the context of detection.

1. Operating System Alerts
  • Alerts are not triggered during the initial scan.

  • When an agent syncs with the manager for the first time, it simply reports the current OS version and patch level — no “new event” is detected.

  • Alerts only appear in subsequent scans, when the OS version or patch state changes.

2. Package Alerts
  • Generated only when a package installation or removal adds or removes a vulnerability from the inventory.

  • The change must occur while the agent is running, and it must be captured during a scheduled Syscollector scan (delta messages).

  • If the change happens while the agent is stopped or is only detected after a restart, no alert will be generated.

3. Additional Factors
  • Cluster environments:
    When an agent connects to a different manager node, the inventory syncs but no alerts are generated during that initial synchronization.

  • Content updates:
    When new CVE definitions or vulnerability mappings are downloaded, all agents are re-scanned to refresh their inventory. This re-scan does not generate alerts, even if changes are found.

Regarding your log: a content update alone will not trigger an alert.

MaP

Nov 11, 2025, 10:02:56 AM
to Wazuh | Mailing List
Hi Gabriel,

Thanks for the detailed explanation of how it works.

I still think something isn't working correctly in my setup.

To test this, I used a computer that still had an old kernel as a fallback and uninstalled the old kernel.
Then I waited until a syscollector scan was run on the server.
Now I can see, in the inventory list, that no vulnerabilities are reported for this specific kernel.
But the event list is still empty!

The agent was always connected to the same server from the cluster.

The following entry can be found in the ossec.log on the corresponding agent:

2025/11/11 14:14:58 wazuh-modulesd:syscollector: INFO: Starting evaluation.
2025/11/11 14:15:02 wazuh-modulesd:syscollector: INFO: Evaluation finished.

What else can I do to find out what isn't working?

Best regards
MaP

Gabriel Emanuel Valenzuela

Nov 11, 2025, 3:44:23 PM
to Wazuh | Mailing List

To confirm that alerts are being forwarded correctly, we need to verify that Filebeat is functioning as expected.

Please run the following command on the Wazuh manager:

filebeat test output

This will check whether Filebeat can successfully connect to the Wazuh-Indexer instance.

Additionally, can you enable debug mode by setting the following parameter in /var/ossec/etc/internal_options.conf:

wazuh_modules.debug=2

Restart the Wazuh Manager to apply the changes:

systemctl restart wazuh-manager

Then, share the ossec.log file for further analysis. Ensure that any sensitive information is removed before sharing. You can use my personal email.

Please try to reproduce the kernel scenario with debug mode enabled.
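Once debug mode is active, ossec.log grows quickly; the scanner lines can be isolated so the output stays reviewable. A small sketch, assuming the default log path:

```shell
# Filter vulnerability-scanner entries out of a (potentially large)
# ossec.log; prints nothing if the file is absent.
vd_scanner_lines() {
    grep 'vulnerability.scanner' "$1" 2>/dev/null || :
}

vd_scanner_lines /var/ossec/logs/ossec.log
```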

MaP

Nov 12, 2025, 8:49:20 AM
to Wazuh | Mailing List
Hi Gabriel,

filebeat test output looks good, I think:

filebeat test output
elasticsearch: https://192.168.xxx.xx2:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 192.168.xx.xx2
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.2
    dial up... OK
  talk to server... OK
  version: 7.10.2
elasticsearch: https://192.168.xxx.xy2:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 192.168.xxx.xy2
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.2
    dial up... OK
  talk to server... OK
  version: 7.10.2
elasticsearch: https://192.168.xxx.xz2:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 192.168.xxx.xy2
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.2
    dial up... OK
  talk to server... OK
  version: 7.10.2


Next, I enabled debug level 2 on one of our servers, to which another agent with an old kernel version installed was connected.

I'll send you the ossec.log entries in a private reply.
In my opinion, the logs look OK.

Regards 

MaP

Gabriel Emanuel Valenzuela

Nov 12, 2025, 5:00:57 PM
to Wazuh | Mailing List

Thanks for the logs!
I reviewed them, and everything appears correct — I can see at least 10,000 alerts being generated.

For example, one of the alert messages looks like this:

Report sent: 1:[003] (our_agent-name) any->vulnerability-detector:{ "vulnerability":{ "classification":"CVSS", "cve":"CVE-20XX-XXX64", "cvss":{"cvss3":{"base_score":5.3}}, "enumeration":"CVE", "package":{"architecture":"x86_64","name":"kernel-modules-core","version":"5.1XXXXXX"}, "published":"2024-06-20T12:15:14Z", "reference":"XXXX", "scanner":{"reference":"https://cti.wazuh.com/vulnerabilities/cves/CVE-20XX-XXXX4"}, "score":{"base":5.3,"version":"3.1"}, "severity":"Medium", "status":"Solved", "title":"CVE-20XX-XXXX64 affecting kernel-modules-core was solved", "type":"Packages", "updated":"2025-03-24T18:16:46Z" } }

This type of report should generate an alert in the following locations:

  • /var/ossec/logs/alerts/alerts.json

  • /var/ossec/logs/alerts/alerts.log

Additionally, it should trigger Filebeat to index the event, allowing you to find it under the wazuh-alerts-* indices in Wazuh-Indexer.

If you confirm that this entry exists in alerts.json or alerts.log, the next step is to verify the number of indices currently created in Wazuh Indexer. A shard or index limit may be preventing new data from being written.

You can review and tune these settings here: Wazuh Indexer Tuning — Shards and Replicas
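To check whether a shard budget is the bottleneck, the cluster health endpoint is usually enough. A sketch: the helper just pulls the interesting fields out of a _cluster/health response, and the host and credentials in the commented command are placeholders for your environment.

```shell
# Extract status and shard counters from an indexer _cluster/health
# JSON response (naive field split, sufficient for a quick check).
health_summary() {
    printf '%s\n' "$1" | tr ',{}' '\n\n\n' \
        | grep -E '"(status|active_shards|unassigned_shards)"'
}

# Live usage (placeholder host and credentials):
# health_summary "$(curl -sk -u admin:admin https://indexer:9200/_cluster/health)"
```

A red status or a large unassigned_shards count would point at the indexer rather than the manager.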


MaP

Nov 13, 2025, 5:48:51 AM
to Wazuh | Mailing List
Hi Gabriel,

Unfortunately, I don't see the corresponding events in that day's ossec-alerts.json file.
However, I do see the CVEs in the ossec-archives.json file for the relevant day, but without any alert entries.

From the alert logs for that day, I only see one entry which has "vulnerability" inside:

{
  "timestamp": "2025-11-12T13:22:47.249+0100",
  "rule": {
    "level": 4,
    "id": "11",
    "mail": false,
    "groups": [
      "stats"
    ]
  },
  "agent": {
    "id": "003",
    "name": "ouragent"
  },
  "manager": {
    "name": "the -conneted-server"
  },
  "id": "1762950167.8398389",
  "cluster": {
    "name": "wazuhxxxx",
    "node": " the -conneted-serve  "
  },
  "full_log": "The average number of logs between 13:00 and 14:00 is 1081. We reached 2704.",
  "decoder": {
    "name": "json"
  },
  "location": "vulnerability-detector"
}


So I think the problem is not related to reaching an index or shard maximum.

Regards

MaP

Gabriel Emanuel Valenzuela

Nov 13, 2025, 7:28:18 AM
to Wazuh | Mailing List
Can you check whether the wazuh-alerts-* indices contain this alert and the others? You can see them on the Discover tab in the Dashboard.
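Instead of going through Discover, the indexer can also be asked directly. A sketch: the helper only builds the query URL, and the host and credentials in the commented command are placeholders.

```shell
# Build a search URL that asks the wazuh-alerts-* indices for events
# whose location field is vulnerability-detector.
vd_search_url() {
    printf '%s/wazuh-alerts-*/_search?q=location:vulnerability-detector&size=5&pretty' "$1"
}

# Live usage (placeholder host and credentials):
# curl -sk -u admin:admin "$(vd_search_url https://indexer:9200)"
```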

MaP

Nov 14, 2025, 12:00:19 AM
to Wazuh | Mailing List
Hi Gabriel,

yes, I can see this alert in the Discover tab, and a similar log for the other agent I mentioned in my second post.
But I can't see any other alerts from the location "vulnerability-detector", only the rule.id 11 at level 4.

Regards
MaP

Gabriel Emanuel Valenzuela

Nov 14, 2025, 3:17:20 PM
to Wazuh | Mailing List

Did you make any recent changes to the alert configuration? Also, could you please verify if there is any filter or rule in place that might be blocking the alerts?

When you have a moment, could you repeat the test and share with me the alerts.json file corresponding to the day of the test? That will help me review the event flow in detail.

MaP

Nov 18, 2025, 9:36:53 AM
to Wazuh | Mailing List
Hi Gabriel,

Sorry for the late reply; until now I was busy with other things.
We haven't overwritten any built-in rules, meaning we don't have `overwrite="yes"` in any rule.
Of course, we have custom rules, but they all have their own rule IDs.
I'm not aware of any filters, but what kind of filters are you referring to?

I'll have to check if I can send you the alerts.json file; sorry about that.

But I have a question regarding the structure of the vulnerability scanner logs.
We've written a JSON decoder that we need due to a specific processing requirement:


<decoder name="xyz">
 <parent>json</parent>
 <use_own_name>true</use_own_name>
 <prematch type="pcre2">^{\s*\"wi</prematch>
 <plugin_decoder>JSON_Decoder</plugin_decoder>
 <type>ossec</type>
</decoder>


I'm worried our prematch might be matching too much.
If I knew what a raw vulnerability detection log looks like, I could quickly check whether the decoder is the problem.
I ask because the rules in 0520-vulnerability-detector_rules.xml contain <decoded_as>json</decoded_as>.
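One cheap check that does not require knowing the raw log format is to test whether the decoder's prematch could even fire on a vulnerability payload, which (per the archives.json entry earlier in the thread) starts with {"vulnerability":…. The snippet below approximates the pcre2 prematch ^{\s*\"wi with a portable ERE; the "wi_example" key is a hypothetical payload of the kind the prematch presumably targets.

```shell
# Approximate the decoder prematch ^{\s*\"wi with a POSIX ERE and
# test which payloads it would claim.
matches_prematch() {
    printf '%s' "$1" | grep -qE '^\{[[:space:]]*"wi' && echo yes || echo no
}

matches_prematch '{ "wi_example": 1 }'                         # hypothetical target payload
matches_prematch '{"vulnerability":{"cve":"CVE-2024-0000"}}'   # vulnerability-detector shape
```

If the second call prints no, the prematch alone should not intercept vulnerability events; /var/ossec/bin/wazuh-logtest can then confirm which decoder and rule actually claim a pasted event.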

Regards

MaP

 

Gabriel Emanuel Valenzuela

Nov 24, 2025, 8:44:15 AM
to Wazuh | Mailing List

Hi MaP,

I was on PTO until today.

It’s likely that the decoder is causing the alerts to be blocked. Could you please check it and perform a test with the decoder disabled in order to confirm?


MaP

Nov 28, 2025, 7:44:00 AM
to Wazuh | Mailing List
Hi Gabriel,

I've been very busy this week, so I'm only getting around to replying now. Sorry about that.

I temporarily commented out the decoder and found another client where I was able to fix vulnerabilities. Unfortunately, I'm still not getting any events, so I don't think that was the issue.
It's strange that a second Wazuh instance we have here, also running version 4.12, is generating vulnerability events.

Furthermore, I don't seem to be the only one with this problem:

Any other ideas? Perhaps a faulty package installation? What else could I check?

Regards

MaP

Gabriel Emanuel Valenzuela

Nov 28, 2025, 11:36:43 AM
to Wazuh | Mailing List

Hi MaP,

The issue you mentioned does not appear to be related to your current problem. If you can run the test I mentioned earlier, it will help the team identify a possible root cause.

If alerts are being generated, the next step is to check Filebeat. If Filebeat is working correctly, we should verify the connectivity with the indexer. If connectivity is fine, then we should confirm that the indexer is allowing the creation of new documents.
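The chain above can be sketched as three small checks, one per stage. The indexer host, the credentials, and the probe index name are placeholders for illustration.

```shell
# Three checks along the alert pipeline, one function per stage.
# Host, credentials, and the probe index name are placeholders.
check_local_alerts() {     # stage 1: are alerts written on the manager?
    grep -m1 '"location":"vulnerability-detector"' /var/ossec/logs/alerts/alerts.json
}
check_filebeat() {         # stage 2: can Filebeat reach the indexer?
    filebeat test output
}
check_indexer_write() {    # stage 3: does the indexer accept new documents?
    curl -sk -u admin:admin -X POST 'https://indexer:9200/pipeline-probe/_doc' \
         -H 'Content-Type: application/json' -d '{"probe":true}'
}
```

A successful stage-3 write answers with "result":"created"; anything else points at the indexer side rather than the manager.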
