Vulnerabilities module does not report updates


Henry Valero

Sep 24, 2024, 1:24:11 PM
to Wazuh | Mailing List
Hi,
I am having problems with the vulnerability module. Taking a single Firefox-related package as a reference: the Wazuh dashboard reports 7 high-severity vulnerabilities, the Inventory section shows the detail of those 7 vulnerabilities, and the Events section shows that the last event related to this package was on September 19.
Nothing has changed in Wazuh since it was upgraded to version 4.9.0.
I have updated Firefox, which is currently at version 130.0.1. When checking the log with cat /var/ossec/logs/ossec.log | grep indexer-connector, it shows me the following error: "indexer-connector: WARNING: Failed to sync agent '004' with the indexer."


Regards,
01-dashboard-vulnerabilidades.png
03-inventory-vulnerabilites.png
06-indexer(ossec.conf).png
05-indexer-conector.png
02-eventos-vulnerabilities.png
07-ruta-certificados.png
04-version-firefox.png

Damian Alfredo Mangold

Sep 24, 2024, 3:14:35 PM
to Wazuh | Mailing List
Hi Henry Valero,

After analyzing the three CVEs associated with Firefox, I can confirm that they are false positives. All of these have already been identified: the ones from 2008 have been resolved, and the one from 2014 is currently in the process of being corrected, with the fix expected to be published shortly.

Given this, we should investigate why these false positives are still active. At first glance, two possibilities come to mind: the less likely one is that your manager may not be downloading the vulnerability content updates; the second possibility is that the agent is not synchronizing due to a warning related to the indexer.

I will provide you with a set of commands to run, so we can check the status of the modules and work towards identifying the root cause of the issue. Please check the Wazuh logs:

Wazuh Indexer:
  • To check the status of the wazuh-indexer, run the following command:  
    • systemctl status wazuh-indexer
  • To check the cluster health, use:
    • curl -k -u <indexer_user>:<password> https://localhost:9200/_cluster/health?pretty
  • To check for errors in the indexer log:  
    • cat /var/log/wazuh-indexer/wazuh-cluster.log | grep -i -E "error|warn"

Wazuh Manager:
  • To check for errors in the manager log:
    • cat /var/ossec/logs/ossec.log | grep -i -E "error|warn"
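
For the first possibility (the manager not downloading vulnerability content updates), a quick additional check, assuming the default log location, is to look for vulnerability-scanner activity in the manager log:
  • cat /var/ossec/logs/ossec.log | grep -i vulnerability-scanner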

Henry Valero

Sep 24, 2024, 5:15:01 PM
to Wazuh | Mailing List
Hi Damian,
I found the errors shown in the attached files. How can I solve them? I don't understand what they mean; could you help me by reviewing them, please?

Regards,
Henry
wazuh-cluster.log
status-wazuh-indexer.png
warning-02.png
error-03.png
error-04.png
curl-XGET.png
error-01.png
ossec.log

Damian Alfredo Mangold

Sep 25, 2024, 6:42:52 AM
to Wazuh | Mailing List
Hi Henry Valero,

Thank you very much for sharing the log files. I see that there are many warnings and errors.

While I analyze the log files, could you please give me details of your environment? Is it an all-in-one or a distributed deployment?

Francisco Tuduri

Sep 25, 2024, 12:09:31 PM
to Wazuh | Mailing List
Hello Henry,

We've been reviewing the logs and noticed multiple errors in different components of Wazuh. To help us get a clearer understanding of the situation, and in addition to what Damian has already requested, could you provide some extra background on your environment?

Here are a few points that would help us:
  • What version were you using initially? Was everything functioning correctly before the update?
  • When did you update to version 4.9, and what process did you follow for the update?
  • Were there any issues or challenges encountered during the upgrade process?
  • Is there any additional information you can share that might assist us in diagnosing the situation promptly?

Regards!

Henry Valero

Sep 25, 2024, 1:05:46 PM
to Wazuh | Mailing List
Wazuh is deployed as an all-in-one.

Regards,

Henry Valero

Sep 25, 2024, 1:22:16 PM
to Wazuh | Mailing List
Hi Francisco,
Q: What version were you using initially? Was everything functioning correctly before the update?
A: The version was 4.7 and everything worked perfectly.

Q: When did you update to version 4.9, and what process did you follow for the update?
A: The upgrade was a month ago, and I followed the detailed steps for updating each component.

Q: Were there any issues or challenges encountered during the upgrade process?
A: The vulnerability module did not load or display information until I added the indexer user and password to the keystore.

Q: Is there any additional information you can share that might assist us in diagnosing the situation promptly?
A: I have noticed that every time I restart the wazuh-manager it seems to force an update and the vulnerability module refreshes, but it still does not show the final vulnerability status of the agents.

Regards,

Francisco Tuduri

Sep 25, 2024, 5:30:43 PM
to Wazuh | Mailing List
Hi Henry,

Thank you for the information provided, and I appreciate your patience while we troubleshoot this situation. Before we can address the vulnerability issues, we need to resolve several errors.

Primarily, we are encountering constant errors accessing the Wazuh DB. Let's try the following:

Share the output of the following command to check if wazuh-db is running:

/var/ossec/bin/wazuh-control status

And this command to see the permissions and current files for the db:

ls -l /var/ossec/queue/db/


Additionally, regarding the indexer, please execute the following commands to gain a better understanding:

GET /_cat/shards

GET _cat/indices/?v


You can execute these commands from the dashboard. Go to Indexer Management -> Dev Tools.
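
If the Dev Tools console is not convenient, a rough curl equivalent from the server itself would be the following (the <user> and <password> placeholders stand for your indexer credentials, not literal values):

curl -k -u <user>:<password> https://localhost:9200/_cat/shards?v
curl -k -u <user>:<password> https://localhost:9200/_cat/indices?v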

Regards!

Henry Valero

Sep 25, 2024, 7:22:46 PM
to Wazuh | Mailing List
Hi Francisco,

I attach the results obtained from the suggested actions. What else can I do to solve the problem?

Regards,
wazuh-control-status.png
get-shards.log
queue-db.png
get-indices.log

Francisco Tuduri

Sep 26, 2024, 9:16:28 AM
to Wazuh | Mailing List
Hi Henry,
I'm checking with the team to determine the best way to proceed. I’ll update you shortly.

Henry Valero

Sep 26, 2024, 4:20:48 PM
to Wazuh | Mailing List
Hi, any ideas on how to fix the reported issue?

Regards,

Francisco Tuduri

Sep 26, 2024, 5:21:19 PM
to Wazuh | Mailing List
Hi,
There is a problem with the Wazuh DB daemon; it seems to be unresponsive. This service is crucial for the correct operation of many components.
I will ask you to run a few more tests:

1-Enable debug logging for wazuh_db and wazuh_modules

Add the following lines to /var/ossec/etc/local_internal_options.conf:
wazuh_db.debug=2
wazuh_modules.debug=2


(You can refer to the documentation for more details: https://documentation.wazuh.com/current/user-manual/reference/internal-options.html)
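
If it is easier, an equivalent way to append these options from the terminal (assuming the default install path) is:

echo "wazuh_db.debug=2" >> /var/ossec/etc/local_internal_options.conf
echo "wazuh_modules.debug=2" >> /var/ossec/etc/local_internal_options.conf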

2-Restart the manager

sudo systemctl restart wazuh-manager.service


After completing these steps, let's wait for about half an hour and then share the full /var/ossec/logs/ossec.log.
We are trying to see why and at what point Wazuh DB stops responding.

Additionally, at this time, please check that all the Wazuh services are running by executing:

ps -aux | grep -i wazuh-

Sorry for the inconvenience, and thanks for your cooperation while we work on this.

Regards!

Henry Valero

Sep 26, 2024, 6:31:34 PM
to Wazuh | Mailing List
Hi Francisco,
I attach the results of the last command after enabling the debugging option.

Regards,
ps-aux.log

Francisco Tuduri

Sep 27, 2024, 7:53:40 AM
to Wazuh | Mailing List
Hi,
Please share also the full ossec.log file: /var/ossec/logs/ossec.log
Thanks

Henry Valero

Sep 27, 2024, 10:05:32 AM
to Wazuh | Mailing List
Hi Francisco, I attach the log.

Regards,

ossec.zip

Francisco Tuduri

Sep 27, 2024, 4:16:49 PM
to Wazuh | Mailing List
Hi Henry,

Thanks for sending the files.

The latest log looks much better—there are no longer any errors related to the Wazuh DB. It seems that after the last restart, everything started working. However, we still don’t know what caused the "Unable to connect to socket 'queue/db/wdb'" errors in the previous log. Please keep an eye on the log periodically in case the issue resurfaces.

Also, keep the debug options enabled until all problems are fully resolved. This will allow us to review the log in detail should the issue reoccur.
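
If the Wazuh DB errors do resurface, a quick first check (a sketch, assuming the default install path) is to confirm the daemon is running and its socket exists:

/var/ossec/bin/wazuh-control status | grep wazuh-db
ls -l /var/ossec/queue/db/wdb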

In this new log, there are several messages like the following:

2024/09/27 03:18:26 indexer-connector[134657] indexerConnector.cpp:437 at operator()(): DEBUG: Syncing agent '012' with the indexer.
2024/09/27 03:18:26 indexer-connector[134657] indexerConnector.cpp:446 at operator()(): WARNING: Failed to sync agent '012' with the indexer.
2024/09/27 03:18:26 indexer-connector[134657] indexerConnector.cpp:447 at operator()(): DEBUG: Error: No available server
2024/09/27 03:18:27 wazuh-modulesd:vulnerability-scanner[134657] scanOrchestrator.hpp:299 at run(): DEBUG: Event type: 11 processed
2024/09/27 03:18:27 indexer-connector[134657] indexerConnector.cpp:129 at abuseControl(): DEBUG: Agent '012' sync omitted due to abuse control.


It seems the indexer-connector is failing to connect to the Indexer. One requirement for the manager to sync correctly with the Indexer is that the cluster health must be green. In one of your previous messages it appeared as yellow.

Please check the cluster health again. You can do this via the Dashboard under Indexer Management -> Dev Tools using the following command:

GET _cluster/health

This will provide some basic status information. For reference, here’s the documentation: https://opensearch.org/docs/latest/api-reference/cluster-api/cluster-health/
If there are unassigned shards, take a look at the commands in this guide: https://opensearch.org/docs/latest/api-reference/cluster-api/cluster-allocation/

Let me know what you find.

Regards!

Henry Valero

Sep 27, 2024, 6:10:40 PM
to Wazuh | Mailing List
Francisco,

The status returns yellow. How do I fix it?

Regards,
wazuh-get-cluster.png

Francisco Tuduri

Sep 30, 2024, 7:59:43 AM
to Wazuh | Mailing List
Hi Henry, 

That is most likely due to the 31 unassigned shards in your cluster.

Please run the following command to get detailed information about why the shards are unassigned:

GET _cluster/allocation/explain?pretty

Additionally, to check the available disk space in the system, run this command:

GET _cat/allocation?v

These commands will give you insights into both the allocation reasons and your system's resource availability.

Regards

Henry Valero

Sep 30, 2024, 6:53:23 PM
to Wazuh | Mailing List
Hi Francisco,

How do I fix it?

Regards,

capture-01.png
capture-02.png

Francisco Tuduri

Oct 1, 2024, 8:39:13 AM
to Wazuh | Mailing List
Hello Henry,

The key parts here are that the index .opendistro-ism-managed-index-history-2024.08.31-000058 has unassigned shards, that it "cannot allocate because allocation is not permitted to any of the nodes", and that "a copy of this shard is already allocated to this node".
Since this is an AIO deployment with only one node, we should set the replicas to 0.

Execute the following command to do that:

PUT /.opendistro-ism-managed-index-history-*/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}



Then check the shards again:

GET /_cat/shards?v

They all have to be STARTED, there should be no UNASSIGNED shard.

And check again the health of the cluster:

GET _cluster/health


Let me know how it goes.

Henry Valero

Oct 1, 2024, 10:45:51 AM
to Wazuh | Mailing List
Hello Francisco,

I did what you suggested and these are the results; two shards still appear as unassigned. Also, to validate the vulnerabilities module, I uninstalled a software package and the event can be seen in Threat Hunting, but the vulnerabilities module does not update its information, and when checking the latest events it has not reported any for days. What can I do to solve or correct the operation of the vulnerabilities module?

Regards,
0-evento-vulnerabilidades.png
shards-unassigned.png
libre-office.png
eventos-threat Hunting.png

Jörg Schindler

Oct 1, 2024, 11:36:52 AM
to Wazuh | Mailing List



Hi,

I had a similar problem last week, which is now resolved.

I also had unassigned shards. I stopped fluent-bit (in your case, it might be filebeat) to prevent new events from being written to the indexer. Then I deleted all yellow indices and fixed my index template, which was faulty (I had set the replicas too high. Replica 1 worked for me).

I troubleshot the Vulnerability module for a long time. Sometimes it worked, sometimes it didn't. Eventually, I noticed that depending on which Wazuh manager cluster node the agent was connected to, it either worked or didn't.

The solution was that I reissued all the certificates using the wazuh-certs-tool. Additionally, I created an index admin user with the name "indexer" and added it to the Wazuh manager keystore, as described in the documentation.

I restarted all the servers briefly, and everything worked again.

I hope this helps you.

Francisco Tuduri

Oct 1, 2024, 1:30:44 PM
to Wazuh | Mailing List
Hello Henry,

Since there are still unassigned shards, the cluster health is most likely yellow, and that is why you are not seeing the updates in the Vulnerability Detection Dashboard and Inventory. The events that you show (Application uninstalled...) take a different path to get to the manager. The data in the VD Dashboard and Inventory are sent to the indexer directly by the manager, and only when the cluster is green.
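
As a side check (a sketch, assuming the default vulnerability state index naming in Wazuh 4.9), you can confirm whether that index is receiving any data with:

GET _cat/indices/wazuh-states-vulnerabilities*?v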

Let's do this:

Change the system index template so that future indices start with 0 replicas:

PUT /_index_template/system_index_template
{
  "index_patterns": [".*"],
  "template": {
    "settings": {
      "number_of_replicas": 0
    }
  }
}



And set all current indices to have 0 number_of_replicas:

PUT /*/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}



Then, as before, check the shards and the cluster health:


GET /_cat/shards?v

GET _cluster/health


For the vulnerability module to work correctly, the cluster health must be green; hopefully these last commands will fix that.

Regards!

Henry Valero

Oct 1, 2024, 4:40:45 PM
to Wazuh | Mailing List
Hello Francisco,
I get this error when executing the commands.

Regards,
put-settings.png

Francisco Tuduri

Oct 2, 2024, 3:10:28 PM
to Wazuh | Mailing List
Ok, let's do this:

Execute this to set the default number of replicas to 0 for new indices.
PUT _cluster/settings
{
  "persistent": {
    "cluster.default_number_of_replicas": 0
  }
}



Then, let's try again to set the number of replicas to 0 for the existing indices.
We will have to use the admin certificates to do so.
From the terminal, execute the following command:

curl --cacert /etc/wazuh-indexer/certs/root-ca.pem --cert /etc/wazuh-indexer/certs/admin.pem --key /etc/wazuh-indexer/certs/admin-key.pem -X PUT https://127.0.0.1:9200/.opendistro*/_settings -H 'Content-Type: application/json' -d '{"index":{"number_of_replicas":0}}'

After this, check again the shards, indices and health:

GET /_cat/shards?v
GET _cat/indices/?v
GET _cluster/health


Let me know how it goes.

John

Oct 14, 2024, 4:49:42 AM
to Wazuh | Mailing List
I have the same config and the same issue, which brought me to this thread.

This command helps to fix the issue for one day:
PUT /.opendistro-ism-managed-index-history-*/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}


The command in Francisco's last message did not stop the issue from recurring every day. The next day the cluster status is yellow again and I have to run the command again.
How can I enforce the number of replicas for new indices?

Francisco Tuduri

Oct 14, 2024, 4:55:38 PM
to Wazuh | Mailing List

Hi John!
To enforce the number of replicas for new indices you should define an index template.
For example, for .opendistro-ism-managed-index-history-* you can use the following command:

PUT _index_template/ism-managed-index-history-template
{
  "index_patterns": [
    ".opendistro-ism-managed-index-history-*"
  ],
  "template": {
    "settings": {
      "number_of_replicas": 0
    }
  }
}
You can see the reference for this in the OpenSearch index templates documentation.

You can also view the current index templates with:
GET /_index_template/

Regards!

John

Oct 24, 2024, 1:21:53 AM
to Wazuh | Mailing List
Thanks for the suggestion, Francisco! It did not help, though:

Here's how my template for .opendistro-ism-managed-index-history-* looks:
{
  "index_templates": [
    {
      "name": "ism-managed-index-history-template",
      "index_template": {

        "index_patterns": [
          ".opendistro-ism-managed-index-history-*"
        ],
        "template": {
          "settings": {
            "index": {
              "number_of_replicas": "0"
            }
          }
        },
        "composed_of": [],
        "priority": 1,
        "_meta": {
          "flow": "simple"
        }
      }
    },

Yet, the next day I have:
  "cluster_name": "wazuh-cluster",
  "status": "yellow",

GET /_cat/shards?v shows this problem:
.opendistro-ism-managed-index-history-2024.10.23-000096 0     p      STARTED                     127.0.0.1 node-1
.opendistro-ism-managed-index-history-2024.10.23-000096 0     r      UNASSIGNED                            


Then I ran this:
PUT /.opendistro-ism-managed-index-history-*/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}

and got:
  "cluster_name": "wazuh-cluster",
  "status": "green",
and no issues with shards.

Do you have any suggestions as to how to fix it?

John

Nov 2, 2024, 12:56:55 AM
to Wazuh | Mailing List
We are still facing this issue every day. I'd appreciate any leads to a fix.