Vulnerability Detection: working only for one agent


Giovanni

Oct 21, 2024, 6:26:58 AM
to Wazuh | Mailing List
Good morning,
after upgrading to Wazuh 4.9, I re-enabled the “Vulnerability Detector” module in the ossec.conf file.
However, at the moment it seems that only one of my agents (ID 012) is using the vulnerability detector functions, even though they all share the same configuration (pushed via the server).

Could this be a problem related to the indexer connector?

cat /var/ossec/logs/ossec.log | grep indexer-connector
...
2024/10/21 12:21:40 indexer-connector[726004] indexerConnector.cpp:446 at operator()(): WARNING: Failed to sync agent '372' with the indexer.
2024/10/21 12:21:40 indexer-connector[726004] indexerConnector.cpp:447 at operator()(): DEBUG: Error: No available server
2024/10/21 12:21:40 indexer-connector[726004] indexerConnector.cpp:129 at abuseControl(): DEBUG: Agent '372' sync omitted due to abuse control.
...


I suspect the indexer connector because the vulnerability scanner itself seems to work:
cat /var/ossec/logs/ossec.log | grep vulnerability
..
2024/10/21 12:23:15 wazuh-modulesd:vulnerability-scanner[726004] scanOrchestrator.hpp:299 at run(): DEBUG: Event type: 3 processed
2024/10/21 12:23:16 wazuh-modulesd:vulnerability-scanner[726004] osScanner.hpp:346 at handleRequest(): DEBUG: Vulnerability scan for OS 'linux' on Agent '568' has completed.
2024/10/21 12:23:16 wazuh-modulesd:vulnerability-scanner[726004] eventDetailsBuilder.hpp:101 at handleRequest(): DEBUG: Building event details for component type: 2
2024/10/21 12:23:16 wazuh-modulesd:vulnerability-scanner[726004] scanOrchestrator.hpp:299 at run(): DEBUG: Event type: 3 processed

..
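To gauge how widespread the sync failures are, I counted the indexer-connector warnings per agent. A small sketch (the heredoc below holds hypothetical sample lines standing in for /var/ossec/logs/ossec.log):

```shell
# Count indexer-connector sync failures per agent ID.
# Sample lines are illustrative; point the pipeline at
# /var/ossec/logs/ossec.log on a real manager.
cat <<'EOF' > /tmp/ossec-sample.log
2024/10/21 12:21:40 indexer-connector[726004] indexerConnector.cpp:446 at operator()(): WARNING: Failed to sync agent '372' with the indexer.
2024/10/21 12:22:10 indexer-connector[726004] indexerConnector.cpp:446 at operator()(): WARNING: Failed to sync agent '372' with the indexer.
2024/10/21 12:23:05 indexer-connector[726004] indexerConnector.cpp:446 at operator()(): WARNING: Failed to sync agent '415' with the indexer.
EOF
grep "Failed to sync agent" /tmp/ossec-sample.log \
  | sed "s/.*agent '\([0-9]*\)'.*/\1/" \
  | sort | uniq -c | sort -rn
```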

Do you have any idea what I can check?

hasitha.u...@wazuh.com

Oct 21, 2024, 7:49:55 AM
to Wazuh | Mailing List
Hi Giovanni,

Just to validate, did you update the <vulnerability-detection> and <indexer> blocks in /var/ossec/etc/ossec.conf based on this document?
Ref: https://documentation.wazuh.com/current/upgrade-guide/upgrading-central-components.html#configuring-vulnerability-detection


2024/10/21 12:21:40 indexer-connector[726004] indexerConnector.cpp:129 at abuseControl(): DEBUG: Agent '372' sync omitted due to abuse control.
This log line appears because there is a cool-down period that prevents the same agent from re-syncing with the indexer too often; this avoids overloading the indexer with requests. If everything is OK, the module will try again later.

Can you verify the health of your Wazuh Indexer cluster?
For example, run GET _cluster/health. Your cluster must be in green status; otherwise the indexer module won't sync the vulnerabilities.
Navigate to Indexer management -> Dev Tools -> GET _cluster/health

Also, let us know the OS versions of the agents, and whether the Wazuh manager version is 4.9.0 or 4.9.1:
/var/ossec/bin/wazuh-control info

Let me know how it goes.

Regards,
Hasitha Upekshitha

Giovanni

Oct 21, 2024, 8:57:18 AM
to Wazuh | Mailing List
Hi Hasitha,
thanks for the reply.

This is my vulnerability detection block:
  <vulnerability-detection>
    <enabled>yes</enabled>
    <index-status>yes</index-status>
    <feed-update-interval>60m</feed-update-interval>
  </vulnerability-detection>

And this is my indexer block:
  <indexer>
    <enabled>yes</enabled>
    <hosts>
      <host>https://127.0.0.1:9200</host>
    </hosts>
    <ssl>
      <certificate_authorities>
        <ca>/etc/filebeat/certs/root-ca.pem</ca>
      </certificate_authorities>
      <certificate>/etc/filebeat/certs/wazuh-server.pem</certificate>
      <key>/etc/filebeat/certs/wazuh-server-key.pem</key>
    </ssl>
  </indexer>


It seems that the only difference is in the certificate section (filebeat.pem vs. wazuh-server.pem), but the files are there:
ll /etc/filebeat/certs/
total 12
root-ca.pem
wazuh-server-key.pem
wazuh-server.pem

I can't find Dev Tools under Indexer management, but it is under Server Management (this is not a clustered installation). However, running GET _cluster/health I receive this:
{
  "error": "3013",
  "message": {
    "title": "Not Found",
    "detail": "404: Not Found"
  }
}


The Wazuh Manager is 4.9.0; the agents are mostly 4.3.x.
The one whose vulnerability logs I get is a Linux host with agent v4.3.10. I have many others with the same OS and agent version, but I don't get their vulnerabilities.

Giovanni

Oct 28, 2024, 4:14:20 AM
to Wazuh | Mailing List
Any help?

hasitha.u...@wazuh.com

Oct 28, 2024, 7:40:22 AM
to Wazuh | Mailing List
Hi Giovanni,

To check the cluster health, you can follow these options.
GUI:

Navigate to Indexer management -> Dev Tools -> GET _cluster/health

CLI:
curl -XGET -k -u user:pass "https://<Indexer_IP>:9200/_cluster/health"

Please provide the output of the above command so we can check the cluster health.

Also try upgrading one of your agents and check whether that solves the issue. If it works fine, then upgrade all agents.
Ref: https://documentation.wazuh.com/current/upgrade-guide/wazuh-agent/index.html

Additionally, please share the OS and OS version of the endpoints.

Let me know the results of the above commands and details so I can assist further.

Regards,
Hasitha Upekshitha

[Attachment: Screenshot 2024-10-28 170150.png]

Giovanni

Nov 13, 2024, 11:33:23 AM
to Wazuh | Mailing List
Here's my status:

{
  "cluster_name": "wazuh-cluster",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "discovered_master": true,
  "discovered_cluster_manager": true,
  "active_primary_shards": 244,
  "active_shards": 244,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 47,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 83.8487972508591
}

The most up-to-date agent versions are 4.9.0 and 4.7.5, on Windows 11 Pro and Windows Server 2019.
The most recently registered ones are 4.7.5, on Oracle Linux Server 9.4.

None of those have events in vulnerability detection (impossible :D).

The only one working is a CentOS 7.9 host with agent version 4.3.10,
registration date Jan 13, 2023.

Giovanni

Nov 19, 2024, 8:41:26 AM
to Wazuh | Mailing List
Any update on this?

Sebastian Falcone

Nov 19, 2024, 10:24:37 AM
to Wazuh | Mailing List
Hi Giovanni, I will continue helping you with this issue.

The problem can be seen in this log:

2024/10/21 12:21:40 indexer-connector[726004] indexerConnector.cpp:447 at operator()(): DEBUG: Error: No available server

For the Vulnerability Detection module to work properly, the cluster should be in a green state. Yours is in yellow:

{
  "cluster_name": "wazuh-cluster",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "discovered_master": true,
  "discovered_cluster_manager": true,
  "active_primary_shards": 244,
  "active_shards": 244,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 47,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 83.8487972508591
}

Here you can find a guide on how to determine why the cluster is yellow. My suspicion is the unassigned shards ("unassigned_shards": 47).
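To see exactly which shards are unassigned and why, you can filter the _cat/shards output. A sketch with illustrative sample rows (on a live cluster, the input would come from GET _cat/shards?h=index,shard,prirep,state,unassigned.reason):

```shell
# Filter unassigned shards and their reason from _cat/shards output.
# The heredoc holds illustrative sample rows; on a live cluster fetch:
#   curl -k -u user:pass "https://<Indexer_IP>:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason"
cat <<'EOF' > /tmp/shards.txt
wazuh-alerts-4.x-2024.10.21 0 p STARTED
wazuh-alerts-4.x-2024.10.21 0 r UNASSIGNED CLUSTER_RECOVERED
.opendistro-alerting-config 0 p STARTED
.opendistro-alerting-config 0 r UNASSIGNED CLUSTER_RECOVERED
EOF
awk '$4 == "UNASSIGNED" {print $1, $5}' /tmp/shards.txt
```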

Once you have that information I will further assist you

Giovanni

Nov 27, 2024, 3:35:52 AM
to Wazuh | Mailing List

Thanks Sebastian for helping me.

So, in Dev Tools, I ran the command to check the reason for the shard problem (GET /_cluster/allocation/explain) and got this output:

{
  "index": ".opendistro-alerting-config",
  "shard": 0,
  "primary": false,
  "current_state": "unassigned",
  "unassigned_info": {
    "reason": "CLUSTER_RECOVERED",
    "at": "2024-11-26T15:17:29.047Z",
    "last_allocation_status": "no_attempt"
  },
  "can_allocate": "no",
  "allocate_explanation": "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions": [
    {
      "node_id": "-tcNKoecTxiBEvIpcCcJBQ",
      "node_name": "node-1",
      "transport_address": "127.0.0.1:9300",
      "node_attributes": {
        "shard_indexing_pressure_enabled": "true"
      },
      "node_decision": "no",
      "deciders": [
        {
          "decider": "enable",
          "decision": "NO",
          "explanation": "replica allocations are forbidden due to cluster setting [cluster.routing.allocation.enable=primaries]"
        },
        {
          "decider": "same_shard",
          "decision": "NO",
          "explanation": "a copy of this shard is already allocated to this node [[.opendistro-alerting-config][0], node[-tcNKoecTxiBEvIpcCcJBQ], [P], s[STARTED], a[id=DETCwYoTRWO8iF8GNHhqIg]]"
        }
      ]
    }
  ]
}

The shard in question seems to be from yesterday, so I assume it gets updated gradually from day to day.
What could I do?

Sebastian Falcone

Nov 27, 2024, 7:13:00 AM
to Wazuh | Mailing List
First of all, in 4.10.0 we will enable indexing in clusters with yellow status, which will mitigate this kind of problem.

To solve your problem, you will need to set the number of replica shards to 0:
https://youdidwhatwithtsql.com/elasticsearch-turn-index-replicas/2058/
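One more thing worth checking: the `enable` decider in your allocation-explain output reports cluster.routing.allocation.enable=primaries. That setting is typically applied temporarily during an indexer upgrade and is supposed to be set back afterwards. A hedged sketch of the reset call (dry run: the leading echo only prints the command; remove it to actually send the request, and note the cert paths assume the default Wazuh indexer admin certificates):

```shell
# Dry run: prints the curl command that would re-enable all shard
# allocation. Remove the leading `echo` to actually send the request.
echo curl --cacert /etc/wazuh-indexer/certs/root-ca.pem \
  --cert /etc/wazuh-indexer/certs/admin.pem \
  --key /etc/wazuh-indexer/certs/admin-key.pem \
  -X PUT "https://127.0.0.1:9200/_cluster/settings" \
  -H "Content-Type: application/json" \
  -d '{"persistent":{"cluster.routing.allocation.enable":"all"}}'
```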

Giovanni

Nov 27, 2024, 11:21:34 AM
to Wazuh | Mailing List
Hi Sebastian,
thank you so much, and sorry to bother you again!
I tried the command from the link, but the status does not change; in fact, the command itself seems to fail:

curl -XPUT http://127.0.0.1:9200/_settings -d '
> {
>     "index" : {
>         "number_of_replicas" : 0
>     }
> }
> '
curl: (52) Empty reply from server

So I tried in Dev Tools:
PUT _settings
{
  "index": {
    "number_of_replicas": 0
  }
}

but with that, it seems I don't have permission:

{
  "error": {
    "root_cause": [
      {
        "type": "security_exception",
        "reason": "no permissions for [] and User <REDACTED-my account with root privileges>"
      }
    ]
  },
  "status": 403
}

So I tried:
PUT wazuh-alerts-4.x-*/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}

and received:
{
  "acknowledged": true
}


but, even after restarting the wazuh-manager, the status doesn't change and is still yellow!

Sebastian Falcone

Nov 28, 2024, 7:05:23 AM
to Wazuh | Mailing List
Hi Giovanni. It's no problem, we are here to help

Please try this (using the admin certificates)

curl --cacert /etc/wazuh-indexer/certs/root-ca.pem --cert /etc/wazuh-indexer/certs/admin.pem  --key /etc/wazuh-indexer/certs/admin-key.pem -X PUT "https://localhost:9200/*/_settings" -H 'Content-Type: application/json' -d '{
  "index" : {
    "number_of_replicas" : 0
  }
}'

Giovanni

Dec 3, 2024, 5:22:12 AM
to Wazuh | Mailing List
Hi Sebastian,
thank you for your patience,
I ran the following command from the terminal:
curl --cacert /etc/wazuh-indexer/certs/root-ca.pem --cert /etc/wazuh-indexer/certs/admin.pem --key /etc/wazuh-indexer/certs/admin-key.pem -X PUT "https://127.0.0.1:9200/*/_settings" -H 'Content-Type: application/json' -d '{
  "index" : {
    "number_of_replicas" : 0
  }
}'

And I received, as feedback, {"acknowledged":true}

However, I don't notice any difference with either
GET _cluster/health
or
GET /_cluster/allocation/explain

Giovanni

Dec 3, 2024, 6:27:07 AM
to Wazuh | Mailing List
I made a backup copy of the machine and connected only one device to it.

There were several indices in a bad state, all from opendistro, so I set their number of replicas with this command:
curl --cacert /etc/wazuh-indexer/certs/root-ca.pem --cert /etc/wazuh-indexer/certs/admin.pem --key /etc/wazuh-indexer/certs/admin-key.pem -X PUT "https://127.0.0.1:9200/.opendistro*/_settings?pretty" -H 'Content-Type: application/json' -d '{ "number_of_replicas": 0 }'

Now the result is this:

curl --cacert /etc/wazuh-indexer/certs/root-ca.pem --cert /etc/wazuh-indexer/certs/admin.pem --key /etc/wazuh-indexer/certs/admin-key.pem -X GET "https://127.0.0.1:9200/_cluster/allocation/explain?pretty"
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "unable to find any unassigned shards to explain [ClusterAllocationExplainRequest[useAnyUnassignedShard=true,includeYesDecisions?=false]"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "unable to find any unassigned shards to explain [ClusterAllocationExplainRequest[useAnyUnassignedShard=true,includeYesDecisions?=false]"
  },
  "status" : 400
}

Which, despite how it looks, should be right: in fact, the status is now green.

I am now checking whether the vulnerability detector module logs show anything important, but so far nothing...

Sebastian Falcone

Dec 5, 2024, 7:00:19 AM
to Wazuh | Mailing List
Hi! Sorry for the delay.

If you want to share the ossec.log file with me, I can analyze it for you.

It would be best to increase the debug level of modulesd first:
- edit /var/ossec/etc/local_internal_options.conf
- set wazuh_modules.debug=2
- restart the manager
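The steps above, sketched against a scratch copy of the file (on the real manager, edit /var/ossec/etc/local_internal_options.conf in place and finish with systemctl restart wazuh-manager):

```shell
# Set wazuh_modules.debug=2, updating the line if present or appending it.
# Uses a scratch copy so the sketch is safe to run anywhere.
CONF=/tmp/local_internal_options.conf
printf 'wazuh_modules.debug=0\n' > "$CONF"   # pretend an existing setting
if grep -q '^wazuh_modules.debug' "$CONF"; then
  sed -i 's/^wazuh_modules.debug=.*/wazuh_modules.debug=2/' "$CONF"
else
  echo 'wazuh_modules.debug=2' >> "$CONF"
fi
cat "$CONF"
# On the real manager: systemctl restart wazuh-manager
```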

