Get vulnerabilities via API


Jaime

Jul 9, 2024, 8:00:28 AM7/9/24
to Wazuh | Mailing List
Hi, how can I get the vulnerabilities via the API on Wazuh 4.8.0? All help is welcome.

Matias Braida

Jul 9, 2024, 6:33:53 PM7/9/24
to Wazuh | Mailing List
Hello Jaime,

On Wazuh version 4.8.0 the Wazuh manager API endpoint "/vulnerability" has been deprecated. The vulnerabilities are now indexed.

So you can query your vulnerabilities directly from the Wazuh indexer. The index "wazuh-states-vulnerabilities-*" is the one you need for this query.

Since the vulnerabilities are indexed, you have to use the indexer's REST API (OpenSearch), not the Wazuh manager API, to query them.

To build a query against the indexer you need the following information:
* Wazuh indexer IP
* Wazuh vulnerabilities index name: "wazuh-states-vulnerabilities-*"
* Indexer admin user credentials: the credentials you use to access the Wazuh dashboard (user "admin", password "admin" are the defaults)

The basic query command is the following:
curl -k -u <USER>:<PASSWORD> -X GET "https://<WAZUH_INDEXER_IP>:9200/wazuh-states-vulnerabilities-*/_search"

For filtering, you have several options, which are detailed in the OpenSearch documentation. Here is an example that filters vulnerabilities by agent ID (001 in this example):
curl -k -u admin:admin -X GET "https://localhost:9200/wazuh-states-vulnerabilities-*/_search?pretty" -H 'Content-Type: application/json' -d '{
    "query": {
        "match": {
            "agent.id": "001"
        }
    }
}'
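Other filters follow the same query DSL. As a hedged sketch (not an official Wazuh helper), here is how you might build the query body programmatically, combining the agent filter with an optional severity filter; the field names `agent.id` and `vulnerability.severity` are taken from the sample document in this thread, so adjust them if your schema differs:

```python
import json

def build_vuln_query(agent_id, severity=None):
    """Build an OpenSearch query body for the wazuh-states-vulnerabilities-* index.

    Field names are assumptions based on the sample document in this thread.
    """
    must = [{"match": {"agent.id": agent_id}}]
    if severity is not None:
        must.append({"match": {"vulnerability.severity": severity}})
    return {"query": {"bool": {"must": must}}}

# Pass the result as the -d payload of the curl command above:
print(json.dumps(build_vuln_query("001", severity="High"), indent=2))
```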


In the response, the items that match the query are under the key "hits.hits". This key is an array of items similar to the following:
{
    "_index": "wazuh-states-vulnerabilities-matias-x580vd",
    "_id": "005_macOS_CVE-2024-27817",
    "_score": 1.0044698E-4,
    "_source": {
        "agent": {
            "id": "001",
            "name": "This-MacBook-Pro.local",
            "type": "wazuh",
            "version": "v4.8.0"
        },
        "host": {
            "os": {
                "full": "macOS Sierra",
                "kernel": "16.7.0",
                "name": "macOS",
                "platform": "darwin",
                "type": "macos",
                "version": "10.12.6.16G29"
            }
        },
        "package": {
            "architecture": "x86_64",
            "name": "macOS Sierra",
            "type": "macos",
            "version": "10.12.6.16G29"
        },
        "vulnerability": {
            "category": "OS",
            "classification": "CVSS",
            "description": "The issue was addressed with improved checks. This issue is fixed in macOS Ventura 13.6.7, macOS Monterey 12.7.5, iOS 16.7.8 and iPadOS 16.7.8, tvOS 17.5, visionOS 1.2, iOS 17.5 and iPadOS 17.5, macOS Sonoma 14.5. An app may be able to execute arbitrary code with kernel privileges.",
            "detected_at": "2024-07-02T15:52:16.831Z",
            "enumeration": "CVE",
            "id": "CVE-2024-27817",
            "published_at": "2024-06-10T21:15:50Z",
            "reference": "http://seclists.org/fulldisclosure/2024/Jun/5, https://support.apple.com/en-us/HT214100, https://support.apple.com/en-us/HT214101, https://support.apple.com/en-us/HT214102, https://support.apple.com/en-us/HT214105, https://support.apple.com/en-us/HT214106, https://support.apple.com/en-us/HT214107, https://support.apple.com/en-us/HT214108, https://support.apple.com/kb/HT214100, https://support.apple.com/kb/HT214101, https://support.apple.com/kb/HT214102, https://support.apple.com/kb/HT214105, https://support.apple.com/kb/HT214106, https://support.apple.com/kb/HT214107, https://support.apple.com/kb/HT214108",
            "scanner": {
                "vendor": "Wazuh"
            },
            "score": {
                "base": 7.8,
                "version": "3.1"
            },
            "severity": "High"
        },
        "wazuh": {
            "cluster": {
                "name": "matias-X580VD"
            },
            "schema": {
                "version": "1.0.0"
            }
        }
    }
}
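Once you have the JSON response, a short script can pull out just the fields you care about. A minimal sketch, assuming the response shape shown above (the stub below mirrors the sample item, trimmed to the fields being read):

```python
def summarize_hits(response):
    """Return (cve_id, severity, package_name) tuples from a search response
    shaped like the example above."""
    out = []
    for hit in response.get("hits", {}).get("hits", []):
        src = hit["_source"]
        vuln = src["vulnerability"]
        out.append((vuln["id"], vuln["severity"], src["package"]["name"]))
    return out

# Tiny response stub following the structure of the sample item:
response = {"hits": {"hits": [{"_source": {
    "package": {"name": "macOS Sierra"},
    "vulnerability": {"id": "CVE-2024-27817", "severity": "High"},
}}]}}
print(summarize_hits(response))  # [('CVE-2024-27817', 'High', 'macOS Sierra')]
```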


Please tell me if this helps.
Regards

Jaime

Jul 10, 2024, 10:31:01 AM7/10/24
to Wazuh | Mailing List
Yup, it worked, thanks! Also, in the Wazuh dashboard vulnerability module, the dashboard shows nothing, but the Events tab shows some vulnerabilities:
Captura de pantalla 2024-07-10 162838.png
Captura de pantalla 2024-07-10 162930.png

Matias Braida

Jul 10, 2024, 4:18:07 PM7/10/24
to Wazuh | Mailing List
Let me ask the team. I will be back as soon as possible.

Matias Braida

Jul 11, 2024, 9:03:56 AM7/11/24
to Wazuh | Mailing List
Hello again,

On the "Vulnerability Detection" screen, the "Dashboard" and "Inventory" tabs use a different index than the "Events" tab. So, we need to check that this index is correctly configured.

To check that this index is working correctly, I will need some information:

1. Go to the menu "Dashboard Management" -> "App Settings" and find the section "Vulnerabilities". There you will see the name of the index pattern the dashboard uses for vulnerability information. Please take a screenshot and share it. I have attached my screenshot to guide you.
vulnerabilities_index_pattern.png

2. Go to the menu "Indexer Management" -> "Index Management" -> "Indices". Type the name of the index pattern from the previous step into the search filter. If this index is correct, it must exist, be in green health, and contain some documents. Please take a screenshot and share it. I have attached my screenshot to guide you.
index_management_indices.png

I will be waiting for this information.

Jaime

Jul 11, 2024, 12:15:04 PM7/11/24
to Wazuh | Mailing List
Captura de pantalla 2024-07-11 181324.png
Captura de pantalla 2024-07-11 181426.png

Matias Braida

Jul 11, 2024, 2:02:40 PM7/11/24
to Wazuh | Mailing List
The index exists, but it has no documents. We need to find out why.
The Wazuh manager's logs may contain error or warning messages to guide us.
Please execute this command on the manager's host and share the output:
grep -E "ERROR|WARN" /var/ossec/logs/ossec.log

Jaime

Jul 12, 2024, 4:06:49 AM7/12/24
to Wazuh | Mailing List
2024/07/12 02:57:37 indexer-connector: WARNING: Failed to sync agent '645' with the indexer.
2024/07/12 02:57:37 indexer-connector: WARNING: Failed to sync agent '462' with the indexer.
2024/07/12 02:57:37 indexer-connector: WARNING: Failed to sync agent '722' with the indexer.
2024/07/12 02:57:37 indexer-connector: WARNING: Failed to sync agent '336' with the indexer.
2024/07/12 02:57:37 indexer-connector: WARNING: Failed to sync agent '354' with the indexer.
2024/07/12 02:57:37 indexer-connector: WARNING: Failed to sync agent '252' with the indexer.
2024/07/12 02:57:37 indexer-connector: WARNING: Failed to sync agent '207' with the indexer.
2024/07/12 02:57:37 indexer-connector: WARNING: Failed to sync agent '600' with the indexer.
2024/07/12 02:57:37 indexer-connector: WARNING: Failed to sync agent '016' with the indexer.
....
2024/07/12 02:57:38 wazuh-modulesd:vulnerability-scanner: ERROR: VulnerabilityScannerFacade::initEventDispatcher: [json.exception.parse_error.101] parse error at line 1, column 37: syntax error while parsing value - invalid literal; last read: '"no-index":false}<U+0006>'; expected end of input
....
2024/07/12 02:57:42 indexer-connector: WARNING: Failed to sync agent '325' with the indexer.
2024/07/12 02:57:42 indexer-connector: WARNING: Failed to sync agent '555' with the indexer.
2024/07/12 02:57:42 indexer-connector: WARNING: Failed to sync agent '230' with the indexer.
2024/07/12 02:57:42 indexer-connector: WARNING: Failed to sync agent '634' with the indexer.
2024/07/12 02:57:42 indexer-connector: WARNING: Failed to sync agent '288' with the indexer.
2024/07/12 02:57:42 indexer-connector: WARNING: Failed to sync agent '648' with the indexer.
2024/07/12 02:57:42 indexer-connector: WARNING: Failed to sync agent '582' with the indexer.
...
2024/07/12 04:01:11 wazuh-db: ERROR: SQLite: UNIQUE constraint failed: sca_scan_info.id
...
2024/07/12 06:07:54 wazuh-db: WARNING: After vacuum, the database '775' has become just as fragmented or worse
2024/07/12 06:07:54 indexer-connector: WARNING: Failed to sync agent '335' with the indexer.
2024/07/12 06:07:54 wazuh-db: WARNING: After vacuum, the database '790' has become just as fragmented or worse

Jaime

Jul 12, 2024, 4:07:25 AM7/12/24
to Wazuh | Mailing List
The "..." are lots of "Failed to sync agent ... with the indexer" warnings.

Matias Braida

Jul 12, 2024, 8:02:02 AM7/12/24
to Wazuh | Mailing List
Let me ask the team. I will be back as soon as possible.

Matias Braida

Jul 12, 2024, 9:48:10 AM7/12/24
to Wazuh | Mailing List
Hi,

A possible reason for the message "WARNING: Failed to sync agent '...' with the indexer." is that the health of the indexer is not GREEN.

So, we need to check the indexer health using the cluster health API, documented here: https://opensearch.org/docs/latest/api-reference/cluster-api/cluster-health/

Please share the response.

If the indexer status is not GREEN, try to correct it so that the manager can get a working connection.

Jaime

Jul 15, 2024, 3:15:50 AM7/15/24
to Wazuh | Mailing List
bash-5.2# /var/ossec/bin/cluster_control -i more
ERROR: Cluster is not running.
bash-5.2#

Matias Braida

Jul 15, 2024, 9:09:16 AM7/15/24
to Wazuh | Mailing List
Hello Jaime,

The Wazuh tool "/var/ossec/bin/cluster_control" checks the health of the Wazuh manager cluster, but only if you have configured your managers to work as a cluster (more than one manager).
If you only have one manager in your environment, it is expected that "/var/ossec/bin/cluster_control" reports "ERROR: Cluster is not running.".

Regarding the issue we are discussing, we need to check the health of the Wazuh indexer, not the Wazuh manager cluster.

This is the command you need to run to get the indexer health status:
curl -k -u <USER>:<PASSWORD> -X GET "https://<WAZUH_INDEXER_IP>:9200/_cluster/health?pretty"

If you are using the default admin password and running the command on the same host where the indexer is installed, the command is the following:
curl -k -u admin:admin -X GET "https://localhost:9200/_cluster/health?pretty"

You should get an answer like this:
# curl -k -u admin:admin -X GET "https://localhost:9200/_cluster/health?pretty"
{
  "cluster_name" : "wazuh-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "discovered_master" : true,
  "discovered_cluster_manager" : true,
  "active_primary_shards" : 189,
  "active_shards" : 189,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}


The important field in the response is "status". It must be "green" so that the manager can connect to the indexer.
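If you want to script this check, here is a minimal sketch (an illustration, not an official tool) that inspects a health response shaped like the example above:

```python
def indexer_ready(health):
    """Return True when a cluster-health response indicates a green cluster
    with every shard allocated, i.e. the state the manager needs to sync."""
    return (health.get("status") == "green"
            and health.get("unassigned_shards", 0) == 0)

# Stubs mirroring the two health responses seen in this thread:
green = {"status": "green", "unassigned_shards": 0}
yellow = {"status": "yellow", "unassigned_shards": 2}
print(indexer_ready(green), indexer_ready(yellow))  # True False
```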

Jaime

Jul 18, 2024, 3:49:29 AM7/18/24
to Wazuh | Mailing List

{
  "cluster_name" : "opensearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "discovered_master" : true,
  "discovered_cluster_manager" : true,
  "active_primary_shards" : 105,
  "active_shards" : 105,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 2,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 98.13084112149532
}

Jaime

Jul 18, 2024, 3:51:14 AM7/18/24
to Wazuh | Mailing List
I checked the disk and it's at 97% capacity. Could that be the problem?

Matias Braida

Jul 18, 2024, 8:45:59 AM7/18/24
to Wazuh | Mailing List

Hi,

The yellow state of your Wazuh indexer means it has unassigned shards.
The first thing to find out is why. For that, run the following command:
curl -k -u <USER>:<PASSWORD> -X GET "https://<WAZUH_INDEXER_IP>:9200/_cluster/allocation/explain?pretty&include_disk_info=true"

Since the issue is in the indexer, here are the OpenSearch documentation links for the commands used to analyze its status:
https://opensearch.org/docs/latest/api-reference/cluster-api/cluster-health/
https://opensearch.org/docs/latest/api-reference/cluster-api/cluster-allocation/

Yes, insufficient disk space could be a reason for a shard not being allocated.

Also, here is a link you can use as a guide to the different causes of yellow status. Note that this is not an official link:
https://opster.com/guides/opensearch/opensearch-basics/yellow-status/
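The allocation-explain response can get long. A small sketch (assuming the response shape documented by OpenSearch, as in the output shared later in this thread) to surface which deciders blocked allocation on which nodes:

```python
def blocking_deciders(explain):
    """Collect (node_name, decider, explanation) for every decider that
    answered NO in a /_cluster/allocation/explain response."""
    blocks = []
    for node in explain.get("node_allocation_decisions", []):
        for d in node.get("deciders", []):
            if d.get("decision") == "NO":
                blocks.append((node.get("node_name"),
                               d.get("decider"),
                               d.get("explanation")))
    return blocks

# Stub mirroring the shape of an allocation-explain response:
explain = {"node_allocation_decisions": [{
    "node_name": "wazuh.indexer",
    "deciders": [{"decider": "same_shard", "decision": "NO",
                  "explanation": "a copy of this shard is already allocated to this node"}],
}]}
print(blocking_deciders(explain))
```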

Jaime

Jul 19, 2024, 3:25:21 AM7/19/24
to Wazuh | Mailing List
I got this with the command above

{
  "index" : ".opendistro-alerting-alerts",
  "shard" : 0,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "CLUSTER_RECOVERED",
    "at" : "2024-07-19T06:56:14.556Z",
    "last_allocation_status" : "no_attempt"
  },
  "cluster_info" : {
    "nodes" : { },
    "shard_sizes" : { },
    "shard_paths" : { },
    "reserved_sizes" : [ ]
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "xPG_eBdOTeKcVpxZ_naZpQ",
      "node_name" : "wazuh.indexer",
      "transport_address" : "10.10.41.2:9300",
      "node_attributes" : {
        "shard_indexing_pressure_enabled" : "true"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "a copy of this shard is already allocated to this node [[.opendistro-alerting-alerts][0], node[xPG_eBdOTeKcVpxZ_naZpQ], [P], s[STARTED], a[id=NAPEtnijSnegqwbGupaWIw]]"
        }
      ]
    }
  ]
}


Also, running curl -k -u <user>:<passwd> -X GET "https://localhost:9200/_settings?pretty" | jq 'to_entries | .[-1].value' gives me this:

....
{
  "settings": {
    "index": {
      "replication": {
        "type": "DOCUMENT"
      },
      "refresh_interval": "5s",
      "number_of_shards": "1",
      "provided_name": "wazuh-statistics-2024.25w",
      "creation_date": "1718702400851",
      "number_of_replicas": "0",
      "uuid": "nF4iIKyJSoexalK3IQL7mw",
      "version": {
        "created": "136317827"
      }
    }
  }
}


I pasted only the latest element, but in all of them the "replication" field's value is "DOCUMENT".

Matias Braida

Jul 19, 2024, 9:44:39 AM7/19/24
to Wazuh | Mailing List
Please let me analyze this data, and I will get back to you as soon as possible.

Matias Braida

Jul 19, 2024, 4:52:26 PM7/19/24
to Wazuh | Mailing List
How many indexer servers do you have in your deployment?

It seems that the unassigned shard is a replica (the field "primary" : false in the output), and a primary shard for this index is already allocated.

Please get the settings of the index ".opendistro-alerting-alerts" using the following command and share the response:
curl -k -u <USER>:<PASSWORD> -X GET "https://<WAZUH_INDEXER_IP>:9200/.opendistro-alerting-alerts/_settings?pretty"
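The underlying rule: a replica shard is never placed on the same node as its primary (the "same_shard" decider), so with N data nodes at most N-1 replicas per shard can ever be allocated. A quick sketch of that arithmetic, purely as an illustration:

```python
def unallocatable_replicas(number_of_nodes, number_of_replicas):
    """Replicas that can never be assigned: a replica cannot share a node
    with its primary, so only number_of_nodes - 1 replicas can fit."""
    return max(0, number_of_replicas - (number_of_nodes - 1))

# A single-node cluster with "number_of_replicas": 1 leaves one replica
# permanently unassigned, which is exactly a yellow cluster status.
print(unallocatable_replicas(1, 1))  # 1
print(unallocatable_replicas(2, 1))  # 0
```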

Jaime

Jul 22, 2024, 4:14:04 AM7/22/24
to Wazuh | Mailing List
I have a single-node deployment, so just one indexer, one manager, and one dashboard.

The output of the command is this:

{
  ".opendistro-alerting-alerts" : {

    "settings" : {
      "index" : {
        "replication" : {
          "type" : "DOCUMENT"
        },
        "hidden" : "true",
        "number_of_shards" : "1",
        "provided_name" : ".opendistro-alerting-alerts",
        "creation_date" : "1718726322811",
        "number_of_replicas" : "1",
        "uuid" : "qcZdb4U0RvWyUVVDYwXnNg",
        "version" : {
          "created" : "136317827"

Jaime

Jul 22, 2024, 6:41:41 AM7/22/24
to Wazuh | Mailing List
Also, trying to get the info from the indexer with:
curl -k -u <user>:<password> -X GET "https://localhost:9200/wazuh-states-vulnerabilities-*/_search?pretty" -H 'Content-Type: application/json'

it outputs this:

{
  "took" : 2,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 0,
      "relation" : "eq"
    },
    "max_score" : null,
    "hits" : [ ]

Matias Braida

Jul 22, 2024, 9:10:16 AM7/22/24
to Wazuh | Mailing List
Hello Jaime,

You can try changing the index configuration so that there are no replicas.

Please execute these commands:

curl -k -u <USER>:<PASSWORD> -X PUT "https://<WAZUH_INDEXER_IP>:9200/*/_settings" -H 'Content-Type: application/json' -d '{
  "index" : {
    "number_of_replicas" : 0
  }
}'


curl -k -u <USER>:<PASSWORD> -X POST "https://<WAZUH_INDEXER_IP>:9200/_cluster/reroute?retry_failed"

Wait a little while, then check the cluster status to see whether all the shards are now allocated.

Jaime

Jul 22, 2024, 9:55:29 AM7/22/24
to Wazuh | Mailing List
bash-5.2$ curl -k -u admin:SecretPassword -X PUT "https://localhost:9200/*/_settings" -H 'Content-Type: application/json' -d '{
  "index" : {
    "number_of_replicas" : 0
  }
}'
{
  "error": {
    "root_cause": [
      {
        "type": "security_exception",
        "reason": "no permissions for [] and User [name=admin, backend_roles=[admin], requestedTenant=null]"
      }
    ],
    "type": "security_exception",
    "reason": "no permissions for [] and User [name=admin, backend_roles=[admin], requestedTenant=null]"
  },
  "status": 403

Matias Braida

Jul 22, 2024, 10:52:14 AM7/22/24
to Wazuh | Mailing List
Try using the admin certificates instead of the "<USER>:<PASSWORD>" option.

The command should be:

curl --cacert /etc/wazuh-indexer/certs/root-ca.pem --cert /etc/wazuh-indexer/certs/admin.pem  --key /etc/wazuh-indexer/certs/admin-key.pem -X PUT "https://<WAZUH_INDEXER_IP>:9200/*/_settings" -H 'Content-Type: application/json' -d '{
  "index" : {
    "number_of_replicas" : 0
  }
}'


Use the <WAZUH_INDEXER_IP> exactly as it appears in the certificates. Check the value of the key "network.host" in the file "/etc/wazuh-indexer/opensearch.yml".

Jaime

Jul 23, 2024, 3:39:37 AM7/23/24
to Wazuh | Mailing List
This is what I get with the value of network.host of the file  /etc/wazuh-indexer/opensearch.yml:

bash-5.2$ curl --cacert /usr/share/wazuh-indexer/certs/root-ca.pem --cert /usr/share/wazuh-indexer/certs/admin.pem  --key /usr/share/wazuh-indexer/certs/admin-key.pem -X PUT "https://0.0.0.0:9200/*/_settings" -H 'Content-Type: application/json' -d '{
  "index" : {
    "number_of_replicas" : 0
}'
curl: (60) SSL: no alternative certificate subject name matches target host name '0.0.0.0'
More details here: https://curl.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.


And this is what I get with the value of the server hostname:

bash-5.2$ curl --cacert /usr/share/wazuh-indexer/certs/root-ca.pem --cert /usr/share/wazuh-indexer/certs/admin.pem  --key /usr/share/wazuh-indexer/certs/admin-key.pem -X PUT "https://<WAZUH_DOMAIN>:9200/*/_settings" -H 'Content-Type: application/json' -d '{
  "index" : {
    "number_of_replicas" : 0
}'
Unauthorizedbash-5.2$

Matias Braida

Jul 23, 2024, 7:50:19 AM7/23/24
to Wazuh | Mailing List
Sorry.
You should use the same IP that was used when the certificates were created.
Generally, it is the same IP that is present in the file /etc/wazuh-indexer/opensearch.yml.

Jaime

Jul 25, 2024, 3:48:16 AM7/25/24
to Wazuh | Mailing List
But it only works with the hostname set for the indexer in the docker-compose.yml.

Matias Braida

Jul 25, 2024, 8:10:07 AM7/25/24
to Wazuh | Mailing List
Sorry, I don't understand what you mean.

The point is that the "curl" command that changes the index settings must be executed using the certificates instead of "<user>:<password>".
Also, the IP or hostname you use in the command must match the certificate you generated for the indexer.

If you are working inside a Docker environment, you may need to execute the command inside the container; I don't know your specific environment.