Thanks for taking the time to review this.
As you can see from my previous posts, I deleted the vd and vd_updater folders to check whether recreating them would resolve this. Yesterday I deleted the vd, vd_updater and indexer folders again; all of these live in /var/ossec/queue, alongside the vulnerability index. After that, with only one Wazuh indexer host configured in the indexer block of ossec.conf, I could only see an empty vulnerability index of ~200 bytes.
After making the suggested changes on the Wazuh master and Wazuh worker, it seems I can see the inventory for only one agent, and events are empty.
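For reference, this is roughly the sequence I used to remove and recreate those folders (assuming the standard wazuh-manager systemd unit and the /var/ossec/queue paths mentioned above):

systemctl stop wazuh-manager
rm -rf /var/ossec/queue/vd /var/ossec/queue/vd_updater /var/ossec/queue/indexer
systemctl start wazuh-manager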
ossec.log:
2024/07/02 11:41:31 indexer-connector[2220215] indexerConnector.cpp:319 at initialize(): INFO: IndexerConnector initialized successfully for index: wazuh-states-vulnerabilities-mycluster.
---
2024/07/02 12:14:36 indexer-connector[2220215] indexerConnector.cpp:446 at operator()(): WARNING: Failed to sync agent '1086' with the indexer.
2024/07/02 12:14:36 indexer-connector[2220215] indexerConnector.cpp:447 at operator()(): DEBUG: Error: No available server
2024/07/02 12:14:36 wazuh-modulesd:vulnerability-scanner[2220215] scanOrchestrator.hpp:299 at run(): DEBUG: Event type: 11 processed
2024/07/02 12:14:36 indexer-connector[2220215] indexerConnector.cpp:129 at abuseControl(): DEBUG: Agent '1086' sync omitted due to abuse control.
2024/07/02 12:14:46 wazuh-modulesd:vulnerability-scanner[2220215] scanOrchestrator.hpp:299 at run(): DEBUG: Event type: 11 processed
2024/07/02 12:14:46 indexer-connector[2220215] indexerConnector.cpp:437 at operator()(): DEBUG: Syncing agent '1103' with the indexer.
2024/07/02 12:14:46 indexer-connector[2220215] indexerConnector.cpp:446 at operator()(): WARNING: Failed to sync agent '1103' with the indexer.
2024/07/02 12:14:46 indexer-connector[2220215] indexerConnector.cpp:447 at operator()(): DEBUG: Error: No available server
2024/07/02 12:14:47 wazuh-modulesd:vulnerability-scanner[2220215] scanOrchestrator.hpp:299 at run(): DEBUG: Event type: 11 processed
2024/07/02 12:14:47 indexer-connector[2220215] indexerConnector.cpp:129 at abuseControl(): DEBUG: Agent '1103' sync omitted due to abuse control.
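Since the warnings above keep saying "No available server", one quick thing worth checking (for anyone who hits the same error) is whether a TLS connection from the manager nodes to each indexer host succeeds at all, using the same CA and client certificate configured in the indexer block below (adjust the IP per indexer node and the credentials as needed):

curl -v -u admin:admin --cacert /etc/filebeat/certs/intermed-ca.pem --cert /etc/filebeat/certs/filebeat.pem --key /etc/filebeat/certs/filebeat-key.pem https://10.10.10.11:9200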
Changes I made:
- Changes to ossec.conf (indexer block), applied on both master and worker:
OLD:
<indexer>
  <enabled>yes</enabled>
  <hosts>
    <host>https://10.10.10.11:9200</host>
  </hosts>
  <ssl>
    <certificate_authorities>
      <ca>/etc/filebeat/certs/intermed-ca.pem</ca>
      <ca>/etc/filebeat/certs/root-ca.pem</ca>
    </certificate_authorities>
    <certificate>/etc/filebeat/certs/filebeat.pem</certificate>
    <key>/etc/filebeat/certs/filebeat-key.pem</key>
  </ssl>
</indexer>
NEW:
<indexer>
  <enabled>yes</enabled>
  <hosts>
    <host>https://10.10.10.11:9200</host>
    <host>https://10.10.10.12:9200</host>
    <host>https://10.10.10.13:9200</host>
  </hosts>
  <ssl>
    <certificate_authorities>
      <ca>/etc/filebeat/certs/intermed-ca.pem</ca>
      <ca>/etc/filebeat/certs/root-ca.pem</ca>
    </certificate_authorities>
    <certificate>/etc/filebeat/certs/filebeat.pem</certificate>
    <key>/etc/filebeat/certs/filebeat-key.pem</key>
  </ssl>
</indexer>
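After changing the indexer block, the managers need a restart for the new host list to take effect; a quick way to confirm the connector picked it up afterwards (standard service name and log path assumed):

systemctl restart wazuh-manager
grep "IndexerConnector initialized" /var/ossec/logs/ossec.log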
- Checked the Filebeat configuration and certificate files on both master and worker:
# Wazuh - Filebeat configuration file
output.elasticsearch:
  hosts: ["10.10.10.11:9200","10.10.10.12:9200","10.10.10.13:9200"]
  # hosts: ["10.10.10.11:9200"] # tried with one host and with all hosts in the cluster
  protocol: https
  username: ${username}
  password: ${password}
  ssl.certificate_authorities: ["/etc/filebeat/certs/intermed-ca.pem","/etc/filebeat/certs/root-ca.pem"]
  ssl.certificate: "/etc/filebeat/certs/filebeat.pem"
  ssl.key: "/etc/filebeat/certs/filebeat-key.pem"
setup.template.json.enabled: true
setup.template.json.path: '/etc/filebeat/wazuh-template.json'
setup.template.json.name: 'wazuh'
setup.ilm.overwrite: true
setup.ilm.enabled: false
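To rule out Filebeat itself, the built-in connectivity check can be run on both manager nodes; it connects to every host listed above using the same certificates (exact output wording varies by Filebeat version):

filebeat test output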
- Wazuh keystore (I set the credentials again to be sure):
/var/ossec/bin/wazuh-keystore -f indexer -k username -v {same_user_for_login_to_wazuh_or_curl_wazuh_indexer} (default: admin)
/var/ossec/bin/wazuh-keystore -f indexer -k password -v {same_password_for_login_to_wazuh_or_curl_wazuh_indexer} (default: admin)
- Checked cluster health (ran curl against all three nodes to make sure every node returns the expected response):
curl -u admin:admin --cacert /etc/filebeat/certs/intermed-ca.pem --cert /etc/filebeat/certs/filebeat.pem --key /etc/filebeat/certs/filebeat-key.pem -X GET "https://10.10.10.11:9200/_cluster/health?pretty"
{
"cluster_name" : "MyCluster",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 3,
"discovered_master" : true,
"discovered_cluster_manager" : true,
"active_primary_shards" : 236,
"active_shards" : 600,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
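Since cluster health is green, the vulnerability states index itself can also be checked directly; the index name comes from the ossec.log line above, and the output shows whether the index exists, its document count and its on-disk size:

curl -u admin:admin --cacert /etc/filebeat/certs/intermed-ca.pem --cert /etc/filebeat/certs/filebeat.pem --key /etc/filebeat/certs/filebeat-key.pem -X GET "https://10.10.10.11:9200/_cat/indices/wazuh-states-vulnerabilities-*?v"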
One last thing regarding certificates:
Every certificate is signed by intermed-ca in my case (and intermed-ca is signed by root-ca).
Certificates for every component use the hostname as CN and the IP address of the host where that component runs as SAN.
So the Filebeat cert (/etc/filebeat/certs/filebeat.pem) on the Wazuh master has:
CN: master01
SAN: 10.10.10.11
and on the Wazuh worker:
CN: worker01
SAN: 10.10.10.14
Wazuh indexer cert for indexer01:
CN: indexer01
SAN: 10.10.10.11 (on same server as wazuh master)
Wazuh indexer cert for indexer02:
CN: indexer02
SAN: 10.10.10.12
and so on..
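Since the <host> entries in ossec.conf point to IP addresses, I wanted to be sure the IP SANs are really present in the certificates; the CN and SAN can be checked on each host with openssl, for example (adjust the path per component):

openssl x509 -in /etc/filebeat/certs/filebeat.pem -noout -subject
openssl x509 -in /etc/filebeat/certs/filebeat.pem -noout -text | grep -A1 "Subject Alternative Name"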
I apologize for the long message, but I am trying to include as much detail as possible for anyone who comes across this afterwards.
Thank you