wazuh real HA


jack

Feb 24, 2026, 6:12:03 AM (yesterday) Feb 24
to Wazuh | Mailing List
Hello
I need to run Wazuh in an HA setup, but what Wazuh offers out of the box is not what I want for this case. My requirement is that if the master goes down, the other node can take over the work the master was doing.

For this requirement, I used this scenario:
I have two VMs: n1 is currently the master and n2 is the worker.
All agents connect to the master and send data to it.
At this point, I created a user on the manager for API access.
Then I took n1 down, promoted n2 to master, reconfigured n1 as a worker, and brought it back up.
At that point, the API user I had created was not available through either n1 or n2.

What I noticed is that the manager's API users are not shared with the workers.

Has anyone had experience with this approach?
Or does anyone else have this requirement?
Also, what else does the master not share with the workers, and what exactly do the workers receive from the master?

hasitha.u...@wazuh.com

Feb 24, 2026, 6:33:16 AM (yesterday) Feb 24
to Wazuh | Mailing List

Hi Jack,

Please allow me some time; I’m currently looking into this and will get back to you with an update as soon as possible.

hasitha.u...@wazuh.com

Feb 24, 2026, 7:00:59 AM (yesterday) Feb 24
to Wazuh | Mailing List

Hi Jack,

What you experienced is actually expected behavior based on how the Wazuh server cluster is designed.

The Wazuh cluster is primarily built for scalability and centralized configuration management, not for full active-passive failover of everything on the manager. In a cluster, the master node is responsible for synchronizing specific data to the worker nodes, but not all components of the manager are shared.
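To make the roles concrete: each node's role is declared in the `<cluster>` section of `/var/ossec/etc/ossec.conf`, so the manual failover you performed amounts to flipping `<node_type>` on the two nodes and restarting `wazuh-manager`. A minimal sketch for n2 after promotion (the cluster name, hostnames, and key below are placeholders, not values from your setup):

```xml
<!-- /var/ossec/etc/ossec.conf on n2, after promoting it to master -->
<cluster>
  <name>wazuh</name>
  <node_name>n2</node_name>
  <node_type>master</node_type>      <!-- was "worker" before the swap -->
  <key>c98b62a9b6169ac5f67dae55ae4a9a88</key> <!-- same 32-char key on every node -->
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node>n2.example.local</node>    <!-- workers must point at the current master -->
  </nodes>
  <hidden>no</hidden>
  <disabled>no</disabled>
</cluster>
```

Every node in the cluster must share the same `<name>` and `<key>`, and the workers' `<nodes>` entry has to be updated to the new master's address as part of the swap.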

According to the Wazuh server cluster documentation (architecture overview, types of nodes, how the cluster works, and data synchronization sections), the master synchronizes the operational and configuration data that allows workers to properly process events. This includes Wazuh agent registration details, shared configuration, CDB lists, custom SCA policies, custom decoders, and rules, all synchronized from the master to the workers. This ensures consistency in detection logic and agent handling across the cluster.

However, API users and the API configuration are not part of the synchronized data; they are stored locally on each node. That is why, when you created an API user on n1 (the original master), shut it down, and promoted n2 to master, that user did not exist on n2. The cluster does not replicate API users between nodes, so this behavior is expected.
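A common workaround, given that API users are local to each node, is to create the same user on every node ahead of time so it already exists wherever the master role lands. A hedged sketch using the Wazuh API (hostnames, admin credentials, and the new user's name/password below are placeholders):

```shell
# API users are per-node, so create the same user on both n1 and n2.
for NODE in n1.example.local n2.example.local; do
  # Obtain a JWT from this node's API (default port 55000)
  TOKEN=$(curl -s -k -u wazuh:ADMIN_PASSWORD -X POST \
    "https://${NODE}:55000/security/user/authenticate?raw=true")

  # Create the user locally on this node
  curl -s -k -X POST "https://${NODE}:55000/security/users" \
    -H "Authorization: Bearer ${TOKEN}" \
    -H "Content-Type: application/json" \
    -d '{"username": "failover-user", "password": "S3cr3t.Pass1"}'
done
```

Remember that any role assignments for the user are also per-node and would need to be repeated on each node the same way.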

Check this section to understand what the master node does in the cluster, and check this for worker responsibilities.

However, if your master goes down and you have configured a load balancer or failover mode according to this documentation, your agents will report to the worker nodes without any issue. Once the master node is back online, you will still see logs up to the current time, because the agents kept reporting to the worker nodes, provided you configured one of the options mentioned earlier.
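For the failover option without a load balancer, the agent side can simply list more than one manager: the agent tries the servers in order and moves to the next one when the current one stops responding. A minimal sketch of the agent's `ossec.conf` (addresses are placeholders):

```xml
<!-- Agent-side /var/ossec/etc/ossec.conf: list both nodes so the agent
     fails over to n2 if n1 stops answering -->
<client>
  <server>
    <address>n1.example.local</address>
    <port>1514</port>
    <protocol>tcp</protocol>
  </server>
  <server>
    <address>n2.example.local</address>
    <port>1514</port>
    <protocol>tcp</protocol>
  </server>
</client>
```

With a load balancer instead, the agents would all point at the balancer's single address and the balancer would distribute them across the nodes.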

Currently, Wazuh clustering supports a single master. The enhancement proposed in #811 adds support for multiple masters to provide HA for master-only tasks (e.g., agent registration), including master election/failover, split-brain avoidance, data consistency across masters, and the required changes to the Distributed API. This work is currently in progress, so you can follow that issue for updates on this topic.

Here are some additional resources to help you understand the cluster in more depth.
If you need further updates on this topic, you can leave a comment on the GitHub issue I shared.

Let me know if you need further assistance on this.