Problem with wazuh kubernetes deployment

Amin

Nov 21, 2025, 4:43:32 AM
to Wazuh | Mailing List

Hello everyone,

I have previously run Wazuh as a monolithic installation and am now deploying it on Kubernetes. However, I am currently having trouble getting it to run.

The environment, as I have planned it, looks like this:
Cluster Architecture:
3-node Kubernetes cluster (v1.34.1):

  • sm400 (192.1.4.110): Control plane, 4 CPU, 12GB RAM, sdb 150GB mounted at /mnt/wazuh-master
  • sw401 (192.1.4.111): Worker node, 10 CPU, 20GB RAM, sda 250GB mounted at /mnt/wazuh-indexer
  • sw402 (192.1.4.112): Worker node, 12 CPU, 24GB RAM, sdb 300GB mounted at /mnt/wazuh-worker

CNI: Cilium with WireGuard encryption

Load Balancer: MetalLB (192.1.4.113-114)

Gateway: Cilium Gateway API

TLS: cert-manager with company CA certificates


Planned Wazuh Deployment:

Wazuh 4.14.1 deployment with 1 worker (eventually serving approximately 180 agents)

  • wazuh-indexer-0:
    • Target: sw401
    • Storage: 230GB PV on /mnt/wazuh-indexer
    • PVC: wazuh-indexer-wazuh-indexer-0
  • wazuh-manager-master-0:
    • Target: sm400 (control plane)
    • Storage: 140GB PV on /mnt/wazuh-master
    • PVC: wazuh-manager-master-wazuh-manager-master-0
  • wazuh-manager-worker-0:
    • Target: sw402
    • Storage: 140GB PV on /mnt/wazuh-worker/worker-0
    • PVC: wazuh-manager-worker-wazuh-manager-worker-0


StorageClass: wazuh-storage (kubernetes.io/no-provisioner, WaitForFirstConsumer)
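
For reference, that StorageClass is simply:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: wazuh-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer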

PersistentVolumes implemented with the following (a representative manifest is sketched after this list):

  • nodeAffinity: kubernetes.io/hostname matchExpressions
  • claimRef: explicit binding to namespace/PVC name
  • local.path: volumes on the mounted disks
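
Here is roughly what the indexer PV looks like; the other two follow the same pattern (the wazuh namespace is what I used; access mode and reclaim policy shown as typical values for local PVs):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: wazuh-indexer-0-pv
spec:
  capacity:
    storage: 230Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: wazuh-storage
  claimRef:
    namespace: wazuh
    name: wazuh-indexer-wazuh-indexer-0
  local:
    path: /mnt/wazuh-indexer
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - sw401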

Current Status:
PVC Bindings (Correct):

  • wazuh-indexer-wazuh-indexer-0 → Bound to wazuh-indexer-0-pv
  • wazuh-manager-master-wazuh-manager-master-0 → Bound to wazuh-manager-master-0-pv
  • wazuh-manager-worker-wazuh-manager-worker-0 → Bound to wazuh-manager-worker-0-pv
Pod Status:
  • wazuh-indexer-0: 1/1 Running on sw401 (SUCCESS)
  • wazuh-manager-master-0: Pending (NOT SCHEDULED)
  • wazuh-manager-worker-0: ContainerCreating on sw402
  • wazuh-dashboard: ContainerCreating on sw402


Specific Problem:
  • wazuh-manager-master-0 pod remains in “Pending” status and is not scheduled on sm400 (control plane node), even though:
    • PVC is correctly bound to PV
    • PV is configured with nodeAffinity for sm400
    • Storage directory /mnt/wazuh-master exists on sm400 and is accessible

Assumption: Control Plane Taint (node-role.kubernetes.io/control-plane:NoSchedule) prevents scheduling of workload pods on sm400.


Question: Is it best practice to deploy Wazuh Manager Master on the control plane, or should it be moved to a worker node? If the latter, what adjustments are required for PV nodeAffinity and storage paths?



Luciano Valinotti

Nov 21, 2025, 11:37:39 AM
to Wazuh | Mailing List
Hi Amin,

Thanks for the detailed environment description; it makes the situation much clearer and gives us specific context to work with.

First of all, your assumption is correct: wazuh-manager-master-0 is stuck in `Pending` because Kubernetes control plane nodes are tainted by default with:

`node-role.kubernetes.io/control-plane:NoSchedule`

This prevents regular workload pods from being scheduled on the control plane unless you explicitly tolerate that taint.
Even though your PV and PVC bindings are correct, the scheduler will not place the pod on sm400 because of this taint.
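
You can confirm this directly on the cluster (assuming the wazuh namespace):

kubectl describe node sm400 | grep -i taints
kubectl -n wazuh describe pod wazuh-manager-master-0

The first command should list the control-plane taint, and the Events section of the second should show a FailedScheduling message mentioning the untolerated taint.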

Regarding your questions:
Should the Wazuh Manager Master run on the control plane?
No, it is generally not recommended to run the Wazuh Manager (or any application workload) on the control plane unless strictly necessary.

Control plane nodes should ideally:

- run only Kubernetes system components (kube-apiserver, etcd, scheduler…)

- remain stable and isolated from application load

- avoid unnecessary CPU/memory pressure

So I would recommend moving wazuh-manager-master-0 to a worker node instead of running it on sm400.
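
For completeness: if you ever had a hard requirement to keep it on sm400, the pod template would need to tolerate the taint, along the lines of this sketch:

tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule

But I would not go that route here.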

To do that, the main adjustments you need are:

* PV nodeAffinity

Right now, your PV for the master probably looks something like this:

nodeAffinity:
  required:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - sm400


Move it to the new target worker node by adjusting:

values:
  - <new-worker-hostname>

* Storage path on the new worker

The PV local.path must exist on the worker:

local:
  path: /mnt/wazuh-master


Also on the new worker node:

mkdir -p /mnt/wazuh-master
chmod 755 /mnt/wazuh-master

Keep in mind that if you use a different disk or mount point, you will have to update the path accordingly.
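
One more caveat: a PV's nodeAffinity is immutable once the PV has been created, so you will need to recreate the PV (and the already-bound PVC) rather than edit them in place. Since the master pod never started, there is no data on the old volume to migrate. A rough sequence, assuming the wazuh namespace, a StatefulSet named wazuh-manager-master, and sw402 as the new target:

# stop the master so the PVC is no longer in use
kubectl -n wazuh scale statefulset wazuh-manager-master --replicas=0

# remove the old binding
kubectl -n wazuh delete pvc wazuh-manager-master-wazuh-manager-master-0
kubectl delete pv wazuh-manager-master-0-pv

# recreate the PV with the new hostname and path, then bring the master back
kubectl apply -f wazuh-manager-master-0-pv.yaml
kubectl -n wazuh scale statefulset wazuh-manager-master --replicas=1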

Best regards!
