Hi Gabriele,
Thanks for the detailed context and for raising this question so clearly. You’re touching on an important architectural and operational aspect of Wazuh, and I want to make sure the answer is accurate and properly reflects both the product design and real-world behaviour.
I’m reviewing this in more depth and will follow up shortly with a more comprehensive response that clarifies the supported methods for running Wazuh in a lightweight HIDS-focused setup, while maintaining stability for the indexer and dashboard over time.
I appreciate your patience, and thank you again for the thoughtful analysis.
Best regards,
Natalia
--
You received this message because you are subscribed to the Google Groups "Wazuh | Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to wazuh+un...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/wazuh/8ce0ae00-fbb2-414d-9625-f0673bf30b69n%40googlegroups.com.
Hi Gabriele,
Based on what you outlined, this behaviour is expected in small, single-node Wazuh deployments and is not the result of a simple misconfiguration.
In an all-in-one setup, the Wazuh Manager, OpenSearch, and Dashboard share the same disk and memory. As disk usage approaches the OpenSearch watermarks, the node first stops accepting new shard allocations, and once the flood-stage watermark is crossed, indices are marked read-only (the index.blocks.read_only_allow_delete block). At that point the manager continues to generate alerts (visible in alerts.json), but OpenSearch stops accepting new data. The dashboard then becomes empty or unstable, filters can break, and intermittent 401/authentication issues may appear. This behaviour comes directly from the OpenSearch disk-protection mechanisms.
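For reference, these are the stock disk-protection thresholds, shown as explicit cluster settings, together with the request that removes the flood-stage block once space has been freed. This is a sketch in OpenSearch request syntax; the wazuh-alerts-* pattern and the percentages are assumptions based on default settings, so verify them against your cluster:

```
# Default disk watermarks, written out as persistent cluster settings
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%"
  }
}

# After freeing disk space, clear the flood-stage block on the affected
# indices (recent OpenSearch releases can release it automatically, but
# doing it explicitly is harmless):
PUT wazuh-alerts-*/_settings
{
  "index.blocks.read_only_allow_delete": null
}
```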
The official requirements describe minimum installation values, but in practice, a stable all-in-one deployment requires significantly more disk and memory headroom than the documented minimums, due to OpenSearch disk watermarks and JVM memory behaviour.
For an all-in-one deployment, the documentation indicates 4 vCPU, 8 GB RAM (minimum), and disk space that “depends on data retention.” These are installation minimums, not stability guarantees. On a 38 GB root filesystem, the default 85% watermark means only about 32 GB is usable before allocation restrictions begin. After the OS, logs, and packages consume around 8–12 GB, the OpenSearch data path is left with roughly 20 GB or less. That margin is not sufficient for alert indices, security plugin indices, dashboard saved objects, segment merges, and translog growth. This is why, in practice, 50–100 GB is the real lower bound for stable operation, even at very low EPS.
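To make that arithmetic concrete, here is a small shell sketch of the headroom calculation. The 38 GB disk and the 8–12 GB OS overhead come from the numbers above; the 10 GB midpoint is my assumption, so substitute your own measurements:

```shell
# Hypothetical sizing check: estimate OpenSearch headroom on a root filesystem.
total_gb=38        # root filesystem size
watermark_pct=85   # default low disk watermark
os_overhead_gb=10  # OS, logs, packages (midpoint of the 8-12 GB estimate)

# Space usable before the watermark kicks in
usable_gb=$(awk -v t="$total_gb" -v p="$watermark_pct" 'BEGIN { printf "%.1f", t * p / 100 }')
# What is actually left for the indexer data path
data_gb=$(awk -v u="$usable_gb" -v o="$os_overhead_gb" 'BEGIN { printf "%.1f", u - o }')

echo "usable below low watermark: ${usable_gb} GB"
echo "left for indexer data path: ${data_gb} GB"
```

Running this with the figures from the message yields a data-path budget in the low twenties of gigabytes, which matches the "~20 GB or less" estimate once overhead trends toward the upper end of the range.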
Wazuh can run in small environments, but only with strict operational limits. Index growth must be controlled using Index State Management with short retention periods; without this, instability over time is unavoidable.
https://documentation.wazuh.com/current/user-manual/wazuh-indexer-cluster/index-lifecycle-management.html
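As a concrete illustration, an ISM policy along the following lines enforces short retention by deleting alert indices after one week. This is a sketch only; the 7-day threshold and the wazuh-alerts-* pattern are assumptions you would adapt to your own retention requirements:

```
{
  "policy": {
    "description": "Delete wazuh-alerts indices after 7 days (example retention)",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "7d" } }
        ]
      },
      {
        "name": "delete",
        "actions": [ { "delete": {} } ],
        "transitions": []
      }
    ],
    "ism_template": [
      { "index_patterns": ["wazuh-alerts-*"], "priority": 100 }
    ]
  }
}
```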
Additionally, modules not required for the intended use case (for example, vulnerability detection, cloud integrations, or very broad FIM paths) should be disabled to reduce index churn and disk pressure.
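For example, on recent 4.x releases the vulnerability detection module can be switched off in /var/ossec/etc/ossec.conf roughly like this (a sketch; check the exact block name for your version, as older releases use a vulnerability-detector block instead):

```xml
<!-- Disable vulnerability detection to reduce index churn and disk pressure
     (Wazuh 4.8+ syntax; hypothetical minimal fragment, not a full ossec.conf) -->
<vulnerability-detection>
  <enabled>no</enabled>
</vulnerability-detection>
```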
Regarding your specific question about running Wazuh primarily as a lightweight HIDS while keeping the dashboard and indexer stable over time: the short answer is that this is supported, but it is achieved operationally rather than architecturally. It requires deliberately limiting the enabled features, data volume, and retention so that the indexer and dashboard stay within safe resource boundaries. I want to dig a bit deeper into this point so I can give you a precise answer that clearly distinguishes what is officially supported from what is only possible in practice.
I’ll follow up shortly with a more detailed response focused specifically on that question.