Dashboard errors


Julien Bard

1:08 PM (7 hours ago)
to Wazuh | Mailing List
Hi everyone,

For three days now I have been getting very frequent Dashboard errors ({"statusCode":500,"error":"Internal Server Error","message":"An internal server error occurred."}) that crash the Dashboard and force me to restart the service, or even reboot the whole system. The error always comes back anyway, so the SIEM is unusable.

In the Dashboard log I find many errors like this one:

Mar 04 14:15:00 wazuh2 opensearch-dashboards[950]: {"type":"log","@timestamp":"2026-03-04T14:15:00Z","tags":["error","opensearch","data"],"pid":950,"message":"[circuit_breaking_exception]: [parent] Data too large, data for [<http_request>] would be [1035033480/987mb], which is larger than the limit of [1020054732/972.7mb], real usage: [1035033480/987mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=69916/68.2kb, in_flight_requests=0/0b]"}


I searched the mailing list and tried everything: I increased the system memory (16 GB) and the JVM memory (8 GB). Disk usage is fine at around 50%. CPU usage can spike a lot (up to 200%), but only for a second and quite rarely; on average it is around 30%.

I run an all-in-one Wazuh 4.14.2 on Ubuntu with only 3 agents on WEC servers, collecting from about 200 servers (~30 GB of events per day).

Thanks for your help.


Juan Sebastián Saldarriaga Arango

4:25 PM (3 hours ago)
to Wazuh | Mailing List

Hi, this error comes from the OpenSearch parent circuit breaker, not from the Dashboard itself.
From your log (limit ... 972.7mb), your indexer heap is effectively around 1 GB (inference: the parent breaker defaults to ~95% of the JVM heap), so your 8 GB change may not have been applied to the correct service/config.
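As a quick sanity check (assuming the default 95% parent-breaker ratio, i.e. `indices.breaker.total.limit` unchanged), back-computing the heap from the limit in your log gives almost exactly 1 GiB, which matches a default indexer heap rather than 8 GB:

```shell
# Back-compute the implied JVM heap from the breaker limit in the log.
# Assumption: the parent breaker is at its default of 95% of the heap.
limit=1020054732   # bytes, copied from the circuit_breaking_exception message
awk -v l="$limit" 'BEGIN { printf "implied heap = %.2f GiB\n", l / 0.95 / 1024^3 }'
# prints: implied heap = 1.00 GiB
```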

I’d check, in order:

  1. Verify -Xms/-Xmx in Wazuh indexer JVM config and restart indexer.
  2. Check breaker stats (_nodes/stats/breaker) and cluster health in Dev Tools.
  3. Upgrade from 4.14.2 (Jan 14, 2026) to 4.14.3 (Feb 11, 2026).
  4. Consider moving from all-in-one to distributed deployment for this ingestion volume (200 servers / ~30 GB/day).
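A rough sketch of steps 1 and 2. The path /etc/wazuh-indexer/jvm.options and the API endpoint https://localhost:9200 are assumptions based on a default package install; adjust for your setup. The commands below use a sample copy in /tmp so they are safe to try anywhere, with the real-host commands in comments:

```shell
# Step 1: the heap flags the indexer actually reads live in jvm.options.
# Sample file standing in for /etc/wazuh-indexer/jvm.options (assumed path):
cat <<'EOF' > /tmp/jvm.options.sample
## JVM heap: set both flags to the same value, typically at most half of RAM
-Xms8g
-Xmx8g
EOF

# Verify that both flags are present and match:
grep -E '^-Xm[sx]' /tmp/jvm.options.sample

# On the real host:
#   sudo grep -E '^-Xm[sx]' /etc/wazuh-indexer/jvm.options
#   sudo systemctl restart wazuh-indexer   # restart the indexer, not the dashboard
#
# Step 2: confirm the new breaker limit and cluster health via the API
# (substitute your own indexer credentials):
#   curl -sk -u <user>:<pass> "https://localhost:9200/_nodes/stats/breaker?pretty"
#   curl -sk -u <user>:<pass> "https://localhost:9200/_cluster/health?pretty"
```

If the heap change took effect, the parent breaker limit reported by `_nodes/stats/breaker` should be roughly 95% of the new heap instead of the 972.7 MB in your log.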

Official refs:
