Help!! Wazuh Updated and Broke.


Kenny

Feb 26, 2025, 8:26:51 PM
to Wazuh | Mailing List
It's so bad, I just had to check and see if I could ping the server.
It's still in DNS, and I can still SSH to the box. Months before the upgrade I had to move my indices to a different volume, after discovering that that path grows along with the logs and alerts, consuming hundreds of gigs of storage. My /var/ossec lives on a separate volume as well.

When I check for a running elasticsearch service, I get nothing.

The current version is:
----------------------------Version Check--------------------------
wazuh-indexer/stable,now 4.11.0-1 amd64 [installed]
N: There are 38 additional versions. Please use the '-a' switch to see them.
Listing... Done
wazuh-manager/stable,now 4.11.0-1 amd64 [installed]
N: There are 57 additional versions. Please use the '-a' switch to see them.
Listing... Done
wazuh-dashboard/stable,now 4.11.0-1 amd64 [installed]
N: There are 38 additional versions. Please use the '-a' switch to see them.
-----------------------------End Version Check-----------------------------
The Wazuh manager loads with no errors, but I'm still unable to navigate to the URL.
Checking the Wazuh dashboard status shows the following error.
----------------------------Begin Error-------------------------------
Feb 26 18:23:23 wazuh opensearch-dashboards[1424]: {"type":"log","@timestamp":"2025-02-26T18:23:23Z","tags":["error","opensearch","data"],"pid":1424,"message":"[search_phase_execution_exception]: all shards failed"}
Feb 26 18:23:23 wazuh opensearch-dashboards[1424]: {"type":"log","@timestamp":"2025-02-26T18:23:23Z","tags":["warning","savedobjects-service"],"pid":1424,"message":"Unable to connect to OpenSearch. Error: search_phase_execution_exception: "}
Feb 26 18:23:25 wazuh opensearch-dashboards[1424]: {"type":"log","@timestamp":"2025-02-26T18:23:25Z","tags":["error","opensearch","data"],"pid":1424,"message":"[circuit_breaking_exception]: [parent] Data too large, data for [<http_request>] would be [1029895630/982.1mb], which is larger than the l>
Feb 26 18:23:25 wazuh opensearch-dashboards[1424]: {"type":"log","@timestamp":"2025-02-26T18:23:25Z","tags":["warning","savedobjects-service"],"pid":1424,"message":"Unable to connect to OpenSearch. Error: circuit_breaking_exception: [circuit_breaking_exception] Reason: [parent] Data too large, dat>
Feb 26 18:23:25 wazuh opensearch-dashboards[1424]: {"type":"log","@timestamp":"2025-02-26T18:23:25Z","tags":["fatal","root"],"pid":1424,"message":"ResponseError: circuit_breaking_exception: [circuit_breaking_exception] Reason: [parent] Data too large, data for [<http_request>] would be [1029895630>
Feb 26 18:23:25 wazuh opensearch-dashboards[1424]: {"type":"log","@timestamp":"2025-02-26T18:23:25Z","tags":["info","plugins-system"],"pid":1424,"message":"Stopping all plugins."}
Feb 26 18:23:27 wazuh opensearch-dashboards[1424]:  FATAL  {"error":{"root_cause":[{"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [<http_request>] would be [1029895630/982.1mb], which is larger than the limit of [1020054732/972.7mb], real usage: [1029894016/982.1m>
Feb 26 18:23:27 wazuh systemd[1]: wazuh-dashboard.service: Main process exited, code=exited, status=1/FAILURE
Feb 26 18:23:27 wazuh systemd[1]: wazuh-dashboard.service: Failed with result 'exit-code'.
Feb 26 18:23:27 wazuh systemd[1]: wazuh-dashboard.service: Consumed 12.336s CPU time.
------------------------------------End Error-------------------------------------------------

Bony V John

Feb 27, 2025, 12:41:07 AM
to Wazuh | Mailing List
Hi,

From your logs, I can see that you are hitting a circuit-breaking exception because the data size limit is being exceeded:
Feb 26 18:23:25 wazuh opensearch-dashboards[1424]: {"type":"log","@timestamp":"2025-02-26T18:23:25Z","tags":["error","opensearch","data"],"pid":1424,"message":"[circuit_breaking_exception]: [parent] Data too large, data for [<http_request>] would be [1029895630/982.1mb], which is larger than the
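For context, the limit quoted in that log line (1020054732 bytes, 972.7mb) is exactly what you would expect from the parent circuit breaker on a roughly 1 GiB heap: by default it trips at 95% of the JVM heap (the 95% figure is the OpenSearch default; your cluster could override it). A quick sanity check:

```shell
heap_bytes=$((1 << 30))             # 1 GiB heap
limit=$(( heap_bytes * 95 / 100 ))  # parent breaker default: 95% of heap
echo "$limit"                       # prints 1020054732, matching the log
```

In other words, the dashboard is talking to an indexer that appears to still be running with about a 1 GB heap, which is too small for your data.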

To resolve this, increase the JVM heap limits on your indexer nodes so the heap is large enough for the load. Keep these restrictions in mind:

  • Use no more than 50% of available RAM.
  • Use no more than 32 GB.
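Applied mechanically, those two rules are just "half the RAM, capped at 32 GB". A small sketch (the `total_gb=12` is illustrative, matching the 12 GB example further down; substitute your own total from `free -g`):

```shell
total_gb=12                            # total RAM in GB, e.g. from `free -g`
heap_gb=$(( total_gb / 2 ))            # rule 1: use at most 50% of RAM
[ "$heap_gb" -gt 32 ] && heap_gb=32    # rule 2: never exceed 32 GB
echo "-Xms${heap_gb}g -Xmx${heap_gb}g" # the flags to put in jvm.options
```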

First, let’s check the memory of your indexer nodes:

free -h

Then, edit the /etc/wazuh-indexer/jvm.options file and change the JVM flags.

For example, if your server has 12GB of RAM, you can set the limits to 6GB as shown below:

-Xms6g
-Xmx6g
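If you would rather script the edit than change the file by hand, a sed substitution works. The sketch below runs against a scratch copy so it is safe to try; it assumes jvm.options carries plain `-XmsNg`/`-XmxNg` lines (adjust the patterns if yours differ), and on the real node you would point sed at /etc/wazuh-indexer/jvm.options instead:

```shell
jvm_opts=$(mktemp)                          # scratch stand-in for /etc/wazuh-indexer/jvm.options
printf -- '-Xms1g\n-Xmx1g\n' > "$jvm_opts"  # simulate the current (too small) settings
sed -i 's/^-Xms[0-9]\+g$/-Xms6g/; s/^-Xmx[0-9]\+g$/-Xmx6g/' "$jvm_opts"
cat "$jvm_opts"                             # now shows -Xms6g / -Xmx6g
```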

Once the heap limit is updated, restart the Wazuh indexer and dashboard to apply the changes:
systemctl daemon-reload
systemctl restart wazuh-indexer
systemctl restart wazuh-dashboard


You can refer to the Wazuh indexer memory locking documentation for further details.