Auto-logoff on login and no events in Wazuh plugin


Frank

Mar 14, 2022, 10:28:07 AM3/14/22
to Wazuh mailing list
Hello,

Recently my Wazuh manager server went down because the disk got full of logs. After cleaning up the logs and rebooting, I was able to log back into Kibana.

But I have noticed a few things now:
  • When logging in, the user sometimes gets auto-logged off.
  • When I do manage to log in successfully, opening the Wazuh plugin and checking events sometimes also triggers an auto-logoff.
  • When I do manage to log in successfully, there are no security alerts in the Wazuh plugin, although they are present in Kibana.
At first there was a "429 error: too many requests", but after restarting the wazuh-manager service a few times I was no longer able to reproduce it.

I didn't see any suspicious error logs.
 
The current main issue is that no events are present in the Wazuh plugin. Any idea what could be causing this?

Thanks

Frank

Mar 14, 2022, 10:47:37 AM3/14/22
to Wazuh mailing list
Update:

Just noticed that events have stopped generating altogether.

Restarting the services didn't seem to help.

Federico Rodriguez

Mar 14, 2022, 2:06:05 PM3/14/22
to Wazuh mailing list

Hi!

Could you please tell us which Wazuh version you are using? It is possible that, after the disk ran out of space, the wazuh-registry.json file was left in an invalid or corrupted state.

1- Stop the kibana service:
service kibana stop

2- Remove wazuh-registry.json:
rm /usr/share/kibana/data/wazuh/config/wazuh-registry.json

3- Start the kibana service (wazuh-registry.json will be re-created):
service kibana start

Finally, clear the browser cache, cookies and local storage, and try to access the Wazuh app again.
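
If you want to double-check the result before going back to the browser, you can confirm that the file was re-created and that Kibana came back up cleanly (same default paths as in the steps above), for example:

ls -l /usr/share/kibana/data/wazuh/config/wazuh-registry.json

systemctl status kibana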

Frank

Mar 15, 2022, 3:24:46 AM3/15/22
to Wazuh mailing list

Hello,

Thanks for the suggestion, that seemed to fix the login issues. At the moment I am running v4.2.5.

Now I am getting a "Too Many Requests" error when trying to filter more events (followed later by an auto-logoff). Example error:

Error: Too Many Requests
at Fetch._callee3$ (https://192.168.0.2/36136/bundles/core/core.entry.js:6:59535)
at tryCatch (https://192.168.0.2/36136/bundles/plugin/opendistroQueryWorkbenchKibana/opendistroQueryWorkbenchKibana.plugin.js:1:32004)
at Generator.invoke [as _invoke] (https://192.168.0.2/36136/bundles/plugin/opendistroQueryWorkbenchKibana/opendistroQueryWorkbenchKibana.plugin.js:1:35968)
at Generator.forEach.prototype.<computed> [as next] (https://192.168.0.2/36136/bundles/plugin/opendistroQueryWorkbenchKibana/opendistroQueryWorkbenchKibana.plugin.js:1:33129)
at fetch_asyncGeneratorStep (https://192.168.0.2/36136/bundles/core/core.entry.js:6:52652)
at _next (https://192.168.0.2/36136/bundles/core/core.entry.js:6:52968)

circuit_breaking_exception [parent] Data too large, data for [<reduce_aggs>] would be [1022298104/974.9mb], which is larger than the limit of [1020054732/972.7mb], real usage: [1022298104/974.9mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=292/292b, in_flight_requests=3016/2.9kb, accounting=95850428/91.4mb]

Should I try increasing the heap size as mentioned here https://groups.google.com/g/wazuh/c/AwrP77SzAXs?

Federico Rodriguez

Mar 15, 2022, 2:20:40 PM3/15/22
to Wazuh mailing list

The circuit_breaking_exception is a mechanism used to prevent operations from causing an OutOfMemoryError. It seems like Elasticsearch was using most of the configured JVM heap, and the total memory required for all operations exceeded the memory available, so the operation you requested was aborted.

I'd suggest increasing the heap size, as the Elasticsearch forums recommend:

If you want to increase the JVM heap, remember that the min and max values should be the same. To do that, add the following lines to /etc/elasticsearch/jvm.options. In this example we will increase it to 6 GB:

-Xms6g

-Xmx6g

Then restart Elasticsearch to apply the changes:

systemctl restart elasticsearch

Bear in mind that the configured heap should not be greater than 50% of the available RAM.
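
In case it helps, here is the whole change in one place (a sketch only, using the default /etc/elasticsearch/jvm.options path mentioned above; remove or comment out any existing -Xms/-Xmx lines first, and pick a size that fits your RAM):

echo '-Xms6g' >> /etc/elasticsearch/jvm.options

echo '-Xmx6g' >> /etc/elasticsearch/jvm.options

systemctl restart elasticsearch

To confirm the new heap was picked up, you can query the _cat nodes API (add https, -k and credentials if the security plugin is enabled on your stack):

curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.max,heap.percent'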

This webinar about optimizing resources will probably come in handy:

https://www.elastic.co/webinars/optimizing-storage-efficiency-in-elasticsearch


Hope it helps

Frank

Mar 16, 2022, 7:02:39 AM3/16/22
to Wazuh mailing list
After changing the JVM heap to 8 GB, the issues I encountered before are no longer present.

Logs are a few hours behind but seem to be catching up as well.

Thank you for the information and help.

Take care!
