Alerts files stop archiving


Roman

Feb 26, 2026, 4:55:32 AM (7 days ago)
to Wazuh | Mailing List
Hi,

I've noticed some problems with archiving in /var/ossec/logs/archives and /var/ossec/logs/alerts. Sometimes Wazuh stops creating archives. For example, you can see that ossec-alerts-11.json was archived but not deleted, while ossec-alerts-24.json and ossec-alerts-24.log were not archived at all.
Screenshot 2026-02-26 095444.png

The same happened to ossec-archive-24.json: it was archived but not deleted.
Screenshot 2026-02-26 095808.png

Sometimes it can be fixed with a restart.
ossec.log is bloated with "Too many fields for JSON decoder" errors, so it's hard to analyze. (analysisd.decoder_order_size is set to 1024, but it doesn't help. The issue is with Suricata: when Suricata is stopped, there is no flood in ossec.log.)
I can run grep -vF 'wazuh-analysisd: ERROR: Too many fields for JSON decoder.' to filter it out, but that takes time.

Could you please help me with the fix?

Bony V John

Feb 26, 2026, 5:09:06 AM (7 days ago)
to Wazuh | Mailing List
Hi,

Please allow me some time, I'm working on this and will get back to you with an update as soon as possible.

Bony V John

Feb 26, 2026, 6:06:18 AM (7 days ago)
to Wazuh | Mailing List
Hi,

To find the root cause, we need to review the Wazuh Manager ossec.log file for any issues related to wazuh-monitord. The monitord process is responsible for compressing and deleting archived logs, so we need to check whether it is failing while rotating the logs. Could you please share the Wazuh Manager log file with us?

For that, you can run the below command to generate the log file:

grep -vF "wazuh-analysisd: ERROR: Too many fields for JSON decoder." /var/ossec/logs/ossec.log > manager-log.txt

After executing the command, a manager-log.txt file will be created. Please share it with us.

Also, please share the last two days of rotated ossec.log files with us, located at:
/var/ossec/logs/wazuh/2026/Feb/

These details will help us analyze the issue better.

Also, please ensure that the Wazuh Manager has enough resources to run the services smoothly. Kindly share the output of the following commands:

Disk usage: df -h
Memory usage: free -h
CPU usage: top

Also, please let us know the current version of your Wazuh environment. Is this server running 24×7?

For now, as a workaround, you can also consider using a custom script configured via cron job to check whether any uncompressed files are present in the archives directory. If found, the script can compress them and generate the .sum file automatically.
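As a rough illustration, such a cron-driven script might look like the following. This is a hypothetical sketch, not the script attached later in this thread: the directory paths and the checksum format written to the .sum file are assumptions, and should be verified against the files monitord itself produces before relying on it.

```shell
#!/bin/sh
# Sketch: compress any plain alerts/archives files that monitord left
# behind, write a checksum file, then delete the original.
# ASSUMPTIONS: the directory layout and the .sum content mirror what
# Wazuh itself writes; double-check both on your manager.

rearchive_dir() {
    dir="$1"
    [ -d "$dir" ] || return 0
    # Plain .log/.json files older than one day with no .gz counterpart
    find "$dir" -type f \( -name '*.log' -o -name '*.json' \) -mtime +0 |
    while read -r f; do
        [ -e "$f.gz" ] && continue     # already compressed, skip
        gzip -kf "$f"                  # keep the original until the checksum exists
        sha1sum "$f.gz" > "$f.gz.sum"  # assumed checksum format
        rm -f "$f"
    done
}

rearchive_dir /var/ossec/logs/alerts
rearchive_dir /var/ossec/logs/archives
```

Note that gzip -k (keep the input file) requires gzip 1.6 or newer; on older systems, write the compressed output to a temporary name instead.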

Please share these details with us for further analysis.

Roman

Feb 26, 2026, 7:25:18 AM (7 days ago)
to Wazuh | Mailing List
The server is running 24x7. It's a virtual server on Proxmox.
The current version is 4.13.1.
The logs and the output of the commands are in the attachment.
ossec_log.zip

Bony V John

Feb 27, 2026, 1:39:14 AM (6 days ago)
to Wazuh | Mailing List
Hi,

From the shared logs, there is nothing unusual. However, the resource output shows that most of the memory is in use. This could indicate a resource issue, which might be why the process is failing to compress the log files.

In this case, you can consider using a custom script with a cron job to handle the process. I have created a custom script that compresses the log files, creates the .sum file, and deletes the original files. This ensures that if monitord fails to compress a file due to resource issues, the script will perform the action afterward.

To configure this:

Create a custom script file on the Wazuh Manager:

vi /usr/local/bin/wazuh_rearchive.sh

Copy and paste the custom script that I have attached here.

Then update the permissions:

sudo chmod 755 /usr/local/bin/wazuh_rearchive.sh

Next, create a cron job to run the script every day at 1 AM.

Edit the crontab:

sudo crontab -e

Add the following configuration:
0 1 * * * /usr/bin/flock -n /var/run/wazuh-archive-fix.lock /usr/local/bin/wazuh_rearchive.sh >> /var/log/wazuh-archive-fix.log 2>&1
This will run the script daily at 01:00. The flock -n guard ensures that if a previous run is still holding the lock, the new invocation exits immediately instead of running a second copy of the script in parallel.
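The flock guard can be sanity-checked by hand: while one invocation holds the lock, a second non-blocking attempt should fail immediately. A quick test, using a demo lock file in /tmp rather than the /var/run path from the cron entry:

```shell
lock=/tmp/wazuh-archive-fix-test.lock  # demo lock file, not the cron one
flock -n "$lock" sleep 2 &             # first holder keeps the lock for 2 seconds
sleep 1
if flock -n "$lock" true; then         # -n: fail immediately instead of waiting
    result="acquired"
else
    result="busy"
fi
wait
echo "$result"                         # "busy" while the first job holds the lock
```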
log-compress.txt

Roman

Mar 2, 2026, 5:06:58 AM (3 days ago)
to Wazuh | Mailing List
Ok, thank you for the detailed answer. We'll try giving the Wazuh VM more memory first, and then try your script.