File Integrity Monitoring does not alert in real time on large directories


Nguyên Nguyễn Thế

Apr 24, 2025, 12:26:42 AM
to Wazuh | Mailing List
Hello,
I recently integrated Wazuh (version 4.11) into my system and noticed a problem with FIM:
- On small directories with real_time="yes", creating a file directly on the system generates an alert immediately.
- On large directories (> 20 GB), creating a file directly generates no alert even with real_time="yes". I also tried whodata="yes", but no alert is generated even though the auditd log captures the file I created.
Has anyone encountered this case? How did you handle it? Please share.
---
<agent_config>
  <syscheck>
    <disabled>no</disabled>
    <frequency>43200</frequency>
    <scan_on_start>yes</scan_on_start>
    <directories>/etc,/opt,/usr/bin,/usr/sbin</directories>
    <directories>/bin,/sbin,/boot</directories>
    <directories check_all="yes"  whodata="yes" scan_on_start="no" >/var/www</directories>
   
    <ignore>/etc/mtab</ignore>
    <ignore>/etc/hosts.deny</ignore>
    <ignore>/etc/mail/statistics</ignore>
    <ignore>/etc/random-seed</ignore>
    <ignore>/etc/random.seed</ignore>
    <ignore>/etc/adjtime</ignore>
    <ignore>/etc/httpd/logs</ignore>
    <ignore>/etc/utmpx</ignore>
    <ignore>/etc/wtmpx</ignore>
    <ignore>/etc/cups/certs</ignore>
    <ignore>/etc/dumpdates</ignore>
    <ignore>/etc/svc/volatile</ignore>
    <ignore type="sregex">\.log$|\.logs$|\.swp$</ignore>

    <ignore>/var/log</ignore>
    <ignore>/var/www/customers</ignore>
    <ignore>/nfs/customers</ignore>

    <auto_ignore frequency="10" timeframe="3600">yes</auto_ignore>
    <alert_new_files>yes</alert_new_files>

    <nodiff>/etc/ssl/private.key</nodiff>
   
    <skip_nfs>yes</skip_nfs>
    <skip_dev>yes</skip_dev>
    <skip_proc>yes</skip_proc>
    <skip_sys>yes</skip_sys>

    <process_priority>0</process_priority>
    <max_eps>50</max_eps>

    <file_limit>
      <enabled>no</enabled>
      <entries>65535</entries>
    </file_limit>
   
    <synchronization>
      <enabled>yes</enabled>
      <interval>5m</interval>
      <max_eps>30</max_eps>
      <queue_size>131072</queue_size>
      <thread_pool>4</thread_pool>
    </synchronization>
 
  </syscheck>
</agent_config>

Bony V John

Apr 24, 2025, 1:16:42 AM
to Wazuh | Mailing List

Hi,

Based on your input, it seems you are trying to monitor a large directory in real-time using the whodata function.

Please follow the steps below to investigate and troubleshoot the issue:

Check the Wazuh agent log for errors, run this command on the agent system that is showing the issue:
grep -iE "error|warn|crit|fatal|syscheck" /var/ossec/logs/ossec.log

Wazuh FIM has a recursion_level option, which defaults to 256.
If the directory tree you are monitoring is deeper than this, files below that depth are not monitored, and very deep trees can also hurt performance.

Run the following command to check how deep your monitored directory structure goes:  

find /your/directory/path -type d | awk -F/ '{print NF}' | sort -n | tail -1

Replace /your/directory/path with the actual path you're monitoring.  
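As a quick sanity check, the one-liner's behaviour can be verified against a throwaway directory tree (the paths below are created purely for the demo and are not from the thread):

```shell
# Create a temporary tree four levels below its root, then measure its depth.
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/a/b/c/d"
# NF counts path components, so the raw result includes the components of $ROOT itself.
DEPTH=$(find "$ROOT" -type d | awk -F/ '{print NF}' | sort -n | tail -1)
ROOT_DEPTH=$(printf '%s\n' "$ROOT" | awk -F/ '{print NF}')
echo "$(( DEPTH - ROOT_DEPTH ))"   # relative depth of the deepest subdirectory → 4
rm -rf "$ROOT"
```

Note that for comparing against recursion_level, the relative depth (deepest NF minus the NF of the monitored directory itself) is the number that matters.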

You can refer to the Wazuh syscheck configuration documentation for more details on adjusting the recursion level.
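For reference, recursion_level is set per `<directories>` entry. A minimal sketch, assuming a depth of 3 is enough for the tree in question (the path and depth here are illustrative, not taken from the thread):

```xml
<syscheck>
  <!-- Limit monitoring of a deep tree to 3 levels below /var/www -->
  <directories check_all="yes" realtime="yes" recursion_level="3">/var/www</directories>
</syscheck>
```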

If possible, avoid monitoring entire directories recursively. Instead, focus on the most critical subdirectories.  

Check the Wazuh manager logs
grep -iE "error|warn|crit|fatal" /var/ossec/logs/ossec.log

By default, the maximum queue depth for storing audit dispatcher events is 16,384.
If this queue gets full, some events may be dropped. The next scheduled scan will still generate alerts, but without the full audit trail.  
You can refer to the Wazuh FIM documentation for more details.

Please share the full outputs of the above commands for further evaluation.  

Nguyên Nguyễn Thế

Apr 24, 2025, 3:10:50 AM
to Wazuh | Mailing List
Hi Bony V John,
Thanks for the support; I found my problem.
It seems that every time I push the configuration, the agent automatically rescans everything, which causes the realtime process to hang, so no alert is generated.
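One way to soften this (a sketch, not a verified fix): the per-directory scan_on_start attribute, which the /var/www entry in the config above already uses, skips the baseline scan of that directory when the agent restarts after a configuration push, so only realtime/whodata events are processed for it:

```xml
<syscheck>
  <!-- Skip the full baseline scan of this large tree on (re)start;
       rely on realtime/whodata events instead -->
  <directories check_all="yes" whodata="yes" scan_on_start="no">/var/www</directories>
</syscheck>
```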

On Thursday, April 24, 2025, at 12:16:42 UTC+7, Bony V John wrote:

Nguyên Nguyễn Thế

Apr 25, 2025, 1:13:42 AM
to Wazuh | Mailing List
Hi Bony V John,
About the realtime issue above: after tuning some system parameters it worked, but after I moved the centralized configuration to a group of about 20 servers, it no longer behaved as in my tests. Could some parameter limit be the problem?

On Thursday, April 24, 2025, at 14:10:50 UTC+7, Nguyên Nguyễn Thế wrote:

Bony V John

Apr 25, 2025, 3:58:27 AM
to Wazuh | Mailing List

Hi,

If you are configuring the agent using Wazuh centralized agent configuration, make sure that remote commands for agent modules are enabled.
This can be done by adding the following line to the agent’s local_internal_options.conf file:

/var/ossec/etc/local_internal_options.conf

Add the following line:

wazuh_command.remote_commands=1

You can refer to the Wazuh centralized agent configuration documentation for more details.  
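The change can be scripted idempotently. The sketch below writes to a temporary file for safety; on a real agent you would point CONF at /var/ossec/etc/local_internal_options.conf instead:

```shell
# Idempotently enable remote commands for agent modules.
CONF="$(mktemp)"   # on a real agent: CONF=/var/ossec/etc/local_internal_options.conf
grep -q '^wazuh_command.remote_commands=1' "$CONF" || \
  echo 'wazuh_command.remote_commands=1' >> "$CONF"
# Running it a second time must not duplicate the line.
grep -q '^wazuh_command.remote_commands=1' "$CONF" || \
  echo 'wazuh_command.remote_commands=1' >> "$CONF"
echo "entries: $(grep -c '^wazuh_command.remote_commands=1' "$CONF")"   # → entries: 1
```

On the agent itself you would then restart the service (e.g. `systemctl restart wazuh-agent`) for the setting to take effect.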

If the issue still persists, please share your configuration files with us so we can validate them from our side. Also, provide both the Wazuh agent and Wazuh manager /var/ossec/logs/ossec.log files for further analysis.

You can also refer to the Wazuh FIM configuration documentation for more information on configuration and validation steps.

Nguyên Nguyễn Thế

Apr 28, 2025, 1:22:03 AM
to Wazuh | Mailing List
Hi,
Regarding the issue I mentioned above: it seems too many events are pushed at the same time, so wazuh-manager cannot keep up with generating alerts.
Alongside this, I am now hitting a bigger, recurring problem: connecting to the Wazuh API. My suspicion is that multiple threads processing the accumulated records without concurrent-access protection are congesting the API. This is the log I see when accessing the Wazuh dashboard:
---
INFO: Current API id [default]
INFO: Checking current API id [default]...
INFO: Current API id [default] has some problem: 3002 - Request failed with status code 500
INFO: Getting API hosts...
INFO: API hosts found: 1
INFO: Checking API host id [default]...
INFO: Could not connect to API id [default]: 3099 - ERROR3099 - Some Wazuh daemons are not ready yet in node "node01" (wazuh-remoted->failed)
INFO: Removed [navigate] cookies
ERROR: No API available to connect
---
Do you have any ideas? Right now I am only monitoring 20 servers, and my deployment needs to scale to 100.
On Friday, April 25, 2025, at 14:58:27 UTC+7, Bony V John wrote:

Bony V John

Apr 30, 2025, 12:11:41 AM
to Wazuh | Mailing List

Hi,

I apologize for the delayed response. The error you're encountering on the Wazuh dashboard appears to be due to the Wazuh manager service being down.

To assist you better, please provide the following details:

  • The configuration you are attempting to apply to the agents.

  • Your Wazuh deployment type (e.g., OVA, single-node, or distributed).

  • The version of the Wazuh service you are using.


Please run the following commands on your Wazuh manager server to check system resource utilization and the service status:
Disk usage: df -h
Memory usage: free -h
CPU usage: top

Check the status of the Wazuh manager service:
systemctl status wazuh-manager

Restart the Wazuh manager service (this may resolve temporary issues):
systemctl restart wazuh-manager

Check the Wazuh manager logs for any error, warning, or critical messages:
grep -iE "error|warn|crit|fatal" /var/ossec/logs/ossec.log

You may also refer to the Wazuh troubleshooting documentation for additional guidance.

Kindly share the requested details and the full output of the above commands for further investigation.
