ossec: Real-time inotify kernel queue is full.


Veera

Jan 5, 2026, 12:21:23 AM
to Wazuh | Mailing List
Team,

We have configured FIM for NFS mounts and increased the kernel inotify watch limit to 1000000:
# sysctl -a |grep inotify
fs.inotify.max_queued_events = 16384
fs.inotify.max_user_instances = 1024
fs.inotify.max_user_watches = 1000000
user.max_inotify_instances = 1024
user.max_inotify_watches = 1000000

However, FIM still fails with an error like:

full_log

ossec: Real-time inotify kernel queue is full. Some events may be lost. Next scheduled scan will recover lost data.

Also, about 24 hours earlier I can see the logs below:


full_log

wazuh: FIM DB: {"fim_db_table":"file_entry","file_limit":100000,"file_count":100000,"alert_type":"full"}

rule.description

The maximum limit of files monitored has been reached. At this moment there are 100000 files and the limit is 100000. From this moment some events can be lost. You can modify this setting in the centralized configuration or locally in the agent.
How can I increase this limit? Should it be increased from the agent side or from the master side? Are the kernel changes for inotify still required?


Himanshu Sharma

Jan 5, 2026, 3:02:14 AM
to Wazuh | Mailing List
Hi Team,

By default, agents stop adding files to the database once 100,000 files have been scanned. To increase this number, you need to add the <file_limit> section to your configuration as described in this link. The <entries> option (number of files to be monitored) accepts values between 1 and 2147483647.

The configuration would look something like this; it can be updated from the centralized configuration or locally on the agent.

<syscheck>
  <!-- Maximum number of files to be monitored -->
  <file_limit>
    <enabled>yes</enabled>
    <entries>1000000</entries>
  </file_limit>
</syscheck>
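
If you prefer the centralized route, a minimal sketch of the shared agent.conf on the manager would look like the following (the "default" group name is an assumption; use the group your agents actually belong to):

/var/ossec/etc/shared/default/agent.conf:

<agent_config>
  <syscheck>
    <!-- same file_limit block, pushed to every agent in this group -->
    <file_limit>
      <enabled>yes</enabled>
      <entries>1000000</entries>
    </file_limit>
  </syscheck>
</agent_config>

The manager distributes the shared file to the agents in that group, so you do not have to edit ossec.conf on each agent individually.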

Reference:

The error "Real-time inotify kernel queue is full." indicates a kernel-side inotify queue overflow. It is expected when you are monitoring more than 100000 files in real time.

Given this situation, there are two paths to follow:

  • Either reduce the number of files being monitored with realtime/whodata in the Wazuh syscheck configuration.

  • Or increase the following inotify kernel limits in /etc/sysctl.conf (see the sketch after the link below):

    fs.inotify.max_user_watches
    fs.inotify.max_queued_events

https://medium.com/@at15/ubuntu-change-fs-inotify-max-user-watches-for-idea-f5f5d6651e7f
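
For example, a minimal sketch of the /etc/sysctl.conf change (the values are illustrative only, not recommendations; size them to the number of files you monitor in real time):

# /etc/sysctl.conf (example values, adjust to your workload)
fs.inotify.max_user_watches = 2500000
fs.inotify.max_queued_events = 65536

# apply the change without a reboot
sysctl -p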

Regards,


Veera

Jan 6, 2026, 1:10:19 AM
to Wazuh | Mailing List

I have set the limit to 2,500,000 in file_limit under syscheck, and also increased fs.inotify.max_user_watches accordingly.
There are no FIM events reported and no errors about the limit being reached in the ossec.log.
# du -sh /var/ossec/queue/fim/db/fim.db
2.2G    /var/ossec/queue/fim/db/fim.db
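
For reference, fim.db is an SQLite database, so one way to check how many entries it currently holds (the file_entry table name is taken from the earlier alert) is:

# may report the database as locked while the agent is writing; retry if so
sqlite3 /var/ossec/queue/fim/db/fim.db "SELECT COUNT(*) FROM file_entry;"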

How can we debug further to find out why FIM events are not being reported?

Veera

Jan 6, 2026, 6:00:16 AM
to Wazuh | Mailing List
Also, is there any size limit for NFS volumes to be monitored by the FIM module, or a limit on the number of files in one NFS volume?

Himanshu Sharma

Jan 9, 2026, 12:34:32 PM
to Wazuh | Mailing List
Hi Team,

If the <file_limit> has been increased to 2,500,000 and the kernel parameter fs.inotify.max_user_watches has also been increased accordingly, Wazuh will no longer reject files due to file count limits, and related errors should no longer appear.

To further investigate why FIM events are not being reported, we need additional diagnostic information from the agent. Please enable debug level 2 for the following components in the agent’s configuration file:

/var/ossec/etc/internal_options.conf

execd.debug=2 
remoted.debug=2 
agent.debug=2 
wazuh_modules.debug=2

After applying these changes, please restart the Wazuh agent and allow it to run until the issue is reproduced. Then, share the updated agent ossec.log file and agent.conf file.
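
For reference, restarting the agent after the change typically looks like this (assuming a systemd-based installation; the control script shipped with the agent is the alternative):

systemctl restart wazuh-agent
# or, using the bundled control script
/var/ossec/bin/wazuh-control restart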

This debug output will help us identify where the process is failing and determine the next steps.
