Hi Julian Jorge,
Hope you are doing well. Thank you for using Wazuh.
The option email_maxperhour sets the maximum number of alert emails that will be sent within a one-hour interval. Once this limit is reached, subsequent alerts are queued and sent together in a single email during the next interval. A small value can therefore cost you real-time alerting whenever the limit is reached, so we suggest setting this option to a high value.
Default value: 12
Allowed values: Any number from 1 to 1000000
Update the configuration inside ossec.conf:
<ossec_config>
  <global>
    <email_maxperhour>60</email_maxperhour>
  </global>
</ossec_config>
Also, if you are using a Google SMTP server, keep in mind that you may see the message “You have reached a limit for sending mail” if you send more than 500 emails in a day. When you get this error, you should be able to send emails again within 1 to 24 hours.
Check this document:
https://support.google.com/mail/answer/22839
Also, check this document, where you will find an example of sending alerts in real time:
https://wazuh.com/blog/how-to-send-email-notifications-with-wazuh/
I hope this answers your questions. Please let me know if you need any further help or assistance.
Regards
Md. Nazmur Sakib
Hello Nazmur,
I believe I may not be explaining myself correctly.
Our issue doesn't stem from email, but rather from another scenario:
We are monitoring a set of folders that are constantly being modified by different users, and we have rules we've created, for example, "Document deleted," "Document added," "File name modification." Then, we have rules that check if there are a certain number of "Document deleted" and "Document added" events within a specific time frame to trigger a third rule that says "Possible ransomware."
Now, the problem is that this is not being monitored in real-time (we understand it may be due to the volume of changes), but it only notifies us precisely at midnight with all the changes as if it were "Possible ransomware."
Is there anything we can do to address this?
Best regards.
Hi Julian Jorge,
Hope you are doing well.
Are you using the realtime option in your FIM configuration?
<syscheck>
  <directories realtime="yes">FILEPATH/OF/MONITORED/DIRECTORY</directories>
</syscheck>
The realtime attribute enables real-time/continuous monitoring of directories on Windows and Linux endpoints.
If you are still facing issues with real-time monitoring, you can share the custom rules you have written for FIM, so that I can better understand your problem and guide you accordingly.
Regards
Md. Nazmur Sakib
Hi Julian Jorge,
Hope you are doing well. Sorry for the late response.
Looking at the log, it seems that, based on the rules' timeframe, frequency, and correlation, the alert is triggering at a specific time.
Rule 500163 needs at least 20 minutes to trigger because of the previous rule, and the next rule will then need 20 * 15 = 300 minutes (5 hours). However, I am guessing the logs are not generated continuously, so some rules take even longer to trigger, which in turn delays the next rule in the chain.
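To make the timing concrete, here is a minimal sketch of how such a chain could be written. The rule IDs come from this thread, but the frequency and timeframe values are assumptions for illustration; your actual rules will differ. Note that timeframe is expressed in seconds.

```xml
<!-- Hypothetical reconstruction of the chained rules (frequency and
     timeframe values are assumed, not taken from your configuration). -->
<group name="custom_rules">
  <!-- 500163 fires after several matches of 500162 inside its timeframe;
       with a sparse event stream this can take around 20 minutes. -->
  <rule id="500163" level="8" frequency="5" timeframe="1200">
    <if_matched_sid>500162</if_matched_sid>
    <description>Multiple document deletions/additions</description>
  </rule>
  <!-- 500164 then needs 15 matches of 500163, so in the best case it
       fires after 20 * 15 = 300 minutes (5 hours). -->
  <rule id="500164" level="12" frequency="15" timeframe="18000">
    <if_matched_sid>500163</if_matched_sid>
    <description>Possible ransomware activity</description>
  </rule>
</group>
```

This is why the composite alert arrives hours after the underlying file events rather than in real time: each correlation stage has to accumulate its full count before the next one can start counting.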
I hope you find the information useful.
Regards
Md. Nazmur Sakib
Hi Julian Jorge,
Sorry about earlier; I misunderstood your rules. I ran some tests to reproduce a similar situation.
<group name="custom_rules">
  <rule id="100502" level="8" frequency="3" timeframe="520">
    <if_matched_sid>5501</if_matched_sid>
    <description>Rule ID 5501 multiple times</description>
  </rule>
</group>
Yes, the information about frequency and timeframe is correct. As you can see, 6 alerts were generated for 14 events. The only way to find out whether your rules are creating an anomaly is to check whether rules 500162 and 500163 have triggered more than 20 times while rule 500164 still did not trigger.
Let me know your findings.
Regards
Md. Nazmur Sakib