I have worked around the showstopper Windows events time stamp issue in logstash as follows:
Created a custom grok pattern in the file /etc/logstash/patterns/custom-patterns with the following content:
WINEVTDATE \d\d\d\d \w\w\w \d\d \d\d:\d\d:\d\d
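For reference, this pattern matches the leading date that OSSEC prepends to Windows event log entries. An illustrative full_log line is shown below; the fields after the date are a hypothetical example, not taken from a real alert:

```
2016 Mar 15 13:45:02 WinEvtLog: Security: AUDIT_SUCCESS(4624): ...
```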
And added the following to /etc/logstash/conf.d/01-ossec-wazuh-filter.conf:
grok {
  patterns_dir => [ "/etc/logstash/patterns/custom-patterns" ]
  match => [ "full_log", "%{WINEVTDATE:winevtdate}" ]
}
date {
  match => [ "winevtdate", "yyyy MMM dd HH:mm:ss" ]
  target => "@timestamp"
}
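One caveat worth noting: when the grok pattern does not match (i.e. the event is not a Windows event), logstash adds the _grokparsefailure tag and @timestamp keeps the reception time. The date filter can be guarded accordingly so it only runs on events that actually carried a winevtdate field; this is a sketch of that conditional, not something from my running config:

```
if "_grokparsefailure" not in [tags] {
  date {
    match => [ "winevtdate", "yyyy MMM dd HH:mm:ss" ]
    target => "@timestamp"
  }
}
```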
This works for Windows events. In elasticsearch, the event timestamp for Windows events now matches the event generation time, but I presume only because in my setup elasticsearch, logstash, ossec server and ossec agent are all in the same time zone.
There may be gotchas. What would happen in the following cases:
1. What if there were multiple ossec servers, each in a different time zone, and logstash/elasticsearch in yet another time zone?
2. What if the event in ossec was not a Windows event, but included the date in a different format with no time zone information?
3. What if the event itself did not have a time stamp?
I think the solution to 1 & 2 is:
- If the event includes a date, have the OSSEC server use the time stamp in the received event as the OSSEC log time stamp.
- If time zone information is present alongside the date in the event, make use of it.
- If no time zone information is present alongside the date, provide a mechanism to manually specify the agent's time zone in the agent config on the server: /var/ossec/etc/shared/agent.conf?
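On the logstash side, the multi-time-zone case can at least be partially mitigated today with the date filter's timezone option, which tells logstash which zone the parsed date is in when the event itself carries none. The zone name below is an illustrative assumption, not taken from my setup, and it would have to be set per source (e.g. in per-server pipeline configs) to cover agents in different zones:

```
date {
  match => [ "winevtdate", "yyyy MMM dd HH:mm:ss" ]
  timezone => "Europe/Paris"   # assumed zone of the originating agent
  target => "@timestamp"
}
```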
For 3:
- Using the time the ossec server received the event, with the server's time zone, remains the only option.
(I believe this is the currently adopted approach irrespective of cases 1 or 2 -- correct me if I am mistaken.)
The ONLY time stamp that is usable for security correlations, investigations and reporting is the time the event was generated, with the assumption that the clock of the system that generated it is synchronized with a reliable reference (NTP). Given that every regulatory mandate explicitly insists on the integrity of the time stamp throughout the processing chain, Wazuh OSSEC fails dismally in this respect - it does work as designed, but the design is severely flawed!

The event time stamp stored in elasticsearch can be meaningless, since the gap between generation time and reception time is highly variable, depending on how reliable or busy the infrastructure is. Combined with the OSSEC agent's default behavior of picking up events where it left off after any interruption in communication with the ossec server, the setup as currently designed fails to satisfy - among many others - PCI requirement 10.4, and therefore "as is" cannot be utilized for PCI purposes.
I have managed to work around the limitation under very specific circumstances (this remains a limited workaround, THIS IS NOT A SOLUTION). The real solution is to redesign the log output time stamp so that, under all circumstances, it matches the event generation time (and time zone, if present in the event).
On the second issue, stopping the agent, manually deleting the files under bookmarks, and restarting the agent works.
A configuration item should be added to the localfile section of the OSSEC Windows agent config to toggle between the current default and capturing only from the time the agent starts, ignoring anything that preceded it. There is a need for both scenarios: depending on the circumstances, we may want the current behavior, or we may want to discard what was not logged during an interruption for specific logs (especially if a huge backlog has built up) and start fresh.
Implementing an option as follows:
<localfile>
  <location>Microsoft-Windows-DNS-Client/Operational</location>
  <log_format>eventchannel</log_format>
  <keep_bookmark>yes/no</keep_bookmark>
</localfile>
"yes" will behave as is currently the case, "no" will delete the bookmark for the log file file in question upon agent start (or restart).