I noticed that the location tag value in the shared configuration is misspelled: the channel name should be Microsoft-Windows-DNSServer/Analytical, i.e. Windows-DNSServer, not Windows-DSNServer.
Therefore, please configure it as shown below in the agent ossec.conf file:

<localfile>
  <location>Microsoft-Windows-DNSServer/Analytical</location>
  <log_format>eventchannel</log_format>
</localfile>

Hi Yap,
It would be helpful if you could share sample JSON logs from the DNS Server logs. By default, Windows event logs are stored in EVTX format, so you’ll need to convert them to JSON first. You can follow the steps below to do that.
Step 1: Export DNS Server logs (EVTX)

Open Event Viewer.
Navigate to:
Applications and Services Logs → Microsoft → Windows → DNS-Server.
Select the relevant Analytical log (if available).
Choose a few events, right-click, and select Save Selected Events.
Provide a file name and choose a location to save the .evtx file.
Step 2: Convert the EVTX file to JSON

Open PowerShell as Administrator.
Create a working directory and navigate to it:
Create and activate a Python virtual environment:
Install the wazuhevtx tool:
Verify the installation:
Convert the EVTX file to JSON:
Example:
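The conversion steps above might look roughly like the following in PowerShell. This is a hedged sketch: the directory name is arbitrary, and the exact wazuhevtx installation and invocation (package name, script entry point, flags) may differ from what is shown, so please cross-check against the tool's README.

```shell
# Create a working directory and enter it (name is arbitrary)
mkdir evtx-work
cd evtx-work

# Create and activate a Python virtual environment (PowerShell activation)
python -m venv venv
.\venv\Scripts\Activate.ps1

# Install the wazuhevtx tool (assumes it installs via pip; see its README)
pip install wazuhevtx

# Verify the installation
pip show wazuhevtx

# Convert the EVTX file to JSON (assumed invocation; check the tool's docs)
python evtx2json.py .\dns-server.evtx -o .\dns-server.json
```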
After conversion, please share the JSON output.
Alternatively, you can share the EVTX file, and I can review it on my side.
Since we are testing JSON logs (not the eventchannel format), a small workaround is required.
Navigate to:
Update rule ID 60000 as shown below:
Use the wazuh-logtest utility to test the Windows JSON logs.
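For reference, wazuh-logtest is run interactively on the Wazuh manager; a typical session looks like this (the pasted event itself depends on your converted logs):

```shell
# On the Wazuh manager, start the interactive log test utility
/var/ossec/bin/wazuh-logtest
# Then paste one event per line (e.g. a single-line JSON log);
# the tool prints the pre-decoding, decoding, and rule-matching phases.
```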
After testing, make sure to:
Roll back this rule to its default configuration
Restart the Wazuh manager
Please let me know once you’ve completed these steps or if you face any issues.

If you need it in this format:
_ldap._tcp.0be0c738-69f3-4a51-af5b-caba684379d1.domains._msdcs.final.local,
then I believe you will need to write a custom script to transform those logs into the expected format and write them into a new log file.
After that, configure monitoring on this new log file instead of the original one. The script should continuously read the source log, wait until new entries are written, and then process and write them in the required format so they can be handled correctly.
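The script described above could be sketched as follows. This is a minimal illustration, not a production implementation: the file paths are hypothetical, and the transform shown assumes the source log encodes DNS names with parenthesized length prefixes (e.g. "(5)_ldap(4)_tcp(0)") that need collapsing into dotted form; replace transform() with whatever your actual source format requires.

```python
import re
import time


def transform(line: str) -> str:
    """HYPOTHETICAL transform: collapse parenthesized length prefixes
    such as "(5)_ldap(4)_tcp(0)" into the dotted form "_ldap._tcp."
    Adjust this to match what your source log actually contains."""
    return re.sub(r"\(\d+\)", ".", line).lstrip(".")


def follow(path):
    """Yield lines appended to `path`, tail -f style."""
    with open(path, "r", encoding="utf-8") as src:
        src.seek(0, 2)            # start at end of file: only new entries
        while True:
            line = src.readline()
            if not line:
                time.sleep(0.5)   # wait until new entries are written
                continue
            yield line


def run(source_log: str, target_log: str) -> None:
    """Continuously read the source log and append transformed entries."""
    with open(target_log, "a", encoding="utf-8") as out:
        for line in follow(source_log):
            out.write(transform(line))
            out.flush()           # make entries visible to the agent promptly


if __name__ == "__main__":
    # Hypothetical paths; point the agent's <localfile> at the target file.
    run("dns_debug.log", "dns_transformed.log")
```

You would then monitor dns_transformed.log with a syslog-format localfile block instead of the original file.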
You can make further changes to your decoder following these documents.

I have modified the child decoder and updated the regex to accurately capture all log formats. I added a second child decoder with the same field names, using a regex adjusted for the differing log types: some logs contain multiple consecutive spaces while others do not, and one log type carries a numeric value where the other carries a string.
Therefore, you can add the below decoders after the previous child decoder.
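For illustration only, a second child decoder that reuses the same field names with a more tolerant regex follows the general shape below. The decoder name, parent name, and field names here are hypothetical placeholders, not the actual decoders from this thread:

```xml
<!-- Hypothetical sketch: names and fields are placeholders -->
<decoder name="windows-dns-child-alt">
  <parent>windows-dns</parent>
  <!-- \s+ tolerates runs of spaces; \S+ accepts numeric or string values -->
  <regex offset="after_parent">^\s+(\S+)\s+(\S+)</regex>
  <order>query_type, query_name</order>
</decoder>
```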
You can make further changes to your decoder following these documents.
I’ve improved the decoder to capture the Query_Response, whether it is present or appears as a blank space, as you suggested.
Please replace the last two decoders for the mentioned log types with the updated versions below.


You can make further changes to your decoder following these documents.
I have simplified the decoders as shown below. I noticed that all logs up to this point (for example: 1/5/2026 10:33:15 AM 1350 PACKET 00000236D2977CC0 UDP Rcv 172.18.3.34) match the same regex pattern. After this point, the value structure changes.
Therefore, I have rewritten the decoders as you suggested. With this approach, they will match all the expected values in the logs shared so far.