serv2:# ls -lh /var/log/172.16.5.111.log
-rw-rw-rw- 1 syslog adm 12M Dec 4 09:24 /var/log/172.16.5.111.log
How can this local log be viewed and analyzed through the Wazuh dashboard?
Thank you!!!
Hi A Bobrov,
Are these log files on the Wazuh server?
If yes, you can use the localfile option to have Wazuh collect these logs.
Add this configuration under <ossec_config>:
<localfile>
<log_format>syslog</log_format>
<location>/var/log/10.14.3.103.log</location>
</localfile>
<localfile>
<log_format>syslog</log_format>
<location>/var/log/10.9.0.1.log</location>
</localfile>
<localfile>
<log_format>syslog</log_format>
<location>/var/log/172.16.5.111.log</location>
</localfile>
If the log files are on a different server, install a Wazuh agent on that server and add the same localfile configuration to the agent's ossec.conf.
Next, to see alerts from your logs, you need to write decoders and rules if your logs are not triggering alerts on the Dashboard.
Check this document to learn more about writing decoders and rules:
https://documentation.wazuh.com/current/user-manual/ruleset/index.html
Let me know if you need any further information.
Restart the Wazuh manager and share the output of this command from your agent endpoint:
grep -iE "wazuh-logcollector" /var/ossec/logs/ossec.log
With this command, we can verify whether the files are being monitored.
If you see a log like this:
2024/12/03 11:39:28 wazuh-logcollector: INFO: (1950): Analyzing file: '/var/log/logfile'
the file is being monitored. If alerts are still not appearing, you might need to write decoders and rules to trigger alerts on the Dashboard. Check this document to learn more about writing decoders and rules: https://documentation.wazuh.com/current/user-manual/ruleset/index.html
Good afternoon, Nazmur!!!
We were able to obtain the data you requested:
serv2:~# cat /var/ossec/logs/ossec.log | grep -iE "wazuh-logcollector"
2024/12/05 17:00:49 wazuh-logcollector: INFO: (1225): SIGNAL [(15)-(Terminated)] Received. Exit Cleaning...
2024/12/05 17:01:01 wazuh-logcollector: INFO: Monitoring output of command(360): df -P
2024/12/05 17:01:01 wazuh-logcollector: INFO: Monitoring full output of command(360): netstat -tulpn | sed 's/\([[:alnum:]]\+\)\ \+[[:digit:]]\+\ \+[[:digit:]]\+\ \+\(.*\):\([[:digit:]]*\)\ \+\([0-9\.\:\*]\+\).\+\ \([[:digit:]]*\/[[:alnum:]\-]*\).*/\1 \2 == \3 == \4 \5/' | sort -k 4 -g | sed 's/ == \(.*\) ==/:\1/' | sed 1,2d
2024/12/05 17:01:01 wazuh-logcollector: INFO: Monitoring full output of command(360): last -n 20
2024/12/05 17:01:01 wazuh-logcollector: INFO: (1950): Analyzing file: '/var/ossec/logs/active-responses.log'.
2024/12/05 17:01:01 wazuh-logcollector: INFO: (1950): Analyzing file: '/var/log/dpkg.log'.
2024/12/05 17:01:01 wazuh-logcollector: INFO: (1950): Analyzing file: '/var/log/10.16.0.100.log'.
2024/12/05 17:01:01 wazuh-logcollector: INFO: (1950): Analyzing file: '/var/log/10.14.3.103.log'.
2024/12/05 17:01:01 wazuh-logcollector: INFO: (1950): Analyzing file: '/var/log/10.9.0.1.log'.
2024/12/05 17:01:01 wazuh-logcollector: INFO: (1950): Analyzing file: '/var/log/172.16.5.111.log'.
2024/12/05 17:01:01 wazuh-logcollector: INFO: Started (pid: 3397875).
2024/12/05 17:01:04 wazuh-logcollector: INFO: (9203): Monitoring journal entries.
CheckPoint: 172.16.5.111
Cisco: 10.16.0.100, 10.14.3.103, 10.9.0.1
Can you tell me what query should be made in the Wazuh dashboard? For CheckPoint, by IP 172.16.5.111, we don't find anything. :(
On Thursday, December 5, 2024 at 15:41:56 UTC+3, A Bobrov wrote:
The log you have shared does not match any rules.
Can you enable the archives.json log and share some sample logs?
To do this, enable the JSON archives log in your manager's ossec.conf:
<ossec_config>
<global>
...
<logall_json>yes</logall_json>
...
</global>
</ossec_config>
After making the changes, make sure to restart the Wazuh manager (systemctl restart wazuh-manager).
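If you want to double-check that the option is set, you can parse the config programmatically. This is only an illustrative Python sketch assuming a single <ossec_config> root element; on a real manager the file is /var/ossec/etc/ossec.conf:

```python
import xml.etree.ElementTree as ET

# Hypothetical ossec.conf fragment with the JSON archives option enabled.
conf = """
<ossec_config>
  <global>
    <logall_json>yes</logall_json>
  </global>
</ossec_config>
"""

# Parse the fragment and read the <logall_json> value under <global>.
root = ET.fromstring(conf)
value = root.findtext("global/logall_json")
print(value)  # yes
```

If the value is anything other than "yes", archives.json will not be written.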
The log you have shared does not match any rules.
You need to test the log inside the full_log field and write decoders and rules based on it.
172.16.5.111.log: Dec 6 14:22:50 172.16.5.111 time="1733484164" action="Drop" ifdir="inbound" ifname="eth0" logid="0" loguid="{0x6752de88,0x1,0x6f0510ac,0x1857a371}" origin="172.16.5.11" originsicname="CN=gwtest,O=TESTSMS..u2xvp5" sequencenum="1" time="1733484164" version="5" dst="172.16.63.255" inzone="External" layer_name="Network" layer_uuid="8a994dd3-993e-4c0c-92a1-a8630b153f4c" match_id="3" parent_rule="0" rule_action="Drop" rule_name="Cleanup rule" rule_uid="60e2929c-4371-402b-bf57-3f2efca62cad" outzone="Local" product="VPN-1 & FireWall-1" proto="17" s_port="137" service="137" service_id="nbname" src="172.16.1.38" log_link="https://172.16.5.111/smartview/#external-nav%3DOpenLogCard&domain-id%3D41e821a0-3720-11e3-aa6e-0800200c9fde&args%3DbWFya2VyPUBBQEBCQDE3MzM0NjM5MzRAQ0AyMTAxNSZvcmlnX2xvZ19zZXJ2ZXJfaWQ9NDBkZTZhNzQtYmE1Ny1jODQ4LTlmZGUtMjMzMjRhYjdhOTVi"
Based on your JSON log, I can see it is not matched by any decoder.
You need to write decoders and rules for this log.
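Since the CheckPoint log is a series of key="value" pairs, you can preview which fields a decoder could extract. This is just an illustrative Python sketch (not Wazuh code; the field values come from your sample log, shortened here):

```python
import re

# Shortened CheckPoint-style log line from the sample above.
log = ('Dec 6 14:22:50 172.16.5.111 time="1733484164" action="Drop" '
       'ifdir="inbound" src="172.16.1.38" dst="172.16.63.255" s_port="137"')

# Every field follows the key="value" shape; one regex pulls them all out.
fields = dict(re.findall(r'(\w+)="([^"]*)"', log))

print(fields["action"], fields["src"], fields["dst"])  # Drop 172.16.1.38 172.16.63.255
```

These are the kinds of fields (action, src, dst, and so on) that your custom decoder would map into Wazuh fields.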
I am sharing some sample decoders and rules
Decoders:
<decoder name="network_device">
<prematch>^172.16.5.111.log: </prematch>
</decoder>
<decoder name="network_device_child">
<parent>network_device</parent>
<regex>\S+ (\w+\s*\d+\s*\d+:\d+:\d+) (\.+) \.+action="(\.+)"</regex>
<order>logtimestamp, ip_address, action</order>
</decoder>
Rules:
<group name="network_device_rule,">
<rule id="110000" level="3">
<decoded_as>network_device</decoded_as>
<description>Network device event</description>
</rule>
</group>
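For reference, here is a rough Python approximation of what the decoder regex above is meant to capture. Note that Wazuh's OSSEC regex syntax differs from PCRE (there, \. means "any character"), so the pattern below is a translation for illustration, not the same expression:

```python
import re

# Log body roughly as the decoder would see it after the prematch
# "^172.16.5.111.log: " has been consumed (two-space day padding as in syslog).
body = ('Dec  6 14:22:50 172.16.5.111 time="1733484164" action="Drop" '
        'ifdir="inbound"')

# PCRE translation of the three capture groups in the decoder's <order>:
# logtimestamp, ip_address, action.
pattern = re.compile(r'(\w+\s+\d+\s+\d+:\d+:\d+) (\S+) .*?action="([^"]+)"')

m = pattern.search(body)
logtimestamp, ip_address, action = m.groups()
print(logtimestamp, ip_address, action)
```

You can verify the real decoder the same way with /var/ossec/bin/wazuh-logtest on the manager.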
You can follow these documents for writing decoders and rules: https://documentation.wazuh.com/current/user-manual/ruleset/index.html
I hope you find this useful.
One log has an extra line before the log entry and the other doesn't. Check this screenshot:
I was wondering why there is one space and two space differences before the logs. Can you recheck if this is because of your <localfile> configuration?
If you are not able to remove the extra space by adjusting the localfile configuration, I am afraid the JSON decoder will not work for your log. You might need to write custom decoders for your logs.
Decoders
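To illustrate why a stray leading space matters: a decoder's prematch is anchored at the start of the line, so a single extra space is enough to stop it from matching. A minimal Python sketch (PCRE syntax here, not Wazuh's own regex dialect):

```python
import re

# Hypothetical anchored prematch, like <prematch>^172.16.5.111.log: </prematch>
prematch = re.compile(r'^172\.16\.5\.111\.log: ')

clean  = '172.16.5.111.log: Dec 6 14:22:50 172.16.5.111 action="Drop"'
spaced = ' 172.16.5.111.log: Dec 6 14:22:50 172.16.5.111 action="Drop"'

print(bool(prematch.match(clean)))   # True: anchor matches at position 0
print(bool(prematch.match(spaced)))  # False: the leading space breaks the anchor
```

The same principle applies to the built-in JSON decoder: anything before the expected start of the message prevents it from decoding the line.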
I hope you find this information useful.
I believe this is happening because the original log has this extra space in it. In this situation, I suggest you use a custom decoder, as the default JSON decoder is not working for your log. This documentation will be helpful for writing custom decoders:
Decoders
I hope you find this helpful.