Issue parsing JSON logs from Snort

Moiz lakdawala

Jul 27, 2023, 7:59:26 PM
to Wazuh mailing list
Hi, I am facing a strange problem that I do not understand how to troubleshoot.

My scenario is as follows:

1) I have Snort installed on one of the VMs, which stores all Snort logs in JSON.
2) On this same VM I installed the Wazuh agent and pointed the agent's ossec.conf to read the alert_json.txt file, so that these alerts are sent back to the Wazuh manager (a sketch of the relevant ossec.conf block is shown after this list).

3) On the Wazuh manager I created a small catch-all alert rule.
4) The logs are now being ingested by the Wazuh manager.
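
For reference, the agent-side ossec.conf block I'm describing looks roughly like this (the path matches the location field in the log sample below; treat it as a sketch of my setup rather than an exact copy):

<localfile>
  <log_format>json</log_format>
  <location>/var/log/snort/alert_json.txt</location>
</localfile>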

Problem: the incoming logs appear in both alerts.json and archives.json, but they do not show up on the dashboard.

If you look at the log sample, it is even parsed and matched to the correct and expected rule, yet it still does not show up on the dashboard.

Log Sample:
{"timestamp":"2023-07-27T19:25:25.754+0000","rule":{"level":3,"description":"snort_log","id":"799107","firedtimes":8,"mail":false,"groups":["snort-ids"]},"agent":{"id":"006","name":"uat-usa","ip":"10.94.0.109"},"manager":{"name":"wazhu-01"},"id":"1690485925.443971335","full_log":"{\"seconds\":1690485923,\"action\":\"allow\",\"class\":\"none\",\"dir\":\"C2S\",\"dst_addr\":\"169.254.169.254\",\"dst_ap\":\"169.254.169.254:80\",\"dst_port\":80,\"gid\":119,\"iface\":\"ens4\",\"msg\":\"(http_inspect) URI path contains consecutive slash characters\",\"mpls\":0,\"pkt_gen\":\"stream_tcp\",\"pkt_len\":109,\"pkt_num\":287505,\"priority\":3,\"proto\":\"TCP\",\"rev\":1,\"rule\":\"119:8:1\",\"service\":\"http\",\"sid\":8,\"src_addr\":\"10.94.0.109\",\"src_ap\":\"10.94.0.109:52882\",\"src_port\":52882,\"vlan\":0,\"timestamp\":\"07/27-19:25:23.827743\"}","decoder":{"name":"json"},"data":{"action":"allow","seconds":"1690485923","class":"none","dir":"C2S","dst_addr":"169.254.169.254","dst_ap":"169.254.169.254:80","dst_port":"80","gid":"119","iface":"ens4","msg":"(http_inspect) URI path contains consecutive slash characters","mpls":"0","pkt_gen":"stream_tcp","pkt_len":"109","pkt_num":"287505","priority":"3","proto":"TCP","rev":"1","rule":"119:8:1","service":"http","sid":"8","src_addr":"10.94.0.109","src_ap":"10.94.0.109:52882","src_port":"52882","vlan":"0","timestamp":"07/27-19:25:23.827743"},"location":"/var/log/snort/alert_json.txt"}


Rule file created on Wazuh-manager:
<group name="snort-ids">
  <rule id="799107" level="3">
    <decoded_as>json</decoded_as>
    <field name="sid">\.+</field>
    <field name="vlan">0</field>
    <description>snort_log</description>
  </rule>
</group>


I have tried every possible scenario I can think of to troubleshoot this; any help at this point is greatly appreciated.


Thanks in advance

Sebastian Dario Bustos

Jul 28, 2023, 12:20:33 AM
to Wazuh mailing list
Hello Moiz,
Thank you for using Wazuh!
I've tested the rule and the log, and it does trigger an alert, so this should be working.
- Do you see any other current alerts on your dashboard?  

- If not, can you please check that Filebeat is running with 'systemctl status filebeat' (if it is not running, please start it). If it is running, please try the command 'filebeat test output' from the manager node's CLI.
Filebeat is the component that ingests alerts.json and alerts.log into the Indexer nodes, and those indices are where the alerts you see on your dashboard come from. If there is an error with the output test, it may indicate a certificate or connectivity issue.
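
For reference, in a standard Wazuh deployment the relevant part of /etc/filebeat/filebeat.yml enables the alerts input of the wazuh module, roughly like this (treat it as a sketch; your file may differ slightly):

filebeat.modules:
  - module: wazuh
    alerts:
      enabled: true
    archives:
      enabled: false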

- Also, please check that the disk usage on the Indexer node is not at or above 90%. That is the default high watermark; once it is reached, the Indexer stops allocating new shards to that node, and at the flood-stage watermark (95% by default) indices are switched to read-only, which prevents Filebeat from ingesting further records.
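
You can check the per-node disk usage from Dev Tools as well, for example:

GET _cat/allocation?v

which includes a disk.percent column for each node.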

- If all of the above is correct, you may want to check the health of your cluster from Dev Tools (Dashboard menu -> Dev Tools), for example with this API call:
 'GET /_cluster/health' (or 'GET /_cat/health?v', which gives similar results).

If your active shards have reached 1000 (the default maximum per node), the cluster will not create new shards and therefore will not index more data.

If this is the case, you can simply delete old indices (you can do it from Dev Tools as well, e.g. 'DELETE wazuh-alerts-4.x-2022-12-*' to delete all the indices from December 2022) or, as a quick fix, increase the maximum shard count, for example to 1300 (deleting old indices is always preferred):
PUT /_cluster/settings
{
  "transient": {
    "cluster.max_shards_per_node": 1300
  }
}
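
If you go the deletion route, you can first list the alert indices sorted by name (oldest first) to decide what to remove, for example:

GET _cat/indices/wazuh-alerts-*?v&s=index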

Please let me know.
Regards.

Moiz lakdawala

Jul 28, 2023, 11:40:01 AM
to Wazuh mailing list
Hey Sebastian Dario Bustos, first of all let me start by thanking you for investing some time into this; I really appreciate your help, as I have been stuck on this issue for days and was not able to get any help.

For the Filebeat portion, I checked it again (and had checked it earlier as well); everything looks okay:
root@wazhu-01:/var/ossec/ruleset/rules# systemctl status filebeat
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
     Loaded: loaded (/lib/systemd/system/filebeat.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2023-07-27 17:15:18 UTC; 22h ago
       Docs: https://www.elastic.co/products/beats/filebeat
   Main PID: 2859647 (filebeat)
      Tasks: 12 (limit: 27006)
     Memory: 147.9M
     CGroup: /system.slice/filebeat.service
             └─2859647 /usr/share/filebeat/bin/filebeat --environment systemd -c /etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --path.config /etc/filebeat --path.data /var/lib/filebeat --path.logs /var/log/filebeat

Jul 27 17:15:18 wazhu-01 systemd[1]: Started Filebeat sends log files to Logstash or directly to Elasticsearch..
root@wazhu-01:/var/ossec/ruleset/rules# filebeat test output
elasticsearch: https://10.250.0.9:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 10.250.0.9
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.2
    dial up... OK
  talk to server... OK
  version: 7.10.2

For the disk space, only about 15% is used.

For the shards, everything looks good, as the count is at 188 right now:
{
  "cluster_name": "wazuh-indexer-cluster",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "discovered_master": true,
  "discovered_cluster_manager": true,
  "active_primary_shards": 188,
  "active_shards": 188,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 23,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 89.0995260663507
}


I do have an index management policy set to remove old indices after 60 days (a rough sketch of it is below).
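
That policy is roughly equivalent to the following ISM definition (I set mine up through the Index Management UI, so the policy name here is illustrative):

PUT _plugins/_ism/policies/delete_after_60d
{
  "policy": {
    "description": "Delete alert indices older than 60 days",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "60d" } }
        ]
      },
      {
        "name": "delete",
        "actions": [ { "delete": {} } ],
        "transitions": []
      }
    ],
    "ism_template": {
      "index_patterns": ["wazuh-alerts-*"],
      "priority": 100
    }
  }
}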

Also, something interesting: it is literally just these logs that do not show up on the dashboard. All the other alerts and logs appear properly in the dashboard.


One other thing that confused me: I have Slack alerting set for log level 10, and if I change the level in the rule to 10 I do get the alerts on Slack, but they still do not show up on the dashboard.
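
For context, the Slack alerting I'm referring to is the standard Wazuh integration block in the manager's ossec.conf, roughly like the following (the hook URL is a placeholder):

<integration>
  <name>slack</name>
  <hook_url>https://hooks.slack.com/services/REDACTED</hook_url>
  <level>10</level>
  <alert_format>json</alert_format>
</integration>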