Custom rule not showing in Kibana


riiky devils

Apr 25, 2022, 10:07:06 PM
to Wazuh mailing list
Hi team,

Currently I have an issue where my custom rule is not showing in Kibana; there are no alerts for it.
no alert for custom rule.png

However, if I search through alerts.json, there are alerts that exactly match my custom rule.id:
there is alert with match custom rule id.png

When I look at my index pattern wazuh-alerts-*, there is no refresh button to update the field mappings with the fields my custom decoder extracts from the CEF log. Some of the custom fields are not available in wazuh-alerts-* either.
no refresh button to update field.png

sample field not available.png

How can I resolve this issue?

Thanks,

Jonathan Martín Valera

Apr 26, 2022, 4:24:02 AM
to Wazuh mailing list

Hi,

As I understand it, you have created a decoder and rule for a specific use case, and you can’t get it to display in Kibana, right?

Can you see any other recent alerts for this agent in Kibana?

There is no need to modify anything in the index pattern: the decoder extracts the fields you need, and the rules then use those fields to set your alert conditions. Note that when the alert is generated and indexed in Elasticsearch, the decoded fields are stored inside a data structure like the following:

{"data"{"field1": "value1", "field2": "value2"}}

That information is then displayed in Kibana under the names data.field1 and data.field2.
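
To illustrate, here is a hypothetical minimal decoder (not one of yours; the names and the regex are made up for this example). Fields listed in <order> end up nested under data in the indexed alert:

<!-- hypothetical example, for illustration only -->
<decoder name="example_cef">
  <prematch>CEF:0</prematch>
</decoder>

<decoder name="example_cef_fields">
  <parent>example_cef</parent>
  <regex>act=(\d+) src=(\S+)</regex>
  <order>action_code, srcip</order>
</decoder>

An event matched by this decoder would expose data.action_code and data.srcip in Kibana, without any change to the index pattern.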

Having clarified this, it is necessary to find out why the alerts you mention are not being displayed.

First of all, you should check whether the wazuh-manager is generating alerts using your custom decoder and rule. To do that, generate events that match the decoder and rule, and check whether the wazuh-manager stores the resulting alert in alerts.json. As far as I can see, you say that the alerts are stored there, so we will assume that everything is correct up to this point.
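
As a quick way to verify this step (assuming a Wazuh 4.x manager, where the tool is included), you can also paste the raw event into wazuh-logtest and confirm that your decoder and rule fire:

/var/ossec/bin/wazuh-logtest

It prompts for an event, then prints the decoding phases and, if a rule matches, its id, level and description. Since you already see the alerts in alerts.json, this is just for reference.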

The next step is to verify that the alerts stored in alerts.json are being indexed correctly in Elasticsearch. The component in charge of forwarding the alerts is called Filebeat.

Check Filebeat communication

  • Check that Filebeat is running

    systemctl status filebeat
    
  • Check the communication between Filebeat and Elasticsearch:

    filebeat test output
    

    An example of a successful test:

    elasticsearch: https://127.0.0.1:9200...
      parse url... OK
      connection...
        parse host... OK
        dns lookup... OK
        addresses: 127.0.0.1
        dial up... OK
      TLS...
        security: server's certificate chain verification is enabled
        handshake... OK
        TLS version: TLSv1.3
        dial up... OK
      talk to server... OK
      version: 7.10.2
    
  • Check Filebeat log for errors

    journalctl -u filebeat | egrep -i "ERROR"
    

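As an additional, optional check, Filebeat can also validate its own configuration file:

filebeat test config

It should print Config OK if the configuration is valid.
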
If all this is correct, let’s check if the alert information is being stored in the corresponding index.

Check Elasticsearch status and indices

  • Check that Elasticsearch service is running.

    systemctl status elasticsearch
    
  • Check if there is any error in the Elasticsearch log

    egrep -i "error|warn" /var/log/elasticsearch/elasticsearch.log
    
  • Check if any alert with id x has been stored in a wazuh-alerts index.

    curl -X GET -k --user <elasticsearch_user>:<elasticsearch_password> --header 'Content-Type: application/json' -d "{\"track_total_hits\": false,   \"query\": {\"match\":   {\"rule.id\": \"x\"}}}" https://localhost:9200/<your_wazuh_alerts_index>/_search
    

    Note: Replace the placeholders with the values for your use case, and the rule id x with yours (400012).
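
    If you are not sure which wazuh-alerts index to query, you can list them first (same credentials as above; adjust the host if Elasticsearch is not listening on localhost):

    curl -X GET -k --user <elasticsearch_user>:<elasticsearch_password> "https://localhost:9200/_cat/indices/wazuh-alerts-*?v"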

Try all of the above and let us know the results :)

Best regards.

riiky devils

Apr 26, 2022, 5:08:10 AM
to Wazuh mailing list
Hi Jonathan,

After searching the Filebeat log, I found this error:

Source:"/var/ossec/logs/alerts/alerts.json", Offset:3623101305, Timestamp:time.Time{wall:0xc092077ca9fca3ae, ext:1311619782035221, loc:(*time.Location)(0x55dac59f3320)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x90d, Device:0x23}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse field [data.time] of type [keyword] in document with id 'aA4YZYABrcVX0dMTfSOC'. Preview of field's value: '{month=Apr, hour=08:36:06, day=2022}'","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:168"}}

I don't understand why it failed to parse the field data.time.

This is a sample log of the event:

Apr 20 2022 15:56:05 hakusi.manage.trendmicro.com CEF:0|Trend Micro|Apex Central|2019|WB:7|7|3|deviceExternalId=38 rt=Nov 15 2017 08:43:57 GMT+00:00 app=17 cntLabel=AggregatedCount cnt=1 dpt=80 act=1 src=10.1.128.46 cs1Label=SLF_PolicyName cs1=External User Policy deviceDirection=2 cat=7 dvchost=ApexOneClient08 fname=test.txt request=http://www.violetsoft.net/counter/insert.php?dbserver\=db1&c_pcode\=25&c_pid\=funpop1&c_kind\=4&c_mac\=FE-ED-BE-EF-0C-E1 deviceFacility=Apex One shost=ABC-HOST-WKS12

The custom decoder is also attached.

Can you help me reproduce this issue?

Thank you,
custom_apexone_wb.xml

Jonathan Martín Valera

Apr 26, 2022, 8:41:05 AM
to Wazuh mailing list

Hi,

Indeed, I have been testing with your decoders and the use case you proposed, and the alert is not indexed in Elasticsearch. To test your use case, I created the following simple rule:

<rule id="100050" level="3">
    <decoded_as>trend_micro_wb</decoded_as>
    <field name="action_wb">1</field>
    <description>Testing rule when action.wb is 1</description>
</rule>

The error I get in Filebeat is as follows:

...
(status=400): {"type":"mapper_parsing_exception","reason":"failed to parse field [data.action] of type [keyword] in document with id 'rH7GZYABgcCzr7onEg09'. Preview of field's value: '{wb=1}'","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:223"}}

As you can see, the problem is that Elasticsearch cannot parse the data.action field, because in your decoders you have a field called action.wb.

By using . as a separator in the field name, Elasticsearch tries to interpret that field as a nested data structure instead of a single field name, i.e.:

{
    "data": {
        "action": {
            "wb": "value"
        }
    }
}

Instead of

{
    "data": {
        "action.wb": "value"
    }
}

The solution is to change the . separator in the field names, for example by replacing it with _, i.e.:

<!--Replace this-->
<order>action.wb</order> 

<!--To this-->
<order>action_wb</order> 
...
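
For reference, here is a minimal sketch of what one of the renamed fields could look like in a child decoder. This is illustrative only: the parent name matches the <decoded_as> value used in the test rule above, but the child decoder name and the regex are just examples, not taken from your decoders.txt.

<!-- illustrative sketch, adapt the regex to your actual decoder -->
<decoder name="trend_micro_wb_fields">
  <parent>trend_micro_wb</parent>
  <regex>act=(\d+)</regex>
  <order>action_wb</order>
</decoder>

With this naming, the indexed field becomes data.action_wb, which Elasticsearch can map as a simple keyword instead of an object.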

After updating the decoders with the renamed fields and restarting the wazuh-manager (systemctl restart wazuh-manager), this error no longer occurs and the alert is indexed in Elasticsearch and displayed in Kibana.


2.png

Note: Remember to update the rules as well if necessary (i.e. if you reference the field names in the rule itself).

Try it and let us know the results.

decoders.txt

riiky devils

Apr 28, 2022, 12:30:05 AM
to Wazuh mailing list
Hi Jonathan,

Thank you for your recommendation. The issue is now resolved and the rule's alerts are being indexed in ELK.

Best regards,