Hi,
As I understand it, you have created a decoder and rule for a specific use case, and you can’t get it to display in Kibana, right?
Can you see any other recent alerts for this agent in Kibana?
There is no need to modify anything in the index pattern: the decoder extracts the fields you need, and the rules then use those fields to set your alert conditions. Note that when the alert is generated and indexed in Elasticsearch, the decoded fields are stored inside a data structure like the following:
{"data"{"field1": "value1", "field2": "value2"}}
This information is then displayed in Kibana under the names data.field1, data.field2, and so on.
Having clarified this, it is necessary to find out why the alerts you mention are not being displayed.
First of all, you should check whether the wazuh-manager is generating alerts using your custom decoder and rule. For that, you would have to generate events that match that decoder and rule, and see if the wazuh-manager stores the resulting alert in alerts.json. As far as I can see, you say that they are stored there, so we are going to assume that everything is correct up to this point.
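For example, assuming the default Wazuh installation path (an assumption on my part, adjust it if yours differs), you can quickly confirm this by searching alerts.json for your rule id:
# Search for alerts generated by rule 400012 (default alerts.json location assumed)
grep '"id":"400012"' /var/ossec/logs/alerts/alerts.json
# Or follow the file live while you generate a test event
tail -f /var/ossec/logs/alerts/alerts.json | grep '"id":"400012"'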
The next step is to verify whether the alerts stored in alerts.json are being indexed correctly in Elasticsearch. The component in charge of sending the alerts to Elasticsearch is Filebeat.
Check Filebeat communication
Check that Filebeat is running
systemctl status filebeat
Check the communication between Filebeat and Elasticsearch:
filebeat test output
Example of a successful test:
elasticsearch: https://127.0.0.1:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 127.0.0.1
dial up... OK
TLS...
security: server's certificate chain verification is enabled
handshake... OK
TLS version: TLSv1.3
dial up... OK
talk to server... OK
version: 7.10.2
Check Filebeat log for errors
journalctl -u filebeat | egrep -i "ERROR"
If all this is correct, let’s check if the alert information is being stored in the corresponding index.
Check Elasticsearch status and indices
Check that Elasticsearch service is running.
systemctl status elasticsearch
Check if there is any error in the Elasticsearch log
egrep -i "error|warn" /var/log/elasticsearch/elasticsearch.log
Check if any alert with rule id x has been stored in a wazuh-alerts index.
curl -X GET -k --user <elasticsearch_user>:<elasticsearch_password> --header 'Content-Type: application/json' -d "{\"track_total_hits\": false, \"query\": {\"match\": {\"rule.id\": \"x\"}}}" https://localhost:9200/<your_wazuh_alerts_index>/_search
Note: Replace the placeholders with the values for your environment, and the rule id x with yours (400012).
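For instance, assuming the default wazuh-alerts-* index pattern (an assumption; adjust it if you use a custom index name), the query for rule 400012 would look like this:
curl -k -u <elasticsearch_user>:<elasticsearch_password> -H 'Content-Type: application/json' -d '{"query": {"match": {"rule.id": "400012"}}}' "https://localhost:9200/wazuh-alerts-*/_search?pretty"
If the hits list in the response is not empty, the alert has been indexed.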
Try all of the above and let us know the results :)
Best regards.
Hi,
Indeed, I have been testing with your decoders and the use case you proposed, and the alert is not indexed in Elasticsearch. To test it, I created the following simple rule:
<rule id="100050" level="3">
<decoded_as>trend_micro_wb</decoded_as>
<field name="action_wb">1</field>
<description>Testing rule when action.wb is 1</description>
</rule>
The error I get in Filebeat is as follows:
...
(status=400): {"type":"mapper_parsing_exception","reason":"failed to parse field [data.action] of type [keyword] in document with id 'rH7GZYABgcCzr7onEg09'. Preview of field's value: '{wb=1}'","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:223"}}
As you can see, the problem is that it cannot parse the data.action field, because your decoders define a field called action.wb. By using the . as a separator in the field name, Elasticsearch tries to parse that field as a nested object instead of as a single field name, i.e.:
"data": {
  "action": {
    "wb": value
  }
}
Instead of
"data": {
  "action.wb": value
}
The solution is to change the . separators in the field names, for example replacing them with _, i.e.:
<!--Replace this-->
<order>action.wb</order>
<!--With this-->
<order>action_wb</order>
...
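With that change the decoded value ends up as a flat key under data, so the indexed alert should look roughly like this (a sketch based on the value 1 used in the test rule above):
{"data": {"action_wb": "1"}}
and it is displayed in Kibana as data.action_wb.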
After updating the decoders with these names and restarting the wazuh-manager (systemctl restart wazuh-manager), this error no longer occurs and the alert is indexed in Elasticsearch and displayed in Kibana.
Note: Remember to update the rules if necessary (i.e. if you reference the field names in the rules themselves).
Try it and let us know the results.