mapper_parsing_exception 4.12.0

Stopko0

Oct 29, 2025, 11:30:35 AM (7 days ago) Oct 29
to Wazuh | Mailing List
Hello, I'm trying to parse a JSON log from Suricata + Bro-IDS, but I'm having trouble with field name mapping (so the event isn't showing up in OpenSearch). What exactly do I need to modify in the Wazuh template to fix this issue? Wazuh AIO v4.12.0; the log from Filebeat is attached. Thanks.
-------------------------------------------------------------
WARN    [elasticsearch] elasticsearch/client.go:408     Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc238a51d25dbb29a, ext:3630233305849034, loc:(*time.Location)(0x42417a0)}, Meta:{"pipeline":"filebeat-7.10.2-wazuh-alerts-pipeline"}, Fields:{"agent":{"ephemeral_id":"32f2a491-d507-47a0-9c34-61f84c403566","hostname":"wazuh-siem-hostname","id":"00dbb780-b803-46d8-9fb3-fd84ea5f3204","name":"wazuh-siem-hostname","type":"filebeat","version":"7.10.2"},"ecs":{"version":"1.6.0"},"event":{"dataset":"wazuh.alerts","module":"wazuh"},"fields":{"index_prefix":"wazuh-alerts-4.x-"},"fileset":{"name":"alerts"},"host":{"name":"wazuh-siem-hostname"},"input":{"type":"log"},"log":{"file":{"path":"/var/ossec/logs/alerts/alerts.json"},"offset":10307460768},"message":"{\"timestamp\":\"2025-10-29T13:56:03.692+0000\",\"rule\":{\"level\":3,\"description\":\"Suricata messages.\",\"id\":\"86600\",\"firedtimes\":2,\"mail\":false,\"groups\":[\"ids\",\"suricata\"]},\"agent\":{\"id\":\"000\",\"name\":\"wazuh-siem-hostname\"},\"manager\":{\"name\":\"wazuh-siem-hostname\"},\"id\":\"1761746163.1512122495\",\"decoder\":{\"name\":\"json\"},\"data\":{\"action\":\"allowed\",\"id\":\"work-dd2-zdd3q3taqy7g94fq.2025.10.28.08.43.52.658873.04897\",\"timestamp\":\"2025-10-28T08:43:52.658873+0000\",\"host\":\"zdd3q3taqy7g94fq\",\"company_id\":\"0\",\"event_type\":\"Possible Lateral Movement\",\"event_classifier\":\"nimbus\",\"integration_system\":\"bro-ids\",\"target\":{\"ip_addresses\":[\"172.99.99.99\"],\"mac_addresses\":[\"00:01:01:01:01:01\"]},\"message\":\"Permission Groups Discovery (samr::SamrGetAliasMembership) (3)\",\"ioc\":{\"event\":{\"ip_addresses\":[\"172.99.99.88\"]}},\"severity\":\"1\",\"verdict\":\"true\",\"data\":{\"network_info\":{\"transport_protocol\":\"tcp\",\"target\":\"destination\",\"source\":{\"ip\":\"172.99.99.88\",\"port\":\"59268\",\"mac\":\"a0:51:51:81:c1:51\",\"nbn\":[\"hostname\",\"DOMAIN\"]},\"destination\":{\"ip\":\"172.99.99.99\",\"port\":\"445\",\"mac\":\"00:01:01:01:01:01\",\"nbn\":[\"SRVNAME\",\"DOMAIN\"]},\"payload\":\"endpoint: samr\\noperation: SamrGetAliasMembership\"}}},\"location\":\"/var/log/rsyslog/logtest.log\"}","service":{"type":"wazuh"}}, Private:file.State{Id:"native::318789305-65025", PrevId:"", Finished:false, Fileinfo:(*os.fileStat)(0xc000f98750), Source:"/var/ossec/logs/alerts/alerts.json", Offset:10307461962, Timestamp:time.Time{wall:0xc2387421376c470e, ext:3580073600536921, loc:(*time.Location)(0x42417a0)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x130056b9, Device:0xfe01}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse field [data.data] of type [keyword] in document with id 'H9hBMJoBM3r0nkk6HIsB'. Preview of field's value: '{network_info={payload=endpoint: samr\noperation: SamrGetAliasMembership, destination={port=445, ip=172.99.99.99, nbn=[HOSTNAME, BANKI], mac=00:01:01:01:01:01}, transport_protocol=tcp, source={port=59268, ip=172.99.99.99, nbn=[HOSTNAME, DOMAIN], mac=a1:51:51:81:c1:51}, target=destination}}'","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:149"}}

Marcos Sanchez Delgado

Oct 29, 2025, 12:23:38 PM (7 days ago) Oct 29
to Wazuh | Mailing List

Hello.

Your index has the field data.data mapped as type keyword. The indexer therefore expects a plain string value for this field, but the value in your log is a JSON object: {\"network_info\":{\"transport_protocol\":\"tcp\",\"target\":\"destination\",\"source\":{\"ip\":\"172.99.99.88\",\"port\":\"59268\",\"mac\":\"a0:51:51:81:c1:51\",\"nbn\":[\"hostname\",\"DOMAIN\"]}. So, when Filebeat tries to index this document, it fails with a mapper_parsing_exception error.
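If you want to confirm the current mapping before changing anything, you can query it from the Wazuh dashboard Dev Tools (the index pattern below is just an example; point it at your wazuh-alerts-4.x-* indices):

GET wazuh-alerts-4.x-*/_mapping/field/data.data

If it returns "type": "keyword" for that field, that is the mapping that conflicts with the object in your event.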

To solve this problem, there are several possible approaches:

  • You can ensure that the field data.data always arrives as a plain string, so it matches the keyword mapping.
  • You can change the type of the data.data field in the Filebeat template (wazuh-template.json).
  • You can disable indexing for the data.data field.

To change the type of the field, you must stop the Filebeat service (systemctl stop filebeat) and edit the /etc/filebeat/wazuh-template.json file, changing the type of the data.data field to object. You will end up with something like this:

"data": { "properties": { "data": { "type": "object", "dynamic": true } } }

If you want to disable indexing of this field instead, you can edit the same file to something like this:

"data": { "properties": { "data": { "enabled": false } } }

Then, after saving the file, load the updated template and restart the service with these commands: filebeat setup --index-management and systemctl restart filebeat.
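Putting the whole procedure together, the sequence on the Wazuh server would look roughly like this (a sketch, assuming the default paths of an AIO installation):

systemctl stop filebeat
# edit /etc/filebeat/wazuh-template.json and adjust the data.data mapping
filebeat setup --index-management
systemctl restart filebeat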

Then, you need to reindex the existing index so it picks up the new mapping. You can do this from the Wazuh dashboard, in Index Management -> Dev Tools:

  • Back up the existing index (the index date here is an example):
POST _reindex
{
  "source": { "index": "wazuh-alerts-4.x-2025.01.31" },
  "dest": { "index": "wazuh-alerts-4.x-backup" }
}

  • Delete the old index:
DELETE /wazuh-alerts-4.x-2025.01.31

  • Reindex the data from the backup:
POST _reindex
{
  "source": { "index": "wazuh-alerts-4.x-backup" },
  "dest": { "index": "wazuh-alerts-4.x-2025.01.31" }
}

  • Delete the backup index:
DELETE /wazuh-alerts-4.x-backup
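Before deleting anything, you may also want to check that the document counts of the original index and the backup match (same example names as above):

GET wazuh-alerts-4.x-2025.01.31/_count
GET wazuh-alerts-4.x-backup/_count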

Stopko0

Oct 31, 2025, 8:44:28 AM (5 days ago) Oct 31
to Wazuh | Mailing List
Hello Marcos, and thank you for your help. By the way, I have a question about `For changing the type of the field`: can it break anything in the parsing of events from other sources (JSON or syslog logs) that use the same field?

On Wednesday, October 29, 2025 at 19:23:38 UTC+3, Marcos Sanchez Delgado wrote:

Marcos Sanchez Delgado

Nov 3, 2025, 3:30:39 AM (2 days ago) Nov 3
to Wazuh | Mailing List

Hello.
Yes. If, after changing the type of that field, you receive events for the same index in which data.data is not a JSON object, you will run into the same error again, just in the opposite direction. Another possible solution for this specific problem is to create a separate index to store the events whose data.data field differs from the rest. This way you avoid the conflict, and the rest of the events keep the correct field type.
For this, you need to create a new index template that specifies the correct type for the data.data field, so that these logs are indexed in a new index. You can refer to this documentation: https://docs.opensearch.org/latest/im-plugin/index-templates/
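A minimal sketch of such a template, run from Dev Tools (the template name and index pattern below are only examples, and you would still need to route the Suricata/Bro events to the new index, for instance with a dedicated ingest pipeline or Filebeat output settings):

PUT _index_template/suricata-bro-alerts
{
  "index_patterns": ["suricata-bro-alerts-*"],
  "priority": 200,
  "template": {
    "mappings": {
      "properties": {
        "data": {
          "properties": {
            "data": { "type": "object", "dynamic": true }
          }
        }
      }
    }
  }
}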
