Missing logs/alerts in the Events


Martin Krastev

Jan 18, 2026, 11:06:40 AM
to Wazuh | Mailing List
Hello all,

I can see that my logs from Suricata and Zeek are arriving in /var/ossec/logs/alerts/alerts.json and /var/ossec/logs/archives/archives.json.

When I test them with /var/ossec/bin/wazuh-logtest they are recognized properly. I followed the official Wazuh blog post for configuring the Zeek part, but slightly modified the decoders as Frederico suggested here: https://groups.google.com/g/wazuh/c/-hBKgXRHqAs/m/kuDRZe9ACQAJ

Suricata alerts are fine and I can see them in Events (Threat Hunting), but for some reason I don't see anything from Zeek, neither in wazuh-alerts-* nor in wazuh-archives-*. I am using Wazuh 4.14.2 on Ubuntu 24.04 with Filebeat 7.10.2.

Here's what I see in /var/log/filebeat/filebeat:
2026-01-18T13:49:46.548+0200 WARN [elasticsearch] elasticsearch/client.go:408 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc25350165fb53bae, ext:61353312592788, loc:(*time.Location)(0x42417a0)}, Meta:{"pipeline":"filebeat-7.10.2-wazuh-archives-pipeline"}, Fields:{"agent":{"ephemeral_id":"82c47338-95bb-4480-9f4a-7c249f21cbca","hostname":"siem","id":"679fdd3e-3df8-423f-872a-f72aa83e76eb","name":"siem","type":"filebeat","version":"7.10.2"},"ecs":{"version":"1.6.0"},"event":{"dataset":"wazuh.archives","module":"wazuh"},"fields":{"index_prefix":"wazuh-archives-4.x-"},"fileset":{"name":"archives"},"host":{"name":"siem"},"input":{"type":"log"},"log":{"file":{"path":"/var/ossec/logs/archives/archives.json"},"offset":668917320},"message":"{\"timestamp\":\"2026-01-18T13:49:43.604+0200\",\"agent\":{\"id\":\"001\",\"name\":\"nids\",\"ip\":\"192.168.1.203\"},\"manager\":{\"name\":\"siem\"},\"id\":\"1768736983.6110679\",\"full_log\":\"{\\\"ts\\\":1768736967.5621719,\\\"uid\\\":\\\"CUh67n2tOugfiZC6Gg\\\",\\\"id.orig_h\\\":\\\"192.168.1.201\\\",\\\"id.orig_p\\\":59018,\\\"id.resp_h\\\":\\\"1.1.1.1\\\",\\\"id.resp_p\\\":853,\\\"proto\\\":\\\"tcp\\\",\\\"service\\\":\\\"ssl\\\",\\\"duration\\\":10.04309606552124,\\\"orig_bytes\\\":1651,\\\"resp_bytes\\\":11556,\\\"conn_state\\\":\\\"SF\\\",\\\"local_orig\\\":true,\\\"local_resp\\\":false,\\\"missed_bytes\\\":0,\\\"history\\\":\\\"ShADadFf\\\",\\\"orig_pkts\\\":26,\\\"orig_ip_bytes\\\":3015,\\\"resp_pkts\\\":25,\\\"resp_ip_bytes\\\":12864,\\\"ip_proto\\\":6,\\\"log_source\\\":\\\"zeek\\\"}\",\"decoder\":{\"name\":\"json\"},\"data\":{\"protocol\":\"tcp\",\"srcip\":\"192.168.1.201\",\"srcport\":\"59018\",\"dstip\":\"1.1.1.1\",\"dstport\":\"853\",\"timestamp\":\"1768736967\",\"uid\":\"CUh67n2tOugfiZC6Gg\",\"application_layer_protocol\":\"ssl\",\"duration_of_the_connection\":\"10.04309606552124\",\"byte_send_by_originator\":\"1651\",\"byte_sent_by_responder\":\"11556\",\"connection_state\":\"SF\",\"loca
l_origin\":\"true\",\"local_response\":\"false\",\"missed_bytes_might_packet_loss\":\"0\",\"packet_sent_by_origin\":\"26\",\"ip_layer_bytes_from_origin\":\"3015\",\"packet_sent_by_responder\":\"25\",\"ip_layer_bytes_sent_by_responder\":\"12864\",\"protocol_number_ip_header\":\"6\",\"ts\":\"1768736967.562172\",\"id\":{\"orig_h\":\"192.168.1.201\",\"orig_p\":\"59018\",\"resp_h\":\"1.1.1.1\",\"resp_p\":\"853\"},\"proto\":\"tcp\",\"service\":\"ssl\",\"duration\":\"10.043096\",\"orig_bytes\":\"1651\",\"resp_bytes\":\"11556\",\"conn_state\":\"SF\",\"local_orig\":\"true\",\"local_resp\":\"false\",\"missed_bytes\":\"0\",\"history\":\"ShADadFf\",\"orig_pkts\":\"26\",\"orig_ip_bytes\":\"3015\",\"resp_pkts\":\"25\",\"resp_ip_bytes\":\"12864\",\"ip_proto\":\"6\",\"log_source\":\"zeek\"},\"location\":\"/opt/zeek/logs/current/conn.log\"}","service":{"type":"wazuh"}}, Private:file.State{Id:"native::3149586-64513", PrevId:"", Finished:false, Fileinfo:(*os.fileStat)(0xc000882750), Source:"/var/ossec/logs/archives/archives.json", Offset:668919050, Timestamp:time.Time{wall:0xc2531f78b03fc8d8, ext:11570590108240, loc:(*time.Location)(0x42417a0)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x300f12, Device:0xfc01}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse field [data.id] of type [keyword] in document with id '0Hnw0JsBEHY4K2jtldyn'. Preview of field's value: '{orig_p=59018, resp_h=1.1.1.1, orig_h=192.168.1.201, resp_p=853}'","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:536"}}

2026-01-18T13:50:35.570+0200    WARN    [elasticsearch] elasticsearch/client.go:408     Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc2535022b47795ff, ext:61402660874074, loc:(*time.Location)(0x42417a0)}, Meta:{"pipeline":"filebeat-7.10.2-wazuh-alerts-pipeline"}, Fields:{"agent":{"ephemeral_id":"82c47338-95bb-4480-9f4a-7c249f21cbca","hostname":"siem","id":"679fdd3e-3df8-423f-872a-f72aa83e76eb","name":"siem","type":"filebeat","version":"7.10.2"},"ecs":{"version":"1.6.0"},"event":{"dataset":"wazuh.alerts","module":"wazuh"},"fields":{"index_prefix":"wazuh-alerts-4.x-"},"fileset":{"name":"alerts"},"host":{"name":"siem"},"input":{"type":"log"},"log":{"file":{"path":"/var/ossec/logs/alerts/alerts.json"},"offset":7349508},"message":"{\"timestamp\":\"2026-01-18T13:50:31.257+0200\",\"rule\":{\"level\":5,\"description\":\"Zeek: DNS Query mask.apple-dns.net attempted from source ip 192.168.0.138 source port 51294 resolved to IP(s) [\\\"17.248.176.9\\\",\\\"17.248.176.4\\\",\\\"17.248.176.7\\\",\\\"17.248.176.10\\\",\\\"17.248.176.70\\\",\\\"17.248.176.6\\\",\\\"17.248.176.73\\\",\\\"17.248.176.8\\\"]\",\"id\":\"100901\",\"firedtimes\":469,\"mail\":false,\"groups\":[\"zeek\"]},\"agent\":{\"id\":\"001\",\"name\":\"nids\",\"ip\":\"192.168.1.203\"},\"manager\":{\"name\":\"siem\"},\"id\":\"1768737031.6112467\",\"full_log\":\"{\\\"ts\\\":1768737030.0471251,\\\"uid\\\":\\\"CrGFwr2h4YzoFmdvjg\\\",\\\"id.orig_h\\\":\\\"192.168.0.138\\\",\\\"id.orig_p\\\":51294,\\\"id.resp_h\\\":\\\"192.168.1.201\\\",\\\"id.resp_p\\\":53,\\\"proto\\\":\\\"udp\\\",\\\"trans_id\\\":18088,\\\"rtt\\\":0.037436008453369141,\\\"query\\\":\\\"mask.apple-dns.net\\\",\\\"qclass\\\":1,\\\"qclass_name\\\":\\\"C_INTERNET\\\",\\\"qtype\\\":1,\\\"qtype_name\\\":\\\"A\\\",\\\"rcode\\\":0,\\\"rcode_name\\\":\\\"NOERROR\\\",\\\"AA\\\":false,\\\"TC\\\":false,\\\"RD\\\":true,\\\"RA\\\":true,\\\"Z\\\":0,\\\"answers\\\":[\\\"17.248.176.9\\\",\\\"17.248.176.4\\\",\\\"17.248.176.7\\\",\\\"1
7.248.176.10\\\",\\\"17.248.176.70\\\",\\\"17.248.176.6\\\",\\\"17.248.176.73\\\",\\\"17.248.176.8\\\"],\\\"TTLs\\\":[33,33,33,33,33,33,33,33],\\\"rejected\\\":false,\\\"opcode\\\":0,\\\"opcode_name\\\":\\\"query\\\",\\\"log_source\\\":\\\"zeek\\\"}\",\"decoder\":{\"name\":\"json\"},\"data\":{\"protocol\":\"udp\",\"srcip\":\"192.168.0.138\",\"srcport\":\"51294\",\"dstip\":\"192.168.1.201\",\"dstport\":\"53\",\"timestamp\":\"1768737030\",\"uid\":\"CrGFwr2h4YzoFmdvjg\",\"DNS_transaction_id\":\"18088\",\"dnsquery\":\"mask.apple-dns.net\",\"dns_response_code\":\"NOERROR\",\"authoritative_answer\":\"false\",\"truncate_flag\":\"false\",\"recursion_desired_flag\":\"true\",\"recursion_avalable_flag\":\"true\",\"reserved_for_future_use\":\"0\",\"resolved_by\":[\"17.248.176.9\",\"17.248.176.4\",\"17.248.176.7\",\"17.248.176.10\",\"17.248.176.70\",\"17.248.176.6\",\"17.248.176.73\",\"17.248.176.8\"],\"query_rejected\":\"false\",\"ts\":\"1768737030.047125\",\"id\":{\"orig_h\":\"192.168.0.138\",\"orig_p\":\"51294\",\"resp_h\":\"192.168.1.201\",\"resp_p\":\"53\"},\"proto\":\"udp\",\"trans_id\":\"18088\",\"rtt\":\"0.037436\",\"query\":\"mask.apple-dns.net\",\"qclass\":\"1\",\"qclass_name\":\"C_INTERNET\",\"qtype\":\"1\",\"qtype_name\":\"A\",\"rcode\":\"0\",\"rcode_name\":\"NOERROR\",\"AA\":\"false\",\"TC\":\"false\",\"RD\":\"true\",\"RA\":\"true\",\"Z\":\"0\",\"answers\":[\"17.248.176.9\",\"17.248.176.4\",\"17.248.176.7\",\"17.248.176.10\",\"17.248.176.70\",\"17.248.176.6\",\"17.248.176.73\",\"17.248.176.8\"],\"TTLs\":[33,33,33,33,33,33,33,33],\"rejected\":\"false\",\"opcode\":\"0\",\"opcode_name\":\"query\",\"log_source\":\"zeek\"},\"location\":\"/opt/zeek/logs/current/dns.log\"}","service":{"type":"wazuh"}}, Private:file.State{Id:"native::3150074-64513", PrevId:"", Finished:false, Fileinfo:(*os.fileStat)(0xc001b889c0), Source:"/var/ossec/logs/alerts/alerts.json", Offset:7351955, Timestamp:time.Time{wall:0xc2534d07a6bf96f7, ext:58222430712327, loc:(*time.Location)(0x42417a0)}, 
TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x3010fa, Device:0xfc01}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse field [data.id] of type [keyword] in document with id 'r3nx0JsBEHY4K2jtVd8v'. Preview of field's value: '{orig_p=51294, resp_h=192.168.1.201, orig_h=192.168.0.138, resp_p=53}'","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:757"}}

2026-01-18T13:27:31.724+0200    WARN    [elasticsearch] elasticsearch/client.go:408     Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc2534ec8aa1eeb3c, ext:60018487291329, loc:(*time.Location)(0x42417a0)}, Meta:{"pipeline":"filebeat-7.10.2-wazuh-archives-pipeline"}, Fields:{"agent":{"ephemeral_id":"82c47338-95bb-4480-9f4a-7c249f21cbca","hostname":"siem","id":"679fdd3e-3df8-423f-872a-f72aa83e76eb","name":"siem","type":"filebeat","version":"7.10.2"},"ecs":{"version":"1.6.0"},"event":{"dataset":"wazuh.archives","module":"wazuh"},"fields":{"index_prefix":"wazuh-archives-4.x-"},"fileset":{"name":"archives"},"host":{"name":"siem"},"input":{"type":"log"},"log":{"file":{"path":"/var/ossec/logs/archives/archives.json"},"offset":649271201},"message":"{\"timestamp\":\"2026-01-18T13:27:30.693+0200\",\"agent\":{\"id\":\"001\",\"name\":\"nids\",\"ip\":\"192.168.1.203\"},\"manager\":{\"name\":\"siem\"},\"id\":\"1768735650.5707049\",\"full_log\":\"{\\\"ts\\\":1768735649.458575,\\\"uid\\\":\\\"ComeCT1Y6Jdhm6gm2\\\",\\\"id.orig_h\\\":\\\"192.168.0.138\\\",\\\"id.orig_p\\\":62591,\\\"id.resp_h\\\":\\\"34.49.17.193\\\",\\\"id.resp_p\\\":443,\\\"version\\\":\\\"TLSv13\\\",\\\"cipher\\\":\\\"TLS_AES_256_GCM_SHA384\\\",\\\"curve\\\":\\\"x25519\\\",\\\"server_name\\\":\\\"urlite.ff.avast.com\\\",\\\"resumed\\\":true,\\\"established\\\":true,\\\"ssl_history\\\":\\\"CsiI\\\",\\\"log_source\\\":\\\"zeek\\\"}\",\"decoder\":{\"name\":\"json\"},\"data\":{\"srcip\":\"192.168.0.138\",\"srcport\":\"62591\",\"dstip\":\"34.49.17.193\",\"dstport\":\"443\",\"timestamp\":\"1768735649\",\"uid\":\"ComeCT1Y6Jdhm6gm2\",\"ssl_version\":\"TLSv13\",\"ssl_cipher\":\"TLS_AES_256_GCM_SHA384\",\"ssl_curve\":\"x25519\",\"ssl_server_name\":\"urlite.ff.avast.com\",\"ssl_established\":\"true\",\"ssl_resumed\":\"true\",\"ssl_history\":\"CsiI\",\"ts\":\"1768735649.458575\",\"id\":{\"orig_h\":\"192.168.0.138\",\"orig_p\":\"62591\",\"resp_h\":\"34.49.17.193\",\"resp_p\":\"443\"},\"versio
n\":\"TLSv13\",\"cipher\":\"TLS_AES_256_GCM_SHA384\",\"curve\":\"x25519\",\"server_name\":\"urlite.ff.avast.com\",\"resumed\":\"true\",\"established\":\"true\",\"log_source\":\"zeek\"},\"location\":\"/opt/zeek/logs/current/ssl.log\"}","service":{"type":"wazuh"}}, Private:file.State{Id:"native::3149586-64513", PrevId:"", Finished:false, Fileinfo:(*os.fileStat)(0xc000882750), Source:"/var/ossec/logs/archives/archives.json", Offset:649272411, Timestamp:time.Time{wall:0xc2531f78b03fc8d8, ext:11570590108240, loc:(*time.Location)(0x42417a0)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x300f12, Device:0xfc01}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"mapper_parsing_exception","reason":"object mapping for [data.version] tried to parse field [version] as object, but found a concrete value"}

Here's also a screenshot from the Wazuh index patterns page (attachment: wazuh-index.png).

There's no conflict in wazuh-alerts-*, only in wazuh-archives-*.

Do you have any idea what the actual reason for my issue is, and how I can fix it?

Thank you!

Bony V John

Jan 18, 2026, 10:38:15 PM
to Wazuh | Mailing List
Hi,

Please allow me some time, I'm working on this and will get back to you with an update as soon as possible.

Bony V John

Jan 18, 2026, 11:50:07 PM
to Wazuh | Mailing List
Hi,

Based on the sample logs you shared, your Zeek logs are hitting a field mapping conflict while Filebeat tries to index events into the Wazuh indexer. The issue is with the data.id field: the Filebeat template expects it as a keyword, but in your Zeek events it arrives as an object. Because of this mismatch, Filebeat fails to index the events, and you see indexing errors for both the wazuh-alerts-* and wazuh-archives-* indices.

To resolve this, you can rename data.id only for Zeek logs (where data.log_source == "zeek") by creating a new string field like data.zeek_id, and then removing data.id to avoid the mapping conflict.

Below are the steps.

Fix indexing for wazuh-alerts-*

1. Backup the existing pipeline file:
cp /usr/share/filebeat/module/wazuh/alerts/ingest/pipeline.json /tmp/pipeline.json

2. Open the pipeline file:
vi /usr/share/filebeat/module/wazuh/alerts/ingest/pipeline.json

3. Locate the processors section, and insert the following snippet after the "date_index_name" section and before the first remove block.
{
  "script": {
    "if": "ctx?.data?.log_source == 'zeek' && ctx?.data?.id != null && (ctx.data.id instanceof Map)",
    "lang": "painless",
    "source": "def id = ctx.data.id; def oh = id.containsKey('orig_h') ? id.orig_h : null; def op = id.containsKey('orig_p') ? id.orig_p : null; def rh = id.containsKey('resp_h') ? id.resp_h : null; def rp = id.containsKey('resp_p') ? id.resp_p : null; ctx.data.zeek_id = (oh != null ? oh : '') + ':' + (op != null ? op : '') + '->' + (rh != null ? rh : '') + ':' + (rp != null ? rp : '');"
  }
},
{
  "remove": {
    "field": "data.id",
    "ignore_missing": true,
    "ignore_failure": true,
    "if": "ctx?.data?.log_source == 'zeek' && ctx?.data?.id != null && (ctx.data.id instanceof Map)"
  }
},
{
  "convert": {
    "field": "data.zeek_id",
    "type": "string",
    "ignore_missing": true,
    "if": "ctx?.data?.log_source == 'zeek'"
  }
},

4. Save the configuration and apply the pipeline:
filebeat setup --pipelines
systemctl restart filebeat

This will:

  • Detect Zeek logs using data.log_source == "zeek"

  • Convert the Zeek data.id object into a single string (data.zeek_id)

  • Remove data.id only when it is an object to prevent the mapping conflict
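To make the transformation easier to reason about, here is a rough Python sketch of what the Painless script above does to each event. This is for illustration only; the real work happens inside the ingest pipeline, and the function name is mine, not part of Wazuh or Filebeat.

```python
def transform_zeek_event(event: dict) -> dict:
    """Replace an object-valued data.id with a flat data.zeek_id string,
    mirroring the Painless script in step 3 above."""
    data = event.get("data")
    if not isinstance(data, dict) or data.get("log_source") != "zeek":
        return event  # only Zeek events are touched
    zid = data.get("id")
    if isinstance(zid, dict):
        # Build "orig_h:orig_p->resp_h:resp_p", tolerating missing keys
        data["zeek_id"] = "{}:{}->{}:{}".format(
            zid.get("orig_h", ""), zid.get("orig_p", ""),
            zid.get("resp_h", ""), zid.get("resp_p", ""))
        # Drop the object that conflicts with the keyword mapping
        del data["id"]
    return event

event = {"data": {"log_source": "zeek",
                  "id": {"orig_h": "192.168.1.201", "orig_p": "59018",
                         "resp_h": "1.1.1.1", "resp_p": "853"}}}
transform_zeek_event(event)
print(event["data"]["zeek_id"])  # 192.168.1.201:59018->1.1.1.1:853
```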


Fix indexing for wazuh-archives-*

1. Backup the existing archives pipeline file:
cp /usr/share/filebeat/module/wazuh/archives/ingest/pipeline.json /tmp/archives-pipeline.json

2. Open the pipeline file:
vi /usr/share/filebeat/module/wazuh/archives/ingest/pipeline.json

3. Locate the processors section, and insert the following snippet after the "date_index_name" section and before the first remove block.
{
  "script": {
    "if": "ctx?.data?.log_source == 'zeek' && ctx?.data?.id != null && (ctx.data.id instanceof Map)",
    "lang": "painless",
    "source": "def id = ctx.data.id; def oh = id.containsKey('orig_h') ? id.orig_h : null; def op = id.containsKey('orig_p') ? id.orig_p : null; def rh = id.containsKey('resp_h') ? id.resp_h : null; def rp = id.containsKey('resp_p') ? id.resp_p : null; ctx.data.zeek_id = (oh != null ? oh : '') + ':' + (op != null ? op : '') + '->' + (rh != null ? rh : '') + ':' + (rp != null ? rp : '');"
  }
},
{
  "remove": {
    "field": "data.id",
    "ignore_missing": true,
    "ignore_failure": true,
    "if": "ctx?.data?.log_source == 'zeek' && ctx?.data?.id != null && (ctx.data.id instanceof Map)"
  }
},
{
  "convert": {
    "field": "data.zeek_id",
    "type": "string",
    "ignore_missing": true,
    "if": "ctx?.data?.log_source == 'zeek'"
  }
},

4. Save the configuration and apply the pipeline:
filebeat setup --pipelines
systemctl restart filebeat

I tested this approach on my side and it works as expected: Zeek events index successfully without breaking the mapping.  

Martin Krastev

Jan 19, 2026, 7:09:47 AM
to Wazuh | Mailing List
Hello Bony,

Thanks a lot for your fast reply!

I followed the steps you proposed and they fixed the issue. Is this some kind of problem with Wazuh/Filebeat that is going to be fixed in upcoming versions, or is it something that has to be configured manually on every new setup?

Now I see the following in the Filebeat log, but I guess it's not related:

2026-01-19T13:11:19.301+0200 WARN [elasticsearch] elasticsearch/client.go:408 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc253a23591be9946, ext:580873018254, loc:(*time.Location)(0x42417a0)}, Meta:{"pipeline":"filebeat-7.10.2-wazuh-archives-pipeline"}, Fields:{"agent":{"ephemeral_id":"777537a4-cfa2-47b7-81dd-0aac7ee045d3","hostname":"siem","id":"679fdd3e-3df8-423f-872a-f72aa83e76eb","name":"siem","type":"filebeat","version":"7.10.2"},"ecs":{"version":"1.6.0"},"event":{"dataset":"wazuh.archives","module":"wazuh"},"fields":{"index_prefix":"wazuh-archives-4.x-"},"fileset":{"name":"archives"},"host":{"name":"siem"},"input":{"type":"log"},"log":{"file":{"path":"/var/ossec/logs/archives/archives.json"},"offset":683046053},"message":"{\"timestamp\":\"2026-01-19T13:11:17.175+0200\",\"agent\":{\"id\":\"001\",\"name\":\"nids\",\"ip\":\"192.168.1.203\"},\"manager\":{\"name\":\"siem\"},\"id\":\"1768821077.4460800\",\"full_log\":\"{\\\"ts\\\":1768821076.3215089,\\\"uid\\\":\\\"CZHoLf25bywJbXdTef\\\",\\\"id.orig_h\\\":\\\"192.168.1.190\\\",\\\"id.orig_p\\\":50181,\\\"id.resp_h\\\":\\\"142.251.31.84\\\",\\\"id.resp_p\\\":443,\\\"version\\\":\\\"TLSv13\\\",\\\"cipher\\\":\\\"TLS_AES_256_GCM_SHA384\\\",\\\"curve\\\":\\\"X25519MLKEM768\\\",\\\"server_name\\\":\\\"accounts.google.com\\\",\\\"resumed\\\":false,\\\"established\\\":true,\\\"ssl_history\\\":\\\"CsiI\\\",\\\"log_source\\\":\\\"zeek\\\"}\",\"decoder\":{\"name\":\"json\"},\"data\":{\"srcip\":\"192.168.1.190\",\"srcport\":\"50181\",\"dstip\":\"142.251.31.84\",\"dstport\":\"443\",\"timestamp\":\"1768821076\",\"uid\":\"CZHoLf25bywJbXdTef\",\"ssl_version\":\"TLSv13\",\"ssl_cipher\":\"TLS_AES_256_GCM_SHA384\",\"ssl_curve\":\"X25519MLKEM768\",\"ssl_server_name\":\"accounts.google.com\",\"ssl_established\":\"true\",\"ssl_resumed\":\"false\",\"ssl_history\":\"CsiI\",\"ts\":\"1768821076.321509\",\"id\":{\"orig_h\":\"192.168.1.190\",\"orig_p\":\"50181\",\"resp_h\":\"142.251.31.84\",\"resp_p\":\"443
\"},\"version\":\"TLSv13\",\"cipher\":\"TLS_AES_256_GCM_SHA384\",\"curve\":\"X25519MLKEM768\",\"server_name\":\"accounts.google.com\",\"resumed\":\"false\",\"established\":\"true\",\"log_source\":\"zeek\"},\"location\":\"/opt/zeek/logs/current/ssl.log\"}","service":{"type":"wazuh"}}, Private:file.State{Id:"native::3150387-64513", PrevId:"", Finished:false, Fileinfo:(*os.fileStat)(0xc0005d4750), Source:"/var/ossec/logs/archives/archives.json", Offset:683047296, Timestamp:time.Time{wall:0xc253a1a46f55b847, ext:369461630, loc:(*time.Location)(0x42417a0)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x301233, Device:0xfc01}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"mapper_parsing_exception","reason":"object mapping for [data.version] tried to parse field [version] as object, but found a concrete value"}

2026-01-19T13:11:20.301+0200 WARN [elasticsearch] elasticsearch/client.go:408 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc253a235d1cad867, ext:581873820847, loc:(*time.Location)(0x42417a0)}, Meta:{"pipeline":"filebeat-7.10.2-wazuh-archives-pipeline"}, Fields:{"agent":{"ephemeral_id":"777537a4-cfa2-47b7-81dd-0aac7ee045d3","hostname":"siem","id":"679fdd3e-3df8-423f-872a-f72aa83e76eb","name":"siem","type":"filebeat","version":"7.10.2"},"ecs":{"version":"1.6.0"},"event":{"dataset":"wazuh.archives","module":"wazuh"},"fields":{"index_prefix":"wazuh-archives-4.x-"},"fileset":{"name":"archives"},"host":{"name":"siem"},"input":{"type":"log"},"log":{"file":{"path":"/var/ossec/logs/archives/archives.json"},"offset":683047296},"message":"{\"timestamp\":\"2026-01-19T13:11:19.168+0200\",\"agent\":{\"id\":\"001\",\"name\":\"nids\",\"ip\":\"192.168.1.203\"},\"manager\":{\"name\":\"siem\"},\"id\":\"1768821079.4460800\",\"full_log\":\"{\\\"ts\\\":1768820875.196866,\\\"uid\\\":\\\"CfBTyR3AwXKLDlR6ui\\\",\\\"id.orig_h\\\":\\\"192.168.1.190\\\",\\\"id.orig_p\\\":50138,\\\"id.resp_h\\\":\\\"142.250.187.131\\\",\\\"id.resp_p\\\":443,\\\"proto\\\":\\\"tcp\\\",\\\"service\\\":\\\"ssl\\\",\\\"duration\\\":198.04462003707886,\\\"orig_bytes\\\":2428,\\\"resp_bytes\\\":40548,\\\"conn_state\\\":\\\"SF\\\",\\\"local_orig\\\":true,\\\"local_resp\\\":false,\\\"missed_bytes\\\":650,\\\"history\\\":\\\"ShADadgFf\\\",\\\"orig_pkts\\\":26,\\\"orig_ip_bytes\\\":3744,\\\"resp_pkts\\\":43,\\\"resp_ip_bytes\\\":42142,\\\"ip_proto\\\":6,\\\"log_source\\\":\\\"zeek\\\"}\",\"decoder\":{\"name\":\"json\"},\"data\":{\"protocol\":\"tcp\",\"srcip\":\"192.168.1.190\",\"srcport\":\"50138\",\"dstip\":\"142.250.187.131\",\"dstport\":\"443\",\"timestamp\":\"1768820875\",\"uid\":\"CfBTyR3AwXKLDlR6ui\",\"application_layer_protocol\":\"ssl\",\"duration_of_the_connection\":\"198.04462003707886\",\"byte_send_by_originator\":\"2428\",\"byte_sent_by_responder\":\"40548\",\"connection_sta
te\":\"SF\",\"local_origin\":\"true\",\"local_response\":\"false\",\"missed_bytes_might_packet_loss\":\"650\",\"packet_sent_by_origin\":\"26\",\"ip_layer_bytes_from_origin\":\"3744\",\"packet_sent_by_responder\":\"43\",\"ip_layer_bytes_sent_by_responder\":\"42142\",\"protocol_number_ip_header\":\"6\",\"ts\":\"1768820875.196866\",\"id\":{\"orig_h\":\"192.168.1.190\",\"orig_p\":\"50138\",\"resp_h\":\"142.250.187.131\",\"resp_p\":\"443\"},\"proto\":\"tcp\",\"service\":\"ssl\",\"duration\":\"198.044620\",\"orig_bytes\":\"2428\",\"resp_bytes\":\"40548\",\"conn_state\":\"SF\",\"local_orig\":\"true\",\"local_resp\":\"false\",\"missed_bytes\":\"650\",\"history\":\"ShADadgFf\",\"orig_pkts\":\"26\",\"orig_ip_bytes\":\"3744\",\"resp_pkts\":\"43\",\"resp_ip_bytes\":\"42142\",\"ip_proto\":\"6\",\"log_source\":\"zeek\"},\"location\":\"/opt/zeek/logs/current/conn.log\"}","service":{"type":"wazuh"}}, Private:file.State{Id:"native::3150387-64513", PrevId:"", Finished:false, Fileinfo:(*os.fileStat)(0xc0005d4750), Source:"/var/ossec/logs/archives/archives.json", Offset:683049060, Timestamp:time.Time{wall:0xc253a1a46f55b847, ext:369461630, loc:(*time.Location)(0x42417a0)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x301233, Device:0xfc01}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"mapper_parsing_exception","reason":"object mapping for [data.service] tried to parse field [service] as object, but found a concrete value"}


Thank you!

Bony V John

Jan 20, 2026, 11:51:48 PM
to Wazuh | Mailing List

Hi,

Apologies for the late response. Based on the logs you shared, it looks like some Zeek logs are currently not being indexed in the Wazuh indexer due to field mapping conflicts. This is not related to the indexer version. The issue occurs because the Wazuh alerts and archives indices use predefined templates that map fields based on their data types for searchability.

In this case, fields such as data.version and data.service are mapped as objects in the template, while in the Zeek logs these fields arrive as strings. This mismatch causes the indexing conflict.

To avoid this, we need to handle the Zeek events before indexing by renaming the conflicting fields through an ingest pipeline. The approach below renames the data namespace to zeek when the log_source is zeek, so the fields will be indexed as zeek.service, zeek.version, and so on.


Steps to resolve the issue:

1. Backup the existing archives pipeline file:
cp /usr/share/filebeat/module/wazuh/archives/ingest/pipeline.json /tmp/archives-pipeline.json

2. Open the pipeline file:
vi /usr/share/filebeat/module/wazuh/archives/ingest/pipeline.json

3. In the processors section, add the following snippet after the date_index_name processor and before the first remove block.
If you’ve already added a previous configuration I shared, please replace it with this one:  
{
      "script": {
        "if": "ctx?.data?.log_source == 'zeek' && ctx?.data instanceof Map",
        "lang": "painless",
        "source": "if (ctx.zeek == null || !(ctx.zeek instanceof Map)) { ctx.zeek = new HashMap(); }\nfor (entry in ctx.data.entrySet()) { ctx.zeek[entry.getKey()] = entry.getValue(); }\nctx.remove('data');"
      }
    },

4. Save the configuration and apply the pipeline:
filebeat setup --pipelines
systemctl restart filebeat

After this change, Zeek events will be indexed in the wazuh-archives index with fields such as zeek.id, zeek.version, etc., avoiding conflicts with the existing templates.

This configuration checks whether the log_source is zeek, moves all fields under the zeek namespace, and removes the original data field to prevent mapping conflicts.
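For reference, the rename can be sketched in Python like this (illustration only; the function name is mine, and the actual transformation runs in the ingest pipeline):

```python
def rename_data_to_zeek(event: dict) -> dict:
    """Move every data.* field under the zeek.* namespace for Zeek events,
    mirroring the Painless script in step 3 above."""
    data = event.get("data")
    if isinstance(data, dict) and data.get("log_source") == "zeek":
        zeek = event.setdefault("zeek", {})
        zeek.update(data)   # copy every data.* field under zeek.*
        del event["data"]   # remove the original namespace
    return event

event = {"data": {"log_source": "zeek", "version": "TLSv13", "service": "ssl"}}
rename_data_to_zeek(event)
print(sorted(event["zeek"]))  # ['log_source', 'service', 'version']
```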

I’ve tested this setup on my end, and it’s working as expected (attachment: Screenshot 2026-01-21 102106.png).

Martin Krastev

Jan 22, 2026, 9:45:25 AM
to Wazuh | Mailing List
Hello Bony,

Thank you for your reply!

The previous WARN message is no longer showing after I did what you advised, but now I have this:

2026-01-22T16:08:02.448+0200 WARN [elasticsearch] elasticsearch/client.go:408 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc254a9b05941ea0c, ext:1873106179192, loc:(*time.Location)(0x42417a0)}, Meta:{"pipeline":"filebeat-7.10.2-wazuh-archives-pipeline"}, Fields:{"agent":{"ephemeral_id":"039bbbcd-9b9f-4447-b72f-18ef0e80a74e","hostname":"siem","id":"679fdd3e-3df8-423f-872a-f72aa83e76eb","name":"siem","type":"filebeat","version":"7.10.2"},"ecs":{"version":"1.6.0"},"event":{"dataset":"wazuh.archives","module":"wazuh"},"fields":{"index_prefix":"wazuh-archives-4.x-"},"fileset":{"name":"archives"},"host":{"name":"siem"},"input":{"type":"log"},"log":{"file":{"path":"/var/ossec/logs/archives/archives.json"},"offset":939060759},"message":"{\"timestamp\":\"2026-01-22T16:08:00.449+0200\",\"agent\":{\"id\":\"001\",\"name\":\"nids\",\"ip\":\"192.168.1.203\"},\"manager\":{\"name\":\"siem\"},\"id\":\"1769090880.1218055\",\"full_log\":\"{\\\"ts\\\":1769090878.9160819,\\\"id\\\":\\\"FaVlk03NVWtDOcYi7g\\\",\\\"hashAlgorithm\\\":\\\"sha1\\\",\\\"issuerNameHash\\\":\\\"BA14A9AB8164C6AFB43C9D29383AE6F9D257ABE9\\\",\\\"issuerKeyHash\\\":\\\"944FD45D8BE4A4E2A680FEFDD8F900EFA3BE0257\\\",\\\"serialNumber\\\":\\\"0C9A81D5A03D9C9F3D63CDE87089531C\\\",\\\"certStatus\\\":\\\"good\\\",\\\"thisUpdate\\\":1768946435,\\\"nextUpdate\\\":1769547635,\\\"log_source\\\":\\\"zeek\\\"}\",\"decoder\":{\"name\":\"json\"},\"data\":{\"id\":\"FaVlk03NVWtDOcYi7g\",\"timestamp\":\"1769090878\",\"ts\":\"1769090878.916082\",\"hashAlgorithm\":\"sha1\",\"issuerNameHash\":\"BA14A9AB8164C6AFB43C9D29383AE6F9D257ABE9\",\"issuerKeyHash\":\"944FD45D8BE4A4E2A680FEFDD8F900EFA3BE0257\",\"serialNumber\":\"0C9A81D5A03D9C9F3D63CDE87089531C\",\"certStatus\":\"good\",\"thisUpdate\":\"1768946435\",\"nextUpdate\":\"1769547635\",\"log_source\":\"zeek\"},\"location\":\"/opt/zeek/logs/current/ocsp.log\"}","service":{"type":"wazuh"}}, Private:file.State{Id:"native::3150488-64513", PrevId:"", 
Finished:false, Fileinfo:(*os.fileStat)(0xc0000d96c0), Source:"/var/ossec/logs/archives/archives.json", Offset:939061731, Timestamp:time.Time{wall:0xc254a7dc1ea4a97c, ext:196536619, loc:(*time.Location)(0x42417a0)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x301298, Device:0xfc01}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"mapper_parsing_exception","reason":"object mapping for [zeek.id] tried to parse field [id] as object, but found a concrete value"}

Is this some kind of issue, or can I just leave it?

Bony V John

Jan 23, 2026, 12:32:09 AM
to Wazuh | Mailing List

Hi,

This error is happening because the Zeek logs you’re ingesting are not consistent in structure.

In the earlier logs you shared, data.id was an object. Based on that, the pipeline was updated to handle data.id as an object, and events with data.id as an object started indexing successfully.

Now the new log format is different: data.id appears as a string. Since the pipeline (and index mapping) expects data.id to be an object, these events fail to index due to a field mapping conflict.

To fix this, the pipeline needs to handle both cases (object and string) safely.


1. Backup the existing archives pipeline file:
cp /usr/share/filebeat/module/wazuh/archives/ingest/pipeline.json /tmp/archives-pipeline.json

2. Open the pipeline file:
vi /usr/share/filebeat/module/wazuh/archives/ingest/pipeline.json

3. In the processors section, add the following snippet after the date_index_name processor and before the first remove block.
If you’ve already added a previous configuration I shared, please replace it with this one:  
{
  "script": {
    "if": "ctx?.data?.log_source == 'zeek' && ctx?.data instanceof Map",
    "lang": "painless",
    "source": "if (ctx.zeek == null || !(ctx.zeek instanceof Map)) { ctx.zeek = new HashMap(); }\n\n// Copy all data.* into zeek.* EXCEPT id\nfor (entry in ctx.data.entrySet()) {\n  def k = entry.getKey();\n  if (k != 'id') { ctx.zeek[k] = entry.getValue(); }\n}\n\n// Handle id in both formats\nif (ctx.data.containsKey('id') && ctx.data.id != null) {\n  def zid = ctx.data.id;\n\n  // Zeek tuple object\n  if (zid instanceof Map) {\n    ctx.zeek.id = zid;\n    def oh = zid.containsKey('orig_h') ? zid.orig_h : null;\n    def op = zid.containsKey('orig_p') ? zid.orig_p : null;\n    def rh = zid.containsKey('resp_h') ? zid.resp_h : null;\n    def rp = zid.containsKey('resp_p') ? zid.resp_p : null;\n    ctx.zeek.zeek_id = (oh != null ? oh : '') + ':' + (op != null ? op : '') + '->' + (rh != null ? rh : '') + ':' + (rp != null ? rp : '');\n  } else {\n    // Scalar id (ocsp.log etc.)\n    ctx.zeek.zeek_id = zid.toString();\n  }\n}\n\nctx.remove('data');"
  }
},


4. Save the configuration and apply the pipeline:
filebeat setup --pipelines
systemctl restart filebeat

Condition:
  • Only applies when data.log_source is zeek

  • Moves data.* into zeek.*

  • Handles data.id safely:

    • If data.id is an object, it stores it as zeek.id and also builds a readable zeek.zeek_id

    • If data.id is a string, it avoids zeek.id and stores the value only in zeek.zeek_id (string), preventing mapping conflicts

  • Removes data to avoid future conflicts
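The combined logic can be sketched in Python as follows (illustration only; the real transformation runs in the ingest pipeline, and the function name is mine):

```python
def rename_with_safe_id(event: dict) -> dict:
    """Move data.* to zeek.*, handling data.id as either an object
    (conn/dns/ssl logs) or a scalar (ocsp.log etc.)."""
    data = event.get("data")
    if not isinstance(data, dict) or data.get("log_source") != "zeek":
        return event
    zeek = event.setdefault("zeek", {})
    for key, value in data.items():
        if key != "id":
            zeek[key] = value      # copy everything except id
    zid = data.get("id")
    if isinstance(zid, dict):
        # Connection tuple: keep the object and build a readable string
        zeek["id"] = zid
        zeek["zeek_id"] = "{}:{}->{}:{}".format(
            zid.get("orig_h", ""), zid.get("orig_p", ""),
            zid.get("resp_h", ""), zid.get("resp_p", ""))
    elif zid is not None:
        # Scalar id: never written to zeek.id, so no mapping conflict
        zeek["zeek_id"] = str(zid)
    del event["data"]
    return event
```

Both shapes now index: a conn.log event ends up with zeek.id (object) plus zeek.zeek_id (string), while an ocsp.log event ends up with only zeek.zeek_id.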

I tested this approach on my end and it worked correctly.

Martin Krastev

Jan 23, 2026, 10:21:51 AM
to Wazuh | Mailing List
Hello Bony,

I did what you recommended and it solved the issue, but now there's another one. May I assume all of this happens because of that log_source = zeek field? It's something I added manually some time ago for troubleshooting purposes; if it's the reason for all these problems, can I just remove it? And if so, should I also remove the previous config you recommended?

Thank you!

Bony V John

Jan 27, 2026, 12:38:29 AM
to Wazuh | Mailing List

Hi,

Apologies for the late response. This issue is not occurring because of the log_source = zeek field that you added externally, and there is no need to remove it. In fact, this field is required: the pipeline checks its value to decide when and how to rename fields during indexing.

The real issue is caused by inconsistent field data types.

For example, in the previous case I mentioned, the field data.id appears with different data types depending on the log type:

  • In one event, data.id is an object

  • In another event, data.id is a string

Wazuh indexer cannot index the same field with two different data types, and this is what causes the mapping conflict.

To resolve this, we implemented a different approach in the custom script I shared. The script checks the data type of data.id:

  • If data.id is an object, it is indexed correctly by only changing the namespace from data to zeek

  • If data.id is a string, the field name is changed from zeek.id to zeek.zeek_id, which is stored as a string and indexed safely

This avoids mapping conflicts and allows both log formats to be indexed successfully.

You can apply the same logic to other fields if similar issues occur. The Filebeat error logs will indicate which field is causing the conflict. For example, in the last error you shared, it shows:

type":"mapper_parsing_exception","reason":"object mapping for [zeek.id] tried to parse field [id] as object, but found a concrete value

This means that zeek.id was expected to be an object, but in the current log it is a concrete (string) value, which caused the failure.

Using this method, you can identify the problematic field from the error logs and adjust the script accordingly to handle both data types safely.
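As a generic pattern for any field that arrives with mixed types, you can keep the object form under its original name and divert scalar values to a string sibling. A minimal Python sketch of that idea (field names here are illustrative, not part of the stock Wazuh pipeline):

```python
def split_mixed_type_field(doc: dict, field: str) -> dict:
    """If the field holds a scalar, move it to '<field>_str' so the
    object mapping for the original field name is never violated."""
    value = doc.get(field)
    if value is not None and not isinstance(value, dict):
        doc[field + "_str"] = str(value)  # scalar goes to a string field
        del doc[field]                    # object mapping stays untouched
    return doc

print(split_mixed_type_field({"id": "FaVlk03NVWtDOcYi7g"}, "id"))
# {'id_str': 'FaVlk03NVWtDOcYi7g'}
```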

Martin Krastev

Jan 27, 2026, 2:52:50 PM
to Wazuh | Mailing List
Great. Thanks a lot for your help!