Kunai JSON events not ingested?

Xavier Mertens

Jan 29, 2026, 8:39:18 AM
to Wazuh | Mailing List
Hello *,

I'm using Kunai on some sensitive servers. Events are logged in /var/log/kunai/kunai.json (in JSON format). The Wazuh agent has been configured to ingest them:

<localfile>
  <log_format>json</log_format>
  <location>/var/log/kunai/kunai.json</location>
</localfile>

Wazuh seems to be happy:

root@host:/var/ossec/etc# grep kunai /var/ossec/logs/ossec.log
2026/01/29 13:59:26 wazuh-logcollector: INFO: (1950): Analyzing file: '/var/log/kunai/kunai.json'.

I see events collected on my manager (stored in archives.json):

{"timestamp":"2026-01-29T13:28:40.191+0000","agent":{"id":"002","name":"xxx","ip":"192.168.254.103"},"manager":{"name":"wazuh"},"id":"1769693320.119397192","full_log":"{\"data\":{\"ancestors\":\"\",\"command_line\":\"?\",\"exe\":{\"path\":\"?\"},\"socket\":{\"domain\":\"AF_INET\",\"type\":\"SOCK_STREAM\",\"proto\":\"TCP\"},\"src\":{\"ip\":\"192.168.254.103\",\"port\":48150},\"dst\":{\"hostname\":\"?\",\"ip\":\"172.28.0.3\",\"port\":9997,\"public\":false,\"is_v6\":false},\"community_id\":\"1:kHrO8lHLQeBpJXcWuF8QxMWTQds=\",\"connected\":true},\"filter\":{\"rules\":[\"log.interesting_events\"],\"tags\":[\"os:linux\"]},\"info\":{\"host\":{\"uuid\":\"xxx\",\"name\":\"xxx\",\"container\":null},\"event\":{\"source\":\"kunai\",\"id\":60,\"name\":\"connect\",\"uuid\":\"xxx\",\"batch\":52776422},\"task\":{\"name\":\"splunkd\",\"pid\":1751,\"tgid\":1442,\"guuid\":\"b7cd722e-0200-0000-56e9-7da1a2050000\",\"uid\":0,\"user\":\"root\",\"gid\":0,\"group\":\"root\",\"namespaces\":{\"mnt\":4026531841},\"flags\":\"0x400040\",\"zombie\":false},\"parent_task\":{\"name\":\"systemd\",\"pid\":1,\"tgid\":1,\"guuid\":\"xxx\",\"uid\":0,\"user\":\"root\",\"gid\":0,\"group\":\"root\",\"namespaces\":{\"mnt\":4026531841},\"flags\":\"0x400100\",\"zombie\":false},\"utc_time\":\"2026-01-29T13:28:39.693815609Z\"}}","decoder":{"name":"json"},"data":{"data":{"command_line":"?","exe":{"path":"?"},"socket":{"domain":"AF_INET","type":"SOCK_STREAM","proto":"TCP"},"src":{"ip":"192.168.254.103","port":"48150"},"dst":{"hostname":"?","ip":"172.28.0.3","port":"9997","public":"false","is_v6":"false"},"community_id":"1:kHrO8lHLQeBpJXcWuF8QxMWTQds=","connected":"true"},"filter":{"rules":["log.interesting_events"],"tags":["os:linux"]},"info":{"host":{"uuid":"xxx","name":"xxx","container":"null"},"event":{"source":"kunai","id":"60","name":"connect","uuid":"xxx","batch":"52776422"},"task":{"name":"splunkd","pid":"1751","tgid":"1442","guuid":"xxx","uid":"0","user":"root","gid":"0","group":"root","namespaces":{"mnt":"4026531841.000000"},"flags":"0x400040","zombie":"false"},"parent_task":{"name":"systemd","pid":"1","tgid":"1","guuid":"xxx","uid":"0","user":"root","gid":"0","group":"root","namespaces":{"mnt":"4026531841.000000"},"flags":"0x400100","zombie":"false"},"utc_time":"2026-01-29T13:28:39.693815609Z"}},"location":"/var/log/kunai/kunai.json"}

But the events are not indexed in OpenSearch!? I see this in the Filebeat logs:

2026-01-29T13:13:07.028Z        WARN    [elasticsearch] elasticsearch/client.go:408     Cannot index event  .... DpkgStatus Dir::State::status}'","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:154"}}

I checked the JSON syntax, it's reported as "valid"... Any idea?

/x

Marcos Darío Buslaiman

Jan 29, 2026, 10:58:18 AM
to Wazuh | Mailing List
Hi Xavier,
The error happens because the field data.data is mapped as keyword, but some incoming events send data.data as a JSON object.
The Wazuh Indexer cannot index the same field as both a string (keyword) and an object, so the document is rejected.
You can validate this from Dev Tools in Indexer Management by executing:
GET wazuh-alerts*/_mapping/field/data.data
You will receive something like this:
{
  "wazuh-alerts-4.x-2025.10.29": {
    "mappings": {
      "data.data": {
        "full_name": "data.data",
        "mapping": {
          "data": {
            "type": "keyword"
          }
        }
      }
    }
  }
}

So this means that the index expects data.data to be a string (keyword), but these events send data.data as an object (a JSON map).

You can fix this by adding an ingest pipeline processor that detects when data.data is sent as a JSON object and moves it to a different field.

This script checks if:

  • the data object exists,

  • data.data exists,

  • and data.data is a JSON object (Map).

If all conditions are met, it:

  • copies the object to data.data_json, and

  • removes data.data from the document to avoid a mapping conflict (since data.data is mapped as keyword).

      {
        "script": {
          "lang": "painless",
          "source": "if (ctx.containsKey('data') && ctx.data != null && ctx.data.containsKey('data') && ctx.data.data != null && (ctx.data.data instanceof Map)) { ctx.data.data_json = ctx.data.data; ctx.data.remove('data'); }"
        }
      },

This prevents indexing failures while preserving the full structured content under data.data_json.
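
If you want to test the script on its own before editing the pipeline, you can simulate it from Dev Tools. This is a minimal sketch; the sample document below is made up just to illustrate the case where data.data arrives as an object:

POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "script": {
          "lang": "painless",
          "source": "if (ctx.containsKey('data') && ctx.data != null && ctx.data.containsKey('data') && ctx.data.data != null && (ctx.data.data instanceof Map)) { ctx.data.data_json = ctx.data.data; ctx.data.remove('data'); }"
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "data": {
          "data": {
            "command_line": "?",
            "connected": true
          }
        }
      }
    }
  ]
}

In the response, data.data should be gone and the same object should appear under data.data_json.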

To apply this fix, open Dev Tools and execute:
GET _ingest/pipeline
Copy the returned block and add the processor, something like this:

{
  "filebeat-7.10.2-wazuh-alerts-pipeline": {
    "description": "Wazuh alerts pipeline",
    "processors": [
      {
        "json": {
          "field": "message",
          "add_to_root": true
        }
      },
      {
        "script": {
          "lang": "painless",
          "source": "if (ctx.containsKey('data') && ctx.data != null && ctx.data.containsKey('data') && ctx.data.data != null && (ctx.data.data instanceof Map)) { ctx.data.data_json = ctx.data.data; ctx.data.remove('data'); }"
        }
      },
      {
        "set": {
          "override": false,
          "ignore_failure": true,
          "ignore_empty_value": true,
          "field": "data.aws.region",
          "value": "{{data.aws.awsRegion}}"
        }
      },
      ...

Then execute:
PUT _ingest/pipeline/filebeat-7.10.2-wazuh-alerts-pipeline
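
To verify that the processor was saved, you can fetch the pipeline again and check that the script processor now appears in the processors array:

GET _ingest/pipeline/filebeat-7.10.2-wazuh-alerts-pipeline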

And then re-ingest the data.
I tested this in a lab (screenshot attached).


Please let me know if you have any questions.

Xavier Mertens

Jan 30, 2026, 6:48:56 AM
to Wazuh | Mailing List
Hi Marcos,

Tx for the feedback! It seemed clear in my mind, but when I try to "PUT" the new config, I get an error (screenshot attached).

Marcos Darío Buslaiman

Jan 30, 2026, 9:22:43 AM
to Wazuh | Mailing List
Hi Xavier,
That is because the pipeline name is already given as the PUT command argument, so you need to remove the wrapper block shown in the attached screenshot: everything from the first opening brace up to and including the pipeline name and its colon. Then remember to also delete the matching closing brace at the end of the JSON block, which was closing the brace you just removed.
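
In other words, the body of the PUT should start directly with the pipeline definition instead of the pipeline name, something like this (remaining processors elided with "..."):

PUT _ingest/pipeline/filebeat-7.10.2-wazuh-alerts-pipeline
{
  "description": "Wazuh alerts pipeline",
  "processors": [
    {
      "json": {
        "field": "message",
        "add_to_root": true
      }
    },
    {
      "script": {
        "lang": "painless",
        "source": "if (ctx.containsKey('data') && ctx.data != null && ctx.data.containsKey('data') && ctx.data.data != null && (ctx.data.data instanceof Map)) { ctx.data.data_json = ctx.data.data; ctx.data.remove('data'); }"
      }
    },
    ...
  ]
}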
Please let me know if that works.

Regards!

Xavier Mertens

Jan 31, 2026, 5:54:26 AM
to Wazuh | Mailing List
Hi Marcos,

It works! Great! I also applied the same to "filebeat-7.10.2-wazuh-archives-pipeline"...
Tx!
