Hi,
You are hitting this error:

"Document contains at least one immense term in field=\"previous_output\" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[123, 34, 119, 105, 110, 34, 58, 123, 34, 115, 121, 115, 116, 101, 109, 34, 58, 123, 34, 112, 114, 111, 118, 105, 100, 101, 114, 78, 97, 109]...', original message: bytes can be at most 32766 in length; got 38029","caused_by":{"type":"max_bytes_length_exceeded_exception","reason":"bytes can be at most 32766 in length; got 38029"}}
New indices failing to form in the Wazuh indexer can be a direct result of these oversized fields breaking the ingestion pipeline.
Raising max_bytes to 32 KB will not work, since 32766 bytes is already the hard limit. Truncation beyond this size is not supported, so Filebeat will still reject the event.
In this case, you have two options:
1. Remove the problematic field from indexing. This way, all other fields are still indexed and alerts keep generating, but that specific oversized field is excluded.
To do this, edit the pipeline at:
/usr/share/filebeat/module/wazuh/alerts/ingest/pipeline.json

Add:
{
  "remove": {
    "field": "previous_output",
    "ignore_missing": true,
    "ignore_failure": true
  }
},
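For context, this block goes inside the pipeline's "processors" array. The sketch below is only illustrative, not a copy of your file: the description and surrounding processors will differ depending on your Wazuh/Filebeat version, but the remove block is simply added as one more entry after the processor that decodes the alert:

{
  "description": "Wazuh alerts pipeline (illustrative sketch)",
  "processors": [
    {
      "json": {
        "field": "message",
        "add_to_root": true
      }
    },
    {
      "remove": {
        "field": "previous_output",
        "ignore_missing": true,
        "ignore_failure": true
      }
    }
  ]
}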
Then run:
filebeat setup --pipelines
systemctl restart filebeat
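If you want to confirm that the updated pipeline was actually loaded into the Wazuh indexer, you can query the ingest pipeline API and check that the remove block appears. The host, port, credentials, and pipeline name pattern below are placeholders for a default local setup; adjust them to your environment:

curl -k -u admin:<password> "https://localhost:9200/_ingest/pipeline/filebeat-*?pretty"

The remove processor for previous_output should show up in the wazuh-alerts pipeline definition returned by that call.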
Refer to:

https://www.elastic.co/guide/en/elasticsearch/reference/7.10/remove-processor.html

2. Filter the log at the source. Adjust the logging so that extremely large queries or fields are avoided before they reach Filebeat.
This way, you won’t hit the 32 KB field size limitation.