Hi Phil,
I have found the issue.
This is one of the error entries from filebeat.log:
Timestamp:time.Time{wall:0xc19bf7965ca1b624, ext:129285713737333, loc:(*time.Location)(0x42417a0)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x466740, Device:0xfc00}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"illegal_argument_exception","reason":"Document contains at least one immense term in field=\"previous_output\" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[123, 34, 119, 105, 110, 34, 58, 123, 34, 115, 121, 115, 116, 101, 109, 34, 58, 123, 34, 112, 114, 111, 118, 105, 100, 101, 114, 78, 97, 109]...', original message: bytes can be at most 32766 in length; got 43793","caused_by":{"type":"max_bytes_length_exceeded_exception","reason":"max_bytes_length_exceeded_exception: bytes can be at most 32766 in length; got 43793"}}
2024-07-12T00:00:03.636-0500 INFO log/harvester.go:302 Harvester started for file: /var/ossec/logs/alerts/alerts.json
I was able to identify the root cause. The error message indicates that a document being indexed by Elasticsearch contains a term in the "previous_output" field that exceeds the maximum allowed length of 32766 bytes, which causes the indexing operation to fail.
This limit cannot be increased from Filebeat, but you can truncate those large fields to a fixed number of bytes before they are indexed.
You can edit the filebeat.yml file located in /etc/filebeat and add the following processor:
processors:
  - truncate_fields:
      fields:
        - previous_output   # the field named in the error
      max_bytes: 32766
      ignore_missing: true  # skip events that do not carry this field
Adding this will truncate the previous_output field to that size, which will prevent the error you are seeing.
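After saving the change, Filebeat has to be restarted for the new processor to take effect. As a minimal sketch, assuming Filebeat runs as a systemd service and the configuration file is /etc/filebeat/filebeat.yml (adjust the path and service name if yours differ), you could validate the edit and restart the service like this:

# Check that the edited configuration parses correctly
filebeat test config -c /etc/filebeat/filebeat.yml

# Restart Filebeat so the truncate_fields processor is loaded
systemctl restart filebeat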
I am leaving some documentation about this below:
Filebeat Truncate Fields:
https://www.elastic.co/guide/en/beats/filebeat/current/truncate-fields.html
I hope it helps you.