My goal is to tail many files in one directory and forward each file's contents to my aggregator individually, without having to maintain possibly hundreds of unique source/match blocks.
My current configuration uses wildcards and is able to read every file in the directory, but it lumps all the data together before the match block receives it. The result is one large POST to my aggregator, with data from different files buffered together and split arbitrarily across chunks.
In this scenario Fluentd is reading files that already exist, always tailing from head, so the second goal is to avoid overflowing buffers and to keep POST sizes down by segregating the data per file.
The expected input on the aggregator side would be one POST containing data from a single file, plus an identifier (probably tailed_path) for the payload. Uniquely identifying the payload would let the aggregator gracefully handle data that Fluentd's buffering splits across POSTs.
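For reference, here is a minimal sketch of the kind of configuration I have in mind, assuming Fluentd v1.x with the bundled in_tail and out_http plugins (the paths, tag, and endpoint URL below are placeholders). My understanding is that setting path_key and then using tailed_path as a buffer chunk key should make Fluentd bucket events per source file, so each flushed chunk, and therefore each POST, carries data from only one file:

```
<source>
  @type tail
  path /var/log/app/*.log          # wildcard over the directory (placeholder path)
  pos_file /var/log/fluentd/app.pos
  tag app.raw
  read_from_head true              # files already exist; tail from the beginning
  path_key tailed_path             # record which file each event came from
  <parse>
    @type none
  </parse>
</source>

<match app.raw>
  @type http
  endpoint http://aggregator.example.com:9880/ingest   # placeholder aggregator URL
  <buffer tailed_path>
    chunk_limit_size 1m            # cap the size of each POST
    flush_interval 5s
  </buffer>
</match>
```

Since tailed_path is stored in each record, the aggregator could read it from the payload to identify the source file and reassemble anything that buffering still splits across POSTs.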
Thanks in advance for any guidance.
Mark