I am using the fluentd Docker log driver, fluentd, fluent-plugin-elasticsearch, and Elasticsearch. I want to forward all the reverse-domain-notation labels that Docker Swarm, Compose, and other tools add to containers. However, Elasticsearch rejects fields whose names contain the "." character. I would like to add a filter in fluentd that rewrites the field names to be alphanumeric.
Here is my configuration, in case it's useful context.
For the fluentd docker log driver I have used the following options:
--log-driver fluentd --log-opt fluentd-address=localhost:24224 --log-opt tag=docker.{{.ImageName}}.{{.ImageID}}.{{.Name}}.{{.ID}} --log-opt labels=com.docker.compose.config-hash,com.docker.compose.container-number,com.docker.compose.oneoff,com.docker.compose.project,com.docker.compose.service,com.docker.compose.version,com.docker.swarm.affinities,com.docker.swarm.id
fluentd configuration:
<source>
@type forward
port 24224
bind 0.0.0.0
</source>
<match docker.**>
@type copy
<store>
@type elasticsearch
@id output_elasticsearch1
logstash_format true
index_name fluentd
type_name fluentd
</store>
<store>
@type file
@id output_docker1
path /fluentd/logs/docker.*.log
symlink_path /fluentd/logs/docker.log
append true
time_slice_format %Y%m%d
time_slice_wait 1m
time_format %Y%m%dT%H%M%S%z
</store>
</match>
<match **>
@type file
@id output1
path /fluentd/logs/data.*.log
symlink_path /fluentd/logs/data.log
append true
time_slice_format %Y%m%d
time_slice_wait 10m
time_format %Y%m%dT%H%M%S%z
</match>
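What I have in mind is something like the following filter, inserted between the source and the match sections. This is only a sketch: it assumes the third-party fluent-plugin-dedot_filter gem is installed and that its de_dot_separator option works the way I expect, which I have not verified.

<filter docker.**>
  @type dedot
  de_dot true
  de_dot_separator _
</filter>

If that plugin isn't suitable, an equivalent rewrite via record_transformer (or a custom filter plugin) would also do, as long as every "." in a key becomes "_" before the record reaches the elasticsearch output.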
Error output from Elasticsearch:
[2016-01-12 08:12:05,215][DEBUG][action.bulk ] [Punchout] [logstash-2016.01.12][3] failed to execute bulk item (index) index {[logstash-2016.01.12][fluentd][AVI05FLyAO15xpkHnohx], source[{"com.example.site.applicationid":"logs","com.example.site.componentid":"kibana","container_id":"6b0e7cb8db11024bf712f0c56e29774d49b67cfaa5ca823bbf03569312b30245","container_name":"/elated_williams","source":"stdout","log":"Some log text","com.docker.swarm.id":"db0a0774ccdc4c131d1f69848b6390a55691f42a397f6fff7deb97f8d0ba293b","@timestamp":"2016-01-12T08:11:26+00:00"}]} at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:278)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:223)
at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:139)
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:140)
at org.elasticsearch.index.mapper.DocumentMapperParser.parseCompressed(DocumentMapperParser.java:121)
at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:391)
at org.elasticsearch.cluster.metadata.MetaDataMappingService$2.execute(MetaDataMappingService.java:386)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:388)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
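To be concrete about what I mean by "rewrite the fields": conceptually, each key in the rejected record above should be transformed like this (a Python sketch of the desired mapping, not fluentd code; the function name `dedot` is my own):

```python
import re

def dedot(record):
    """Replace characters Elasticsearch 2.x rejects in field names
    (here: anything outside [0-9A-Za-z_], i.e. the dots) with '_'."""
    return {re.sub(r"[^0-9A-Za-z_]", "_", key): value
            for key, value in record.items()}

record = {
    "com.example.site.applicationid": "logs",
    "container_id": "6b0e7cb8db11024bf712f0c56e29774d49b67cfaa5ca823bbf03569312b30245",
    "log": "Some log text",
}
# Keys become com_example_site_applicationid, container_id, log;
# values are untouched.
print(dedot(record))
```

Values and already-clean keys (like container_id) must pass through unchanged; only the dotted label keys need rewriting.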