I have almost completed my rollout of fluentd, and it had gone very smoothly until now: the last two app servers I deployed it to aren't getting logs from one Docker image.
Here is the conf:
<filter docker.**>
  @type record_transformer
  remove_keys tag
  <record>
    hostname ${hostname}
    container_image ${tag_parts[2]}
    container_name ${tag_parts[3]}
  </record>
</filter>
<match **>
  type copy
  <store>
    type elasticsearch
    logstash_format true
    hosts elasticsearch-dev-1.node.consul,elasticsearch-dev-2.node.consul,elasticsearch-dev-3.node.consul
    port 9200
    index_name fluentd
    type_name fluentd
    include_tag_key true
  </store>
  <store>
    @type stdout
  </store>
</match>
And the DOCKER_OPTS:
DOCKER_OPTS="--bip=172.17.0.1/16 --dns=172.17.0.1 --dns=8.8.8.8 --log-driver=fluentd --log-opt tag=docker.hostname.{{.ImageName}}.{{.Name}} --log-opt fail-on-startup-error=false"
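For reference, with that tag template a container is tagged as dot-separated parts, which is what the `tag_parts` indices in the filter above rely on. A minimal sketch of that mapping, using a hypothetical image `nginx` and container name `web` purely for illustration:

```python
# Sketch of how the record_transformer filter indexes the Docker log tag.
# Hypothetical container "web" running image "nginx"; with the --log-opt
# tag template above, the fluentd log driver would emit this tag:
tag = "docker.hostname.nginx.web"

tag_parts = tag.split(".")       # fluentd splits the tag on dots
container_image = tag_parts[2]   # -> "nginx"
container_name = tag_parts[3]    # -> "web"

print(container_image, container_name)
```

Note that an image name containing dots of its own (for example a registry-qualified name such as registry.example.com/app) would shift these indices.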
Fluentd itself seems to be working on these servers, since non-Docker logs are arriving in Elasticsearch, and the containers do produce logs when I switch to the json-file driver. With the fluentd driver, though, their logs show up neither in ES nor in fluentd's own stdout output.
Could this be something to do with the buffer? The containers whose logs fluentd never sees do produce quite a bit of output. I tried killing and redeploying the containers so fluentd would see fresh logs, but that didn't help either.
Any ideas on how to troubleshoot this? It works on every other server and for every other container; only these fail.