practical guide to fluent-debug?


Dick Davies

unread,
Jan 19, 2015, 10:08:53 AM1/19/15
to flu...@googlegroups.com
our haproxy logs are parsed with the tail input -> forward -> central fluentd
with an elasticsearch output.
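
For anyone following along, the pipeline described above would look roughly like this in fluentd config terms (paths, tags, and hostnames below are placeholders, not the poster's actual config):

```
# -- on each proxy host (sketch; paths/tags are assumptions) --
<source>
  type tail
  path /var/log/haproxy.log
  pos_file /var/log/fluentd/haproxy.pos
  tag haproxy.access
  format /your-haproxy-regexp-here/
</source>

<match haproxy.**>
  type forward
  <server>
    host central-fluentd.example.com
    port 24224
  </server>
</match>

# -- on the central host --
<source>
  type forward
  port 24224
</source>

<match haproxy.**>
  type elasticsearch
  # index/host settings elided
</match>
```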

almost all traffic goes through fine (there's some extra cruft that
occasionally makes it into the haproxy logs, but the 'source' fluentd
drops those with a 'pattern not match' warning).

lately we've had a few 50X errors related to new code. those lines
are present in the haproxy access logs, and match our existing regexps
(no 'pattern not match' errors on the source server).
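
As an aside, you can sanity-check whether a given log line matches your fluentd regexp in plain Ruby before blaming the pipeline. The pattern below is a simplified stand-in, not the real haproxy format from the thread:

```ruby
# Simplified stand-in for an haproxy-ish access log pattern
# (an assumption for illustration, not the thread's actual regexp).
PATTERN = /^(?<client>\S+) \[(?<time>[^\]]+)\] (?<status>\d{3}) (?<request>.+)$/

line = '10.0.0.1 [19/Jan/2015:10:08:53.123] 502 "GET /api/foo HTTP/1.1"'

if (m = PATTERN.match(line))
  puts "matched: status=#{m[:status]}"   # in_tail would emit this record
else
  puts "pattern not match"               # in_tail's warning for this line
end
# → matched: status=502
```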

tcpdump confirms the msgpack data is leaving the proxy's fluentd instance
and arriving at the central fluentd server's port 24224.

however it never shows up in kibana. I've relaxed all the filters I
can and something seems to be dropping those events specifically.

Turning up all inputs/outputs to 'log_level trace' doesn't seem to
show me anything other than object ids.

i tried breaking out fluent-debug but all i get is an IRB prompt, and
can't find any docs on how to use this.

I'm a pretty decent ruby coder; all i want to do is

1. 'puts' all inbound messages
2. 'puts' all outbound elasticsearch messages

Can anyone give me a quick cheatsheet?
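
For reference, the built-in copy and stdout output plugins give exactly this kind of "puts everything" view without fluent-debug. A sketch, with the tag pattern assumed:

```
# On the central server: tee every matched event to the fluentd log
# as well as to elasticsearch.
<match haproxy.**>
  type copy
  <store>
    type stdout        # prints each event: time, tag, record
  </store>
  <store>
    type elasticsearch
    # existing elasticsearch settings here
  </store>
</match>
```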

Thanks!

Naotoshi Seo

unread,
Jan 19, 2015, 10:25:13 AM1/19/15
to flu...@googlegroups.com
Hi,

you may find fluent-tail useful for your purpose > https://github.com/choplin/fluent-tail
It uses the debug_agent input internally.
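
For context, in_debug_agent ships with fluentd; enabling it on the server you want to watch looks roughly like this (the port is fluentd's conventional default, and the fluent-tail invocation is an assumption — check the fluent-tail README):

```
# In the fluentd config of the server you want to inspect:
<source>
  type debug_agent
  bind 127.0.0.1
  port 24230
</source>

# Then, from a shell (flags are an assumption; see fluent-tail --help):
#   fluent-tail -h 127.0.0.1 -p 24230
```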

Regards,
Naotoshi a.k.a. sonots


Mr. Fiber

unread,
Jan 19, 2015, 10:35:41 AM1/19/15
to flu...@googlegroups.com
> 2. 'puts' all outbound elasticsearch messages

You can check the buffer size of fluent-plugin-elasticsearch via in_monitor_agent.


If buffer_total_queued_size is increasing, the elasticsearch plugin is receiving events.
Elasticsearch/Kibana also has a well-known timezone issue, so please check
the number of stored documents in elasticsearch directly.
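
For reference, in_monitor_agent is enabled with a source block and queried over HTTP; buffer_total_queued_size appears in the per-plugin JSON (the port shown is the conventional default, and the hostname is a placeholder):

```
<source>
  type monitor_agent
  bind 0.0.0.0
  port 24220
</source>

# then:
#   curl http://central-fluentd:24220/api/plugins.json
# look for the fluent-plugin-elasticsearch entry and its
# buffer_total_queued_size / retry_count fields
```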


Masahiro

Dick Davies

unread,
Jan 19, 2015, 11:38:39 AM1/19/15
to flu...@googlegroups.com
Thanks both, have confirmed that the data is indeed hitting fluentd now.

Created a new kibana dash and managed to find the 502s I was looking
for in the index, so I'm not sure what's wrong with my existing dash, but the tips
you provided helped me rule out fluentd as the issue, so thanks a lot :)