fluent-plugin-kafka "Send


Tim Dillon
Aug 30, 2017, 4:47:46 PM
to Fluentd Google Group
I'm trying to forward a data stream that I "pick out" of a syslog stream into a Kafka endpoint using fluent-plugin-kafka. I've tried many combinations of configuration and pared them down to the defaults, other than brokers (a single broker). Whether I use "@type kafka_buffered" or simply "@type kafka" with the parameters appropriate to each, I always end up with this error at flush time:



2017-08-30 20:10:21 +0000 [warn]: Send exception occurred: wrong number of arguments (7 for 6)
2017-08-30 20:10:21 +0000 [warn]: Exception Backtrace :...



2017-08-30 20:17:59 +0000 [info]: initialized kafka producer: kafka
2017-08-30 20:17:59 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2017-08-30 20:34:49 +0000 error_class="ArgumentError" error="wrong number of arguments (7 for 6)" plugin_id="object:3fe08cce11a0"



I can provide the full stack trace if needed.



I've dug up the relevant version numbers in case they help (see below).



BTW, the messages are filtered using the grep plugin, and a prefix of "ipdr" is added to the system tag. That ends up in the JSON as ipdr.system.local0.notice, which in turn creates that "topic" in Kafka.
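
For context, that upstream stage looks roughly like the following. This is only a sketch, not my exact block: the match pattern and regexp are placeholders, and it assumes the fluent-plugin-grep output plugin (the built-in grep filter can't add a tag prefix).

<match system.**>
  # pick records out of the syslog stream (placeholder field/pattern)
  @type grep
  regexp1 message IPDR
  # re-emit matching records with the "ipdr." prefix, e.g. ipdr.system.local0.notice
  add_tag_prefix ipdr
</match>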



Do I have a version mismatch or incompatibility? I'm running td-agent 0.12.35 with fluent-plugin-kafka 0.6.0 and ruby-kafka 0.4.1.

Any help is greatly appreciated. Here's my pared-down config for the Kafka part. I've left most of the defaults in place; I thought my extended tag might be an issue, but nothing changes, it's always the same error.

Here's the section I've been working on. Some parameters were for "@type kafka", but I'm really trying to use "@type kafka_buffered" (a sketch of the plain variant follows the config block below):

<match ipdr.system.**>
##   @type kafka
  @type kafka_buffered

  # list of seed brokers
  brokers 10.185.45.143:9092
## requires special build, no plugin...  zookeeper 10.185.45.143:2181

  # buffer settings
##  buffer_type memory
##  buffer_path /var/log/td-agent/buffer/td
  flush_interval 3s

  # topic settings
  default_topic ipdr.system.local0.notice
  exclude_topic_key true

  # data type settings
##  output_include_tag false
  output_data_type json
##  compression_codec gzip

  discard_kafka_delivery_failed true

## helpful debug
  get_kafka_client_log

  # producer settings
##  max_send_retries 1
##  required_acks -1
##  required_acks 1

</match>
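
For comparison, the plain "@type kafka" attempt was essentially the same match block without the buffering parameters, roughly like this (an untested sketch, reusing the same broker, topic, and parameter names, which I believe the non-buffered output also accepts):

<match ipdr.system.**>
  @type kafka

  # same single seed broker
  brokers 10.185.45.143:9092

  # same topic handling and output format
  default_topic ipdr.system.local0.notice
  exclude_topic_key true
  output_data_type json
</match>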


Thanks.

Mr. Fiber
Aug 30, 2017, 6:46:22 PM
to Fluentd Google Group
It seems to be a bug in fluent-plugin-kafka.
I will fix it soon.


Masahiro


Mr. Fiber
Aug 30, 2017, 7:23:08 PM
to Fluentd Google Group
Ah, ruby-kafka changed an internal class, which is what causes this problem.

Mr. Fiber
Aug 30, 2017, 8:18:44 PM
to Fluentd Google Group
Released v0.6.1. This problem should be fixed.


Masahiro