partitions have leader brokers without a matching listener

Akatsuki Pain

Sep 4, 2020, 3:24:52 AM9/4/20
to Confluent Platform
I'm facing this problem:
I have a 3-node Kafka cluster, and I'm using Filebeat to push logs to Kafka.
If one node of the cluster is dead, the Kafka cluster won't accept logs anymore.
I have set advertised.listeners in every Kafka node (server.properties).
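For reference, each node's server.properties contains something like this (the hostname and broker id below are placeholders, not my real values):

    # node 01
    broker.id=1
    listeners=PLAINTEXT://0.0.0.0:9092
    advertised.listeners=PLAINTEXT://kafka01.example.local:9092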
This is the configuration of the Kafka topics:

[screenshot: topic configuration - Annotation 2020-09-04 141918.png]

Error logs on the two other Kafka nodes:
[screenshot: broker error logs - Annotation 2020-09-04 142129.png]


Akatsuki Pain

Sep 4, 2020, 3:45:16 AM9/4/20
to Confluent Platform
Additional logs from Filebeat:
[screenshot: Filebeat logs - 1.png]

Andreas Sittler

Sep 4, 2020, 4:47:07 AM9/4/20
to confluent...@googlegroups.com

What is your ISR requirement?
What is your ack mode?

Akatsuki Pain

Sep 4, 2020, 5:23:58 AM9/4/20
to Confluent Platform
Dear sir,

It seems I don't understand your questions.
I just set the replication factor and partition count to optimize the availability of the data.
As for the ack mode, I don't understand this; I'm running Kafka as a cluster and pushing logs to Elasticsearch.

Thanks for reading ^^!

Andreas Sittler

Sep 4, 2020, 6:46:32 AM9/4/20
to confluent...@googlegroups.com

Just some general remarks.
Every producer (in your case: Filebeat) specifies an ack mode (acks, https://docs.confluent.io/current/installation/configuration/producer-configs.html#producer-configurations), which encodes your requirements for availability and durability.
You also specify a replication factor per topic (2, judging from your logs), which impacts durability and availability as well.
There is also an ISR requirement (the broker configuration min.insync.replicas) that impacts successful writes (probably 2 in your case?). See the sketch below for where each setting lives.
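A rough sketch of the three knobs; host names, topic name, and values are illustrative only, not a recommendation:

    # Filebeat (filebeat.yml); required_acks is Filebeat's name for the producer acks setting
    output.kafka:
      hosts: ["kafka01:9092", "kafka02:9092", "kafka03:9092"]
      required_acks: -1     # -1 = wait for all in-sync replicas (acks=all)

    # Broker side (server.properties)
    min.insync.replicas=2

    # Topic side, set at creation time (older Kafka versions use --zookeeper here)
    kafka-topics --create --topic logs --partitions 3 --replication-factor 3 \
      --bootstrap-server kafka01:9092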

What can happen (depending on your settings; let's assume acks=all, min.insync.replicas=2, replication.factor=2):
if you lose a broker, you cannot produce anymore, as the ISR requirement is no longer met.
Your producer should see exceptions like NotEnoughReplicas or similar.

Quick recommendation: Increase the replication factor to 3 and retest.
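For an existing topic the replication factor cannot simply be edited in place; the usual route is a partition reassignment. A sketch, assuming broker ids 1-3 and a topic named logs with 3 partitions (adjust to your cluster; older Kafka versions take --zookeeper instead of --bootstrap-server):

    # increase-rf.json: spread each partition's replicas over all three brokers
    {"version":1,"partitions":[
      {"topic":"logs","partition":0,"replicas":[1,2,3]},
      {"topic":"logs","partition":1,"replicas":[2,3,1]},
      {"topic":"logs","partition":2,"replicas":[3,1,2]}
    ]}

    kafka-reassign-partitions --bootstrap-server kafka01:9092 \
      --reassignment-json-file increase-rf.json --execute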

In general, I suggest either involving your service provider (if you have one) and/or reading the docs on the configuration options.

Hth,
Andreas

Akatsuki Pain

Sep 7, 2020, 3:11:11 AM9/7/20
to Confluent Platform
The problem still occurs when I set replication-factor = 3 and turn off one node.

I see that if I turn off node 02 or node 03, the logs still transfer to Elasticsearch, but when I stop node 01, logs won't transfer anymore.

Also, the errors don't mention my topic; they show partitions like "connect-status-0, connect-config-10, ..." => I think this happens because of the REST API and the replication factor of those default Kafka Connect topics:

    curl -X POST -H 'Content-Type: application/json' '192.168.24.65:8083/connectors' -d '
    {
      "name" : "elasticsearch-sink-filebeat",
      "config" : {
        "connector.class" : "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "tasks.max" : "3",
        "topics" : "memember",
        "topic.index.map" : "logs:logs_index",
        "connection.url" : "http://192.168.24.61:9200",
        "type.name" : "true",
        "key.ignore" : "true",
        "schema.ignore" : "true"
      }
    }'
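If that guess is right, the replication factor of those internal topics should come from the Connect worker config (connect-distributed.properties); the property names below are from the docs, the values are just what I would try. As far as I know they only apply when the topics are first created:

    # connect-distributed.properties
    config.storage.replication.factor=3
    offset.storage.replication.factor=3
    status.storage.replication.factor=3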

I don't know how to add multiple IPs to 'IP:8083/connectors' and 'http://IP:9200'. It won't work if the server behind either IP in the curl command dies.

Andreas Sittler

Sep 7, 2020, 3:57:40 AM9/7/20
to confluent...@googlegroups.com

If it only happens with one particular broker, that is suspicious.
I think you need to post the full configuration of your producers and brokers (all brokers).
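A quick check that may help: with node 01 stopped, describe the topics and look at the Leader and Isr columns of the partitions from your error logs (the bootstrap address is a placeholder):

    kafka-topics --describe --bootstrap-server kafka02:9092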

Akatsuki Pain

Sep 7, 2020, 5:37:59 AM9/7/20
to Confluent Platform
I think the problem comes from the REST API, as I mentioned:

    curl -X POST -H 'Content-Type: application/json' '192.168.24.65:8083/connectors' -d '
    {
      "name" : "elasticsearch-sink-filebeat",
      "config" : {
        "connector.class" : "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "tasks.max" : "3",
        "topics" : "memember",
        "topic.index.map" : "logs:logs_index",
        "connection.url" : "http://192.168.24.61:9200",
        "type.name" : "true",
        "key.ignore" : "true",
        "schema.ignore" : "true"
      }
    }'

Because there are two IPs in that curl command, the problem occurs if the server behind one of those IPs dies (a single point of failure).

=> So I ran three curl commands to create the bridges as in the image below (3 colours) => then it's OK when one Kafka node dies.
[diagram: three-connector setup - Cluster (1).png]

So now, if I want to add more IPs into the curl command for high availability and load balancing to Elasticsearch, what should I do? (It becomes the image below.)
[diagram: desired HA setup - Cluster.png]
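From the connector docs it looks like connection.url accepts a comma-separated list of Elasticsearch URLs, so maybe something like this covers the Elasticsearch side (the second IP is hypothetical):

    curl -X POST -H 'Content-Type: application/json' '192.168.24.65:8083/connectors' -d '
    {
      "name" : "elasticsearch-sink-filebeat",
      "config" : {
        "connector.class" : "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "tasks.max" : "3",
        "topics" : "memember",
        "connection.url" : "http://192.168.24.61:9200,http://192.168.24.62:9200",
        "key.ignore" : "true",
        "schema.ignore" : "true"
      }
    }'

But I am not sure whether that also solves the single point of failure on the :8083 side.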

Many thanks for your interest in this topic :D
