Kafka with "At least once" for message delivery guarantees

Jérôme LAFORGE

Dec 21, 2016, 11:29:02 AM
to golang-nuts
Hello all,
I am looking for feedback on a Kafka consumer driver that can provide "at least once" message delivery guarantees (as described here: https://kafka.apache.org/documentation/#semantics).

Do you have feedback on such a driver, or another alternative for managing "at least once" message processing?

Thx in advance.
Jérôme

Caleb Doxsey

Dec 28, 2016, 7:41:50 AM
to golang-nuts
These days I would recommend using the confluent consumer: https://github.com/confluentinc/confluent-kafka-go. It's just a wrapper around librdkafka. Sarama works fine too though. 
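
For the consumer side of "at least once", the usual pattern with confluent-kafka-go is to disable automatic offset commits and only commit after a message has been fully processed, so a crash mid-processing leads to redelivery rather than loss. A minimal sketch, assuming the confluent-kafka-go consumer API; the broker address, group id, and topic name are placeholders:

package main

import (
	"fmt"
	"log"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	// Disable auto-commit so offsets are only committed after processing.
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers":  "localhost:9092", // placeholder broker
		"group.id":           "my-group",       // placeholder group id
		"enable.auto.commit": false,
		"auto.offset.reset":  "earliest",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	if err := c.SubscribeTopics([]string{"my-topic"}, nil); err != nil {
		log.Fatal(err)
	}

	for {
		msg, err := c.ReadMessage(-1) // block until a message arrives
		if err != nil {
			log.Printf("consumer error: %v", err)
			continue
		}

		// Process first, commit after: if the process dies here the
		// message is redelivered, which is the "at least once" behavior.
		fmt.Printf("processing %s\n", string(msg.Value))

		if _, err := c.CommitMessage(msg); err != nil {
			log.Printf("commit failed: %v", err)
		}
	}
}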

With all of these you can configure the behavior you are looking for. For example, request.required.acks lets you customize how many acknowledgements are required for a produced message. If the producer doesn't receive enough acks, it may resend and you might deliver a message twice, but you increase the likelihood that you won't lose it.
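
On the producer side, that could look roughly like this with confluent-kafka-go (broker address and topic are placeholders; -1 means "wait for all in-sync replicas"):

package main

import (
	"log"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	topic := "my-topic" // placeholder topic

	// request.required.acks = -1 waits for all in-sync replicas, trading
	// some latency for a lower chance of losing the message.
	p, err := kafka.NewProducer(&kafka.ConfigMap{
		"bootstrap.servers":     "localhost:9092", // placeholder broker
		"request.required.acks": -1,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer p.Close()

	deliveryChan := make(chan kafka.Event, 1)
	err = p.Produce(&kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Value:          []byte("hello"),
	}, deliveryChan)
	if err != nil {
		log.Fatal(err)
	}

	// Wait for the delivery report; if it reports an error the application
	// retries, which is where the duplicates can come from.
	if m, ok := (<-deliveryChan).(*kafka.Message); ok && m.TopicPartition.Error != nil {
		log.Printf("delivery failed, will retry: %v", m.TopicPartition.Error)
	}
}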

For most problems you can tolerate duplicate messages without having to do anything special. Even operations like sending an email are tolerable if the duplication is relatively rare (and based on my experience kafka issues are relatively rare). But if you can't tolerate duplicate messages, you will need to maintain a hashtable (or similar) of message ids that you've processed. Something like redis would work well for this.
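
For the dedup table, one common shape is an atomic "insert if absent" keyed by message id, e.g. Redis SETNX with a TTL. A sketch using the go-redis client (module path, key prefix, and the 24h window are assumptions for illustration):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/redis/go-redis/v9"
)

// alreadyProcessed records the message id in Redis and reports whether it
// had been seen before. SETNX is atomic, so two consumers racing on the
// same id won't both claim it.
func alreadyProcessed(ctx context.Context, rdb *redis.Client, msgID string) (bool, error) {
	// Keep dedup keys longer than your worst-case redelivery delay.
	fresh, err := rdb.SetNX(ctx, "processed:"+msgID, 1, 24*time.Hour).Result()
	if err != nil {
		return false, err
	}
	return !fresh, nil
}

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"}) // placeholder address

	dup, err := alreadyProcessed(ctx, rdb, "msg-123")
	if err != nil {
		// If Redis is down you must choose between skipping (risking lost
		// work) and processing anyway (risking a duplicate).
		log.Fatal(err)
	}
	if dup {
		fmt.Println("duplicate, skipping")
		return
	}
	fmt.Println("processing msg-123")
}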

As an example, consider payment processing systems, which include an idempotency key as part of a request: https://stripe.com/docs/api?lang=curl#idempotent_requests. With this approach you're trading availability for consistency (if the centralized database you depend on to de-dupe is down, your whole pipeline is down).
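
The mechanics are simple: the key is just a request header derived from something stable across retries, such as the Kafka topic/partition/offset or a message id. A hypothetical sketch with net/http; the endpoint, payload, and key derivation are made up, and Stripe reads the Idempotency-Key header:

package main

import (
	"log"
	"net/http"
	"strings"
)

func main() {
	// Derive the key from something stable across redeliveries, e.g. the
	// message's topic/partition/offset (assumption for illustration).
	idempotencyKey := "orders-3-42817"

	req, err := http.NewRequest(http.MethodPost,
		"https://api.example.com/v1/charges", // placeholder endpoint
		strings.NewReader(`{"amount":1000,"currency":"eur"}`))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/json")
	// A retried request with the same key returns the result of the first
	// attempt instead of performing the charge twice.
	req.Header.Set("Idempotency-Key", idempotencyKey)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("status:", resp.Status)
}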

I would aim to handle the duplicates if you can though, since systems that tolerate duplicates are much easier to build than ones that don't. 