Re: Confluent offset commit


Jun Rao

Mar 16, 2015, 1:23:36 PM
to Joseph Jeganathan, confluent...@googlegroups.com
Hi, Joseph,

Thanks for your interest.

The REST API allows you to control the amount of data fetched in terms of bytes. There is also a separate API (http://confluent.io/docs/current/kafka-rest/docs/api.html#consumers) for committing offsets manually, if you choose to disable auto offset commit.
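Purely as a sketch of how that might look against the v1 proxy (the proxy address, group name, and instance name here are made up, and the exact request fields should be checked against the API docs linked above):

```python
import json
import urllib.request

BASE = "http://localhost:8082"  # hypothetical proxy address

def create_consumer_request(group, name):
    """Build the request that creates a consumer instance with
    auto offset commit disabled, so commits are manual."""
    body = json.dumps({
        "id": name,
        "format": "binary",
        "auto.commit.enable": "false",  # commit manually instead
    }).encode()
    return urllib.request.Request(
        f"{BASE}/consumers/{group}",
        data=body,
        headers={"Content-Type": "application/vnd.kafka.v1+json"},
        method="POST",
    )

def commit_offsets_request(group, name):
    """Build the commit request; the v1 offsets endpoint commits
    everything this instance has consumed so far and takes no payload."""
    return urllib.request.Request(
        f"{BASE}/consumers/{group}/instances/{name}/offsets",
        data=b"",
        headers={"Content-Type": "application/vnd.kafka.v1+json"},
        method="POST",
    )
```

The requests are only constructed here, not sent; sending them with `urllib.request.urlopen` requires a running proxy.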

Thanks,

Jun

On Mon, Mar 16, 2015 at 10:13 AM, Joseph Jeganathan <joseph.j...@bedegaming.com> wrote:
Hi Jun Rao,

I'm a C# developer working for Bede Gaming at present. We are currently evaluating the pros and cons of moving our messaging hub from Azure Service Bus to Kafka.

We have played with the existing .NET driver for Kafka and found that it isn't a good option for us. We considered writing our own .NET client, but the recently released Confluent RESTful API looks very promising.

We have only one issue with this RESTful API: we can't commit the specific offset of a message we consumed, i.e. a <topic, partition, offset> triple.

Is there any way we can either specify the number of messages to fetch when we consume (say, one by one), or specify the offset of a successfully processed message as <topic, partition, offset>?

Many Thanks

--
Joseph Jeganathan
.NET Developer @ bedegaming.com

Jun Rao

Mar 16, 2015, 4:09:59 PM
to Joseph Jeganathan, confluent...@googlegroups.com
Joseph,

Could you just fetch messages, process them, and commit the offsets after the processing is successful?
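That loop can be sketched as follows; `fetch`, `process`, and `commit` are stand-in callables for the actual REST calls, so this shows the control flow only, not a real client:

```python
def consume_loop(fetch, process, commit, max_batches=1):
    """Fetch a batch, process every message, and only then commit.

    `fetch` returns a list of messages, `process` handles one message
    (raising on failure), and `commit` commits the consumed offsets.
    If any message fails, the batch is not committed, so it will be
    re-delivered after the consumer is recreated.
    """
    committed = 0
    for _ in range(max_batches):
        batch = fetch()
        try:
            for msg in batch:
                process(msg)
        except Exception:
            break  # don't commit a partially processed batch
        commit()
        committed += len(batch)
    return committed
```

The limitation Joseph raises below follows directly from this shape: the commit is all-or-nothing per batch.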

Thanks,

Jun

On Mon, Mar 16, 2015 at 10:59 AM, Joseph Jeganathan <joseph.j...@bedegaming.com> wrote:
Hi Jun,

Thanks for your response. I had checked both of these before. The problems I have with the current API are:

1. By controlling max_bytes, we can't control the number of messages fetched. For example, 10 bytes could hold 1 message or 10 messages, depending on the message payload size.
2. We can't specify an offset using the REST API's offset commit (http://confluent.io/docs/current/kafka-rest/docs/api.html#post--consumers-(string-group_name)-instances-(string-instance)-offsets). The offset commit endpoint doesn't take any payload.


The aim here is to commit the offset of the last successfully PROCESSED message (either by fetching messages one by one, or by committing the offset of the last successfully processed message explicitly).

Thanks
Joseph

Jun Rao

Mar 17, 2015, 5:34:36 PM
to Joseph Jeganathan, confluent...@googlegroups.com
Joseph,

Currently, the REST API doesn't support committing arbitrary offsets. We can probably support that in the future when the new consumer is developed in Kafka.
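For what it's worth, the client-side bookkeeping Joseph describes below reduces to a small pure helper; this is only an illustration, with results assumed to arrive as (offset, succeeded) pairs in fetch order:

```python
def offset_to_commit(results):
    """Return the offset of the last message in the unbroken run of
    successes before the first failure, or None if the very first
    message failed (i.e. nothing should be committed).

    `results` is a list of (offset, succeeded) pairs in fetch order.
    """
    last_good = None
    for offset, succeeded in results:
        if not succeeded:
            break
        last_good = offset
    return last_good
```

With 5 fetched messages where the 4th fails, this yields the offset of the 3rd message, so a restarted consumer would resume from the 4th.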

Thanks,

Jun

On Mon, Mar 16, 2015 at 2:54 PM, Joseph Jeganathan <joseph.j...@bedegaming.com> wrote:
Jun, ideally what we want is to process messages transactionally.

Let's say we have fetched 5 messages, 3 processed successfully, and the 4th one failed. Messages processed this way are irreversible in some of our use cases.

In that case we would ideally commit the offset of the 3rd message (rather than the 5th, or none at all), and later resume processing from the 4th message onwards.

If the REST API can't support that, then we need to change our logic to handle the transaction and rollback ourselves.

Thanks a lot

Joseph

Jun Rao

Mar 18, 2015, 1:53:43 PM
to Joseph Jeganathan, confluent...@googlegroups.com
Great. Please keep us posted on your experience.

Thanks,

Jun

On Wed, Mar 18, 2015 at 10:11 AM, Joseph Jeganathan <joseph.j...@bedegaming.com> wrote:
Thanks for the update, Jun; that would be great.

We are going ahead with trying out the Confluent platform for our logging and messaging at Bede Gaming. It seems promising so far.

Thank you!

This `confluent...@googlegroups.com` Google group doesn't seem to be public yet, by the way.

Jun Rao

Apr 1, 2015, 12:09:57 PM
to Joseph Jeganathan, confluent...@googlegroups.com
Hi, Joseph,

It's good to know that you are making good progress with the Confluent platform. Thanks for sharing the .NET client wrapper.

1. One way to do that is to re-publish the failed messages to another topic and have another process deal with them later.

2. It defaults to 300 seconds. So, if a consumer instance makes no fetch requests for 300 seconds, it will be destroyed.

3. You want to use (b). Kafka REST itself won't do forwarding based on the Host header. Some load balancers or HTTP proxies may do the forwarding for you, but approach (b) will be more reliable.
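As a sketch of how (1) and (3) combine (the `-dlq` topic suffix is just an assumed naming convention, and the v1 binary embedded format requires base64-encoded values):

```python
import base64
import json

def dead_letter_payload(raw_value: bytes) -> str:
    """Build the JSON body for re-publishing a failed message to a
    dead-letter topic via the v1 produce endpoint."""
    return json.dumps({
        "records": [
            {"value": base64.b64encode(raw_value).decode("ascii")}
        ]
    })

def dead_letter_url(base_uri: str, topic: str) -> str:
    """POST target for the dead-letter topic, built from the
    `base_uri` returned when the consumer instance was created
    (approach (b): use it as the base URL, not just a Host header)."""
    return f"{base_uri}/topics/{topic}-dlq"
```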

BTW, you may want to email your questions to confluent-platform@googlegroups.com directly. That way, you will likely get the answers quicker.

Thanks,

Jun

On Wed, Apr 1, 2015 at 3:04 AM, Joseph Jeganathan <joseph.j...@bedegaming.com> wrote:
Hi Jun,

Hope you are doing well.

We have made very good progress in trying out confluent.io as a replacement for Azure Service Bus messaging.

I've already published a .NET client wrapper for Confluent, by the way: https://github.com/josephjeganathan/Confluent.RestClient

I have 3 questions about the Confluent REST API:

1. In Azure Service Bus we have a concept of dead-lettering messages when a consumer can't process them. These can later be processed by any application if necessary. What would you suggest for notifying about, or storing, such failed-to-process messages in Kafka/Confluent?

2. In the REST API, I assume a consumer instance gets destroyed when there are no fetch requests from it. Is that right? If so, after how long does a consumer instance get destroyed? There are occasions where we need to retry our handling of received messages with exponential back-off; those retries should not kill the consumer instance.

3. When we create a consumer instance using the REST API, we receive a `base_uri` in the response. Is it enough to use that in the "Host" header (domain & port only) when we fetch messages? Or should we use it as the base URL for consuming messages from that point onward?

For Example:

Is request (a.) good enough, or is (b.) the right way?


Many Thanks
Joseph
