Avro vs Protocol Buffers for schema evolution


Emiliano Capoccia

Apr 22, 2016, 11:04:20 AM
to Confluent Platform
Hello group,

In my company we are planning to use Kafka Streams, and we need to decide on a standard for our message format.
We are particularly concerned about schema evolution, a similar issue to the one discussed in this thread.

Our scenario is that we'll be writing some data to a topic, and this data will later be read by a different application, managed by a different team.

Essentially, we are concerned with managing the situation where we change our schema and start publishing evolved content, and we'd like to need only weak coordination with the other team. That is to say, they will have to change their code to use the new information, but it would be nice if they could do that at their convenience, whenever they are ready. Any change we make to the schema should not force downstream clients to update their schema immediately.

Now, as far as I understand, both Avro and Protocol Buffers offer this facility; however, Avro mandates having the exact schema that was used for serializing available during deserialization, which implies that we will need a schema registry accessible to both teams for this solution to work. Protobuf, instead, does not require such a common registry, as it is able to cope with schema evolution out of the box.
(Essentially it all boils down to how Avro vs Protobuf serialize the data; see here for a detailed explanation.)

I wonder if we are missing something and there are other reasons to use Avro in such a scenario, or whether it's better for us to go with the Protobuf solution.

What do you think about it?




Petr Novak

Apr 22, 2016, 2:26:16 PM
to Confluent Platform
Hi,
without a schema registry, what would be your strategy to guarantee that producers send data with a compatible schema into a topic? It is easy to send whatever one wants into any topic, e.g. by mistake. The Schema Registry doesn't prevent it completely, but at least there is a formal process requiring the SR to be used to produce data, hence if it is followed the SR gives better runtime guarantees. And there is a central online place to look up the schemas that were used to write a topic, and it guarantees unique schema IDs. I think the SR is an advantage for data exchange. Tag-size-value based serialization seems, to me, better suited to protocols.

It would be great if Kafka could somehow forbid sending data that has not been schematized through the SR, but I have no idea how this could be done.

Regards,
Petr

Petr Novak

Apr 22, 2016, 2:32:09 PM
to Confluent Platform
Maybe it already is, but the encoder type could be part of the topic metadata, so that once I defined that a particular topic may only use KafkaAvroEncoder, it would not be possible to send anything using, e.g., DefaultEncoder.

Regards,
Petr

Emiliano Capoccia

Apr 22, 2016, 4:58:51 PM
to Confluent Platform
Hi Petr, thanks for your feedback.

I'm not overly concerned about mistakenly sending incompatible data into a Kafka topic; as you pointed out, the SR does not offer such a guarantee either.
My concerns are about schema evolution and what the operational implications are with Avro vs Protobuf.
Let me be more clear.

Say you have initially a schema like this:

{ firstName: String, lastName: String } and you send over two messages:

offset 1: { "Emiliano", "Capoccia" }
offset 2: { "Petr", "Novak" }

The other end is deserializing based on the same schema and reading first name and last name. So far so good.

Now we change the publisher schema to be:

{ firstName: String, lastName: String, age: int }

and we stream a third message

offset 3: { "A.N.", "Other", 33 }

Now here is what happens in my opinion on the receiving side:

Protobuf

the receiving side deserializes the 3rd message without any issues. Of course, they don't know about the availability of the new field, but they keep working with the old fields without downtime. Then, whenever they feel the need for the new information, they can switch to the new schema and start using the new fields.
How they receive the new schema in Protobuf is irrelevant: any way is fine; it can be a SR, but also any other way of transferring a file. The key point IMO is that this need not happen immediately.

Avro

in order to read the 3rd message, the deserializer has to have immediate access to the evolved schema. This is a constraint of Avro: it cannot deserialize a message without the exact writing schema. A possible solution to this problem is a SR.
But IMHO this choice carries some implications: it introduces another player into the game, the SR, and this needs to be developed, maintained, deployed somewhere and kept continuously available.
It makes the producer and consumer more tightly bound to each other, via the SR, whereas the solution with Protobuf seems more decoupled.

Of course this is not valid in the general case: for instance, if it is of paramount importance to use the new information as soon as it becomes available, then the "forgiveness" of Protobuf is of no help, and the two solutions are pretty much equal; they both need immediate access to the evolved schema.
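
To make the Avro side concrete, here is a minimal sketch with the plain Apache Avro Java API (no Kafka and no registry involved; the schemas are just the two versions from my example, and the class name is made up). The consumer keeps its old reader schema, but it still has to obtain the writer's evolved schema from somewhere before it can decode the third message:

import java.io.ByteArrayOutputStream;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

public class AvroNeedsWriterSchema {

    static final Schema V1 = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"Person\",\"fields\":["
        + "{\"name\":\"firstName\",\"type\":\"string\"},"
        + "{\"name\":\"lastName\",\"type\":\"string\"}]}");

    static final Schema V2 = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"Person\",\"fields\":["
        + "{\"name\":\"firstName\",\"type\":\"string\"},"
        + "{\"name\":\"lastName\",\"type\":\"string\"},"
        + "{\"name\":\"age\",\"type\":\"int\"}]}");

    public static void main(String[] args) throws Exception {
        // Producer side: the third message, written with the evolved schema V2.
        GenericRecord rec = new GenericData.Record(V2);
        rec.put("firstName", "A.N.");
        rec.put("lastName", "Other");
        rec.put("age", 33);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder enc = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(V2).write(rec, enc);
        enc.flush();

        // Consumer side: Avro's schema resolution takes BOTH schemas -- V2 as the
        // writer's schema and V1 as the reader's schema. The extra "age" field is
        // dropped, but V2 had to reach the consumer somehow (e.g. via a registry).
        BinaryDecoder dec = DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
        GenericRecord seen = new GenericDatumReader<GenericRecord>(V2, V1).read(null, dec);
        System.out.println(seen); // {"firstName": "A.N.", "lastName": "Other"}
    }
}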

I'm happy to hear your opinion on the above, and thank you again for posting.

Emiliano

Petr Novak

Apr 22, 2016, 5:28:35 PM
to Confluent Platform
I believe you can have different writer and reader schemas in Avro. Avro will ignore the extra field the same way as Proto does. Avro matches fields by name, and they can be reordered compared to the schema definition. There are differences between Avro, PB and Thrift, but I don't think this is one of them. Avro would be pretty poor if it had such a limitation.

I have this overview in my bookmarks. I haven't re-read it now, but I remember it is a good one.
http://martin.kleppmann.com/2012/12/05/schema-evolution-in-avro-protocol-buffers-thrift.html

Regards,
Petr


Petr Novak

Apr 22, 2016, 5:36:23 PM
to Confluent Platform
In other words, you need the schema and have to get it somehow. But it doesn't have to be exactly the same schema, just a compatible one. Hence I think this is an invalid statement:


in order to read the 3rd message, the deserializer has to have immediate access to the evolved schema.

Regards,
Petr


Geoff Anderson

Apr 22, 2016, 5:54:28 PM
to confluent...@googlegroups.com
Hey Emiliano,

I'll try to clarify some of the thinking behind schema evolution with the Confluent schema registry and see if that helps you at all. We actually viewed support for schema evolution as a strong requirement for such a system, and this was one of the reasons we began with Avro.

When you read or write data using a schema, in the context of evolution, you essentially need one or both of the following guarantees:

1. Newer schemas can be used to read messages written with older schemas.
2. Older schemas can be used to read messages written with newer schemas.

In our schema registry, when you "version" or evolve a schema that is registered under a subject, it is subject to the following configurable compatibility constraints:

  FORWARD - data written by the new schema must be readable using previous versions of the schema under the given subject.
  BACKWARD - the new schema must be able to read data written using previous versions of schemas under the subject.
  FULL - both BACKWARD and FORWARD compatibility are enforced.
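
As a rough illustration of what BACKWARD means at the Avro level, here is a sketch using the plain Avro Java API rather than the Confluent serializers (the Person schemas mirror the example earlier in this thread, and the -1 default is made up for the illustration). The point is that the new schema can only read old data because the added field declares a default:

import java.io.ByteArrayOutputStream;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

public class BackwardCompatibilitySketch {

    // The schema historic data in the topic was written with.
    static final Schema OLD = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"Person\",\"fields\":["
        + "{\"name\":\"firstName\",\"type\":\"string\"},"
        + "{\"name\":\"lastName\",\"type\":\"string\"}]}");

    // The evolved schema: the added field declares a default, which is what
    // lets the new schema read old data (the BACKWARD guarantee).
    static final Schema NEW = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"Person\",\"fields\":["
        + "{\"name\":\"firstName\",\"type\":\"string\"},"
        + "{\"name\":\"lastName\",\"type\":\"string\"},"
        + "{\"name\":\"age\",\"type\":\"int\",\"default\":-1}]}");

    public static void main(String[] args) throws Exception {
        // A message written in the past with the old schema.
        GenericRecord rec = new GenericData.Record(OLD);
        rec.put("firstName", "Emiliano");
        rec.put("lastName", "Capoccia");
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder enc = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(OLD).write(rec, enc);
        enc.flush();

        // A consumer that has already moved to the NEW schema reads it: the
        // missing "age" field is filled in from the default.
        BinaryDecoder dec = DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
        GenericRecord seen = new GenericDatumReader<GenericRecord>(OLD, NEW).read(null, dec);
        System.out.println(seen); // {"firstName": "Emiliano", "lastName": "Capoccia", "age": -1}
    }
}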

More notes on that are in the schema registry documentation.


Cheers,
Geoff





--
Geoff Anderson | Software Engineer | Confluent | +1 612.968.7340
Download Apache Kafka and Confluent Platform: www.confluent.io/download

Emiliano Capoccia

Apr 22, 2016, 8:48:08 PM
to Confluent Platform
Hello,

thanks for your reply. Actually, I read the post by Mr Kleppmann carefully before writing my initial post! That's why I'm convinced that having the exact same schema that was used for writing is mandatory in Avro.

Quoting MK: "You need to have the exact same version of the schema as the writer of the data used. If you have the wrong schema, the parser will not be able to make head or tail of the binary data. Although you need to know the exact schema with which the data was written (the writer’s schema), that doesn’t have to be the same as the schema the consumer is expecting (the reader’s schema). You can actually give two different schemas to the Avro parser, and it uses resolution rules to translate data from the writer schema into the reader schema."

Now, I'm aware of the fact that the resolution rules will convert the message from the new schema to the old or vice versa. However, this does not save you from the obligation to have the exact schema used in serialization available when deserializing any message.

Loosely speaking, it boils down to the fact that PB tags each field, so it can interpret any field during deserialization for which a tag is present in the old schema, while Avro does not tag fields and, as such, does not know what to expect while deserializing; its only chance is knowing in advance what comes next. Hence the necessity of having the exact same schema.
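
To make the difference visible, here is the third message from my earlier example encoded both ways. This is hand-worked rather than dumped from a real run, and it assumes the proto fields were numbered 1, 2 and 3, so treat the exact bytes as an illustration only:

Protobuf:  0A 04 41 2E 4E 2E      12 05 4F 74 68 65 72      18 21
           (tag 1, len 4, "A.N.") (tag 2, len 5, "Other")   (tag 3, varint 33)

Avro:      08 41 2E 4E 2E         0A 4F 74 68 65 72         42
           (len 4, "A.N.")        (len 5, "Other")          (zigzag-encoded 33)

The Protobuf bytes carry the field numbers, so an old reader can simply skip the (tag 3, 33) pair it does not recognise; the Avro bytes are just the values in schema order, so a reader can only interpret them by knowing the writer's field layout.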

 At least, this is how I understand it. Do you agree?

Emiliano Capoccia

Apr 22, 2016, 9:20:38 PM
to Confluent Platform
Hello Geoff,

thanks for your reply, that was an interesting read.

It seems that we are mostly interested in forward compatibility. Looking at the API of the schema registry, I see it is essentially a RESTful service.
I've got no doubt that it works absolutely fine in some scenarios; I wonder whether it's the best architectural choice we can make in our particular scenario (message passing in Kafka, two distinct teams with distinct release dates and processes).

One detail that is not completely clear to me is where you store the Avro schema ID for any particular message in Kafka.
I was looking for something similar to message headers, but with no success; maybe you can advise whether there is a best practice.
As far as I can see, my only option is embedding the ID in the payload somehow, but this is a bit of a chicken-and-egg problem when it's time to deserialize, isn't it? I would need to be able to deserialize an unknown message using the correct schema in order to --> have the ID with which I can --> query the registry and obtain the schema with which... deserialize the message! OK, I can play around with wrapper classes etc., but I have this bad feeling I'm missing something :) Can you clarify on the subject?

However, pretending the above is somehow resolved, it looks like maintaining an extra service just to cope with schema evolution is overkill for us, but I'm keen to hear your opinion on whether there are other benefits of adopting Avro compared to PB which I'm overlooking at present. I was pretty much focused on schema evolution and on trying to avoid having to maintain an extra service, but I'm sure the picture is bigger.

Thank you again for your answer, I look forward to hearing from you.

Emiliano

Emiliano Capoccia

Apr 22, 2016, 9:30:04 PM
to Confluent Platform
PS

just stumbled on an SO post, also by Mr Kleppmann, which seems to match my understanding.


Emiliano

Félix GV

Apr 22, 2016, 10:04:50 PM
to Confluent Platform
Hi Emiliano,

You are right that the standalone schema registry is an extra moving part for which you need to guarantee a certain uptime. It may not need 100% uptime though, since clients (consumers and producers) would cache whatever they get out of the SR. Since the SR data is immutable, it is dead simple to cache, without any need for invalidation or anything like that. This means transient SR downtime, whether planned or unplanned, is not that big of a deal.

The other aspect is that of enforcement. In order to use this architecture, Kafka messages MUST be prefixed with a schema ID, so it is actually impossible to write valid data into Kafka without first registering with the SR (done once at producer start up time). With the PB approach, there is no such safety net to prevent someone from pushing incompatible data into the wrong topic and so forth.
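
If I remember the framing correctly, it is a single magic byte, then a 4-byte schema ID, then the Avro-encoded payload, so the consumer-side unwrapping is only a few lines. In the sketch below, fetchSchemaFromRegistry is a made-up placeholder for a cached lookup against the registry (e.g. its GET /schemas/ids/{id} endpoint), not a real client method:

import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.DecoderFactory;

public class SchemaIdFraming {

    // Placeholder: a real consumer would call the schema registry here
    // (e.g. GET /schemas/ids/{id}) and cache the result keyed by the ID.
    static Schema fetchSchemaFromRegistry(int schemaId) {
        throw new UnsupportedOperationException("wire up a registry client here");
    }

    static GenericRecord decode(byte[] kafkaValue) throws IOException {
        ByteBuffer buf = ByteBuffer.wrap(kafkaValue);
        byte magic = buf.get();        // framing byte (0 for this format)
        int schemaId = buf.getInt();   // 4-byte big-endian schema ID
        if (magic != 0) {
            throw new IllegalArgumentException("not a registry-framed message");
        }
        Schema writerSchema = fetchSchemaFromRegistry(schemaId);
        byte[] avroPayload = new byte[buf.remaining()];
        buf.get(avroPayload);          // the rest is the plain Avro binary payload
        return new GenericDatumReader<GenericRecord>(writerSchema)
                .read(null, DecoderFactory.get().binaryDecoder(avroPayload, null));
    }
}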

Ultimately, these may or may not be minor considerations. It depends on your expectations in terms of manageability, both technical and organizational. Ultimately, I think you could say that the promise of the SR approach is that you would be doing the tradeoff of adding a little bit of technical complexity (maintaining the extra process) in exchange for eliminating a lot of organizational complexity (since compatibility would be enforced by a central authority, which guards against both willful and unintentional negligence).

-F

Andrew Stevenson

Apr 23, 2016, 3:33:23 AM
to Confluent Platform
Avro supports schema evolution, allowing readers to be on different versions; this enables schema projection. However, you have to set up the schema correctly on the writer side. You should declare the type as a union of null and the data type, [null, datatype], and set a default. This allows the reader to ignore fields and fall back to a default if required, as in the fragment below.
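
For the age field from the example earlier in this thread, that would look something like this in the Avro schema JSON:

{ "name": "age", "type": ["null", "int"], "default": null }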

However you do need to get a schema from somewhere such as the schema registry. For Flume I used to place this in the event header.

See
http://kitesdk.org/docs/1.0.0/Schema-Evolution.html

https://docs.oracle.com/cd/NOSQL/html/GettingStartedGuide/schemaevolution.html


Bo Liu

Apr 23, 2016, 6:42:50 AM
to Confluent Platform
wow~, my first ever post to this group got cited!! :)

Our project needs BACKWARD compatibility, so we intend to use the latest schema wherever possible. I think schema evolution is more about BACKWARD compatibility than about the other two levels, and that's maybe the reason why the default compatibility level of the SR is BACKWARD.

I've used Thrift, PB and Avro on different projects, out of various considerations or because I had to... In terms of schema evolution, I actually favor the tagged approach: you only use one schema, which is more friendly and "logical" to a programmer: I have a schema for my message. But the order of the fields needs to be treated carefully. Most of the time, schema evolution is done by adding optional fields.

We chose Avro because of the flexibility of dynamic typing; this suits our current project perfectly and outweighs other serialization concerns such as speed, size and schema evolution. A schema center is very important; we actually planned to implement our own before learning about the Confluent SR. I have just validated that the SR suits our requirements well for schema evolution and plan to integrate it with our system.

Our considerations and choice are described above. Hope it helps.

Petr Novak

Apr 23, 2016, 10:34:51 AM
to Confluent Platform
Hi Emiliano,
the scenario you previously described will work in Avro without the consumer having to know that the publisher extended the schema, in other words without having to know the exact writer schema. This basic type of evolution - adding fields, i.e. the old schema can read what it needs from data written with the new schema (as far as it is compatible) - is handled by Avro without a SR. You can use the { firstName: String, lastName: String } schema to deserialize { firstName: String, lastName: String, age: int }. The first schema is a subset of the second, and the consumer schema's fields keep the correct order. Hence you can't rule out Avro based only on your example.

The order of fields in the schema has to be kept. E.g. you can't use { firstName: String, lastName: String } to deserialize { age: int, firstName: String, lastName: String }. You would get an EOFException. In this case you would have to update your schema.

It doesn't make sense to change the field order when evolving a producer schema, but in practice it might be risky because of human error. Somebody can have this great idea at any time, and it might be hard to spot. A SR solves this problem. PB is not sensitive to it; the order in the definition can change as long as the tags don't, and the tag IDs further suggest the order.

I have tried it in actual code just now; see the sketch below. I can't confirm whether complete FORWARD compatibility needs something more, though.
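
Roughly what I tried (plain Apache Avro Java API, no registry; the schemas and values are just the ones from the example in this thread):

import java.io.ByteArrayOutputStream;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

public class OldSchemaReadsNewData {

    public static void main(String[] args) throws Exception {
        Schema v1 = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Person\",\"fields\":["
            + "{\"name\":\"firstName\",\"type\":\"string\"},"
            + "{\"name\":\"lastName\",\"type\":\"string\"}]}");
        Schema v2 = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Person\",\"fields\":["
            + "{\"name\":\"firstName\",\"type\":\"string\"},"
            + "{\"name\":\"lastName\",\"type\":\"string\"},"
            + "{\"name\":\"age\",\"type\":\"int\"}]}");

        // Write with the extended schema v2.
        GenericRecord rec = new GenericData.Record(v2);
        rec.put("firstName", "A.N.");
        rec.put("lastName", "Other");
        rec.put("age", 33);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder enc = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(v2).write(rec, enc);
        enc.flush();

        // Read with v1 only, i.e. without the writer schema. It works in this
        // case because the new field was appended at the end: its trailing
        // bytes are simply never read. Nothing verifies them, which is why I
        // can't say whether this is guaranteed behaviour.
        BinaryDecoder dec = DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
        GenericRecord seen = new GenericDatumReader<GenericRecord>(v1).read(null, dec);
        System.out.println(seen); // {"firstName": "A.N.", "lastName": "Other"}
    }
}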

With Regards,
Petr

Emiliano Capoccia

Apr 23, 2016, 10:37:05 AM
to Confluent Platform
Thanks Felix, Bo, Andrew for your replies,
I think your considerations make a lot of sense; it looks like in our scenario Protobuf is viable, but so is Avro, at a slightly higher cost in terms of initial setup and maintenance burden.
In particular, at the infrastructure level both the producer and the consumer need access to the same registry, in a sort of "happens-before" relationship when evolving the schema, which can be difficult / unviable in some infrastructures (not ours, we are fine with it).
Thanks for your suggestions, gentlemen.
Regards
Emiliano


Emiliano Capoccia

Apr 23, 2016, 12:28:09 PM
to Confluent Platform
Thanks Petr, I see your point, and I believe this is backward compatible only because it's a particularly simple case (appending a new field).
In the general case, I wonder how Avro can manage modifications to the schema, such as the suppression of optional fields in the middle of the schema, without having the writer schema.
There seems to be a general consensus on the fact that the writer schema is mandatory when deserializing.
Thanks for your feedback,
Emiliano


Félix GV

Apr 23, 2016, 1:54:36 PM
to Confluent Platform
Yes, the writer's schema is always necessary for reading in Avro.

Both PB and Avro have their own quirks in terms of what's possible or not when evolving schemas. Avro has somewhat tricky semantics in terms of unions and optionality. PB on the other hand doesn't support re-ordering fields, and while it does support renaming fields, this can potentially result in complicated semantics if a developer assumes they can both rename and re-order fields.

All in all, whatever serialization tech you choose, you need to minimally know what you're doing. The SR approach can alleviate some of the pitfalls by offering automated safeguards, but that may be overkill for small orgs. For example, if the producer and consumer "teams" are two people sitting at each end of a couch, there is hopefully an adequate level of understanding between the two that doesn't warrant a SR. If the org is large, long-lived, with many multi-person teams, and with lots of new hires and departures on an ongoing basis, then automated checks can be life savers.

Best of luck!

-F

Petr Novak

Apr 24, 2016, 4:21:28 PM
to Confluent Platform
Hi Emiliano,
You are right that for anything non-trivial a SR is required. In practice it means that a SR is mandatory for anything beyond storing data files. I don't even know whether the behaviour I have described is defined in the Avro spec. Maybe it is not, and the exact writer schema is always expected, in which case the behaviour could change at any time. If anybody knows whether it is actually guaranteed behaviour, I would like to know.

With Regards,
Petr

Emiliano Capoccia

Apr 25, 2016, 4:29:02 AM
to Confluent Platform
Hello,

@Petr, I agree, that's what I was also thinking. It might work, but with no guarantee. Thanks a lot for your comments.
@Felix, thanks a lot for your comments; I think your position makes a lot of sense, and those are the right considerations upon which to choose one approach or the other.

Thanks everybody for the comments, that was a quite useful discussion.

Best
Emiliano


Andrew Otto

Apr 25, 2016, 12:51:46 PM
to confluent...@googlegroups.com
I'm following this thread closely, and just wanted to add that Wikimedia considered the whole Confluent platform, but decided against using it mostly because of this problem. We want schema evolution, but the need to run a SR and to have the writer's schema at read time made things difficult. In addition, all producers and consumers need to know how to write and read the Avro schema ID from the binary message in Kafka. We use many different languages at WMF, most of which are not JVM-based. Having to code custom clients for each of these just to be able to read from Kafka wasn't really feasible. We didn't evaluate other binary serialization formats, and instead decided to stick with JSON Schema and do a lot of CI up front to make sure folks don't make incompatible schema changes. This means that we can only ever add fields to schemas. :/




Félix GV

Apr 25, 2016, 2:21:02 PM
to confluent...@googlegroups.com
Avro has bindings for many languages though, doesn't it? And Confluent's REST Proxy should cover most of the rest. Of course, the proxy probably won't enable super-high-throughput use cases but, arguably, you won't get high throughput anyway unless you run the native (JVM) Kafka client, so performance may be a moot point in the interop discussion.

Of course, JSON is not an unreasonable choice either, but I do wonder if all the CI work you need to mandate is a cost-effective way of ensuring stability. Again, I'm sure it depends on organization size, so perhaps a better question may be "at which org sizes do each of the strategies thrive most?"

Interesting thread (:

Andrew Otto

Apr 25, 2016, 3:56:54 PM
to confluent...@googlegroups.com
​> Avro has bindings for many languages though, doesn’t it? 
It does, but the lack of Avro support isn't the problem. In order to use Avro with Kafka, you need a way to associate each message with a particular Avro schema. This is usually done by embedding a binary integer at the beginning of the Kafka message. Producers need to know how to write this integer; consumers need to know how to extract it and then use it to look up the schema in a SR somewhere. Even though it's not difficult, every client in every language would need an implementation that does this.


dk.h...@gmail.com

Jun 21, 2016, 11:30:12 AM
to Confluent Platform

May I ask a silly question here? Why not send the Avro JSON schema with each and every message? Messages would be slightly bigger, so what? A proper schema ID would also allow caching the schema on the consumer side; in other words, there is no need to re-parse it if the ID is the same as that of the previously parsed schema. The result!!!??? You can dump the Schema Registry completely!!!
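
The consumer-side caching would only be a few lines. A sketch, assuming each message carries a schema ID plus the schema JSON; the class and method names below are made up for illustration, they don't come from any library:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.avro.Schema;

public class PerMessageSchemaCache {

    // Assumed framing for this idea: every Kafka message carries
    // (schemaId, schemaJson, avroPayload). Nothing mandates this layout; it is
    // just what the proposal above would need.
    private final Map<Integer, Schema> cache = new ConcurrentHashMap<>();

    public Schema resolve(int schemaId, String schemaJson) {
        // Parse the JSON only the first time a given ID shows up; after that
        // the ID alone is enough, so the per-message parsing cost disappears.
        return cache.computeIfAbsent(schemaId, id -> new Schema.Parser().parse(schemaJson));
    }
}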

Roger Hoover

Jun 21, 2016, 5:13:21 PM
to confluent...@googlegroups.com
I don't think it's a bad idea unless you really need to optimize message size. There was a binary schema format proposed for Avro schemas, but it was never merged, and the last time I tried I couldn't get it to work.



Gwen Shapira

Jun 21, 2016, 7:21:39 PM
to confluent...@googlegroups.com
In many cases the schema could be significantly larger than the message itself, and this impacts storage, network utilization and everything else.

Félix GV

Jun 21, 2016, 8:48:34 PM
to confluent...@googlegroups.com
Although if you always send messages in batches, then presumably it compresses fairly well.

It's definitely very inefficient in small batches / low latency use cases, however.

-F

dk.h...@gmail.com

Jun 22, 2016, 10:40:49 AM
to Confluent Platform
> It's definitely very inefficient in small batches / low latency use cases

Unless you are a high-frequency trading firm, the latency impact should really not matter. We are talking about extra nanoseconds here, not even microseconds.

> In many cases the schema could be significantly larger than the message itself, and this impacts storage, network utilization and everything else.

Again, the impact is very, very small compared to the drawbacks of having the schema registry as an additional moving part, and *most importantly* sending the schema along with each and every message completely solves the very valid problem described by the OP.


Félix GV

Jun 22, 2016, 12:48:30 PM
to Confluent Platform
I think you may be underestimating the impact of storing the schema alongside every payload. If you use Avro documentation fields (which is a good practice), it's reasonable to have a schema size of ~1KB to store a payload containing just a handful of integers, which would end up taking ~10B in Avro's efficient binary encoding. That is a 100x increase in LAN bandwidth (paid once on the producer side and one or many times on the consumer side) as well as a 100x increase on the storage side.

Now, like I said, if your producer is configured to do compressed batches with a long linger time, the schema size could be amortized over many records and it's probably fine. But if you need to have a short linger time, then you're out of luck.

If you don't use Avro doc fields, then maybe your overhead is only ~10x rather than ~100x. Maybe that's acceptable for you, but I'd be wary of making blanket generalizations that one or two orders of magnitude of difference is OK for everyone.

Besides, the other disadvantages of putting the JSON-encoded schema alongside the payload are:

1. Your consumers need to parse that json on every message, which essentially means schema definitions must be re-understood on every message, rather than being cached efficiently. This is a waste of CPU cycles and causes GC pressure. Totally fine at low throughputs, but probably not acceptable for high-throughput use cases.

2. A standalone schema registry service provides an upfront guarantee that schemas are compatible, before messages are produced into a topic. The schema-with-payload approach defers the failure to some non-deterministic time later in the downstream consumers.

It's all about tradeoffs. It is true that an extra piece of infrastructure is a hassle, and if you have a 2-node Kafka cluster, then certainly it looks like a big investment to have one or more extra processes just for schemas. If you're running 10 or 100 or more Kafka brokers, then the schema registry processes are a drop in the bucket in terms of maintenance overhead, and the benefits they provide in terms of stability, predictability and agility are probably worth it. It's all about scale (both technical and organizational).

Happy tradeoffs (:

-F