KTable materialization and compaction


Alexander Jipa

Jul 28, 2016, 5:23:48 PM
to Confluent Platform
Hello,
I was trying to build an example of stateful foreach() processing that restores its state from the local RocksDB database.
At first I spent quite some time trying to figure out why table().to() was not creating any backing RocksDB store.
As far as I was able to find, it's an upcoming feature of Kafka Streams, because in Confluent 3.0.0 the local store is only created for some joins.
So I tweaked the classes locally, and now the table is at least materialized.

Next I tried to make sure the latest state would be replayed through my foreach() processor, but I wasn't able to do that until I set AUTO_OFFSET_RESET_CONFIG to "earliest".
But here I faced two problems:
- there is no cleanup.policy set for the table topic, and it looks like I can only set it manually
- I want the latest state to be replayed from the local RocksDB store, and then to subscribe to the topic starting from the latest offset for my consumer group

Which raises the following questions:
- how can one use the compaction feature for table topics in Kafka Streams?
- how can one make the table subscribe from the latest offset for the application's consumer group?
- how can one make sure the latest state from the local RocksDB store is replayed before it subscribes from the latest offset for the application's consumer group? (Otherwise, having local RocksDB state seems redundant for the table().to() scenario; only aggregates need it to get the previous value.)

Thank you!

Eno Thereska

Jul 29, 2016, 12:48:31 PM
to Confluent Platform
Hi Alexander,

Good questions. KTables are now materialised in the trunk branch of Kafka. Would you be willing to try that to see if it answers some of your questions?

For the specific questions:
- the topic you subscribe to for building KTables should be a compacted topic, so make sure you create it as compacted in the first place. Any internal topics created through the DSL to back KTables will automatically be created as compacted topics, so you don't have to do anything there.
- the cleanup policy for a KTable's topic can only be "compact". In a subsequent release we'll expose the option to set a retention policy as well, but for now it is only "compact".
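To illustrate why compaction is the right policy here: a compacted topic retains (at least) the latest record per key, with a null value acting as a delete marker ("tombstone"), so replaying it from the beginning reproduces exactly the table's latest state. A rough sketch in plain Java (no Kafka dependencies, all names invented for illustration):

```java
import java.util.*;

public class CompactionSketch {
    // Each record is a {key, value} pair; value == null is a tombstone.
    // Compaction keeps only the latest record per key, so replaying a
    // compacted topic from the beginning rebuilds the latest table state.
    static Map<String, String> replay(List<String[]> records) {
        Map<String, String> state = new LinkedHashMap<>();
        for (String[] r : records) {
            if (r[1] == null) state.remove(r[0]);  // tombstone deletes the key
            else state.put(r[0], r[1]);            // newer value wins
        }
        return state;
    }

    public static void main(String[] args) {
        List<String[]> log = Arrays.asList(
                new String[]{"a", "1"},
                new String[]{"b", "2"},
                new String[]{"a", "3"},    // overwrites a=1
                new String[]{"b", null});  // deletes b
        System.out.println(replay(log)); // prints {a=3}
    }
}
```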

I didn't quite understand the last question on latest state and RocksDB, perhaps you can elaborate further with an example?

Thanks
Eno

Alexander Jipa

Jul 29, 2016, 4:13:22 PM
to Confluent Platform
Hello Eno,
Thanks for the reply.
I'll try the trunk version to check if materialization works.

You mentioned that internal topics will be created as compacted.
Can you please specify in which version this was introduced?
The 3.0.0 release I'm using does not mark them as compacted (kafka-topics does not show this in the Config section).
One more Kafka question: does log.cleaner.enable=true have to be set for compaction to actually work?

My last question was about the following situation:
The consumer application has caught up to a certain offset and the latest state is now in RocksDB.
Now I restart the application (with the same application id): the offset is restored and I'll be getting new data.
But how can I make sure the local state is passed through table().foreach()?
Currently it only works for new data, which makes the RocksDB local state useless...
Maybe I'm doing something wrong?

---
Cheers,
Alex

Eno Thereska

Jul 29, 2016, 5:14:00 PM
to Confluent Platform
Hi Alexander,

The JIRA for the compacted topics is at https://issues.apache.org/jira/browse/KAFKA-3504 and it was merged in April. It is in Kafka 0.10.0 and in CP 3.0.0. I should have qualified: it's for all internal changelog topics that back KTables. There could be other internal topics for which compaction is not enabled.

Yes, log.cleaner.enable needs to be set to true.

For the last question, a small code example would help so I can see how you are creating the table. Any chance you can send the code that constructs your stream?

Thanks
Eno

Alexander Jipa

Jul 29, 2016, 5:45:25 PM
to Confluent Platform
Hi Eno,
Sure, here's the snippet:

KStreamBuilder builder = new KStreamBuilder();
final Serde<SchemaAndValue> customValueSerde = new CustomSerde();
customValueSerde.configure(singletonMap(SCHEMA_REGISTRY_URL_CONFIG, SCHEMA_REGISTRY_URL), false);
builder
    .table(new Serdes.StringSerde(), customValueSerde, topic)
    .foreach((String key, SchemaAndValue value) -> {
        if (value == null || value.value() == null) {
            onRemoveRecord(key); // a null value is a tombstone: the key was deleted
            return;
        }
        onChangedRecord(key, value.schema(), (Struct) value.value());
    });

KafkaStreams streams = new KafkaStreams(builder, kafkaProps);
streams.start();

I want this stream to recreate an internal index/cache every time I restart the application.
At first it starts from the "earliest" offset and populates a materialized RocksDB store.
On the next start I want it to continue from the last offset for its application.id, but before that I'd like foreach() to be triggered for all the records in the RocksDB state so that my index/cache has the latest state.

I believe there should be a way of doing this - otherwise there's no point in having a RocksDB store when the stream has no aggregation (which is what needs the previous value).

Eno Thereska

Jul 29, 2016, 6:29:28 PM
to Confluent Platform
Hi Alexander,

I believe the right thing will happen out of the box. It is true that the KTable is now materialized with a RocksDB store (in Kafka trunk only, not 0.10.0), however that RocksDB store is backed by the "same" topic that you pass in as a parameter to .table(..., topic). Hence, it's not 2 topics at the end of the day, it's just one topic. Source KTables are special in that way: they recycle the same topic they were created from (instead of creating another topic and copying things from the first topic to the second one).

If you are using 0.10.0, on the other hand, the KTable is not materialized at all and there is no RocksDB store to worry about.

Thanks
Eno

Alexander Jipa

Jul 29, 2016, 9:54:10 PM
to Confluent Platform
Hi Eno,
Looks like I understood the concept correctly.
So after a restart of the application, foreach will rerun from the RocksDB state and then get new messages after the persisted offset?
Is this correct?

Btw, I tried materializing with 0.10.0 - it didn't work that way. There wasn't even code to traverse the state via an iterator (i.e., store.all())...
I'll check the trunk and let you know if it works.
Thanks!

---
Cheers,
Alexander

Matthias J. Sax

Jul 31, 2016, 5:19:38 PM
to confluent...@googlegroups.com
Hi,

I just want to add something (even at the risk that this is already clear -- I'm just not sure).

If you have a KTable, foreach() will be called for each update to the KTable. Thus, on restart, while the internal KTable state is being created (i.e., before actual processing begins), there will be no calls to foreach(). Only after the initialization phase, when new updates to the KTable happen, will foreach() be called.

Furthermore, coming back to your original question about building an index: if your index should survive an application restart, you need to use a stateful transformation (i.e., register an additional user state within your application), where the state holds your index. Then, on restart, the latest index state will be recreated before any new data is consumed.
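A rough way to picture the difference (plain Java, no Kafka dependencies; all names are made up for illustration):

```java
import java.util.*;

public class IndexRebuildSketch {
    // On restart, Kafka Streams restores registered state stores from their
    // changelog topics *before* processing resumes -- but it does not replay
    // the restored records through foreach(). If the index itself lives in a
    // registered store, it comes back automatically; if it lives outside
    // (like an in-memory cache), it has to be re-seeded by hand.
    static Set<String> rebuildIndex(Map<String, String> restoredStore) {
        Set<String> index = new TreeSet<>();
        // Seed the index from the restored store -- the manual step that
        // foreach() will NOT perform for you on restart...
        index.addAll(restoredStore.keySet());
        return index;
    }

    public static void main(String[] args) {
        Map<String, String> restored = new HashMap<>();
        restored.put("user-1", "alice");
        restored.put("user-2", "bob");
        Set<String> index = rebuildIndex(restored);
        // ...while only records arriving after this point flow through foreach().
        index.add("user-3");
        System.out.println(index); // prints [user-1, user-2, user-3]
    }
}
```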

-Matthias

Alexander Jipa

Jul 31, 2016, 11:31:50 PM
to Confluent Platform
Hi,
Thanks for the clarification.
So I understood it wrong, and foreach does not work the way I expected.
It's not really a "for each" but rather a "for each update".
It looks like the description in the KTable javadoc is misleading, because the action won't be called for every item in the KTable, but rather for every new one.

Where can I read about the custom store that you've described?
Is there any function that will work the way I've described without a custom store?
It seems like a useful function.
Also, why would one then want a KTable materialized in a table().to() scenario?
What is it good for?

Thanks!

---
Cheers,
Alexander

Michael Noll

Aug 2, 2016, 9:25:54 AM
to confluent...@googlegroups.com
Alexander,

let me ask a question to better understand your original motivation for the use case you're talking about.

You wrote:
> I want this stream to recreate an internal index/cache every time I restart the application.
> At first it start from "earliest" offset and populate a materialized RocksDB store.
> On next start I want it to continue from the last offset for its application.id but before that
> I'd like foreach() to be triggered for all the records from the RocksDB state so that my
> index/cache has the latest state.

If I understand correctly, I think what you're doing is, essentially, implementing your own state management layer -- but then also kind of expecting Kafka Streams to help with that.

I think you have basically two options right now:

1. Stick to the Kafka Streams DSL but stop using foreach.  The question here is: can you re-model your Kafka Streams application so that, for example, you're using the built-in DSL operations to create/update the state you're interested in?  If you did that, then Kafka Streams would guarantee that the state is always valid, up to date, and (if need be) fully reconstructed (e.g. after a machine crash).

2. Alternatively, stop using the DSL and switch to the low-level Processor API.  Here, you'd use the low-level Processor API of Kafka Streams to directly interact with your state stores.  This would allow you to ensure that the interaction with your state stores can be properly tracked/logged to Kafka changelog topics.

Background:

- About `foreach()` in the DSL: `foreach()` is a black box as far as Kafka Streams is concerned.  Kafka Streams does not know what you're doing in that function -- e.g. you could be talking to an external database, and for Kafka Streams this interaction would literally be considered as happening "off the record" [no pun intended] -- Kafka Streams / Kafka would never be aware of that.  This implies that in such a scenario Kafka Streams would therefore not be able to perform its (full) recovery magic for you.

- About recovery and state management: Kafka Streams' auto-recovery works only as long as any state related information is properly captured in internal changelog topics.  A) If you use the DSL, then its stateful operations (such as joins, reduce) will automatically ensure that any state changes will be tracked in changelog topics, and thus can be correctly reconstructed on restart/resume.  But this won't happen when you call `foreach()` in the DSL!  B) If you aren't able to use the DSL's stateful operations as mentioned in option 1 above, then you should perhaps go with option 2 and switch to the Processor API where you can manually do your state management.
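To make the changelog idea concrete, here is a toy sketch (plain Java, no Kafka APIs; everything is heavily simplified and the names are invented):

```java
import java.util.*;

public class ChangelogSketch {
    // A state store whose every put is also appended to a changelog (in real
    // Kafka Streams, a compacted changelog topic). After a crash, replaying
    // the changelog reconstructs the store -- which is exactly the recovery
    // that a side effect hidden inside foreach() would miss.
    final Map<String, String> store = new HashMap<>();
    final List<String[]> changelog;

    ChangelogSketch(List<String[]> changelog) { this.changelog = changelog; }

    void put(String key, String value) {
        store.put(key, value);
        changelog.add(new String[]{key, value}); // tracked, hence recoverable
    }

    static ChangelogSketch restore(List<String[]> changelog) {
        ChangelogSketch fresh = new ChangelogSketch(changelog);
        for (String[] r : changelog) fresh.store.put(r[0], r[1]); // replay
        return fresh;
    }

    public static void main(String[] args) {
        List<String[]> log = new ArrayList<>();
        ChangelogSketch s = new ChangelogSketch(log);
        s.put("k", "v1");
        s.put("k", "v2");
        // Simulate crash + restart: rebuild purely from the changelog.
        ChangelogSketch recovered = restore(log);
        System.out.println(recovered.store); // prints {k=v2}
    }
}
```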

Hope this helps,
Michael




--
Michael G. Noll | Product Manager | Confluent | +1 650.453.5860
Download Apache Kafka and Confluent Platform: www.confluent.io/download

Matthias J. Sax

Aug 2, 2016, 11:04:40 AM
to confluent...@googlegroups.com
Hi,


> It's not really a foreach but rather a foreachupdate.

I think the name foreach() is OK, because foreach() is called for each record -- however, those calls are spread over all runs (i.e., start-stop-resume, etc.) of your application.

> Where can I read about custom store that you've described?

For documentation, have a look at the Confluent docs:
http://docs.confluent.io/3.0.0/streams/developer-guide.html#stateful-transformations

and

http://docs.confluent.io/3.0.0/streams/developer-guide.html#processor-api

You can actually mix and match the DSL and the low-level API by using process(), transform(), or transformValues() on a KStream.


> Plus why would then one want to have a KTable materialized in a
> table().to() scenario?

In the next release (CP 3.1) we introduce a feature called "Queryable State" that allows for in-application KTable lookups (by key). For this, KTables need to be materialized in RocksDB.
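Conceptually, queryable state exposes the materialized table for reads: point lookups by key, plus iteration over all entries. A toy sketch of the idea (plain Java, invented names; the real API that shipped later looks different):

```java
import java.util.*;

public class QueryableStateSketch {
    // A materialized KTable is essentially a key -> latest-value store.
    // "Queryable State" gives the application read access to it, which also
    // answers the wish to traverse the latest state rather than only
    // observe new updates.
    static String get(Map<String, String> materialized, String key) {
        return materialized.get(key); // point lookup, like a store get(key)
    }

    static List<String> all(Map<String, String> materialized) {
        List<String> entries = new ArrayList<>(); // full scan, like a store all()
        for (Map.Entry<String, String> e : materialized.entrySet())
            entries.add(e.getKey() + "=" + e.getValue());
        Collections.sort(entries);
        return entries;
    }

    public static void main(String[] args) {
        Map<String, String> table = new HashMap<>();
        table.put("a", "1");
        table.put("b", "2");
        System.out.println(get(table, "a")); // prints 1
        System.out.println(all(table));      // prints [a=1, b=2]
    }
}
```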


-Matthias

Alexander Jipa

Aug 2, 2016, 11:40:06 AM
to Confluent Platform
Hello,
My custom state management is the same as a KTable's - keeping the latest state.
Which is why it's confusing that I can't simply query/iterate it, e.g. via KeyValueStore.all().

I tried solution 1 using the tutorial http://codingjunkie.net/kafka-processor-part1/ and it appears to work.
But the resulting solution looks as if I've reinvented KTable - the same get and put calls, then a commit when done.

I don't seem to need the non-DSL solution while 1 works.

As for foreach - it doesn't seem to fulfill its contract the way I read it:
/**
* Perform an action on each element of {@link KTable}.
* Note that this is a terminal operation that returns void.
*
* @param action an action to perform on each element
*/
In reality the action is performed only on elements updated/removed after the latest caught-up offset.

Matthias mentions that KTable state will soon be exposed for lookup by key, which raises the question of whether it will be possible to traverse it somehow - that would be exactly what I need.


Matthias J. Sax

Aug 2, 2016, 11:44:26 AM
to confluent...@googlegroups.com
Have a look at KIP-67 (Queryable State):
https://cwiki.apache.org/confluence/display/KAFKA/KIP-67%3A+Queryable+state+for+Kafka+Streams

-Matthias

Alexander Jipa

Aug 3, 2016, 12:47:07 PM
to Confluent Platform
Hi,
Thanks, that's exactly what I've been looking for!
Where can I get the latest code for the feature?

---
Cheers,
Alexander

Matthias J. Sax

Aug 3, 2016, 4:30:43 PM
to confluent...@googlegroups.com
Some parts are already merged into the trunk branch at
https://github.com/apache/kafka

Watch https://issues.apache.org/jira/browse/KAFKA-3909 for further progress.


-Matthias

Michael Noll

Aug 4, 2016, 5:18:58 AM
to confluent...@googlegroups.com
Alexander,

happy to hear that queryable state will help you solve your use case. :-)

As Matthias said, we're currently finishing the last few tasks for the new feature.





