KSQL app dev experience and prototype for improvements


Tim Fox

Nov 15, 2019, 5:07:31 AM
to ksql-dev

Hey folks,


Long post, sorry!


The past week or so I’ve been playing around trying to put together a “real-world”(ish) app that uses KSQL in order to get a better idea of where our strengths and weaknesses are. The idea being - we need to make it super fun and easy for devs to write apps with KSQL in order to drive adoption.


The app I’ve chosen to build models a simple retail business using event sourcing principles. It contains various topics for things like line_items, shopping basket events (add/remove), orders, and warehouse stock events (add/remove). There will be various aggregations that provide materialized views for things like current shopping basket states and current warehouse stock.


The app will allow the user to view the current catalogue, add/remove items to their basket, and place orders. There will be pages to show the current basket state for a user (pull query) and current warehouse stock (pull query), and other pages showing live-updating reports (push query to the browser) for things like total order value.
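
To make this concrete, the kind of KSQL the app leans on looks roughly like the following (a sketch only - the stream, table and column names are illustrative, not the exact ones from the app):

CREATE STREAM basket_events (user_id VARCHAR, item_id VARCHAR, quantity INT)
  WITH (kafka_topic='basket_events', value_format='JSON');

-- materialized view of current basket contents
CREATE TABLE basket_states AS
  SELECT user_id, item_id, SUM(quantity) AS quantity
  FROM basket_events
  GROUP BY user_id, item_id;

-- the pull query the basket page wants to run
SELECT * FROM basket_states WHERE user_id = 'user-123';

-- a push query feeding the live reports page (assuming an orders stream)
SELECT * FROM orders EMIT CHANGES;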


I’ve spent some time trying to build such an app using KSQL but I've hit some stumbling blocks, including:


  • To get the shopping basket state for a user I need to execute a pull query of the form “SELECT * FROM BASKET_STATES WHERE USER_ID=?” - this is currently not possible, as we don’t support pull queries that return multiple rows and have a WHERE clause on a non-key field. To work around this I have had to set up Connect to dump the aggregate table state into a JDBC table and read from that directly in the app using a JDBC client. It took me about a day to get Connect working properly (figuring out the configuration), and the app needs to use the JDBC client directly, adding another layer of complexity.

  • Chunked-response pull queries are not easily usable in the app. The chunks do not correspond to whole rows, so some tricky parsing is needed in the app to re-assemble the chunks into rows before they can be handled. It's not reasonable to expect the app developer to do this.

  • The websocket pull query endpoint works, but is not documented, so we can’t expect users to use that currently. This means that right now there is no reasonable way for KSQL users to get pull query results in their apps.

  • The app needs to send messages to Kafka topics, e.g. to place an order, or to represent shopping basket events. This means the app also needs to use the Kafka client directly.


There are also a bunch of other more minor usability issues I have come across along the way (I won’t list those here).

In order to create a meaningful app the user currently has to use up to 4 different clients (JDBC, Kafka, HTTP (for the REST API), and possibly a WebSocket client for the ws endpoint). That's really confusing and hard to set up for the app developer. I think we need to provide a KSQL client to improve this.


Also, streaming over the current HTTP/REST or (undocumented) websocket API doesn't provide an effective solution for app developers. We need a better way of doing this that separates out the stream-oriented operations (push queries, pull queries, inserts) and implements them in a way more suitable for high-throughput streaming.


My next step is to create a prototype. The prototype will:


  • Flesh out the beginning of a KSQL client (initially in Java). The new client will allow:
    * Inserting messages into streams
    * Pull queries
    * Push queries
    The idea is for the app developer to be able to do everything they need to build an awesome streaming app using one client.

  • New server-side streaming API - this will provide a simple binary protocol (most likely over WebSockets and/or TCP) for executing pull or push queries, streaming their results, and handling inserts. These kinds of operations are inherently stream oriented and not well suited to an HTTP/REST API.

  • I will try and hack together an implementation of pull queries supporting non-key selects, probably by creating a new KS state store implementation which allows a third-party relational DB to be used as the state store. I can use something simple and embedded like Apache Derby for the prototype or for a quick out-of-the-box experience, but we could make this configurable by the user. This will be a hack initially.



I think if we tackle the above then it will really open up KSQL to a lot more real-world use cases, and make the app development experience really great.


Vinoth Chandar

Nov 15, 2019, 12:19:57 PM
to Tim Fox, ksql-dev
Thanks for taking the time to do this. This "end-user" perspective is essential atm!

On the stumbles 

- On non-key field queries, as you might guess this needs secondary indexing. RocksDB does not offer secondary indexes atm. We could think about some form of local secondary indexing by leveraging transaction support and double-writing to two tables, but all this needs a lot of design and implementation across Kafka Streams + KSQL. For now, could we create another "table" off BASKET_STATES, aggregating by user, where we store a set/list of items keyed by user id? I know this won't be consistent with BASKET_STATES all the time, but a lot of NoSQL stores offer eventually consistent global secondary indexes and users seem to find that at least a workable solution?
- +1. This was my core concern as well, from our internal discussions. I think it got lost in translation. Even if we return full JSON objects out of push queries now, it's not streaming anymore, i.e. each chunk is not consumable by itself (I thought that was the original intent with that API). We need to get our client story straight IMHO.
- Sorry, maybe I am missing something. Can't the app do an INSERT statement via KSQL instead of talking to Kafka directly? +1 on abstracting Kafka away from the user.
- On separating pull queries, inserts, and push queries, again +1 - they are very different, and the clean way is to have different resources for them at the server. Everything from thread pools to connection management is different.

My only suggestion for the prototype would be: can we start a KLIP first on the client redesign? I don't intend to stop your experimenting - by all means, please prototype; it will help drive requirements. I'm just saying we need a ground-up rethink before we take the next step, to avoid getting into the same state again :)

>I will try and hack together an implementation of pull queries supporting non-key selects, probably by creating a new KS state store
Unless you are planning to keep this consistent with your original table by hacking this into Kafka Streams itself, couldn't this effect be achieved by materializing the changelog from the table using another KSQL query?
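
Roughly something like this (names made up, built off the events stream here rather than the table's changelog, and assuming a COLLECT_SET-style aggregate is available):

-- eventually consistent "index" table keyed by user rather than by (user, item)
CREATE TABLE basket_items_by_user AS
  SELECT user_id, COLLECT_SET(item_id) AS items
  FROM basket_events
  GROUP BY user_id;

-- pull queries then become a straight key lookup
SELECT * FROM basket_items_by_user WHERE ROWKEY = 'user-123';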


/thanks/vinoth



Tim Fox

Nov 15, 2019, 1:11:20 PM
to ksql-dev
Hey Vinoth,

replies inline


On Friday, November 15, 2019 at 5:19:57 PM UTC, Vinoth Chandar wrote:
Thanks for taking the time to do this. This "end-user" perspective is essential atm!

On the stumbles 

- On non-key field queries, as you might guess this needs secondary indexing. RocksDB does not offer secondary indexes atm. We could think about some form of local secondary indexing by leveraging transaction support and double-writing to two tables, but all this needs a lot of design and implementation across Kafka Streams + KSQL. For now, could we create another "table" off BASKET_STATES, aggregating by user, where we store a set/list of items keyed by user id? I know this won't be consistent with BASKET_STATES all the time, but a lot of NoSQL stores offer eventually consistent global secondary indexes and users seem to find that at least a workable solution?


Implementing more sophisticated pull queries is not really the purpose of this prototype - basically I just want to hack something together for now, for the purposes of the demo. I was thinking of using the processor API for aggregations; then, AIUI, we can use a custom state store. So basically instead of using RocksDB, use an embedded relational DB - then we get the querying for free. Don't know if that really makes sense but, as I mentioned, it's not super central to this prototype.
 
- +1. This was my core concern as well, from our internal discussions. I think it got lost in translation. Even if we return full JSON objects out of push queries now, it's not streaming anymore, i.e. each chunk is not consumable by itself (I thought that was the original intent with that API). We need to get our client story straight IMHO.
- Sorry, maybe I am missing something. Can't the app do an INSERT statement via KSQL instead of talking to Kafka directly? +1 on abstracting Kafka away from the user.

Yes, probably, but not at high volume. We need a high-throughput way of inserting. The three things that need to support high throughput (lots of traffic over the wire and lots of connections) are pull queries, push queries and inserts. That's why in this prototype I'm separating them into a new API, separate from the REST API. Those kinds of things are not well suited to HTTP or REST.
 
- On separating pull queries, inserts, and push queries, again +1 - they are very different, and the clean way is to have different resources for them at the server. Everything from thread pools to connection management is different.

Absolutely - in my prototype I'm using Vert.x, which has great scalability, great performance, low resource usage and lots of tools that make writing network protocols really easy, so hopefully that will help.
 

My only suggestion for the prototype would be: can we start a KLIP first on the client redesign? I don't intend to stop your experimenting - by all means, please prototype; it will help drive requirements. I'm just saying we need a ground-up rethink before we take the next step, to avoid getting into the same state again :)


Yes, a KLIP will follow. But I'm not yet at the point where it's clear enough for me to write one. Once I've experimented some more and have a better idea of what works and what doesn't, I will do this - maybe even more than one KLIP. There's a danger of starting a KLIP too early - it's easy to get mired in analysis paralysis :)

 

>I will try and hack together an implementation of pull queries supporting non-key selects, probably by creating a new KS state store
Unless you are planning to keep this consistent with your original table by hacking this into Kafka Streams itself,


I was thinking of using the processor API but don't know how feasible that is, as I'm not an expert in this area. So it's quite possible it's a bad idea!
 

Almog Gavra

Nov 15, 2019, 2:25:04 PM
to Tim Fox, ksql-dev
Thanks for writing this up Tim! I think we should push to develop with this level of user experience in mind regularly. I have a few high-level, rough and arguably constructive comments:

- From the API perspective I think it's important to limit the number of public APIs and protocols that we support, designing them in tandem so that they each have clear and distinct responsibilities with little overlap. Unfortunately, this probably means we should also invest up front to get a big-picture view (cf. what Vinoth said about clients) so we don't end up with the mish-mash we have today.
- With regard to the streaming protocol, is there anything we can leverage from Kafka instead of trying to reinvent the wheel? They have years of experience building an optimized streaming protocol, and it turns out it's not easy to write a good one.
- Similarly with clients, we should take care to learn from Kafka and its experience with heavyweight clients. Kafka has tons of clients, some of which are third party and some of which don't properly implement the protocol, and this causes a massive development and support burden. Could we get away with pretty lightweight clients?

> The app needs to send messages to Kafka topics, e.g. to place an order, or to represent shopping basket events. This means the app also needs to use the Kafka client directly.

My 2c here: I think the INSERT INTO ... VALUES API is quite intuitive and we should leverage it, but we need to invest in it significantly before it can handle serious QPS (it was built as a demo/play feature and, for example, opens a new producer on each request), and to add capabilities for complex data structures (structs).
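
For reference, that syntax is along these lines today (stream and column names made up):

INSERT INTO orders (order_id, user_id, total) VALUES ('order-1', 'user-123', 42.50);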

Cheers,
Almog


Tim Fox

Nov 16, 2019, 5:55:13 AM
to ksql-dev


On Friday, November 15, 2019 at 7:25:04 PM UTC, Almog Gavra wrote:
Thanks for writing this up Tim! I think we should push to develop with this level of user experience in mind regularly. I have a few high-level, rough and arguably constructive comments:

- From the API perspective I think it's important to limit the number of public APIs and protocols that we support, designing them in tandem so that they each have clear and distinct responsibilities with little overlap. Unfortunately, this probably means we should also invest up front to get a big-picture view (cf. what Vinoth said about clients) so we don't end up with the mish-mash we have today.
- With regard to the streaming protocol, is there anything we can leverage from Kafka instead of trying to reinvent the wheel? They have years of experience building an optimized streaming protocol, and it turns out it's not easy to write a good one.

That's certainly true! I've spent a long time in my career writing/optimising streaming protocols for some of the most well-known messaging systems (most predating Kafka). It's not easy, but the problems are not new.

 
- Similarly with clients, we should take care to learn from Kafka and its experience with heavyweight clients. Kafka has tons of clients, some of which are third party and some of which don't properly implement the protocol, and this causes a massive development and support burden. Could we get away with pretty lightweight clients?

Absolutely. The intention is not to create a heavyweight client. The hardest thing is going to be writing a protocol that is fast and full featured but also simple enough to easily write clients for, or to use directly by opening a socket/websocket and writing to it.

Apurva Mehta

Nov 19, 2019, 1:39:08 AM
to Tim Fox, ksql-dev
Thanks for starting this discussion, Tim. I think developing a coherent application development lifecycle is a critical piece for KSQL right now. I only have one question, which is more for my edification: 

Chunked-response pull queries are not easily usable in the app. The chunks do not correspond to whole rows, so some tricky parsing is needed in the app to re-assemble the chunks into rows before they can be handled. It's not reasonable to expect the app developer to do this.

Can you share the example response output for pull / push queries today? The discussions here have tended to be abstract, and writing down the exact request/response would help solidify the gaps in my mind. 

Thanks,
Apurva


Tim Fox

Nov 19, 2019, 3:59:08 AM
to ksql-dev


On Tuesday, November 19, 2019 at 6:39:08 AM UTC, Apurva Mehta wrote:
Thanks for starting this discussion, Tim. I think developing a coherent application development lifecycle is a critical piece for KSQL right now. I only have one question, which is more for my edification: 

Chunked-response pull queries are not easily usable in the app. The chunks do not correspond to whole rows, so some tricky parsing is needed in the app to re-assemble the chunks into rows before they can be handled. It's not reasonable to expect the app developer to do this.

Can you share the example response output for pull / push queries today? The discussions here have tended to be abstract, and writing down the exact request/response would help solidify the gaps in my mind. 


Let's say the server writes rows (this is not real chunked encoding, I've simplified for clarity) - each JSON object represents a row:

{"foo1": "bar1", "foo2": "bar2"}
{"foo1": "bar1", "foo2": "bar2"}
{"foo1": "bar1", "foo2": "bar2"}

The server might decide to write them to the response as a single chunk or multiple chunks, or split across chunks, so you might get this on the wire:

---- chunk 1 start ----
{"foo1": "bar1", "foo
---- chunk 2 start ----
2": "bar2"}
{"foo1": "bar1", "foo2": "bar2"}
{"foo1": "bar1", "foo2": "ba
---- chunk 3 start ----
r2"}

You might be able to control when chunks get written (e.g. by flushing) but with chunked encoding it's better to let the server do this for efficiency reasons. (Writing lots of small chunks will be slow).

But whether or not you can control the chunking isn't going to help with most clients. If you're using JAX-RS I believe you can get an InputStream at the client, but you still need to parse that into valid JSON objects by finding the boundaries. With Vert.x you'll get the body as a stream of buffers, which you also have to reassemble into JSON objects. Either way it's a pain!





 

Matthias J. Sax

Nov 22, 2019, 1:29:24 AM
to ksql...@googlegroups.com
Great initiative!

About the "state store hack": this will be rather difficult. The current
API of Kafka Streams enforces to use `KeyValue<Bytes, byte[]>` stores to
be plugged into the DSL operators.

Similarly, a `ReadOnlyKeyValue` interface is exposed via the IQ API. It
_might_ be possible to hack around both to some extend, but not sure if
it's actually possible to pull it off.

However, even if you can manage the first two issues, how do you know
which instance to query? Stores are partitioned by the primary key, and
Kafka Streams can detect which instance hosts which store partitions
based on the key. But there is no support for secondary indexes. Hence,
the only way I see to implement this, given the limitations of Kafka
Streams, is a lookup across all instances/store-partitions (maybe
exploiting a local secondary index to avoid a full table scan).

Just wanted to point out the expected issues...


For building a KSQL client: for push queries, did you consider just
running a `KafkaConsumer` in the client and reading directly from the
result topic? Cutting the ksqlServer out of the communication path might
be desirable from a performance point of view. Furthermore, we wouldn't
need to design any protocol.

If we think that using a `KafkaConsumer` for push queries is the right
way to go, then we only need to worry about pull queries. And because
those deliver a finite result (in fact, currently only a single row), we
can design the protocol quite differently IMHO; even REST would work for
this case.

Just some random ideas.


-Matthias

Tim Fox

Nov 22, 2019, 5:08:11 AM
to ksql-dev
Hi Matthias, comments inline


On Friday, November 22, 2019 at 6:29:24 AM UTC, Matthias J. Sax wrote:
Great initiative!

About the "state store hack": this will be rather difficult. The current
API of Kafka Streams enforces to use `KeyValue<Bytes, byte[]>` stores to
be plugged into the DSL operators.

Similarly, a `ReadOnlyKeyValue` interface is exposed via the IQ API. It
_might_ be possible to hack around both to some extend, but not sure if
it's actually possible to pull it off.

I suspect as much, at least when using the KS DSL. I'm thinking it might be possible to use a non-KV state store if we use the processor API directly. I think this would mean writing our own aggregate etc. implementations instead of using the KSQL ones, so extra work there. But it might be possible?
 

However, even if you can manage the first two issues, how do you know
which instance to query? Stores are partitioned by the primary key, and
Kafka Streams can detect which instance hosts which store partitions
based on the key. But there is no support for secondary indexes. Hence,
the only way I see to implement this, given the limitations of Kafka
Streams, is a lookup across all instances/store-partitions (maybe
exploiting a local secondary index to avoid a full table scan).

This is a great point that I hadn't considered. 

One thought on this: In many cases the user probably only wants results from one node. E.g. imagine there is a shopping basket BASKET_ITEMS table and the user wants to execute a pull query "SELECT * FROM BASKET_ITEMS WHERE USER_ID=?" to retrieve their current basket. The primary key of that table would be a composite (USER_ID, ITEM_ID). It would make sense to partition that table such that all items for a particular user are on the same partition (i.e. partition by USER_ID). That way we only need to execute the query on one node to get the correct results. There will be other cases where we do need to aggregate query results from all nodes, but I suspect these would be more heavyweight "report"-style queries and less common. But yes, I think we will need a "gather and aggregate" implementation on the server for that.
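
To illustrate the idea (a sketch only - the repartitioning part is expressible today, but serving the pull query from a single node this way is not):

-- repartition the events so everything for a given user lands on one partition
CREATE STREAM basket_events_by_user AS
  SELECT * FROM basket_events PARTITION BY user_id;

-- a BASKET_ITEMS aggregate built on top could then be answered by the one node owning that user's partition
SELECT * FROM BASKET_ITEMS WHERE USER_ID = 'user-123';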
 

Just wanted to point out the expected issues...


For building a KSQL client: for push queries, did you consider just
running a `KafkaConsumer` in the client and reading directly from the
result topic? Cutting the ksqlServer out of the communication path might
be desirable from a performance point of view. Furthermore, we wouldn't
need to design any protocol.


This is where our terminology breaks down a bit ;) (I think Almog raised a similar point recently). 

By consuming "push queries" it's referring to creating a _transient_ query on the KSQL server and consuming from that. This queries create a KS topology on the fly and instead of having a sink topic like a persistent query, the results are fed into a buffer in memory and streamed back to the client. The query doesn't last longer than the connection. For these using the AK client won't help as Kafka won't know about these.

If we support the above, then we can also support doing SELECT * FROM <some_container>, which can shortcut the creation of the transient query and just read directly from the underlying topic. We could do both of those via the same mechanism, or we could use a Kafka consumer directly for the latter case. The Kafka consumer might provide lower latency (as there would be one less hop), but I suspect most users aren't super latency sensitive (we're talking ms here). One disadvantage of using the AK consumer directly from the ksqlDB client would be more complex configuration on the client side (AK server hosts would need to be provided too); it also means the AK cluster would have to be exposed to the client application. If we hide AK completely from the client, the AK cluster can potentially be on a different network.
 

Matthias J. Sax

Nov 23, 2019, 12:29:37 AM
to ksql...@googlegroups.com
Yes, if you use the Processor API you have more flexibility. But also
more work obviously. It was always clear that using the DSL helps to
move fast in the beginning, but also limits flexibility, and I am not
surprised that ksqlDB might actually move off the DSL more or less
completely at some point for this reason.

About the partitioning: that makes sense. From my understanding,
currently ksqlDB relies on default partitioning based on the full key
(not a partial key), which is also hash partitioning. If ksqlDB acted
smarter by using custom partitioning, maybe even with range
partitioners, it might be possible to implement pull queries more
efficiently for some patterns.

For the ksql-client: My understanding was that a transient query would
write its result into a topic. If this is not true, my proposal does of
course not work. However, considering EOS, it might actually be an
advantage to re-route the result data through an output topic --
otherwise, EOS breaks. Pushing data over a websocket would be a
side effect from the EOS perspective and hence would not be covered.

The other tradeoffs you mention all make sense, and I was aware of them.
The goal of my comment was that we should consider all available options
(I was not sure whether you had considered this option, but your response
indicates you actually did :)), get the pros/cons down and figure out
what the best solution is. If we want to be "fancy" at some point in the
future, we might even have a mix of strategies depending on the type of
query etc. Of course, all those details would be hidden from the user by
the system (ie, ksqlServer + ksqlClient), which would just be smart about
it, while the user-facing API is the same.



-Matthias

Tim Fox

Nov 25, 2019, 3:43:08 AM
to ksql-dev


For the ksql-client: My understanding was that a transient query would
write its result into a topic. If this is not true, my proposal does of
course not work. However, considering EOS, it might actually be an
advantage to re-route the result data through an output topic --

AIUI, this is basically what a persistent query does (CREATE X AS SELECT). For a transient query we could also have a topic as the sink, as an option, but we'd have to delete that topic when the query closes. The query lifetime would normally be tied to the user connection lifetime. In most cases, transient push queries are used for side effects (e.g. writing content to a web page), so EOS is not required.
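
To illustrate the distinction (a sketch, names made up):

-- persistent query: results keep flowing to a real, named output topic
CREATE STREAM big_orders AS SELECT * FROM orders WHERE total > 100;

-- transient query: no named sink; results only exist for the life of the connection
SELECT * FROM orders WHERE total > 100 EMIT CHANGES;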

Having said that, I do think it makes sense for the KSQL client to (eventually) support transactions for both inserting into and consuming from topics. This would be useful where the user wants to implement custom processing logic in their app and wants to do it transactionally (aka EOS, although I don't really like that term, having worked in messaging for years ;) )

Matthias J. Sax

Nov 25, 2019, 1:47:11 PM
to ksql...@googlegroups.com, Almog Gavra
Yes, a persistent query obviously creates an output topic. However, I
asked (I think it was Almog) recently how transient queries work, and he
explained to me that transient queries also create an (internal) output
topic, and read this output topic with a KafkaConsumer server side to
serve the client. And as you mentioned, for this case the "transient"
output topic is deleted when the transient query is terminated.

I never read the code and can only say what I was told -- might be worth
double-checking.

About EOS client side: I agree. Even if we have a "transient" output
topic, reading the topic would not guarantee EOS atm anyway. Also not
sure what we need to support for this case, and I would put this
question aside for now.


-Matthias

Tim Fox

Nov 25, 2019, 2:24:21 PM
to ksql-dev


On Monday, November 25, 2019 at 6:47:11 PM UTC, Matthias J. Sax wrote:
Yes, a persistent query obviously creates an output topic. However, I
asked (I think it was Almog) recently how transient queries work, and he
explained to me that transient queries also create an (internal) output
topic, and read this output topic with a KafkaConsumer server side to
serve the client. 

I'm pretty sure they don't output to a topic. Looking at the code, the output for a transient query is an in-memory blocking queue. The KS code fills up this queue as messages arrive, and they're removed from the queue and sent to the client (either polling on a timer with websockets, or polling in a loop for an HTTP chunked response).
 

Almog Gavra

Nov 25, 2019, 2:45:10 PM
to Tim Fox, ksql-dev
It looks like Tim is right. The reason I was confused is that if you do an aggregation in a transient query, it will create an intermediate transient topic for you (e.g. _confluent-ksql-default_transient_8002559966857563507_1574711003648-Aggregate-groupby-repartition); note that these do not show up in the default SHOW TOPICS output. Perhaps it makes sense, though, to just implement our clients as reading from temporary topics. That way we can leverage the Kafka API (perhaps at the cost of more network traffic, which may or may not be worth it).


Matthias J. Sax

Nov 25, 2019, 4:40:47 PM
to ksql...@googlegroups.com
Thanks for the confirmation!

It might make sense to explore both options to see what pros/cons they have.


-Matthias

Nick Dearden

Dec 3, 2019, 5:58:32 PM
to ksql-dev
Worth syncing up with the folks working on the REST Proxy redesign before inventing YetAnotherWayToConsumeTopicsOverWebProtocols? :)