Realtime indexing leveraging stream processing system


Roman Leventov

Aug 11, 2017, 11:49:27 AM
to druid-de...@googlegroups.com
It would be awesome if the realtime indexing system (Overlord + MiddleManagers, at the moment) could leverage the state-of-the-art solutions for resource management, redundancy/replication, infinite windowing, etc. that are implemented and continually improved in modern stream processing systems.

In fact, realtime indexing processes events into state (the incremental index) and then does something with that state (index merging + hand-off), a pattern already supported by stream processing systems.

But, as far as I know, they don't support running queries over that state, nor integration with the layer that knows which workers to query (in Druid's case, the Brokers).

The rapidly evolving codebases of both Druid and OSS stream processing systems would likely not allow achieving the required depth of integration without making a strategic agreement with one of them: respecting mutual interests, establishing API boundaries that could only be changed in coordination with both dev communities, etc.

Xavier Léauté

Aug 11, 2017, 1:21:30 PM
to druid-de...@googlegroups.com
Hi Roman, I've been toying with the idea of embedding Druid realtime indexing as a state store implementation for Kafka Streams, which does support querying the state. There are still a lot of gaps to fill to make this work, mainly due to the way Kafka Streams currently thinks about state stores, but I think it should be feasible to get a POC. Streams does not provide an RPC mechanism, so that's still left to a higher-level implementation, but it lets you find out which instance is hosting what data, so you could simply embed a Broker on every node and have it do the right thing.
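A minimal sketch of that routing idea, under stated assumptions: it mimics the "which instance hosts what data" lookup with a hypothetical in-memory registry in plain Java, and does not use the real Kafka Streams interactive-queries metadata API.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of the routing idea above: the streams runtime knows which
// worker instance owns each partition of the state, and a broker embedded on
// every node uses that mapping to decide where to send a query. All names
// here are hypothetical; this is not the actual Kafka Streams API.
class StateRouting {
    // partition -> host:port of the instance owning that partition's state
    private final Map<Integer, String> partitionOwners = new HashMap<>();
    private final int numPartitions;

    StateRouting(int numPartitions) {
        this.numPartitions = numPartitions;
    }

    void assign(int partition, String hostPort) {
        partitionOwners.put(partition, hostPort);
    }

    // An embedded broker would call this to route a query for a given key
    // (e.g. a dataSource name) to the instance holding its state.
    String ownerForKey(String key) {
        int partition = Math.floorMod(key.hashCode(), numPartitions);
        return partitionOwners.get(partition);
    }
}
```

The point of the sketch is only that once such a mapping is exposed, the query-routing layer (Druid's Broker) no longer needs its own discovery mechanism for realtime workers.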

Happy to chat more if you're interested in experimenting with this.

Xavier

--
You received this message because you are subscribed to the Google Groups "Druid Development" group.
To unsubscribe from this group and stop receiving emails from it, send an email to druid-developm...@googlegroups.com.
To post to this group, send email to druid-de...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/druid-development/CAB5L%3Dwd3DVJKnKf6XRXzrTqfBvLR_6zH1aApEsL7B55hy3nEUg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.

Roman Leventov

Aug 11, 2017, 2:04:06 PM
to druid-de...@googlegroups.com
Hi Xavier, thanks for the reply. After reading https://www.confluent.io/blog/apache-flink-apache-kafka-streams-comparison-guideline-users/, I can say I had something more like Flink in mind, in order to delegate management (scaling, Mesos, worker node start/stop) to Flink's master node, and to remove the notion and logic of the Overlord and Middle Manager from Druid.


Roman Leventov

Aug 19, 2017, 9:46:51 AM
to druid-de...@googlegroups.com
Any other opinions on this idea from PMC members?


Roman Leventov

Aug 22, 2017, 1:32:56 PM
to druid-de...@googlegroups.com
It was not possible to discuss this question during today's dev sync; let's discuss it as one of the first things next week.


Himanshu Gupta

Aug 23, 2017, 3:15:12 PM
to Druid Development
some quick scan thoughts....

I wouldn't add a hard dependency on another external system, so the existing Overlord and MiddleManager stuff shouldn't really be removed unless a "Druid-local" version of the external system is created, so that users do not need to deploy/manage a new thing to be able to use realtime ingestion.

I do think it makes sense to have a pluggable architecture where realtime ingestion is possible without deploying any Overlords and MiddleManagers. There is already pluggability at the level of node types, so the simplest idea would be to create new node types like "yarnOverlord", "mesosOverlord", etc. that can be deployed instead and would use YARN, Mesos, etc. to manage elasticity and tasks. Or, within the Overlord itself, make the task management layer configurable (it is already configurable to a certain degree, but I'm not sure that would be enough) and be able to use implementations based on the current MiddleManagers, or YARN, or Mesos, or something else.
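That pluggability idea reads naturally as an interface seam. A hypothetical sketch (the names below are illustrative, not Druid's actual classes; Druid's real seam in this area is io.druid.indexing.overlord.TaskRunner): the Overlord keeps its API, while the task management layer behind it is swappable.

```java
// Hypothetical sketch of the pluggability described above: the task
// management layer sits behind an interface, and a "mesosOverlord" or
// "yarnOverlord" node type would simply wire in a different implementation.
interface TaskManagementLayer {
    // Runs a task and returns a description of where it was placed.
    String runTask(String taskId);
}

class MiddleManagerLayer implements TaskManagementLayer {
    public String runTask(String taskId) { return "middleManager:" + taskId; }
}

class MesosLayer implements TaskManagementLayer {
    public String runTask(String taskId) { return "mesos:" + taskId; }
}

class PluggableOverlord {
    private final TaskManagementLayer layer;

    PluggableOverlord(TaskManagementLayer layer) { this.layer = layer; }

    // The Overlord's public task-submission API stays the same regardless
    // of which layer is wired in.
    String submit(String taskId) { return layer.runTask(taskId); }
}
```

The design choice here is that users keep interacting with one Overlord API, and only configuration decides whether MiddleManagers, YARN, or Mesos do the actual work.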


-- Himanshu



Roman Leventov

Aug 23, 2017, 4:37:18 PM
to druid-de...@googlegroups.com
I don't see the difference between starting an "Overlord" instance or a "master of a stream processing system" instance, so I'm not sure what "Druid-local" means here. If it's the ability to start an instance that joins the Druid Coordinator and "indexing master" functions, conceptually it should be possible as long as the stream processing system is written in a JVM-based language; that is a requirement anyway, since we need to integrate Druid stuff (at least querying) into the "workers".

However, I think more about delegating all of the Overlord's responsibilities to a stream processing system, rather than creating abstractions inside the Overlord and making a stream processing system fit into one of those abstractions. The first approach allows us to throw away a lot of code and complexity (actually, delegate it to another system, so it's no longer Druid's headache), while the second approach only increases the codebase and complexity of Druid.


Roman Leventov

Aug 23, 2017, 4:46:34 PM
to druid-de...@googlegroups.com
An updated list of things that a stream processing system should be able to do:

 - running queries over the state, and integration with Druid Brokers, which need to know which workers to query
 - data source locking
 - running non-realtime tasks (merge, append) on the same resource base as realtime tasks
 - auto-scaling of the resource base (io.druid.indexing.overlord.autoscaling.AutoScaler)
 - running shallow tasks that delegate to external systems (Hadoop, Spark)

Charles Allen

Aug 23, 2017, 5:23:12 PM
to Druid Development
Note that such a system does not need to do all of these things. For example, shallow tasks that don't really have advanced resource or streaming needs can be run by a system very much like the current Overlord, and the stream processing items can be part of an external system, with a monitor or driver that acts like a "shallow" task. Shallow tasks are not required to be on a MiddleManager either; they could simply be forked from the Overlord, as long as they handle the Overlord restarting or changing leadership properly.

Flipping the responsibility for locking (segment versioning) to a library or other "distributed" execution model could also work, and is similar to the way the kafka-indexer is architected, if I recall correctly. Gian may have more input there. For example, if some of the sequential versioning used by the Kafka indexing service could be expanded to be used by Flink / Samza / Spark Streaming or similar, that could provide a standard-ish way for those systems to handle the real-time portion, regardless of whether people end up running a batch workflow later.
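The sequential-versioning idea can be sketched in a few lines of Java: allocate a monotonically increasing version per (dataSource, interval), so that all producers and later batch jobs agree on which segment version shadows which at query time. This is a toy with assumed names, not the actual kafka-indexing-service code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Toy sketch of sequential segment versioning: each (dataSource, interval)
// pair gets monotonically increasing versions, so higher versions shadow
// lower ones at query time regardless of which system (streaming or batch)
// produced them. Hypothetical code only.
class SegmentVersionAllocator {
    private final Map<String, AtomicLong> counters = new ConcurrentHashMap<>();

    long nextVersion(String dataSource, String interval) {
        return counters
            .computeIfAbsent(dataSource + "/" + interval, k -> new AtomicLong())
            .incrementAndGet();
    }
}
```

If such an allocator lived in a shared library rather than behind the Overlord's lock, any external streaming system could participate in Druid's versioning scheme without Druid coordinating the tasks itself.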




Roman Leventov

Aug 23, 2017, 6:19:23 PM
to druid-de...@googlegroups.com
Using a stream processing system as just another "external system" probably won't let us benefit from the recovery tools implemented in it (checkpointing, snapshotting, failover). A task could be not only "running", "succeeded", or "failed", but also "failed, but going to recover". It also won't let us benefit from hybrid stream-batch systems if the Overlord still wants to control itself when it starts realtime and batch tasks. So I think ultimately the control should be on the side of the stream processing system. But indeed, it doesn't need to be able to run shallow non-indexing tasks; that could be left to a rudimentary "Overlord".


Gian Merlino

Aug 23, 2017, 8:02:01 PM
to druid-de...@googlegroups.com
Some thoughts.

> It would be awesome if realtime indexing system (overlord + middle managers, at the moment) could leverage state of the art solutions for resource management, redundancy/replication, infinite windowing, etc. implemented and being improved in the modern stream processing systems.

Yes, it would be :). The things you mentioned are really important stuff for a database, but if we can outsource them cleanly, why not? It would leave us free to focus on the query engine, storage formats, Historical nodes, and data lifecycle.

> After reading https://www.confluent.io/blog/apache-flink-apache-kafka-streams-comparison-guideline-users/, I can say that I kept something more like Flink in mind, in order to delegate management (scaling, Mesos, worker node start/stop) to the Flink's master node, and remove the notion and the logic of Overlord and Middle Manager from Druid.

While rethinking the Overlord/MM is good (I'm pretty sure they're not the best design for the problem they're solving), I think it would be bad for the community to introduce a mandatory operational dependency on a stream processing system. ZK, metadata store, and deep storage are tough enough for people to operate alongside Druid and we should avoid increasing the burden. I'm ok with an optional dependency though, or a mandatory non-operational dependency (i.e. something embedded within Druid, with no additional services to run).

That's because replacing the Overlord/MM with e.g. Flink is not zero-sum in terms of operational complexity. It's more complex, since Flink's configuration parameters don't match Druid's, its services are not spawned the same way as Druid's, it can't use Druid's common.runtime.properties, and tuning its CPU and memory use would not leverage any of users' existing familiarity with tuning other kinds of Druid nodes, like Historicals.

Gian

Himanshu Gupta

Aug 23, 2017, 8:57:14 PM
to Druid Development
> I don't see the difference between starting "Overlord" instance or "master of a stream processing system" instance, so I'm not sure what "druid local" means here. If it's the ability to start an instance that joins Druid Coordinator and "indexing master" functions, conceptually it should be possible as long as the stream processing system is written in a JVM-based language, that is a requirement anyway since we need to integrate Druid staff (at least, querying) into "workers".
I meant that it should not become mandatory for users to deploy an external system (or manage another set of different non-druid processes) to be able to do realtime ingestion.


Roman Leventov

Aug 23, 2017, 11:19:09 PM
to druid-de...@googlegroups.com
Gian Merlino wrote:
> While rethinking the Overlord/MM is good (I'm pretty sure they're not the best design for the problem they're solving), I think it would be bad for the community to introduce a mandatory operational dependency on a stream processing system. ZK, metadata store, and deep storage are tough enough for people to operate alongside Druid and we should avoid increasing the burden. I'm ok with an optional dependency though, or a mandatory non-operational dependency (i.e. something embedded within Druid, with no additional services to run).
Again, I don't see any conceptual difference between an "Overlord" and a "stream processing system (SPS) master"; it's a matter of the naming of a JVM app, unlike ZK and the metadata store, which are entirely different things. And there is no reason why the SPS code and Druid code couldn't be bundled in a single "druid.jar" and started using Druid's service start utility.
 

> It's because replacing the Overlord/MM with e.g. Flink is not zero-sum in terms of operational complexity. It's more complex, since Flink's configuration parameters don't match Druid's, its services are not spawned the same way as Druid's, it can't use Druid's common.runtime.properties, and tuning its CPU and memory use would not leverage any of users' existing familiarity with tuning other kinds of Druid nodes, like Historicals.

Regardless of how an SPS manages its own configuration, it should be able to pick up Druid configurations when it starts worker processes, at least because there are query-related configurations that worker nodes need to know about. That's one of the "deep integration" points that differentiates "Druid RT on an SPS" from a "simple" use of an SPS.

But I don't see how complexity is increased. I don't think any CPU and memory tuning knowledge is shared between Historicals and MiddleManagers. The other points (configs, service naming/spawning) are, to my mind, more a case of "there was one system, which you knew, and now there is another system, which you need to study". That is an inevitable cost of such a transition, but it doesn't mean that something new is necessarily more complex than something old.

However, there is another point not mentioned yet: logging and metrics. But I think they could be integrated under the covers without exposing additional complexity to users.
 


Gian Merlino

Aug 24, 2017, 3:38:59 AM
to druid-de...@googlegroups.com
> Again, I don't see any conceptual difference between "Overlord" and "stream processing system (SPS) master". It's a matter of naming of a JVM app. Unlike ZK and metadata store, which are entirely different things. And there are no reasons why the SPS code and Druid code couldn't be bundled in a single "druid.jar" and started using Druid's service start utility.

It's about what it feels like to a user: does it feel like "I am setting up Druid" or does it feel like "I am setting up Druid and an SPS and I'm configuring them to talk to each other". The feel is a fuzzy thing. It depends on what files a user edits to configure the services, how the distribution tarball is laid out, where documentation can be found (does a user need to refer to a separate site for SPS docs), how services are started, how services are monitored, and so on. For example, Druid services are consistent in that they are configured by common.runtime.properties and runtime.properties files, they offer HTTP JSON APIs, they report metrics through the emitter, and various other conventions. If the SPS is embedded as a library within Druid and shares its conventions, then it's likely it will feel like a single system, which is good.
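For illustration, the convention in question: every Druid service on a node reads a shared common.runtime.properties (plus a per-service runtime.properties). The property names below are real Druid config keys, but the values are placeholders; an embedded SPS that picked up the same files would feel like part of the same system.

```properties
# common.runtime.properties -- shared by every Druid service on the node.
# Illustrative fragment; values are placeholders.
druid.zk.service.host=zk.example.com:2181
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://db.example.com:3306/druid
druid.storage.type=hdfs
druid.emitter=http
```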

What I'm really saying is that if we want to replace the overlord/MM system with something else, that other thing should ideally feel like just as much a part of Druid as overlord/MM.

Do you think that's valuable? And do you think it would be possible to do this transition in such a way that it does feel like one system?

Gian

Roman Leventov

Aug 24, 2017, 11:23:09 PM
to druid-de...@googlegroups.com
I think what you enumerated is mostly reasonable and should mostly be possible to accomplish. I'm not sure we should copy-paste all SPS docs and config parameter descriptions, pretending the SPS doesn't exist, because that could add confusion, make googling harder, etc.

Nishant, could you please share your opinion on this topic?


Gian Merlino

Aug 25, 2017, 3:14:54 PM
to druid-de...@googlegroups.com
I don't think we need to pretend the subsystem doesn't exist, but I think we should copy-paste the docs, or at least what we think are the most important ones (especially anything related to tuning, such as descriptions of how resources are allocated and what tuning configs there are).

And I think it really has to be something that can be embedded as a library. Otherwise I suspect it will be impractical to make the system feel like a coherent whole.

Gian


Nishant Bangarwa

Sep 12, 2017, 11:56:17 AM
to druid-de...@googlegroups.com
Exposing APIs and making the SPS embeddable as a library sounds great.
In addition to the above requirements from other community members, IMO how this system is presented to Druid users is very important. As mentioned earlier, the SPS should feel like part of a single ecosystem, not a separate system altogether, with minimal/zero additional operational dependencies.

FWIW, there is already confusion in the community about when to use Realtime Nodes vs. Overlord/MM. For most new users, the initial perception is that one has to set up both in order to get realtime ingestion working. Adding an SPS to the mix would add even more confusion.

A migration strategy from Overlord/MM to the SPS, and proper documentation on when to use one vs. the other (or when the user needs to set up both systems, and which tasks should be delegated to which), is a must to avoid confusion. Keeping the configurations similar would also make the migration less painful.

