Re: Alternative to etcd, possible sharded/partitioned KV store

Daniel Smith

Feb 7, 2020, 6:49:23 PM
to Deepak Vij, K8s API Machinery SIG, kubernetes-sig-scale
+K8s API Machinery SIG is actually the relevant sig for the persistence layer, not sig scale.

We've extensively discussed this in the past, e.g. here, here, here, here, here

On Fri, Feb 7, 2020 at 3:22 PM Deepak Vij <dvij...@gmail.com> wrote:

Hi all, I would like to reach out to you folks regarding alternatives to the current underlying “etcd” data store. I recently started looking into this topic. I also saw that it was discussed during the following three Scalability weekly meetings:

  • 12/19/2019
  • 12/05/2019
  • 11/07/2019

I remember there was an ongoing discussion on possibly leveraging TiKV as the partitioned KV data store. Also, a while back I had an in-depth discussion with the Apache Ignite KV community on all this; they showed interest at the time as well. Apache Ignite is a partitioned, in-memory KV data store backed by persistence. Unlike other KV data stores, Ignite could possibly be leveraged for the underlying caching layer as well; I am not sure about this, but it is worth looking into.
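
To make the sharding aspect concrete, here is a rough, purely illustrative sketch (not tied to TiKV, Ignite, or any particular store) of how object keys could be routed to shards; the shard count and key layout below are made up for discussion:

// Illustrative only: routing Kubernetes-style object keys to shards by hash.
// The routing itself is the easy part; the hard part for any etcd
// replacement is keeping a single, globally ordered revision history across
// shards so that List and Watch stay consistent.
package main

import (
    "fmt"
    "hash/fnv"
)

const numShards = 4 // hypothetical shard count

// shardFor picks a shard for a key by hashing it.
func shardFor(key string) uint32 {
    h := fnv.New32a()
    h.Write([]byte(key))
    return h.Sum32() % numShards
}

func main() {
    for _, key := range []string{
        "/registry/pods/default/web-0",
        "/registry/pods/default/web-1",
        "/registry/services/kube-system/dns",
    } {
        fmt.Printf("%s -> shard %d\n", key, shardFor(key))
    }
}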

Also, I remember that in one of the meeting notes it was mentioned that there is an ongoing discussion regarding adjusting the “Watch” semantics as well. I am wondering where we are on all this: is this a prerequisite task prior to looking at an alternative KV data store as a possible replacement for “etcd”?
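
For discussion purposes, my rough mental model of the watch semantics the API server relies on today looks something like the sketch below; the names are made up for this thread and are not the actual k8s.io/apiserver storage interface:

// Illustrative sketch: every write carries a monotonically increasing
// revision (resource version), and a watch started "from" a revision
// replays every later change, in order, until it is cancelled.
package kvsketch

import "context"

// EventType distinguishes the kinds of changes a watcher sees.
type EventType string

const (
    Added    EventType = "ADDED"
    Modified EventType = "MODIFIED"
    Deleted  EventType = "DELETED"
)

// Event is one change to a key, stamped with the revision at which it happened.
type Event struct {
    Type     EventType
    Key      string
    Value    []byte
    Revision int64
}

// Store is the rough contract any etcd replacement would have to honor.
type Store interface {
    // Get returns the value and the revision at which it was read.
    Get(ctx context.Context, key string) (value []byte, revision int64, err error)
    // Watch streams every change to keys under prefix with revision greater
    // than fromRevision, in revision order, until ctx is cancelled.
    Watch(ctx context.Context, prefix string, fromRevision int64) (<-chan Event, error)
}

The part that seems hardest for a partitioned store to preserve is resuming a watch from a known revision with all changes delivered in order, which is really why I am asking whether the semantics discussion comes first.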

In any case, it would be good to sync up with you folks and hopefully learn more about all this. I will also try to sync up with you folks during the regular weekly meetings on this.

Regards,

Deepak Vij


Wojciech Tyczynski

Feb 10, 2020, 3:16:52 AM
to Daniel Smith, Deepak Vij, K8s API Machinery SIG, kubernetes-sig-scale
On Sat, Feb 8, 2020 at 12:49 AM 'Daniel Smith' via kubernetes-sig-scale <kubernetes...@googlegroups.com> wrote:
+K8s API Machinery SIG is actually the relevant sig for the persistence layer, not sig scale.

We've extensively discussed this in the past, e.g. here, here, here, here, here

Agree with Daniel - we've been discussing this a little bit in our meetings, but it was more like "we should have a good motivation for doing that".
Scalability may obviously be this motivating factor; that's why I said that having some POC proving that we can go X times higher than with etcd may be a good starting point.

But the ultimate decision maker for this effort would be SIG apimachinery (though please involve me in discussions if you start them). 

Daniel Smith

Feb 10, 2020, 11:29:36 AM
to Wojciech Tyczynski, Deepak Vij, K8s API Machinery SIG, kubernetes-sig-scale
On Mon, Feb 10, 2020 at 12:16 AM Wojciech Tyczynski <woj...@google.com> wrote:


On Sat, Feb 8, 2020 at 12:49 AM 'Daniel Smith' via kubernetes-sig-scale <kubernetes...@googlegroups.com> wrote:
+K8s API Machinery SIG is actually the relevant sig for the persistence layer, not sig scale.

We've extensively discussed this in the past, e.g. here, here, here, here, here

Agree with Daniel - we've been discussing this a little bit in our meetings, but it was more like "we should have a good motivation for doing that".
Scalability may obviously be this motivating factor; that's why I said that having some POC proving that we can go X times higher than with etcd may be a good starting point.

But the ultimate decision maker for this effort would be SIG apimachinery (though please involve me in discussions if you start them). 

Yeah, don't worry, in the unlikely event we undertook it, this would be a large effort and we'd make sure everyone got a chance to be involved. :)