Pull and Push mode in Kubernetes Federation V2


Guang Ya Liu

Aug 7, 2018, 3:46:26 AM
to kubernetes-sig-multicluster
Hi,

I want to have some discussion about Pull and Push mode for Federation V2. The document at https://docs.google.com/document/d/1ihWETo-zE8U_QNuzw5ECxOWX0Df_2BVfO3lC4OesKRQ/edit# says that the Pull reconciler does not yet have a reference implementation in Federation V2, and that fed v2 currently leverages kubebuilder with a Push reconciler that enables/disables the sync loop at runtime based on the propagation config.

But there is a use case as follows:
1) All of the Kubernetes member clusters have only outbound network access and no inbound access.
2) The customer wants a single manager to manage all of those member clusters.
3) Since fed v2 currently uses push mode, it cannot support this case.

Any comments on this user scenario?

Thanks,

Guangya

Quinton Hoole

Aug 7, 2018, 4:25:42 PM
to Guang Ya Liu, kubernetes-sig-multicluster
To be useful, a Kubernetes cluster needs to have a network-reachable API endpoint. The federation control plane only needs access to those API endpoints, not to the nodes or containers in the clusters.

I'm sure there exist cases where the clusters' API endpoints are not network-reachable from the place where the federation control plane is running, but they are relatively few and far between. And yes, pull mode would be one of several approaches to addressing those.

Q




--
Quinton Hoole
qui...@hoole.biz

Guang Ya Liu

Aug 7, 2018, 9:38:18 PM
to kubernetes-sig-multicluster
Thanks Quinton. I think we still need to consider this use case, as the current Federation V2 will not work at all in such an environment.

Since you mentioned that pull mode is one of several approaches, are there any other approaches in your mind? ;-)

Thanks,

Guangya



Paul Morie

Aug 7, 2018, 9:40:57 PM
to Guang Ya Liu, kubernetes-sig-multicluster
I'd love for someone to investigate pull reconciliation, e.g. by pulling from a git repo. Would love to talk to you about this.

P


Guang Ya Liu

Aug 7, 2018, 9:54:13 PM
to Paul Morie, kubernetes-sig-multicluster
This is great. For pull mode, we may need to create a controller or agent for each member cluster. Similar to the kubelet, the agent in each member cluster would pull all of its information from the federation control plane API server.

I will update the agenda for today's meeting and add this as a topic.
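
For illustration, here is a minimal sketch of what such a per-cluster pull agent's sync loop could look like. The control-plane endpoint and sync interval are assumptions made up for this sketch, not a real fed v2 API; note that the agent only needs outbound connectivity, which matches the use case above.

```go
// Hypothetical sketch of a per-cluster pull agent. The control-plane
// endpoint below is an assumption for illustration, not a real fed v2 API.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

const controlPlane = "https://fed-control-plane.example.com/clusters/cluster-a/desired-state"

func main() {
	for {
		resp, err := http.Get(controlPlane)
		if err != nil {
			fmt.Println("pull failed:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// A real agent would diff this desired state against local state
			// and apply it to the member cluster's API server.
			fmt.Printf("pulled %d bytes of desired state\n", len(body))
		}
		time.Sleep(30 * time.Second) // periodic sync, like the kubelet's sync loop
	}
}
```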


Paul Morie

Aug 7, 2018, 11:53:27 PM
to Guang Ya Liu, kubernetes-sig-multicluster
Whoops, didn’t mean to take that off-list.

One example is kube-applier.

I could also see people wanting to target kustomize.

Wdyt?

P

On Tue, Aug 7, 2018 at 10:14 PM Guang Ya Liu <gyli...@gmail.com> wrote:
like a propagator that dumps yaml into a git repo and does a push?

I think above you meant "like a propagator that dumps yaml into a git repo and does a pull"? I'm not familiar with those tools, but it would be great if you could list some tools for us to check.

Building arbitrary propagation strategies is always great!

Thanks,

Guangya

On Wed, Aug 8, 2018 at 10:01 AM, Paul Morie <pmo...@redhat.com> wrote:
I want people to be able to implement arbitrary propagation strategies. At the same time, I am not eager to build a brand-new pull mechanism. What about an integration with an existing tool, like a propagator that dumps yaml into a git repo and does a push? There are a number of tools that would likely be able to integrate at that level.
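
To make that concrete, here is a rough sketch of such a git-backed propagator; the repo path, cluster names, and renderManifest helper are all hypothetical. Each member cluster would then run an agent (e.g. kube-applier) that pulls and applies from the repo, so the clusters themselves only need outbound connectivity.

```go
// Hypothetical "push to git" propagator: dump rendered per-cluster manifests
// into a local clone of the deploy repo, then commit and push.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

const repoDir = "/tmp/fed-deploy-repo" // assumed local clone of the deploy repo

// renderManifest is a stand-in for whatever renders the per-cluster YAML.
func renderManifest(cluster string) []byte {
	return []byte(fmt.Sprintf("# manifests for cluster %s\n", cluster))
}

func main() {
	for _, cluster := range []string{"cluster-a", "cluster-b"} {
		path := filepath.Join(repoDir, cluster, "resources.yaml")
		if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile(path, renderManifest(cluster), 0o644); err != nil {
			panic(err)
		}
	}
	// Commit and push; each member cluster's agent then pulls from this repo.
	for _, args := range [][]string{
		{"add", "-A"},
		{"commit", "-m", "propagate federated resources"},
		{"push"},
	} {
		cmd := exec.Command("git", args...)
		cmd.Dir = repoDir
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
}
```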


Matt Ward

Aug 8, 2018, 3:51:28 AM
to Paul Morie, Guang Ya Liu, kubernetes-sig-multicluster
Adding a bit of background about how I manage our multicluster setup today, which I believe is very similar to what Paul is talking about; I'll give some more details of what I do in the hope that it helps. (I'm following along here hoping to migrate off this pile of closed-source voodoo to Federation V2.)

1- We basically wrote our own version of kustomize a year or so ago and have run with that. It's really basic single-level inheritance, to prevent the confusion caused by the tons of layering/variables most other tools have, and it reduced the yaml in our codebase by ~50%.

2- Bootstrapping new clusters is a blast for us... nope. It involves a ton of copy-pasta of all our yamls, a quick manual deployment of a Buildkite agent into the cluster, and then the agent runs per commit in our "deploy repo", which is basically the flattened output of what you'd get from our kustomize-like tool. It's worth noting that the way we manage cross-cluster deployments right now is that each cluster has a unique namespace you override to target it, which means the agent runs "kubectl apply -R -f ${AGENT_CLUSTER_NAMESPACE}" to "push" our pile of software into the cluster.

3- We run clusters across AWS + GCE, and the firewall rules we have set up require you to be on our VPN to access the API. Our deploy agents essentially bypass this via Buildkite, which simply instructs them, via a tunnel out, what project/commit to run commands for. Our clusters in AWS therefore do not have access to the clusters in GCE and vice versa.

4- A bit unrelated, but... because of #3, cross-cluster communication is a rather giant pain to set up right now, with manual TLS, cluster-external IPs, and firewall-rule management. We started doing it before a bunch of cooler new things came out, and we are considering migrating our manual networking management to Consul Service Mesh.

Cheers,
-- Matt Ward


Guang Ya Liu

Aug 8, 2018, 4:22:48 AM
to Matt Ward, Paul Morie, kubernetes-sig-multicluster
Thanks Paul and Matt for sharing; this is really helpful.

I want to talk more about kube-applier and kustomize here. kube-applier can "pull" YAML templates from GitHub and deploy them into its cluster; with this, we can make sure all customers have the same copies of the resources. kustomize can do some customization of the applications, similar to the xxx-override in Federation V2.
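
As a rough illustration of that pull-and-apply flow, a kube-applier-style agent running inside a member cluster might boil down to something like this; the checkout path and interval are assumptions, and kube-applier's actual implementation differs.

```go
// Rough sketch of a kube-applier-style agent inside a member cluster:
// sync a git checkout of the deploy repo, then kubectl-apply its contents.
package main

import (
	"log"
	"os"
	"os/exec"
	"time"
)

const checkoutDir = "/var/lib/deploy-repo" // assumed local clone of the deploy repo

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Dir = checkoutDir
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	for {
		// Pull the latest manifests, then apply them to the local cluster.
		if err := run("git", "pull", "--ff-only"); err != nil {
			log.Println("git pull failed:", err)
		} else if err := run("kubectl", "apply", "-R", "-f", "."); err != nil {
			log.Println("kubectl apply failed:", err)
		}
		time.Sleep(time.Minute)
	}
}
```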

But kube-applier and kustomize only resolve the "pull" issue for deploying workloads to different clusters; there are still three issues in my mind:
1) How do we integrate with Federation V2? Also, kube-applier requires installing it into each member cluster as an agent.
2) It still cannot handle the federation query case.
3) It seems unable to customize a workload across different member clusters; it can only customize the workload for each member cluster individually.

Thanks,

Guangya


Guang Ya Liu

Aug 12, 2018, 11:29:49 AM
to kubernetes-sig-multicluster
Hi Paul, Maru and fedv2 members,

I posted a document for Pull Mode in fedv2 at https://docs.google.com/document/d/1JK8VMwx_pqnAChkjhmwoi7u78yu5oKnmgIuw8IM_XsM/edit?usp=sharing . Please take a look and feel free to post your comments there.

I will put this on today's meeting agenda.

Thanks,

Guangya

aapa...@gmail.com

Apr 28, 2019, 12:03:23 AM
to kubernetes-sig-multicluster
Any updates regarding pull mode? Is there some work in progress, or is it still in the design/planning phase? Thanks!

Guang Ya Liu

Apr 28, 2019, 12:55:02 AM
to aapa...@gmail.com, kubernetes-sig-multicluster
Hi aapaerno, this work is currently pending, as the federation working group is working on some priority tasks for the federation beta.
