KPNG... should we do it, or just keep it as a POC? Hangout tomorrow!!!!


jay vyas

Oct 13, 2022, 5:08:50 PM
to kubernetes-sig-network
hey folks! So, we need to brainstorm a little bit about what we should do, now that KPNG is "working" for the most part.... No wrong answers here, just... we need to know how to prioritize work.

As mentioned today - it now works on all backends (windows, nft, iptables, ipvs, and even ebpf)... and so, as a POC, we can say "yup, it works".... now... we have to decide:

- Should it be a project that lives separate from the core sig-network in-tree proxy roadmap?
- Should it be something that gets integrated over time into sig-network's main repos?
- Should it just be a really cool, living POC of what you *could* do for a kube-proxy impl?

We tried to pose this question but instead got bogged down in the "How" of:
- what sigs to talk to
- how scale testing works
- why 1000 nodes is hard
etc... which is all totally valid, but... only in the context of whether we actually want to replace the in-tree kube-proxy with something like KPNG, which still seems to be up for grabs.....

So, we need some opinions from the sig - should we do another round, or, if not, find other ways to help out (for example, with more traditional issues, like the EndpointSlice stuff, or component config, etc...)?

If folks have thoughts - reach out to me or andrew stoyocos or rajas on #sig-network-kpng, or maybe just add a thought in this thread.

The original KEP is here: https://github.com/kubernetes/enhancements/pull/2094

If folks want an invite, ping me / andrew stoyocos / kal / mark rosetti and we'll just forward it along! it's at 4 EST / 1 PST tomorrow!


--
jay vyas

Shane Utt

Oct 14, 2022, 9:41:49 AM
to kubernetes-sig-network
Thanks for bringing this up, Jay. One thing that I think is missing for me to feel more confident in providing feedback is a clearer motivation. Perhaps we can take some of the time at the upcoming meeting to fill in the currently missing motivation section of the KEP?

jay vyas

Oct 14, 2022, 9:56:40 AM
to kubernetes-sig-network
Ah yeah, we can actually add some of the original notes (component config, diffs, vendorability, developer experience, decoupling from apiserver "watch", etc...) from https://docs.google.com/document/d/1yW3AUp5rYDLYCAtZc6e4zeLbP5HPLXdvuEFeVESOTic/edit# to the motivation section today in the kpng meeting (starting in 30).... good idea, thx Shane

Bowei Du

Oct 14, 2022, 11:16:56 AM
to jay vyas, kubernetes-sig-network
Agreed with Shane on really focusing on the requirements before talking about "the what".

For example:

- Better componentization
- Better extensibility
- Enable consistency if someone wanted to have an external (e.g. their own) implementation.
- ?Decoupling from K8s release?
- ?Better scalability?

Then the discussion becomes much clearer around what needs to be done.

Thanks,
Bowei


jay vyas

Oct 14, 2022, 11:38:08 AM
to kubernetes-sig-network
Cool, yeah, we cobbled together a Motivation section just now for this.... based partially on Lars's original blog post https://kubernetes.io/blog/2021/10/18/use-kpng-to-write-specialized-kube-proxiers/ , and some asks from Per and others at the mtng today...

https://github.com/kubernetes/enhancements/pull/2094/files#diff-f710ebab82ca5cb8d75e7711841fe743804f425fb40b9ac522529ff71ee4104eR182

If folks want to suggest more along those lines, feel free to - -

Per Andersson

Oct 14, 2022, 12:58:16 PM
to jay vyas, kubernetes-sig-network

This is why I want/need KPNG:
separation of the control plane and the dataplane/backend implementations.

I want full control of the data plane implementation; we typically implement the data plane using P4 or with DPDK/IPDK (implementing the full stack in the Linux kernel is not an option for us).
What I do not want to do is copy/fork the existing proxy and start from there. I want to use an upstream common control plane and then plug in the different backend components we develop towards it.
We plan to develop three different backends during 2023: Linux DPDK, Nvidia BlueField-3, and Intel Mt Evans.
KPNG is intentionally designed to make this easy.

We should aim to have one control plane and not a set of similar control planes; KPNG makes this easy.

//Per

Varun Marupadi

Oct 14, 2022, 4:11:10 PM
to Per Andersson, jay vyas, kubernetes-sig-network
Did this meeting already happen?

I would like to add my voice to Per's use case as well. I am also interested in potentially running an upstream control plane with a loosely coupled data plane.


jay vyas

Oct 14, 2022, 4:53:43 PM
to Varun Marupadi, Per Andersson, kubernetes-sig-network
 https://VMware.zoom.us/j/94048611817?pwd=eTNHNHVVRnFSMDJXRXBSRndxbkJsdz09&from=addon 

Still going on; we reached some conclusions Tim can summarize.
--
jay vyas

Tim Hockin

Oct 14, 2022, 6:08:03 PM
to jay vyas, Varun Marupadi, Per Andersson, kubernetes-sig-network
OK, I slapped some drawings together, hopefully this makes sense?

https://docs.google.com/presentation/d/1Y-tZ4fFC9L2NvtBeiIXD1MiJ0ieg_zg4Q6SlFmIax8w/edit?hl=en&resourcekey=0-SFhIGTpnJT5fo6ZSzQC57g#slide=id.g16976fedf03_0_221
Varun Marupadi

Oct 14, 2022, 7:37:30 PM
to Tim Hockin, jay vyas, Per Andersson, kubernetes-sig-network
Thanks for the drawings, Tim! It definitely helps those of us that need the block diagrams to make sense of the parts.

I missed the first part of the meeting, so apologies if I'm just rehashing what was said explicitly - it sounds like your concerns are threefold:
1) The gRPC API is elevated to the position of being the main (only?) contract between the front and back ends, and you are not convinced it needs to be.
2) For simpler backends, the downside of having to import/bundle the entire gRPC library is a significant burden.
3) Introducing a golang interface to be the main contract between the components addresses both of the above, and additionally allows additional remote wire formats (like xDS) in an architecturally cohesive way.

Did I summarize correctly?

-Varun
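
P.S. To make 3) concrete, here is a minimal sketch of what a Go-level contract could look like. All names below (proxylib, Sink, Service) are made up for illustration, not KPNG's actual API:

// Hypothetical sketch, not KPNG's real API.
package proxylib

// Sink is the Go-level contract being discussed: backends implement it,
// the frontend drives it. With this shape, a gRPC (or xDS) server is
// just one more Sink implementation that forwards over the wire,
// rather than the wire protocol being the contract itself.
type Sink interface {
	SetService(svc Service)               // Service added or updated
	DeleteService(namespace, name string) // Service removed
	Sync()                                // end of a consistent batch of changes
}

// Service is a stand-in for whatever model types the library exposes.
type Service struct {
	Namespace, Name string
	ClusterIPs      []string
}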

jay vyas

Oct 14, 2022, 8:02:27 PM
to kubernetes-sig-network
Awesome, ok, yeah, this is great, thanks Tim (big thanks to everyone who came: khal, tim, rob, per, mark, stoyocos, mikaël, mark, ricardo, bowei, amim, varun, et al...).


So for those not on the call, here's overall what completing the KPNG transition would look like:
- We'll be restructuring the KEP to be around how https://github.com/kubernetes/kubernetes/tree/master/staging and the existing API structures https://github.com/kubernetes-sigs/kpng/blob/master/api/localnetv1/services.proto work, and facading KPNG underneath cmd/proxy/ so that the cmd line options are preserved and so on: https://github.com/kubernetes-sigs/kpng/issues/380 (rough sketch of that below)
- imo we may need to revisit componentConfig for different backends (i.e. things like --enable-dsr or --ipvs-exclude-cidr) at some point, but we can see about that later.
- If anyone wants to donate some resources for larger clusters, we're happy to gobble them up. We'll also now prioritize Thursday's suggestions around scale testing / talking to the sig-scalability folks and investing time in that as well; filed a follow-on here: https://github.com/kubernetes-sigs/kpng/issues/379
- We have metrics in flight as well, so we can have more fine-grained perf info: https://github.com/kubernetes-sigs/kpng/issues/351
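
As a rough sketch of the facading idea (purely illustrative: runKPNG is a stand-in, not a real kpng entry point), the existing kube-proxy command line stays the front door while kpng runs underneath:

package main

import (
	"flag"
	"log"
)

// The facade keeps kube-proxy's user-facing flags stable...
var (
	proxyMode  = flag.String("proxy-mode", "iptables", "which proxy mode to use")
	kubeconfig = flag.String("kubeconfig", "", "path to kubeconfig")
)

func main() {
	flag.Parse()
	// ...and maps them onto kpng's frontend + chosen backend internally.
	if err := runKPNG(*proxyMode, *kubeconfig); err != nil {
		log.Fatal(err)
	}
}

// runKPNG is a placeholder for whatever entry point kpng would export.
func runKPNG(mode, kubeconfig string) error {
	log.Printf("starting kpng-backed proxy: mode=%s kubeconfig=%s", mode, kubeconfig)
	return nil // sketch only
}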

Then we'll come back in a few weeks with more stuff to review. For folks reviewing the KEP, please pardon the dust while we refactor it.

If folks have further opinions, please definitely post them in this thread or hit us up in #sig-network-kpng on slack!

Tim Hockin

Oct 14, 2022, 8:06:42 PM
to Varun Marupadi, jay vyas, Per Andersson, kubernetes-sig-network
On Fri, Oct 14, 2022 at 4:37 PM Varun Marupadi <varu...@google.com> wrote:
> [snip]
>
> Did I summarize correctly?

Yeah, pretty much. We didn't talk about dep-management on the call,
but it struck me as I was drawing these. Having the main API be gRPC
brings a lot of deps. They are deps we already use, for the most part,
so it's not a HUGE deal - FOR US. I wonder, still, if it could be
avoided, and the libkpng layer made as svelte as possible.

Per Andersson

Oct 14, 2022, 9:16:28 PM
to Tim Hockin, Varun Marupadi, jay vyas, kubernetes-sig-network
Thanks for the pictures Tim, I like what you are showing in slides 5 and 6, especially if you combine them with slide 3.
The gRPC and xDS adapters are really just two other local built-in adapters with this thinking.

I agree with you that we need to look at the dependencies and version management of the remote protocols/APIs.

//per




Mikaël Cluseau

Oct 15, 2022, 7:17:53 AM
to Per Andersson, Tim Hockin, Varun Marupadi, jay vyas, kubernetes-sig-network
Hi all,

I made a 2nd try at drawing kpng's architecture, I hope it's better than the previous one. It will hopefully match Tim's drawings pretty well :-)

[image: arch-try-2.png]

Also, since the discussion raised some questions about load-balancing/HA, I would like to highlight that pure gRPC allows this in kpng/client:

[image: image.png]
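
Roughly, the gRPC-native way to get that from the client side looks like the sketch below (the target name and port are assumptions; kpng/client's actual code may differ):

package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// The dns resolver returns every address behind the server's name,
	// and round_robin spreads streams across all of them, so backends
	// get LB/HA without an external load balancer in front.
	conn, err := grpc.Dial(
		"dns:///kpng-server.kube-system.svc.cluster.local:12090", // assumed address
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultServiceConfig(`{"loadBalancingConfig":[{"round_robin":{}}]}`),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// ...then open the watch stream(s) on conn as usual.
}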



[attachments: arch-try-2.dia, arch-try-2.svg]

Mikaël Cluseau

Oct 15, 2022, 7:20:48 AM
to Per Andersson, Tim Hockin, Varun Marupadi, jay vyas, kubernetes-sig-network

Antonio Ojea

Oct 15, 2022, 8:22:14 AM
to jay vyas, kubernetes-sig-network
> - We'll be restructuring the KEP to be around how https://github.com/kubernetes/kubernetes/tree/master/staging and the existing API structures https://github.com/kubernetes-sigs/kpng/blob/master/api/localnetv1/services.proto
> work, and facading KPNG underneath cmd/proxy/ so that the cmd line options are preserved and so on https://github.com/kubernetes-sigs/kpng/issues/380

Sorry, I couldn't attend the meeting, but I still don't understand the need to have the code in the kubernetes repo. It is not something that will be consumed by any other code, it will complicate development so much (since it is going to be tied to the kubernetes releases), and it also brings dependencies into kubernetes/kubernetes, which is something the project is trying to reduce.

I think that it should evolve and mature in its own repo and, depending on its evolution, we can judge if it can be a drop-in replacement for kube-proxy...

jay vyas

Oct 15, 2022, 3:51:33 PM
to Antonio Ojea, kubernetes-sig-network
Fair points, Antonio. Especially about tying to versions...
In general, where is the policy for how staging/ repos or new dependencies are justified? Is this a sig-arch topic? Who makes decisions about overall codebase org and deps for k/k? We can follow up w/ them.

Antonio Ojea

Oct 16, 2022, 4:32:59 PM
to jay vyas, kubernetes-sig-network
Yeah, I would start there

/area code-organization
/sig architecture


Khaled Henidak

Oct 17, 2022, 10:45:38 AM
to Antonio Ojea, jay vyas, kubernetes-sig-network
A side note re the discussion we had last week: effectively we are changing the min permissions needed for kube-proxy to work. Today we need read on endpointslices, services, and nodes (at least off the top of my head). When we switch to kpng we will only need permissions per mode:

Option 1: running in cobbled mode (one binary with FE and BE)
-- same as today

Option 2: running in fan-out mode, where FE and BE are running in separate pods (with endpoints)
A permission model will be needed for the kpng service, and that is not something that will be asserted by the api-server. Another set of permissions is needed, described below.

Another note: LB for the kpng service (in fan-out mode, where FE and BE are separate):
1. We shouldn't ask operators to throw it behind an environment LB; instead the node agent (the BE) should perform client-side LB against all endpoints (in some slice attached to a well-known service).
2. That well-known service (similar to the kubernetes default service) must be there all of the time, which means our kpng service (the FE) must make sure it is always there, because without it everything will break.

Just a thought,

Kal
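
P.S. For reference, a sketch of today's minimum read set expressed as a ClusterRole built with the Kubernetes Go API types (the role name is hypothetical; the resources are the ones listed above):

package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// minReadRole captures the read-only access listed above: services,
// nodes, and endpointslices. In cobbled mode the single binary needs
// exactly this; in fan-out mode only the FE would keep it.
var minReadRole = rbacv1.ClusterRole{
	ObjectMeta: metav1.ObjectMeta{Name: "kpng-frontend"}, // hypothetical name
	Rules: []rbacv1.PolicyRule{
		{
			APIGroups: []string{""}, // core group
			Resources: []string{"services", "nodes"},
			Verbs:     []string{"get", "list", "watch"},
		},
		{
			APIGroups: []string{"discovery.k8s.io"},
			Resources: []string{"endpointslices"},
			Verbs:     []string{"get", "list", "watch"},
		},
	},
}

func main() {
	fmt.Printf("%+v\n", minReadRole)
}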
  

  

jay vyas

Nov 7, 2022, 10:07:07 AM
to kubernetes-sig-network
I think I sent this already, but not sure, so reposting. Just hung out w/ rajas, and talked to stoyocos last wk..... re-integrating all this into my brain, it's pretty clear....

- I think in KPNG we have a new type of car
- but what sig-net is asking for is a battery.

Antonio's original comment deserves imo another look: 

"I still don't understand what is the need to have the code in the kubernetes repo"  

This whole thread can probably be coarsely summarized as "let's just build a battery, not rebuild the car (yet), and (imo) let the battery be a standalone repo".

- Create a library in a place like kubernetes-sigs/kube-proxy-lib or whatever... put the original kube-proxy maintainers as the OWNERS.md on it.
- Leverage:
  - parts from KPNG that are usable, i.e. https://github.com/kubernetes-sigs/kpng/tree/master/server/jobs/kube2store
  - stoyocos' deltas in https://github.com/kubernetes-sigs/kpng/pull/389 as a "first pass" at the minimal footprint of a kube-proxy-lib
- Migrate things like Dan's doc "So you wanna build a Service Proxy" into that repo, and update it to leverage kube-proxy-lib.
- Publish and release the lib w/ a ref impl (eBPF or userspace or a "dummy svc proxy" that prints services/endpoints/topologies... or something else that makes it self-verifiable and makes it concrete/easy to learn about); see the sketch after this list.
- Test the lib on specific k8s versions, and version it w/ support for different k8s versions and so on.
- Write a very small KEP, based on Tim's slides, that describes this and references lessons learned trying to make KPNG a replacement; try to make it unopinionated, so that it can move forward w/ less friction and cognitive load.
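
For a feel of what that "dummy svc proxy" could look like, here is a sketch; the Backend interface and every name below are hypothetical, since kube-proxy-lib doesn't exist yet:

package main

import "fmt"

// Backend is the small contract a kube-proxy-lib could expose; real
// backends (iptables, eBPF, DPDK, ...) would implement the same methods.
type Backend interface {
	SetService(namespace, name string, clusterIPs []string)
	DeleteService(namespace, name string)
	SetEndpoints(namespace, name string, ips []string)
	Sync()
}

// printerBackend is the "dummy svc proxy": it just prints state changes,
// which makes the lib self-verifiable and easy to learn from.
type printerBackend struct{}

func (printerBackend) SetService(ns, name string, ips []string) {
	fmt.Printf("SET service %s/%s clusterIPs=%v\n", ns, name, ips)
}
func (printerBackend) DeleteService(ns, name string) {
	fmt.Printf("DEL service %s/%s\n", ns, name)
}
func (printerBackend) SetEndpoints(ns, name string, ips []string) {
	fmt.Printf("SET endpoints %s/%s -> %v\n", ns, name, ips)
}
func (printerBackend) Sync() { fmt.Println("-- sync --") }

func main() {
	var b Backend = printerBackend{}
	b.SetService("default", "nginx", []string{"10.96.0.10"})
	b.SetEndpoints("default", "nginx", []string{"10.0.1.5", "10.0.2.7"})
	b.Sync()
}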





jay vyas

Nov 11, 2022, 11:47:23 AM
to kubernetes-sig-network
Per dan today in the KPNG mtng:
- there's already a kube-proxy github repo
- we could just use that
- one less new repo... for all the "kube-proxy" things that end up out of tree






Antonio Ojea

Nov 11, 2022, 11:56:31 AM
to jay vyas, kubernetes-sig-network
+1


Douglas Landgraf

Nov 11, 2022, 3:45:10 PM
to jay vyas, kubernetes-sig-network
/me nods +1

 




Bowei Du

Nov 11, 2022, 4:44:06 PM
to doug...@redhat.com, jay vyas, kubernetes-sig-network
The repo looks active; we may want to contact the existing users to understand what their use of it is.

Bowei

Antonio Ojea

Nov 11, 2022, 6:19:15 PM
to Bowei Du, doug...@redhat.com, jay vyas, kubernetes-sig-network

jay vyas

Jan 28, 2023, 8:17:16 AM
to kubernetes-sig-network
Hi everyone, figured we should close this thread out now - - - stoyocos has a new KEP which will iterate towards kube-proxy in staging, and that will solve the "first" problem defined here.
Then folks can start to hash out the interface.
