Hello SIG Network!
Today, I am thrilled to share with you a concept we've been working on in the community to create a new standard for lower-level networking within Kubernetes.
The concept is called KNI (Kubernetes Networking reImagined [Interface], a term originally coined by Tim Hockin back in 2016). Today, networking is set up and torn down by the container runtime. With KNI, networking is separated out into a new, modular, Kubernetes-specific KNI server called the network runtime, and the CRI functions that trigger networking are replaced by a new KNI gRPC API in the kubelet. This makes the code cleaner and simpler, and it lets cluster implementors replace the entire networking implementation in a modular, flexible way without changing the container runtime or core Kubernetes. Because KNI is more flexible than the CRI, it should also simplify future enhancements to Kubernetes networking capabilities.
Links:
KNI Presentation (links to the poc code changes are included)
KNI post by Doug Smith (this explains how to setup the demo and thanks Tomo for making this easier)
Hopeful next steps:
KEP + establish working group + community sync
Your feedback and thoughts are invaluable to us as we explore this path and see whether it helps Kubernetes grow and serve more use cases. We'll be talking about it at next week's SIG Network meeting; stay tuned for more updates, and feel free to reach out to us here on the mailing list or in #sig-network!
Hope you're having a great new year so far!
--
You received this message because you are subscribed to the Google Groups "kubernetes-sig-network" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-sig-ne...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kubernetes-sig-network/3c703905-0984-4c15-8dd0-1e1d86ed1084n%40googlegroups.com.
Something to note is that the CRI-API is not just for Kubernetes, while KNI's mission is to be Kubernetes-specific. One of the things people liked about KNI is its flexibility to be decoupled from the container runtime. That would still be possible if KNI were a separate service within the cri-api, but I'm not certain we gain anything from that?
This mail from last year has some thoughts on the consolidation.
Should the CRI include networking? Should it be the CNI? (google.com)
crictl. It is not a common purpose container runtime API for general use, and is intended to be Kubernetes-centric."
Hello, my responses are inline:
- There is a Pod lifecycle that is handled by the container runtime: the container runtime creates namespaces, calls CNI, pulls images, creates containers, ... This can be any project, not necessarily CRI-O or containerd; that is what the CRI API offers: "CRI is a plugin interface which enables kubelet to use a wide variety of container runtimes."
[zappa] Are you proposing a new container runtime? KNI aims to be responsible for networking, not everything else. Networking for Kubernetes can then be done in a single place.
- The kubelet cannot do network things to the Pods/containers created by the runtime, per the isolation principle Casey refers to. This also will not work if the runtime runs Pods as VMs, since the kubelet will not be able to access them directly.
[zappa] The kubelet can execute RPC calls, and it does. Are you assuming KNI has native network objects? Are you saying that KNI can never work with VMs? Networking should be a first-class citizen.
- Adding a new component/API that interacts with the container runtime in parallel, through a new communication channel, is the same decision OpenStack took with Neutron [1], and I think this is clearly something we don't want.
[zappa] What specifically did not work with Neutron? Are you assuming the CRI and KNI RPCs run in parallel? Where is this happening? And when you say "we", who is that?
My conclusion is that the kubelet communicates with the container runtime through CRI, and the network is configured at one point in time by the container runtime, so CRI is the only channel of communication we have with the network plugin. Creating an out-of-band channel by using CRDs or consuming the kubelet API is how projects are filling this gap today, and that is racy and hard to troubleshoot: a Pod creation call requires the network configuration step to perform new queries to the kubelet or apiserver, when this information should be part of the same operation.
[zappa] This makes some assumptions. The sequence is RunPodSandbox -> PodSandboxStatus -> AttachNetwork -> CreateContainers -> StartContainers. What is your specific concern here? Are you worried that a container could come up with no network? If AttachNetwork fails, the process stops.
- One of the main problems I see is that there is no concept of a network device in the OCI spec [2], only of block devices. If this concept existed, it would be very simple [3] to just use a device plugin [4] and the Container Device Interface [5].
[zappa] For the OCI spec, wouldn't that be for the low-level runtime, i.e. runc/Kata? I am not certain what is being proposed here. Is this to support VMs? We need further discussion around how to support VMs.
- I think we should think holistically about this. "The network" is not a thing; Pods and Nodes are things that have lifecycles, and Pods have network interfaces to communicate. How those interfaces are connected to each other is not the problem we are solving, because it is not a Kubernetes problem: Kubernetes does not manage infrastructure, Cluster API does [6]. The problem we need to solve is how to configure these network interfaces so projects don't have to keep using out-of-band communication.
[zappa] Networking should be a first-class citizen. I would like to hear your specific concerns here, since I believe assumptions are being made.