--
You received this message because you are subscribed to the Google Groups "kubernetes-sig-network" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-sig-ne...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kubernetes-sig-network/cc0e327f-46bd-45be-8770-d14300e4a84cn%40googlegroups.com.
Users don't configure networking; vendors, CNIs, and service meshes do. So you will always struggle to find user stories from end users around this. Users just expect the networking to be there and work.
That doesn't mean it's not a problem that needs solving, just that you're probably asking the wrong people (end users).
Following that argument, we should not have a Node object either. I agree that it is the network administrator who configures the network outside the cluster, but we still need a way to represent that network inside K8s, same as we do for Node.
Hi! Maybe gluing multiple clusters via mesh is just wrong.
On Fri, Apr 5, 2024, 15:35 Sandor Szuecs <sandor...@zalando.de> wrote:
> Hi! Maybe gluing multiple clusters via mesh is just wrong.

I would not say only 'multiple clusters' - but VMs and devices too. Mesh is not k8s-specific. And workloads are already 'glued' via the internet, with standard DNS for name discovery and well-established TLS (with or without client certs), JWT, etc.
There is a need to also support private VPCs in a consistent manner - including non-k8s workloads - and to represent this in the Pod, both in the CR status and as an interface.
'Mesh' is a vague term - what used to be called 'intranet', and distinct from the public Internet, but larger than a single k8s cluster.
On Sat, 6 Apr 2024 at 01:33, Costin Manolache <cos...@google.com> wrote:
> On Fri, Apr 5, 2024, 15:35 Sandor Szuecs <sandor...@zalando.de> wrote:
>> Hi! Maybe gluing multiple clusters via mesh is just wrong.
>
> I would not say only 'multiple clusters' - but VMs and devices too. Mesh is not k8s-specific. And workloads are already 'glued' via the internet, with standard DNS for name discovery and well-established TLS (with or without client certs), JWT, etc.

That's what we do and it works well. There is no coupling between clusters, and pods do not know about it.

> There is a need to also support private VPCs in a consistent manner - including non-k8s workloads - and to represent this in the Pod, both in the CR status and as an interface.

1) I don't see a problem, because we do exactly the same without a mesh and without the pod having more than one network.
2) You can also use a CoreDNS template to direct traffic to a VPC endpoint, a udp/tcp/WireGuard/mesh proxy, or an HTTP router (e.g. Skipper, routing to VPC endpoints and everything else) that does the gluing, without the pod needing to know about multiple interfaces on multiple networks. We do this in >200 clusters with 150k pods for internal ingress.
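As a rough sketch of the CoreDNS-template idea above: a `template` plugin stanza can answer queries for an internal zone with a CNAME pointing at a VPC endpoint, so pods reach private services over their single network. All names here (the zone, the vpce hostname) are hypothetical, not from the original thread:

```
# Hypothetical Corefile fragment: names under vpc.example.internal
# resolve to a (made-up) VPC endpoint hostname; everything else is
# forwarded to the normal resolvers.
vpc.example.internal:53 {
    template IN A vpc.example.internal {
        match "^(?P<svc>[^.]+)\.vpc\.example\.internal\.$"
        answer "{{ .Name }} 60 IN CNAME vpce-0abc123.example.vpce.amazonaws.com."
        fallthrough
    }
    forward . /etc/resolv.conf
}
```

The pod just does an ordinary DNS lookup; the redirection to the VPC endpoint (or to a proxy/router in front of it) happens entirely in DNS and routing, with no extra interfaces in the pod.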
> 'Mesh' is a vague term - what used to be called 'intranet', and distinct from the public Internet, but larger than a single k8s cluster.

The term I don't mind, but needing multiple IPs or, worse, multiple devices in a pod is what makes things complicated. There are ways to do this without that. I was responsible for production data centers with the same network split (storage vs. LB net), but TBH it was a mistake. Better to have only one LACP bond (pod network routed there) and a separate link for iLO (no pod network access, admin only).