ovn-kubernetes multi-homing feature


Miguel Duarte de Mora Barroso

Jun 9, 2021, 10:22:30 AM
to yu...@nvidia.com, giri...@gmail.com, ovn-kub...@googlegroups.com
Hello,

Let me first introduce myself: I'm Miguel, a software developer at Red Hat, based in Madrid, Spain. I currently work on the networking team of KubeVirt (a virtualization plugin for Kubernetes).

I've recently come across your PR [0], which adds multi-homing to ovn-kubernetes. We are very keen on having and using this feature, so I would like to ask what your current plans are for the aforementioned PR, and to offer help with whatever you need to get it merged.

As I've said in [1], the use case we're interested in is not exactly what you're proposing: we're looking for flat L2 overlays, optionally connected to the gateway routers on the nodes. Nevertheless, your work brings us a lot closer to it. My idea would be to help you achieve your goal, and then build on top of it to achieve KubeVirt's.

Let me share how I see these two topology modes co-existing: we'd expose another knob in the network-attachment definition indicating whether we want a flat overlay or your routed topology. Since yours would be the first one implemented, it would be the default.

Depending on that knob, ovn-kubernetes master would create either the infrastructure you're adding in your PR *or* a simple logical switch (optionally connected to the gateway routers on the nodes).

The pod lifecycle part would be similar; at first glance, all it would have to do is compute the logical switch name differently (including the node name or not) depending on the new knob in the configuration.
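
To make this concrete, here is a rough sketch of what such a network-attachment-definition could look like; the "topology" knob and its values are something I'm inventing here just to illustrate the shape of the API, not anything from the existing PR:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ovn-flat-l2
spec:
  # "topology" is the hypothetical knob: "routed" would be the default
  # (the per-node topology of the PR), "layer2" a single flat switch
  config: '{
  "cniVersion": "0.4.0",
  "name": "ovn-flat-l2",
  "type": "ovn-k8s-cni-overlay",
  "topology": "layer2",
  "net_cidr": "10.200.0.0/16",
  "mtu": 1400
  }'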

I'd love to hear your thoughts about this.

Do you think we could meet at the ovn-kubernetes community meeting to discuss this further?

Thanks in advance,
Miguel


Girish M G (GmG)

Jun 10, 2021, 11:39:05 AM
to Miguel Duarte de Mora Barroso, yu...@nvidia.com, ovn-kub...@googlegroups.com
Hello Miguel,

The timing of your question could not be better. We are planning to present this at next week's OVN K8s community meeting. We have the code ready and are testing it internally.

We also need a single L2 logical switch as the OVN logical network for one of the Pod's network attachments.

For example, if the Pod needs direct connectivity to the Internet, we will need a single logical switch to capture the public subnet. Our thinking was to use the OVN K8s networking configuration JSON file to capture the network topology type/kind, and then have the OVN K8s CNI code build the correct topology:

    +--------------------------------+          
    |POD                             |          
    |                                |          
    |                                |          
    |  +------+            +------+  |          
    +--+ eth0 +------------+ net1 +--+          
       +---+--+            +---+--+            
           |                   |                
           |                   |                
           |                   |                
           v               +---v---------------+
    .-----------.          |OVN Logical Switch |
 ,-'             '-.       |      Public       |
;    OVN Primary    :      |   24.50.10.0/24   |
:  Logical Network  ;      +-------------------+
 \                 /                            
  '-.           ,-'                            
     `---------'           
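
For reference, attaching the Pod to that public switch would use the standard Multus mechanism; a sketch, where the "ovn-public" NAD name and the image are illustrative only:

apiVersion: v1
kind: Pod
metadata:
  name: web
  annotations:
    # net1 in the diagram comes from this secondary attachment
    k8s.v1.cni.cncf.io/networks: default/ovn-public
spec:
  containers:
  - name: web
    image: ubuntu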
                    

Dan Williams

Jun 10, 2021, 11:45:45 AM
to Girish M G (GmG), Miguel Duarte de Mora Barroso, yu...@nvidia.com, ovn-kub...@googlegroups.com
On Thu, 2021-06-10 at 08:38 -0700, Girish M G (GmG) wrote:
> Hello Miguel,
>
> The timing of your question could not be better. We are planning to
> present this at next week's OVN K8s community meeting. We have the
> code ready and are testing it internally.
>
> We also need a single L2 logical switch as the OVN logical network
> for one of the Pod's network attachments.

I have the same concerns as when this was brought up 2 years ago...
this increases the complexity of ovnkube, the testing matrix, etc. I'm
also sure that this feature will only get more complex in the future.

Are we sure it needs to be in ovn-kubernetes? Or would it be better as
a secondary CNI plugin run with Multus? Why would it need to be part of
ovn-kubernetes itself?

Dan


Girish Moodalbail

Jun 10, 2021, 1:01:39 PM
to Dan Williams, Girish M G (GmG), Miguel Duarte de Mora Barroso, Yun Zhou, ovn-kub...@googlegroups.com
Hello Dan,

Additional OVN networks for a Pod are still provisioned through Multus calling the OVN K8s CNI; Multus will call the OVN K8s CNI once for each OVN network attached to the Pod.

Say a Pod needs an additional storage network on top of the default OVN network. We would define an ovn-storage network-attachment-definition like the one below:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ovn-storage
spec:
  config: '{
  "cniVersion": "0.4.0",
  "name": "ovn-storage",
  "primary": false,
  "net_cidr": "10.193.0.0/16/26",
  "mtu": 9000,
  "type": "ovn-k8s-cni-overlay",
  "logFile": "/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log",
  "logLevel": "5",
  "logfile-maxsize": 100,
  "logfile-maxbackups": 5,
  "logfile-maxage": 5
  }'

and pod spec like below

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu2
  annotations:
    k8s.v1.cni.cncf.io/networks: default/ovn-storage
spec:
  containers:
  - name: ubuntu3

The Multus config has `ovn-primary` as the default network.

cni-conf.json: |
  {
    "cniVersion": "0.4.0",
    "name": "multus-cni-network",
    "type": "multus",
    "logLevel": "debug",
    "logFile": "/var/log/multus.log",
    "logToStderr": false,
    "systemNamespaces": ["kube-system"],
    "delegates": [
      {
        "cniVersion": "0.4.0",
        "type": "ovn-k8s-cni-overlay",
        "name": "ovn-primary",
        "logFile": "/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log",
        "logLevel": "5",
        "logfile-maxsize": 100,
        "logfile-maxbackups": 5,
        "logfile-maxage": 5
      }
    ],
    "confDir": "/etc/cni/net.d",
    "readinessindicatorfile": "/etc/cni/net.d/10-ovn-kubernetes.conf",
    "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig"
  }

With this, Multus will
- first set up the ovn-primary network through ovn-k8s-cni-overlay and the ovn-primary configuration
- then set up the ovn-storage network through ovn-k8s-cni-overlay and the ovn-storage configuration

Regards,
~Girish


Han Zhou

Jun 11, 2021, 3:03:02 AM
to Girish Moodalbail, Dan Williams, Girish M G (GmG), Miguel Duarte de Mora Barroso, Yun Zhou, ovn-kub...@googlegroups.com
Hi folks,

I have some questions regarding network policy on the "flat" secondary network.
1) Since each pod may have multiple interfaces (and IPs), how would we apply a network policy such as "pods with label A can access pods with label B on TCP port 80"? (This question isn't specific to "flat" networks; it applies to Multus + network policy in general.)
2) For egress policies, ovn-k8s currently uses OVN's "to-lport" direction so that the policy is applied AFTER the cluster VIPs are converted to individual backend IPs. The per-node logical switch made this possible: when the packet exits the L2 pipeline on the local node, the ACLs for the "to-lport" direction are examined locally. However, with a flat L2 switch that spans multiple nodes, that implementation would no longer be an "egress" policy, because it would be enforced on the remote node, rendering the egress policy useless.

Thanks,
Han

Miguel Duarte de Mora Barroso

Jun 23, 2021, 7:00:33 AM
to Han Zhou, Girish Moodalbail, Dan Williams, Girish M G (GmG), Yun Zhou, ovn-kub...@googlegroups.com
On Fri, Jun 11, 2021 at 9:03 AM Han Zhou <zho...@gmail.com> wrote:
> Hi folks,
>
> I have some questions regarding network policy on the "flat" secondary network.
> 1) Since each pod may have multiple interfaces (and IPs), how would we apply a network policy such as "pods with label A can access pods with label B on TCP port 80"? (This question isn't specific to "flat" networks; it applies to Multus + network policy in general.)

The use cases we're considering (for KubeVirt) do not require network policies.

As such, I was hoping we could proceed under the assumption that network policies apply only to the primary interface. I'm not sure this holds for @giri...@gmail.com's use cases.
 
> 2) For egress policies, ovn-k8s currently uses OVN's "to-lport" direction so that the policy is applied AFTER the cluster VIPs are converted to individual backend IPs. The per-node logical switch made this possible: when the packet exits the L2 pipeline on the local node, the ACLs for the "to-lport" direction are examined locally. However, with a flat L2 switch that spans multiple nodes, that implementation would no longer be an "egress" policy, because it would be enforced on the remote node, rendering the egress policy useless.
>
> Thanks,
> Han

> On Thu, Jun 10, 2021 at 10:01 AM Girish Moodalbail <gmood...@nvidia.com> wrote:
>> Hello Dan,
>>
>> Additional OVN networks for a Pod are still provisioned through Multus calling the OVN K8s CNI; Multus will call the OVN K8s CNI once for each OVN network attached to the Pod.
>>
>> Say a Pod needs an additional storage network on top of the default OVN network. We would define an ovn-storage network-attachment-definition like the one below:
>>
>> apiVersion: "k8s.cni.cncf.io/v1"
>> kind: NetworkAttachmentDefinition
>> metadata:
>>   name: ovn-storage
>> spec:
>>   config: '{
>>   "cniVersion": "0.4.0",
>>   "name": "ovn-storage",
>>   "primary": false,
>>   "net_cidr": "10.193.0.0/16/26",
>>   "mtu": 9000,
>>   "type": "ovn-k8s-cni-overlay",
>>   "logFile": "/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log",
>>   "logLevel": "5",
>>   "logfile-maxsize": 100,
>>   "logfile-maxbackups": 5,
>>   "logfile-maxage": 5
>>   }'

This NAD has much of what our use cases require (the subnet info).

What I'm missing is a knob for selecting the topology (when it is not the primary network), and another knob to indicate whether we want the network connected to the outside.
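
Something along these lines, perhaps (both field names below are made up, purely to illustrate the two knobs):

spec:
  # hypothetical: "topology" selects a flat L2 switch vs. the routed,
  # per-node topology; "connect_external" asks for the switch to be
  # plumbed to the gateway routers on the nodes
  config: '{
  "cniVersion": "0.4.0",
  "name": "ovn-storage",
  "type": "ovn-k8s-cni-overlay",
  "net_cidr": "10.193.0.0/16/26",
  "topology": "layer2",
  "connect_external": true
  }'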