Hello all,
The lb_force_snat_ip option on the GR rewrites the source address of packets arriving from the external network to the GR's internal IP in the 100.64.X.Y/29 subnet. So by the time a packet reaches the destination pod on the overlay network, the original source IP address is lost.
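For reference, this is roughly how that option is set on the gateway router; the router name and address below are placeholders I made up for illustration:

```shell
# Illustrative sketch: force-SNAT load-balanced traffic on a node's
# gateway router to its join-switch IP in the 100.64.0.0/16 range.
# "GR_node1" and "100.64.0.3" are hypothetical names/values.
ovn-nbctl set logical_router GR_node1 options:lb_force_snat_ip="100.64.0.3"

# Any packet load-balanced on GR_node1 now leaves the router with
# source IP 100.64.0.3, so the pod never sees the client's IP.
ovn-nbctl list logical_router GR_node1
```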
Now, how would one define the Ingress Network Policy for that Pod? Say, these are the configurations:
K8s Node Subnets:
10.8.48.0/24
GR to DR Join Switch Subnets:
100.64.0.0/16 (divided into /29 chunks)
Cluster Subnet:
192.168.0.0/16
Consider the workload below.
Cluster Subnet
192.168.0.0/16
+------------------------------+
|Web Deployment (replicas 3) |
|+--------++--------++--------+|
|| web || web || web ||
|| pod1 || pod2 || pod3 ||
|| 4.5 || 5.6 || 7.9 ||
|+--------++--------++--------+|
+--------------^---------------+
|
|
|
+--------------+---------------+
| Kubernetes Service of |
| Type LoadBalancer |
| (and NodePort Service) |
+------------------------------+
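A minimal sketch of that Service could look like the following; the name, port, and the app=web selector label are assumptions on my part:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: web         # assumes the web pods carry this label
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```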
So, if a NetworkPolicy for the web pods above is defined with the ingress as below...
ingress:
- from:
  - ipBlock:
      cidr: 10.8.48.0/24
  ports:
  - protocol: TCP
    port: 80
...it is of no use, since when the packet arrives at one of the pods, the source IP will be from the
100.64.0.0/16 subnet. If we fix the ingress rule as below...
ingress:
- from:
  - ipBlock:
      cidr: 100.64.0.0/16
  ports:
  - protocol: TCP
    port: 80
... it will work, but then the tenants need to know about the
100.64.0.0/16 subnet, which was supposed to be abstracted away from them.
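For completeness, the working-but-leaky rule as a full manifest would be something like this; the policy name and the app=web label are assumptions on my part:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-join-subnet   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: web                   # assumed label on the web pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 100.64.0.0/16     # the join subnet tenants were never meant to see
    ports:
    - protocol: TCP
      port: 80
```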
Also, say you want to allow access to the pods only from a certain section of the DC (some subnet, say
172.16.1.0/24). Then we will not be able to do so either, since all the packets get SNATed into
100.64.0.0/16, correct?
Am I missing something here?
Regards,
~Girish