Question about connecting to external VM from a POD, when that VM is on the same network as the node


Bruno Vilhena

Sep 29, 2016, 9:10:18 AM
to Kubernetes developer/contributor discussion
I've got a node (10.55.1.4) where my service pod runs, and from within that node VM I can ping an elasticsearch cluster (not containerised) which runs on a 10.66.1.4 VM. However, from within the pod itself, I don't have connectivity to that VM. Is there any way of configuring this?

Thanks

Vishnu Kannan

Sep 29, 2016, 10:50:14 AM
to Bruno Vilhena, Kubernetes developer/contributor discussion
Take a look at the "Services without selectors" section in this doc
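Roughly, the pattern there is two objects: a Service with no selector, plus an Endpoints object whose metadata.name matches the Service name and carries the external IPs. A minimal sketch (the name "external-es" is illustrative):

{
    "kind": "List",
    "apiVersion": "v1",
    "items": [
        {
            "kind": "Service",
            "apiVersion": "v1",
            "metadata": { "name": "external-es" },
            "spec": {
                "ports": [
                    { "protocol": "TCP", "port": 9200, "targetPort": 9200 }
                ]
            }
        },
        {
            "kind": "Endpoints",
            "apiVersion": "v1",
            "metadata": { "name": "external-es" },
            "subsets": [
                {
                    "addresses": [ { "ip": "10.66.1.4" } ],
                    "ports": [ { "port": 9200 } ]
                }
            ]
        }
    ]
}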


Bruno Vilhena

Sep 29, 2016, 1:39:59 PM
to Kubernetes developer/contributor discussion, bruno.rv...@gmail.com
Hi Vishnu, I might be missing something... I did try that, using the following config:

kubectl create -f service.yaml

where my yaml file is:

{
    "kind": "service",
    "apiVersion": "v1",
    "metadata": {
        "name": "TT"
    },
    "subsets": [
        {
            "addresses": [
                { "ip": "10.66.1.4" }
            ],
            "ports": [
                { "port": 9200 }
            ]
        }
    ]
}

However, I see the endpoint created and my node (10.55.1.10) can connect to elasticsearch, but if I go into the previously deployed pod with my application, it still cannot connect to 10.66.1.4.


Vishnu Kannan

Sep 29, 2016, 2:28:56 PM
to Bruno Vilhena, Kubernetes developer/contributor discussion
From within your pod, you'd reach the service and not its external endpoint directly. This helps isolate your app against external endpoint changes. In your case, can you try accessing the `serviceName` (via DNS) from within the pod?
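For example, something like this from inside the pod (note that Service names must be lowercase DNS labels, so "TT" would need to be e.g. "tt"; the default namespace and cluster.local domain are assumed):

curl http://tt.default.svc.cluster.local:9200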
 


Bruno Vilhena

Sep 29, 2016, 3:34:26 PM
to Kubernetes developer/contributor discussion, bruno.rv...@gmail.com
Hi Vishnu, I can't; this is an elasticsearch cluster running on a VM, not in the kubernetes cluster. It only has the private network IP address, so there is no service/DNS name resolution.

The elasticsearch runs on an Azure VM, which is part of the same private network as the node where the pods run.

Vishnu Kannan

Sep 29, 2016, 3:37:46 PM
to Bruno Vilhena, Kubernetes developer/contributor discussion
By "service" I was referring to the k8s service that you had created to represent your external elasticsearch cluster. 

Bruno Vilhena

Sep 29, 2016, 4:42:22 PM
to Kubernetes developer/contributor discussion, bruno.rv...@gmail.com

I see. I created a service to talk to the elastic backend, and one for my app pods.

If I curl 10.0.130.179:9200 from the Node box, all is good: I get a connection to elastic. However, if I bash into the pod for my app (which needs the connectivity to elastic) and curl 10.0.130.179:9200, no connection is established.

Name: elas
Namespace: default
Labels: <none>
Selector: <none>
Type: ClusterIP
IP: 10.0.130.179
Port: <unset> 9200/TCP
Endpoints: 10.6.3.4:9200
Session Affinity: None
No events.

Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Selector: <none>
Type: ClusterIP
IP: 10.0.0.1
Port: https 443/TCP
Endpoints: 10.55.1.10:443
Session Affinity: ClientIP
No events.

Name: spiffy
Namespace: default
Labels: run=spiffy
Selector: <none>
Type: NodePort
IP: 10.0.47.30
Port: http 8081/TCP
NodePort: http 30860/TCP
Endpoints: <none>
Session Affinity: None
(output of kubectl describe services)

Vishnu Kannan

Sep 29, 2016, 4:47:43 PM
to Bruno Vilhena, Kubernetes developer/contributor discussion, Tim Hockin
+Tim Hockin


Rodrigo Campos

Sep 29, 2016, 6:41:26 PM
to Bruno Vilhena, Kubernetes developer/contributor discussion
Does elasticsearch accept connections from other networks? (In AWS the hosted elasticsearch is a PITA and has those issues.) Or is any authentication needed?

Also, can you ping the nodes or access some other hosted service? Is it just elasticsearch?

Bruno Vilhena

Sep 29, 2016, 6:54:39 PM
to Kubernetes developer/contributor discussion, bruno.rv...@gmail.com
Hi,

I think it's not elastic, as I should still be able to ping the private IP address of the VM running elastic from within the pods (which I can do from the k8s node VM).

No auth is required.

I can ping the nodes only, and the nodes themselves can ping the elastic boxes; I can't, however, ping any other VMs that are not part of the nodes' subnet.

So this is not related to elastic; it's either related to the IP range of the pods, or the node iptables not routing my requests properly from the pod.

Rodrigo Campos

Sep 29, 2016, 7:09:28 PM
to Bruno Vilhena, Kubernetes developer/contributor discussion

Oh, cool. So, what does ip route show on the pods and nodes? Does traceroute give more info?

Don't know how the networking is configured on Azure, but that is the key, of course. Maybe it's just not adding some rules to the Azure internal network or something.
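I.e., something like this, on the node and then from inside the pod (assuming the pod image ships these tools):

# on the node
ip route
traceroute 10.66.1.4

# from inside the pod
kubectl exec -it <pod-name> -- ip route
kubectl exec -it <pod-name> -- traceroute 10.66.1.4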

Tim Hockin

Sep 30, 2016, 12:29:14 AM
to Rodrigo Campos, Bruno Vilhena, Kubernetes developer/contributor discussion
Is this running with flannel or weave or ... or with some native
networking (I don't know Azure networking)?

Can a pod ping *any* VM other than its own? Can it ping other VMs in
the kube cluster? Outside the kube cluster? How about google.com ?

Cole Mickens

Sep 30, 2016, 1:26:56 AM
to Tim Hockin, Rodrigo Campos, Bruno Vilhena, Kubernetes developer/contributor discussion
If you're on Azure, can you let me know how you booted the cluster, in addition to Tim's question about whether you're using an overlay vs native networking (with the new cloudprovider support)?

It sounds as if you've placed a headless Service in front of the external Elasticsearch cluster. Is there any chance this is a cluster you deployed with kubernetes-anywhere and you left it deploying a 1.4.0-beta.2 cluster? If so, you may be hitting this bug which has since been fixed: https://github.com/kubernetes/kubernetes-anywhere/issues/232


Bruno Vilhena

Sep 30, 2016, 2:48:39 AM
to Kubernetes developer/contributor discussion, tho...@google.com, rod...@sdfg.com.ar, bruno.rv...@gmail.com
@Rodrigo - I can see routes defined for the node and master boxes on Azure, but yeah, I'll try your suggestion of checking the traceroutes. I agree it does look like my setup on Azure is flawed somehow.

@Tim - I think this is native, not flannel. The pod can ping any VM that was created for the kubernetes cluster under subnet 10.55.1.0/24; if I try to ping anything on a different subnet from the pod (even subnets on the same vnet as the k8s cluster), it fails. The node, however, can ping everything with no problems. I've even tried adding different subnets to the routing table created with the cluster.

@Cole - I am using kubernetes-anywhere to boot up the cluster. I'm not entirely sure, but to me it looks like the native network; I could be wrong. I've tried connecting by deploying a headless (no selector) service in kubernetes that routes to elastic, and I've tried connecting straight to the elastic VMs; no luck in either case.

Additional description of how I boot up.

I use kubernetes-anywhere with an address prefix of 10.55.0.0/16, and I create a subnet for the nodes and master, which is 10.55.1.0/24.

After the cluster boots up I use the Azure node SDK to create a gateway and two VPN connections to an existing vnet (where the elasticsearch VMs live). From this point on any VM on the kubernetes cluster can connect to elastic.

I appreciate all of your help, guys.




Tim Hockin

Sep 30, 2016, 3:01:45 AM
to Bruno Vilhena, Kubernetes developer/contributor discussion, Rodrigo Campos
It may be that you can only cross subnets (I assume that is a formal
construct in Azure) with VM IPs (GCE has this restriction for
edge-NATs) and you need to tweak the `non-masquerade-cidr` flag to
just your local subnet? It's a wild guess...
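I.e., something along these lines on the kubelet command line (subnet illustrative):

kubelet ... --non-masquerade-cidr=10.55.1.0/24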

Bruno Vilhena

Sep 30, 2016, 6:13:22 AM
to Kubernetes developer/contributor discussion, bruno.rv...@gmail.com, rod...@sdfg.com.ar
I have an update: it seems that adding hostNetwork: true to my service deployment yaml configuration makes it work. So it looks like the pod is sharing the host's network? Any issues with doing this?
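For reference, this is the field I mean, in pod-spec terms (a sketch only; the name and image are illustrative):

{
    "kind": "Pod",
    "apiVersion": "v1",
    "metadata": { "name": "my-api" },
    "spec": {
        "hostNetwork": true,
        "containers": [
            { "name": "api", "image": "my-api:latest" }
        ]
    }
}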

Bruno Vilhena

Sep 30, 2016, 6:45:04 AM
to Kubernetes developer/contributor discussion, bruno.rv...@gmail.com, rod...@sdfg.com.ar
Yeah, this doesn't work, because it doesn't allow me to deploy more than one pod on the same host.

Tim Hockin

Sep 30, 2016, 11:25:31 AM
to Bruno Vilhena, Kubernetes developer/contributor discussion, Rodrigo Campos
host* is always a last resort - use with caution.

I still have my money on the non-masquerade-cidr

Bruno Vilhena

Sep 30, 2016, 1:24:13 PM
to Kubernetes developer/contributor discussion, bruno.rv...@gmail.com, rod...@sdfg.com.ar
I will have a look into the Azure documentation, to see if there's anything that can be done about the non-masquerade-cidr.

I have posted a better description of my issue on the kubernetes-anywhere project:

I'm facing an issue right now with an Azure/Kubernetes migration, which is related to the following:
We have a VNET1 (ex: 10.66.0.0/16) and a VNET2 (ex: 10.55.0.0/16); within each of the VNETs we have a gateway, and we connect both VNETs using these gateways and VPN connections.

VNET1 has an elasticsearch cluster, hosted on subnet 10.66.1.0/24, and VNET2 has a kubernetes cluster hosted on 10.55.1.0/24 (which is provisioned by kubernetes-anywhere).

VNET1 has 3 client boxes for elastic (ex: 10.66.1.4, 10.66.1.5 and 10.66.1.6), and VNET2 has 2 kubernetes nodes (ex: 10.55.1.4 and 10.55.1.5) as well as a master.

The VNET2 kubernetes nodes run kubernetes pods (which have a containerized API that needs a connection to the elasticsearch cluster).

Our kubernetes nodes (ex: VNET2 - 10.55.1.4 and 10.55.1.5) can connect to the elasticsearch nodes (ex: VNET1 - 10.66.1.4, 10.66.1.5 and 10.66.1.6).

However, the pods/containers which run on these nodes, and are assigned to their own subnets (ex: 10.244.0.0/24, 10.244.1.0/24), cannot connect to the elasticsearch nodes/VMs, or to any VM outside of the nodes' subnet for that matter, regardless of which VNET that VM sits on.

Is there something misconfigured in the networking model? Should I be using an overlay network? If I deploy my pods with the shared host network, I get a connection to elastic, but I can only run one such pod per host, otherwise they conflict.

Bruno Vilhena

Sep 30, 2016, 2:56:12 PM
to Kubernetes developer/contributor discussion, bruno.rv...@gmail.com, rod...@sdfg.com.ar

Bit of an update on this one. If I create a completely new VM/network interface on a different subnet in the k8s cluster VNET, I am now able to have the pods ping it, as long as I add that subnet to the route tables generated on Azure.


I'm still not able to ping a VM that is on the other VNET (connected by gateway vpn connections).


So the node can ping these VMs, but the container still cannot.


I tried adding the non-masquerade-cidr flag on the kubelet service, set to 172.0.0.0/8, and restarted the service, but still no luck.

Tim Hockin

Sep 30, 2016, 3:51:55 PM
to Bruno Vilhena, Kubernetes developer/contributor discussion, Rodrigo Campos
I would try limiting the CIDR to just your local VNet?


Bruno Vilhena

Sep 30, 2016, 4:26:58 PM
to Kubernetes developer/contributor discussion, bruno.rv...@gmail.com, rod...@sdfg.com.ar
The kubelet non-masquerade-cidr flag, you mean?

Bruno Vilhena

Sep 30, 2016, 4:39:58 PM
to Kubernetes developer/contributor discussion, bruno.rv...@gmail.com, rod...@sdfg.com.ar
azureuser@sayt-dev1-master:~$ sudo iptables -n -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:68

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER-ISOLATION  all  --  0.0.0.0/0            0.0.0.0/0
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0

Chain DOCKER (1 references)
target     prot opt source               destination

Chain DOCKER-ISOLATION (1 references)
target     prot opt source               destination
RETURN     all  --  0.0.0.0/0            0.0.0.0/0

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination
DROP       all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-SERVICES (1 references)
target     prot opt source               destination
REJECT     tcp  --  0.0.0.0/0            10.0.159.92          /* default/elasticsearch: has no endpoints */ tcp dpt:9200 reject-with icmp-port-unreachable
REJECT     tcp  --  0.0.0.0/0            10.0.148.42          /* default/spiffy:http has no endpoints */ tcp dpt:8081 reject-with icmp-port-unreachable

(master iptables)

Ilya Dmitrichenko

Sep 30, 2016, 6:08:34 PM
to Bruno Vilhena, Kubernetes developer/contributor discussion, rod...@sdfg.com.ar
Hey Bruno,



You might want to try using an overlay; it can simplify connectivity issues like this. Not sure about flannel, but Weave would work around this quite well.

Just to be clear, are you saying that packets from different pods on the same host turn up with the same src address?

Bruno Vilhena

Oct 1, 2016, 4:07:09 AM
to Kubernetes developer/contributor discussion, bruno.rv...@gmail.com, rod...@sdfg.com.ar
Hi,

Not sure I understand your last question. The issue is that the pods are not able to ping or connect to anything outside the VNET, even though there are existing VPN gateway connections and the host can connect to those machines. As Tim and Cole pointed out, this is a network configuration issue; I've just not been able to sort it, probably due to my limited knowledge of networking.

I'll keep playing around with it, and if I get to a point where I'm absolutely sure I'll get nowhere, then I'll probably have to think of using an overlay solution.

Ilya Dmitrichenko

Oct 1, 2016, 4:17:47 AM
to Bruno Vilhena, Kubernetes developer/contributor discussion, rod...@sdfg.com.ar
Bruno,

I was just trying to interpret what you meant by "I get connection to elastic, but I can only share one pod per host, otherwise they conflict".

Cheers,
Ilya 

Bruno Vilhena

Oct 1, 2016, 7:26:30 AM
to Kubernetes developer/contributor discussion, bruno.rv...@gmail.com, rod...@sdfg.com.ar
Regarding that, it only happens when I set the pod yaml to use the hostNetwork option. This basically shares the same network configuration as the host is using, which means you can only have one pod assigned to that host with such a configuration, otherwise you have overlapping ports. So this is not a good solution.

The proper solution, I reckon, involves iptables rules that enable routing from the 172.0.0.0/8 range the pods get assigned to the VM on the other vnet.

Tim Hockin

Oct 1, 2016, 2:18:58 PM
to Bruno Vilhena, Kubernetes developer/contributor discussion, Rodrigo Campos
To explain the non-masquerade-cidr flag: Some network environments
"know" what IPs are in use, and only allow those IPs to cross network
boundaries. GCE has this rule for the public IP NAT. I am
hypothesizing that maybe VNet has the same rule. To dodge this rule
you have to make it look like traffic from the pod is actually from
the node (which is really unfortunate because you lose the source IP).
The way to do this is called IP masquerade. Kubelet configures IP
masquerade for all connections EXCEPT the non-masquerade-cidr. So set
that flag to the largest range you can which is known to not need
masquerade.
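In iptables terms the effect is roughly a nat-table rule of this shape (simplified sketch; the exact rule and chain the kubelet installs vary by version):

# SNAT traffic leaving the node, except when the destination falls
# inside the non-masquerade CIDR
iptables -t nat -A POSTROUTING ! -d 10.55.0.0/16 -m addrtype ! --dst-type LOCAL -j MASQUERADE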


Bruno Vilhena

Oct 1, 2016, 5:02:27 PM
to Kubernetes developer/contributor discussion, bruno.rv...@gmail.com, rod...@sdfg.com.ar
Hi Tim, thanks,

the VMs in my VNET1 are on the 10.6.0.0/16 range.

the VMs in my kubernetes VNET are on the 10.55.0.0/16 range.



"ip addr show" inside the container shows the following:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 0a:58:0a:f4:00:12 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.18/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2c7a:93ff:fe51:8658/64 scope link 
       valid_lft forever preferred_lft forever



"ip addr show" inside the VM shows the following:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0d:3a:23:77:ca brd ff:ff:ff:ff:ff:ff
    inet 10.55.1.10/24 brd 10.55.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20d:3aff:fe23:77ca/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:78:36:25:1e brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever


I tried messing around with the kubelet flag for non-masquerade. From what I understood, the non-masquerade-cidr ought to be anything that doesn't cover 10.6.0.0/16 or 10.55.0.0/16, but covers 10.244.0.0/16? Is this correct? I'm not sure I can do that without a list of CIDR ranges.

Tim Hockin

Oct 1, 2016, 5:15:15 PM
to Bruno Vilhena, kuberne...@googlegroups.com, Rodrigo Campos

For the VMs in your Kube vnet, set the flag to 10.55.0.0/16. You said containers can ping VMs in the same VNET, right?



Bruno Vilhena

Oct 1, 2016, 5:45:23 PM
to Kubernetes developer/contributor discussion, bruno.rv...@gmail.com, rod...@sdfg.com.ar
Tried it, but no luck.

/hyperkube kubelet --address=0.0.0.0 --allow-privileged=true --cloud-provider=azure --enable-server --enable-debugging-handlers --kubeconfig=/srv/kubernetes/kubeconfig.json --config=/etc/kubernetes/manifests --cluster-dns=10.0.0.10 --cluster-domain=cluster.local --v=2 --api-servers=http://localhost:8080 --register-schedulable=false --cloud-config=/etc/kubernetes/azure.json --non-masquerade-cidr=10.55.0.0/16

Tim Hockin

Oct 1, 2016, 5:55:44 PM
to Bruno Vilhena, kuberne...@googlegroups.com, Rodrigo Campos

Ok, the next step is to tcpdump in the root namespace of your VM while a pod tries to access a VM in another vnet. I want to know how far the packets are going and what is coming back.
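Something like this on the node, assuming eth0 and the elasticsearch VM IP from your earlier messages:

# run in the node's root network namespace while pinging from the pod
sudo tcpdump -ni eth0 host 10.6.3.4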



Bruno Vilhena

Oct 1, 2016, 6:27:07 PM
to Kubernetes developer/contributor discussion, bruno.rv...@gmail.com, rod...@sdfg.com.ar
Hi Tim, 

So if I tcpdump on the VM I'm trying to access (the one on the other VNET), and I grep for the IP address of the pod I'm pinging from, I actually see the ping in the dump:

From pod 10.244.0.18 (on the kubernetes VNET) I'm pinging the elasticsearch backend 10.6.3.4, on the other subnet. Here is the dump:

listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
23:24:05.690502 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 34, length 64
23:24:05.690517 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 34, length 64
23:24:06.691229 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 35, length 64
23:24:06.691241 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 35, length 64
23:24:07.691334 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 36, length 64
23:24:07.691357 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 36, length 64
23:24:08.692566 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 37, length 64
23:24:08.692584 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 37, length 64
23:24:09.693477 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 38, length 64
23:24:09.693490 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 38, length 64
23:24:10.695084 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 39, length 64
23:24:10.695105 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 39, length 64
23:24:11.695285 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 40, length 64
23:24:11.695299 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 40, length 64
23:24:12.696660 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 41, length 64
23:24:12.696684 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 41, length 64
23:24:13.698257 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 42, length 64
23:24:13.698277 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 42, length 64
23:24:14.698576 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 43, length 64
23:24:14.698591 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 43, length 64
23:24:15.699834 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 44, length 64
23:24:15.699859 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 44, length 64
23:24:16.700761 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 45, length 64
23:24:16.700775 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 45, length 64
23:24:17.702592 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 46, length 64
23:24:17.702604 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 46, length 64

Tim Hockin

Oct 1, 2016, 7:04:15 PM
to Bruno Vilhena, kuberne...@googlegroups.com, Rodrigo Campos

So... it is pinging and getting a reply... but you assert the container is not seeing the ping response?

Can you tcpdump from inside the container and see what that says?



Bruno Vilhena

Oct 1, 2016, 8:06:15 PM
to Kubernetes developer/contributor discussion, bruno.rv...@gmail.com, rod...@sdfg.com.ar
Yeah, it took me a bit longer because I had to build an image with tcpdump available (the pod now has IP 10.244.1.15):

The tcpdump on the pod when pinging from 10.6.3.4 (the elasticsearch VM) shows no result, so no packets are getting there.

If I tcpdump on the pod and ping 10.6.3.4, I get this:

00:04:52.742172 IP 10.244.1.15 > 10.6.3.4: ICMP echo request, id 30, seq 71, length 64
00:04:53.743259 IP 10.244.1.15 > 10.6.3.4: ICMP echo request, id 30, seq 72, length 64
00:04:54.744361 IP 10.244.1.15 > 10.6.3.4: ICMP echo request, id 30, seq 73, length 64
00:04:55.745482 IP 10.244.1.15 > 10.6.3.4: ICMP echo request, id 30, seq 74, length 64
00:04:56.746579 IP 10.244.1.15 > 10.6.3.4: ICMP echo request, id 30, seq 75, length 64



traceroute from pod to VM shows this:

traceroute 10.6.3.4
traceroute to 10.6.3.4 (10.6.3.4), 30 hops max, 60 byte packets
 1  10.244.1.1 (10.244.1.1)  0.049 ms  0.026 ms  0.009 ms
 2  10.55.2.5 (10.55.2.5)  2.225 ms  2.209 ms  2.195 ms
 3  * * *
 4  * * *
 5  * * *
 6  * * *
 7  * * *
 8  * * *
 9  * * *
10  * * *
11  * * *


traceroute from VM to pod shows this:

traceroute 10.244.1.15
traceroute to 10.244.1.15 (10.244.1.15), 30 hops max, 60 byte packets
 1  * * *
 2  * * *
 3  * * *
 4  * * *
 5  * * *
 6  * * *
 7  * * *
 8  * * *


Tim Hockin

Oct 1, 2016, 8:17:27 PM
to Bruno Vilhena, kuberne...@googlegroups.com, Rodrigo Campos

So the root namespace sees the ICMP reply, but not the container namespace?  Now we're making progress.  The next trick would be to throw an iptables TRACE in, but you may need to load a kernel module to make it work, depending on distro.

In the root namespace:

iptables -t raw -I PREROUTING -s <other.ip> -j TRACE

Run dmesg -c first, to clear the buffer.
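A sketch of the full sequence (module names vary by kernel/distro; on some kernels the TRACE output is routed via nf_log rather than appearing in dmesg directly):

sudo modprobe nf_log_ipv4                                  # logging backend for TRACE (kernel dependent)
sudo dmesg -c > /dev/null                                  # clear the ring buffer
sudo iptables -t raw -I PREROUTING -s <other.ip> -j TRACE
# reproduce the ping, then:
dmesg | grep TRACE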



Bruno Vilhena

Oct 2, 2016, 2:46:49 AM
to Kubernetes developer/contributor discussion, bruno.rv...@gmail.com, rod...@sdfg.com.ar
Hi Tim,

this is really helpful, but I'm going to travel out of the country in a couple of hours, for 2 weeks, and will have no access to a laptop.

So I'll have to resume this when I'm back. I'm really sorry about this.

The iptables entry didn't work for me so I'll need to load the kernel module and I'll retry.

Bruno Vilhena

Oct 17, 2016, 5:36:11 AM
to Kubernetes developer/contributor discussion, bruno.rv...@gmail.com, rod...@sdfg.com.ar

Hi Tim,


sorry for the long delay in replying while I was away.

I got back today and tried the iptables trace as you suggested; here is what I found.


ping from pod to VM:

Oct 17 10:14:51 ess-elas-client2-dev1 kernel: [2683185.298244] TRACE: raw:PREROUTING:policy:8 IN=eth0 OUT= MAC=00:0d:3a:21:c4:ca:cc:46:d6:21:15:7f:08:00 SRC=10.244.1.15 DST=10.6.3.5 LEN=84 TOS=0x00 PREC=0x00 TTL=61 ID=51666 DF PROTO=ICMP TYPE=8 CODE=0 ID=186 SEQ=0 
Oct 17 10:14:51 ess-elas-client2-dev1 kernel: [2683185.298263] TRACE: filter:INPUT:policy:2 IN=eth0 OUT= MAC=00:0d:3a:21:c4:ca:cc:46:d6:21:15:7f:08:00 SRC=10.244.1.15 DST=10.6.3.5 LEN=84 TOS=0x00 PREC=0x00 TTL=61 ID=51666 DF PROTO=ICMP TYPE=8 CODE=0 ID=186 SEQ=0 
Oct 17 10:14:51 ess-elas-client2-dev1 kernel: [2683185.298271] TRACE: raw:OUTPUT:policy:4 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6095 PROTO=ICMP TYPE=0 CODE=0 ID=186 SEQ=0 
Oct 17 10:14:51 ess-elas-client2-dev1 kernel: [2683185.298273] TRACE: filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6095 PROTO=ICMP TYPE=0 CODE=0 ID=186 SEQ=0 
Oct 17 10:14:52 ess-elas-client2-dev1 kernel: [2683186.298497] TRACE: raw:PREROUTING:policy:8 IN=eth0 OUT= MAC=00:0d:3a:21:c4:ca:cc:46:d6:21:15:7f:08:00 SRC=10.244.1.15 DST=10.6.3.5 LEN=84 TOS=0x00 PREC=0x00 TTL=61 ID=51766 DF PROTO=ICMP TYPE=8 CODE=0 ID=186 SEQ=1 
Oct 17 10:14:52 ess-elas-client2-dev1 kernel: [2683186.298516] TRACE: filter:INPUT:policy:2 IN=eth0 OUT= MAC=00:0d:3a:21:c4:ca:cc:46:d6:21:15:7f:08:00 SRC=10.244.1.15 DST=10.6.3.5 LEN=84 TOS=0x00 PREC=0x00 TTL=61 ID=51766 DF PROTO=ICMP TYPE=8 CODE=0 ID=186 SEQ=1 
Oct 17 10:14:52 ess-elas-client2-dev1 kernel: [2683186.298524] TRACE: raw:OUTPUT:policy:4 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6328 PROTO=ICMP TYPE=0 CODE=0 ID=186 SEQ=1 
Oct 17 10:14:52 ess-elas-client2-dev1 kernel: [2683186.298527] TRACE: filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6328 PROTO=ICMP TYPE=0 CODE=0 ID=186 SEQ=1 
Oct 17 10:14:53 ess-elas-client2-dev1 kernel: [2683187.299114] TRACE: raw:PREROUTING:policy:8 IN=eth0 OUT= MAC=00:0d:3a:21:c4:ca:cc:46:d6:21:15:7f:08:00 SRC=10.244.1.15 DST=10.6.3.5 LEN=84 TOS=0x00 PREC=0x00 TTL=61 ID=51888 DF PROTO=ICMP TYPE=8 CODE=0 ID=186 SEQ=2 
Oct 17 10:14:53 ess-elas-client2-dev1 kernel: [2683187.299133] TRACE: filter:INPUT:policy:2 IN=eth0 OUT= MAC=00:0d:3a:21:c4:ca:cc:46:d6:21:15:7f:08:00 SRC=10.244.1.15 DST=10.6.3.5 LEN=84 TOS=0x00 PREC=0x00 TTL=61 ID=51888 DF PROTO=ICMP TYPE=8 CODE=0 ID=186 SEQ=2 
Oct 17 10:14:53 ess-elas-client2-dev1 kernel: [2683187.299142] TRACE: raw:OUTPUT:policy:4 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6383 PROTO=ICMP TYPE=0 CODE=0 ID=186 SEQ=2 
Oct 17 10:14:53 ess-elas-client2-dev1 kernel: [2683187.299144] TRACE: filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6383 PROTO=ICMP TYPE=0 CODE=0 ID=186 SEQ=2 



ping from VM to pod:


Oct 17 10:19:13 ess-elas-client2-dev1 kernel: [2683446.769289] TRACE: raw:OUTPUT:policy:4 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=16189 DF PROTO=ICMP TYPE=8 CODE=0 ID=55793 SEQ=1 UID=1000 GID=1000 
Oct 17 10:19:13 ess-elas-client2-dev1 kernel: [2683446.769295] TRACE: filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=16189 DF PROTO=ICMP TYPE=8 CODE=0 ID=55793 SEQ=1 UID=1000 GID=1000 
Oct 17 10:19:14 ess-elas-client2-dev1 kernel: [2683447.778164] TRACE: raw:OUTPUT:policy:4 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=16398 DF PROTO=ICMP TYPE=8 CODE=0 ID=55793 SEQ=2 UID=1000 GID=1000 
Oct 17 10:19:14 ess-elas-client2-dev1 kernel: [2683447.778170] TRACE: filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=16398 DF PROTO=ICMP TYPE=8 CODE=0 ID=55793 SEQ=2 UID=1000 GID=1000 
Oct 17 10:19:15 ess-elas-client2-dev1 kernel: [2683448.786141] TRACE: raw:OUTPUT:policy:4 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=16629 DF PROTO=ICMP TYPE=8 CODE=0 ID=55793 SEQ=3 UID=1000 GID=1000 
Oct 17 10:19:15 ess-elas-client2-dev1 kernel: [2683448.786147] TRACE: filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=16629 DF PROTO=ICMP TYPE=8 CODE=0 ID=55793 SEQ=3 UID=1000 GID=1000 
Oct 17 10:19:16 ess-elas-client2-dev1 kernel: [2683449.794155] TRACE: raw:OUTPUT:policy:4 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=16634 DF PROTO=ICMP TYPE=8 CODE=0 ID=55793 SEQ=4 UID=1000 GID=1000 
Oct 17 10:19:16 ess-elas-client2-dev1 kernel: [2683449.794160] TRACE: filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=16634 DF PROTO=ICMP TYPE=8 CODE=0 ID=55793 SEQ=4 UID=1000 GID=1000 


I hope this is what you were looking for. It seems the PREROUTING entry is missing from the logs when pinging from the VM to the pod (which is the ping that fails).

Bruno Vilhena

Oct 17, 2016, 8:53:05 AM
to Kubernetes developer/contributor discussion
I followed the suggestion from Microsoft's Narayan: https://github.com/NarayanAnnamalai

And rather than using gateway connections, did a peering of both vnets: https://azure.microsoft.com/en-us/documentation/articles/virtual-network-peering-overview/

This sorts out the connectivity issues between VMs in the first VNET and pods in the second VNET.
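With the current Azure CLI this looks something like the following (names illustrative; a matching peering is needed in the opposite direction too):

az network vnet peering create \
    --resource-group my-rg \
    --name vnet1-to-vnet2 \
    --vnet-name VNET1 \
    --remote-vnet VNET2 \
    --allow-vnet-access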


Hafizullah Nikben

Mar 14, 2018, 7:42:51 AM
to Kubernetes developer/contributor discussion
I am having a similar issue to this. I have a VM in another subnet within the same network. This VM hosts a service on port 80. I want to be able to access this service from a pod within my k8s cluster using the VM's private IP address (the VM does not have a public IP).

I can ping/access it from the host machine of the k8s cluster, but when I exec into a pod in the k8s cluster and try to access this service, it fails.

Harrison Jung

Jul 2, 2018, 1:28:17 AM
to Kubernetes developer/contributor discussion
If you use GKE, check the firewall rules: create a new firewall rule whose source IP ranges cover the Kubernetes pod IP range.
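For example (network name and pod range illustrative):

gcloud compute firewall-rules create allow-pods-to-vm \
    --network=default \
    --source-ranges=10.244.0.0/14 \
    --allow=tcp:80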



