I've got a node (10.55.1.4) where my service pod runs, and from within that node VM I can ping an Elasticsearch cluster (not containerised) which runs on a 10.66.1.4 VM. However, from within the pod itself, I don't have connectivity to that VM. Is there any way of configuring this? Thanks
Take a look at the "Services without selectors" section in this doc.
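For reference, the "Services without selectors" pattern pairs a Service that has no selector with a manually created Endpoints object of the same name. A minimal sketch of the pair (the name "elasticsearch" here is illustrative, not from this thread):

    {
        "kind": "Service",
        "apiVersion": "v1",
        "metadata": { "name": "elasticsearch" },
        "spec": {
            "ports": [
                { "protocol": "TCP", "port": 9200, "targetPort": 9200 }
            ]
        }
    }

    {
        "kind": "Endpoints",
        "apiVersion": "v1",
        "metadata": { "name": "elasticsearch" },
        "subsets": [
            {
                "addresses": [ { "ip": "10.66.1.4" } ],
                "ports": [ { "port": 9200 } ]
            }
        ]
    }

Because the Service has no selector, the endpoints controller leaves the Endpoints object alone, and kube-proxy routes traffic for the Service's cluster IP to the addresses you listed.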
Hi Vishnu, I might be missing something... I did try that, creating it with kubectl create -f service.yaml, where my file is:
{
    "kind": "Endpoints",
    "apiVersion": "v1",
    "metadata": {
        "name": "tt"
    },
    "subsets": [
        {
            "addresses": [
                { "ip": "10.66.1.4" }
            ],
            "ports": [
                { "port": 9200 }
            ]
        }
    ]
}

However, although I see the endpoint created and my node (10.55.1.10) can connect to Elasticsearch, if I go into the previously deployed pod with my application, it still cannot connect to 10.66.1.4.
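Note that the documented pattern also expects a selector-less Service with the same name as the Endpoints object; the Endpoints alone doesn't give pods a cluster IP to dial. A quick sanity check (object name assumed from the manifest above):

    kubectl get endpoints tt
    kubectl describe service tt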
Hi Vishnu, I can't: this is an Elasticsearch instance running on a VM, not in the Kubernetes cluster. It only has a private network IP address, so there is no service/DNS name resolution.
Name:              elas
Namespace:         default
Labels:            <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.0.130.179
Port:              <unset>  9200/TCP
Endpoints:         10.6.3.4:9200
Session Affinity:  None
No events.

Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Selector:          <none>
Type:              ClusterIP
IP:                10.0.0.1
Port:              https  443/TCP
Endpoints:         10.55.1.10:443
Session Affinity:  ClientIP
No events.

Name:              spiffy
Namespace:         default
Labels:            run=spiffy
Selector:          <none>
Type:              NodePort
IP:                10.0.47.30
Port:              http  8081/TCP
NodePort:          http  30860/TCP
Endpoints:         <none>
Session Affinity:  None
Is this running with flannel or weave or ... or with some native networking (I don't know Azure networking)?
Can a pod ping *any* VM other than its own? Can it ping other VMs in the kube cluster? Outside the kube cluster? How about google.com?
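(For example, something like the following from outside the pod; the pod name is a placeholder, and this assumes the image ships with ping:)

    kubectl exec -it <pod-name> -- ping -c 3 10.66.1.4
    kubectl exec -it <pod-name> -- ping -c 3 google.com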
On Thu, Sep 29, 2016 at 4:09 PM, Rodrigo Campos <rod...@sdfg.com.ar> wrote:
> On Thu, Sep 29, 2016 at 03:54:39PM -0700, Bruno Vilhena wrote:
>> Hi,
>>
>> I think it's not elastic, as I should still be able to ping the private ip
>> address of the VM running elastic from within the pods (which I can do from
>> the k8s node VM).
>
> Oh, cool. So, what does ip route show on the pods and nodes? Does traceroute give more info?
>
> Don't know how the networking is configured on Azure, but that is the key, of course. Maybe it's just not adding some rules to the Azure internal network or something.
If you're on Azure, can you let me know how you booted the cluster, in addition to Tim's question about whether you're using an overlay vs. native networking (with the new cloudprovider support)?

It sounds as if you've placed a headless Service in front of the external Elasticsearch cluster. Is there any chance this is a cluster you deployed with kubernetes-anywhere, and that you left it deploying a 1.4.0-beta.2 cluster? If so, you may be hitting this bug, which has since been fixed: https://github.com/kubernetes/kubernetes-anywhere/issues/232
We have VNET1 (e.g. 10.66.0.0/16) and VNET2 (e.g. 10.55.0.0/16); within each of the VNETs we have a gateway, and we connect the two VNETs through these gateways with VPN connections.
VNET1 hosts an Elasticsearch cluster on subnet 10.66.1.0/24, and VNET2 hosts a Kubernetes cluster on 10.55.1.0/24 (provisioned by kubernetes-anywhere).
VNET1 has 3 client boxes for Elastic (10.66.1.4, 10.66.1.5 and 10.66.1.6), and VNET2 has 2 Kubernetes nodes (10.55.1.4 and 10.55.1.5) as well as a master.
The VNET2 Kubernetes nodes run pods containing a containerized API that needs a connection to the Elasticsearch cluster.
Our Kubernetes nodes (VNET2: 10.55.1.4 and 10.55.1.5) can connect to the Elasticsearch nodes (VNET1: 10.66.1.4, 10.66.1.5 and 10.66.1.6).
However, the pods/containers which run on these nodes, and which are assigned their own subnets (10.244.0.0/24 and 10.244.1.0/24), cannot connect to the Elasticsearch nodes/VMs, or to any VM outside the nodes' subnet for that matter, regardless of which VNET that VM sits on. (Rough sketch below.)
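A rough sketch of the topology described above:

    VNET1 (10.66.0.0/16)                  VNET2 (10.55.0.0/16)
    +----------------------+   VPN over   +---------------------------+
    | subnet 10.66.1.0/24  |   gateways   | subnet 10.55.1.0/24       |
    |   ES 10.66.1.4       | <==========> |   k8s master              |
    |   ES 10.66.1.5       |              |   node 10.55.1.4          |
    |   ES 10.66.1.6       |              |   node 10.55.1.5          |
    +----------------------+              |   pod subnets:            |
                                          |     10.244.0.0/24         |
                                          |     10.244.1.0/24         |
                                          +---------------------------+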
Is there something misconfigured in the networking model? Should I be using an overlay network? If I deploy my pods with the host network shared, I do get a connection to Elastic, but then I can only run one such pod per host, otherwise they conflict.
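(For reference, the host-network deployment mentioned above corresponds to the pod-spec hostNetwork setting; a minimal sketch, with pod name and image as placeholders:)

    {
        "kind": "Pod",
        "apiVersion": "v1",
        "metadata": { "name": "my-api" },
        "spec": {
            "hostNetwork": true,
            "containers": [
                {
                    "name": "my-api",
                    "image": "my-api:latest",
                    "ports": [ { "containerPort": 8081 } ]
                }
            ]
        }
    }

With hostNetwork: true the pod shares the node's network namespace and IP, which is why only one such pod can bind a given port per host.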
A bit of an update on this one. If I create a completely new VM/network interface on a different subnet in the k8s cluster VNET, the pods are now able to ping it, as long as I add that subnet to the route tables generated on Azure.
I'm still not able to ping a VM that is on the other VNET (connected by gateway VPN connections).
So the node can ping these VMs, but the container still cannot.
I tried adding the --non-masquerade-cidr flag to the kubelet service, set to 172.0.0.0/8, and restarting the service, but still no luck.
azureuser@sayt-dev1-master:~$ sudo iptables -n -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-FIREWALL all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:68
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-ISOLATION all -- 0.0.0.0/0 0.0.0.0/0
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
KUBE-FIREWALL all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION (1 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
Chain KUBE-SERVICES (1 references)
target prot opt source destination
REJECT tcp -- 0.0.0.0/0 10.0.159.92 /* default/elasticsearch: has no endpoints */ tcp dpt:9200 reject-with icmp-port-unreachable
REJECT tcp -- 0.0.0.0/0 10.0.148.42 /* default/spiffy:http has no endpoints */ tcp dpt:8081 reject-with icmp-port-unreachable
ip addr inside the pod:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: eth0@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 0a:58:0a:f4:00:12 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.18/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2c7a:93ff:fe51:8658/64 scope link
       valid_lft forever preferred_lft forever

ip addr on the node:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0d:3a:23:77:ca brd ff:ff:ff:ff:ff:ff
    inet 10.55.1.10/24 brd 10.55.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20d:3aff:fe23:77ca/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:78:36:25:1e brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
For the VMs in your Kube VNET, set the flag to 10.55.0.0/16. You said containers can ping VMs in the same VNET, right?
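(A sketch of what that would look like; the flag is the kubelet's --non-masquerade-cidr from this era, and where exactly it is set depends on how kubernetes-anywhere writes the kubelet unit:)

    # on each node, append to the kubelet arguments, then restart the service:
    kubelet ... --non-masquerade-cidr=10.55.0.0/16

Traffic to destinations inside that CIDR keeps the pod IP as its source; everything outside it is SNATed to the node IP.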
OK, the next step is to tcpdump in the root namespace of your VM while a pod tries to access a VM in another VNET. I want to know how far the packets are going and what is coming back.
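(Something along these lines, run on the node; the interface name and target IP are assumptions:)

    # capture ICMP between the pod network and the remote VM on the node's primary interface
    sudo tcpdump -ni eth0 icmp and host 10.6.3.4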
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
23:24:05.690502 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 34, length 64
23:24:05.690517 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 34, length 64
23:24:06.691229 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 35, length 64
23:24:06.691241 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 35, length 64
23:24:07.691334 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 36, length 64
23:24:07.691357 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 36, length 64
23:24:08.692566 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 37, length 64
23:24:08.692584 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 37, length 64
23:24:09.693477 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 38, length 64
23:24:09.693490 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 38, length 64
23:24:10.695084 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 39, length 64
23:24:10.695105 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 39, length 64
23:24:11.695285 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 40, length 64
23:24:11.695299 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 40, length 64
23:24:12.696660 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 41, length 64
23:24:12.696684 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 41, length 64
23:24:13.698257 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 42, length 64
23:24:13.698277 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 42, length 64
23:24:14.698576 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 43, length 64
23:24:14.698591 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 43, length 64
23:24:15.699834 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 44, length 64
23:24:15.699859 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 44, length 64
23:24:16.700761 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 45, length 64
23:24:16.700775 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 45, length 64
23:24:17.702592 IP 10.244.0.18 > 10.6.3.4: ICMP echo request, id 219, seq 46, length 64
23:24:17.702604 IP 10.6.3.4 > 10.244.0.18: ICMP echo reply, id 219, seq 46, length 64
So... it is pinging and getting a reply. But you assert the container is not seeing the ping response?
Can you tcpdump from inside the container and see what that says?
00:04:52.742172 IP 10.244.1.15 > 10.6.3.4: ICMP echo request, id 30, seq 71, length 64
00:04:53.743259 IP 10.244.1.15 > 10.6.3.4: ICMP echo request, id 30, seq 72, length 64
00:04:54.744361 IP 10.244.1.15 > 10.6.3.4: ICMP echo request, id 30, seq 73, length 64
00:04:55.745482 IP 10.244.1.15 > 10.6.3.4: ICMP echo request, id 30, seq 74, length 64
00:04:56.746579 IP 10.244.1.15 > 10.6.3.4: ICMP echo request, id 30, seq 75, length 64
traceroute 10.6.3.4
traceroute to 10.6.3.4 (10.6.3.4), 30 hops max, 60 byte packets
 1  10.244.1.1 (10.244.1.1)  0.049 ms  0.026 ms  0.009 ms
 2  10.55.2.5 (10.55.2.5)  2.225 ms  2.209 ms  2.195 ms
 3  * * *
 4  * * *
 5  * * *
 6  * * *
 7  * * *
 8  * * *
 9  * * *
10  * * *
11  * * *

traceroute 10.244.1.15
traceroute to 10.244.1.15 (10.244.1.15), 30 hops max, 60 byte packets
 1  * * *
 2  * * *
 3  * * *
 4  * * *
 5  * * *
 6  * * *
 7  * * *
 8  * * *
So the root namespace sees the ICMP reply, but not the container namespace? Now we're making progress. The next trick would be to throw an iptables TRACE in, but you may need to load a kernel module to make it work, depending on the distro.
In the root namespace:

    iptables -t raw -I PREROUTING -s <other.ip> -j TRACE

Run dmesg -c first, to clear the buffer.
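A fuller sketch of that workflow (the nf_log module and sysctl names are assumptions that vary by kernel/distro; the pod IP is taken from the captures above):

    # TRACE output goes through the netfilter log backend; on many kernels:
    sudo modprobe nf_log_ipv4
    sudo sysctl net.netfilter.nf_log.2=nf_log_ipv4

    sudo dmesg -c > /dev/null          # clear the kernel ring buffer
    sudo iptables -t raw -I PREROUTING -s 10.244.1.15 -j TRACE
    # reproduce the ping from the pod, then read the trace:
    dmesg | grep TRACE
    # clean up afterwards:
    sudo iptables -t raw -D PREROUTING -s 10.244.1.15 -j TRACE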
Oct 17 10:14:51 ess-elas-client2-dev1 kernel: [2683185.298244] TRACE: raw:PREROUTING:policy:8 IN=eth0 OUT= MAC=00:0d:3a:21:c4:ca:cc:46:d6:21:15:7f:08:00 SRC=10.244.1.15 DST=10.6.3.5 LEN=84 TOS=0x00 PREC=0x00 TTL=61 ID=51666 DF PROTO=ICMP TYPE=8 CODE=0 ID=186 SEQ=0
Oct 17 10:14:51 ess-elas-client2-dev1 kernel: [2683185.298263] TRACE: filter:INPUT:policy:2 IN=eth0 OUT= MAC=00:0d:3a:21:c4:ca:cc:46:d6:21:15:7f:08:00 SRC=10.244.1.15 DST=10.6.3.5 LEN=84 TOS=0x00 PREC=0x00 TTL=61 ID=51666 DF PROTO=ICMP TYPE=8 CODE=0 ID=186 SEQ=0
Oct 17 10:14:51 ess-elas-client2-dev1 kernel: [2683185.298271] TRACE: raw:OUTPUT:policy:4 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6095 PROTO=ICMP TYPE=0 CODE=0 ID=186 SEQ=0
Oct 17 10:14:51 ess-elas-client2-dev1 kernel: [2683185.298273] TRACE: filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6095 PROTO=ICMP TYPE=0 CODE=0 ID=186 SEQ=0
Oct 17 10:14:52 ess-elas-client2-dev1 kernel: [2683186.298497] TRACE: raw:PREROUTING:policy:8 IN=eth0 OUT= MAC=00:0d:3a:21:c4:ca:cc:46:d6:21:15:7f:08:00 SRC=10.244.1.15 DST=10.6.3.5 LEN=84 TOS=0x00 PREC=0x00 TTL=61 ID=51766 DF PROTO=ICMP TYPE=8 CODE=0 ID=186 SEQ=1
Oct 17 10:14:52 ess-elas-client2-dev1 kernel: [2683186.298516] TRACE: filter:INPUT:policy:2 IN=eth0 OUT= MAC=00:0d:3a:21:c4:ca:cc:46:d6:21:15:7f:08:00 SRC=10.244.1.15 DST=10.6.3.5 LEN=84 TOS=0x00 PREC=0x00 TTL=61 ID=51766 DF PROTO=ICMP TYPE=8 CODE=0 ID=186 SEQ=1
Oct 17 10:14:52 ess-elas-client2-dev1 kernel: [2683186.298524] TRACE: raw:OUTPUT:policy:4 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6328 PROTO=ICMP TYPE=0 CODE=0 ID=186 SEQ=1
Oct 17 10:14:52 ess-elas-client2-dev1 kernel: [2683186.298527] TRACE: filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6328 PROTO=ICMP TYPE=0 CODE=0 ID=186 SEQ=1
Oct 17 10:14:53 ess-elas-client2-dev1 kernel: [2683187.299114] TRACE: raw:PREROUTING:policy:8 IN=eth0 OUT= MAC=00:0d:3a:21:c4:ca:cc:46:d6:21:15:7f:08:00 SRC=10.244.1.15 DST=10.6.3.5 LEN=84 TOS=0x00 PREC=0x00 TTL=61 ID=51888 DF PROTO=ICMP TYPE=8 CODE=0 ID=186 SEQ=2
Oct 17 10:14:53 ess-elas-client2-dev1 kernel: [2683187.299133] TRACE: filter:INPUT:policy:2 IN=eth0 OUT= MAC=00:0d:3a:21:c4:ca:cc:46:d6:21:15:7f:08:00 SRC=10.244.1.15 DST=10.6.3.5 LEN=84 TOS=0x00 PREC=0x00 TTL=61 ID=51888 DF PROTO=ICMP TYPE=8 CODE=0 ID=186 SEQ=2
Oct 17 10:14:53 ess-elas-client2-dev1 kernel: [2683187.299142] TRACE: raw:OUTPUT:policy:4 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6383 PROTO=ICMP TYPE=0 CODE=0 ID=186 SEQ=2
Oct 17 10:14:53 ess-elas-client2-dev1 kernel: [2683187.299144] TRACE: filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6383 PROTO=ICMP TYPE=0 CODE=0 ID=186 SEQ=2
Oct 17 10:19:13 ess-elas-client2-dev1 kernel: [2683446.769289] TRACE: raw:OUTPUT:policy:4 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=16189 DF PROTO=ICMP TYPE=8 CODE=0 ID=55793 SEQ=1 UID=1000 GID=1000
Oct 17 10:19:13 ess-elas-client2-dev1 kernel: [2683446.769295] TRACE: filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=16189 DF PROTO=ICMP TYPE=8 CODE=0 ID=55793 SEQ=1 UID=1000 GID=1000
Oct 17 10:19:14 ess-elas-client2-dev1 kernel: [2683447.778164] TRACE: raw:OUTPUT:policy:4 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=16398 DF PROTO=ICMP TYPE=8 CODE=0 ID=55793 SEQ=2 UID=1000 GID=1000
Oct 17 10:19:14 ess-elas-client2-dev1 kernel: [2683447.778170] TRACE: filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=16398 DF PROTO=ICMP TYPE=8 CODE=0 ID=55793 SEQ=2 UID=1000 GID=1000
Oct 17 10:19:15 ess-elas-client2-dev1 kernel: [2683448.786141] TRACE: raw:OUTPUT:policy:4 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=16629 DF PROTO=ICMP TYPE=8 CODE=0 ID=55793 SEQ=3 UID=1000 GID=1000
Oct 17 10:19:15 ess-elas-client2-dev1 kernel: [2683448.786147] TRACE: filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=16629 DF PROTO=ICMP TYPE=8 CODE=0 ID=55793 SEQ=3 UID=1000 GID=1000
Oct 17 10:19:16 ess-elas-client2-dev1 kernel: [2683449.794155] TRACE: raw:OUTPUT:policy:4 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=16634 DF PROTO=ICMP TYPE=8 CODE=0 ID=55793 SEQ=4 UID=1000 GID=1000
Oct 17 10:19:16 ess-elas-client2-dev1 kernel: [2683449.794160] TRACE: filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=10.6.3.5 DST=10.244.1.15 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=16634 DF PROTO=ICMP TYPE=8 CODE=0 ID=55793 SEQ=4 UID=1000 GID=1000