Hrm, I don't know enough about the details of flannel's encap/decap scheme. My guess is that this is because there is NAT somewhere in the flannel network routing, and so the packets that the namenode is seeing are NAT-ed packets from the host rather than from the container.
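One quick way to test that guess: capture the registration traffic on the namenode's host and compare the source address you see against the datanode container's IP (9000 is the namenode RPC port from this setup; flannel0 and docker0 are the default interface names):

$ sudo tcpdump -ni flannel0 tcp port 9000
$ sudo tcpdump -ni docker0 tcp port 9000

If the source is a host-side address rather than the container's, the rewrite is happening before the packet ever reaches the namenode.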
On Tue, Dec 23, 2014 at 1:00 AM, Luqman <lgs...@gmail.com> wrote:

I have set up a Kubernetes cluster on CoreOS, using Flannel, on DigitalOcean. I have images for a Hadoop Namenode and a Hadoop Datanode. The datanode binds to 0.0.0.0:50010 by default.

Problem: when the datanode tries to register itself with the namenode, it sends an RPC request to the namenode, and the namenode registers the datanode under the IP of the docker (or flannel) interface. See this gist: https://gist.github.com/LuqmanSahaf/fd7ee3bf9b1766e4a5ad

Why is the IP of the container not used instead? Do the headers of the request get changed during forwarding?
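(For anyone reproducing this: the address the namenode actually recorded for each datanode can be listed from the namenode side with the stock admin tool; hdfs dfsadmin is the Hadoop 2.x spelling, older releases use hadoop dfsadmin.)

$ hdfs dfsadmin -report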
$ sudo iptables -t nat -L -n -v
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
5012 524K KUBE-PROXY all -- * * 0.0.0.0/0 0.0.0.0/0
2802 162K DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 24 packets, 1440 bytes)
pkts bytes target prot opt in out source destination
51508 3143K KUBE-PROXY all -- * * 0.0.0.0/0 0.0.0.0/0
12789 767K DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT 24 packets, 1440 bytes)
pkts bytes target prot opt in out source destination
4296 259K FLANNEL all -- * * 10.244.0.0/16 0.0.0.0/0
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:6062 to:10.244.32.9:6062
Chain FLANNEL (1 references)
pkts bytes target prot opt in out source destination
16 960 ACCEPT all -- * * 0.0.0.0/0 10.244.0.0/16
0 0 ACCEPT all -- * * 0.0.0.0/0 224.0.0.0/4
20 1501 MASQUERADE all -- * !flannel0 0.0.0.0/0 0.0.0.0/0
Chain KUBE-PROXY (2 references)
pkts bytes target prot opt in out source destination
0 0 REDIRECT tcp -- * * 0.0.0.0/0 10.1.170.62 /* kubernetes */ tcp dpt:443 redir ports 51646
0 0 REDIRECT tcp -- * * 0.0.0.0/0 10.1.64.54 /* kubernetes-ro */ tcp dpt:80 redir ports 45309
3 180 REDIRECT tcp -- * * 0.0.0.0/0 10.1.244.74 /* hbase-master */ tcp dpt:6060 redir ports 43020
0 0 REDIRECT tcp -- * * 0.0.0.0/0 10.1.163.130 /* hadoop-datanode */ tcp dpt:50010 redir ports 46277
3 180 REDIRECT tcp -- * * 0.0.0.0/0 10.1.172.216 /* hadoop-namenode */ tcp dpt:9000 redir ports 57779
1 60 REDIRECT tcp -- * * 0.0.0.0/0 10.1.190.187 /* zookeeper */ tcp dpt:2181 redir ports 35934
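Two things in this dump fit the NAT guess above: connections to the hadoop-namenode service IP (10.1.172.216:9000) are REDIRECTed to a local kube-proxy port, and the userspace proxy then re-dials the backend from the host, while the FLANNEL chain MASQUERADEs traffic from 10.244.0.0/16 leaving through any interface other than flannel0. Either way the namenode ends up seeing a host-side source address instead of the container's. The rule counters can be polled while the datanode registers to see which path fires (standard iptables and watch flags):

$ sudo watch -n 1 'iptables -t nat -L FLANNEL -n -v; iptables -t nat -L KUBE-PROXY -n -v'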
$ sudo iptables -t raw -A OUTPUT -j TRACE
$ sudo iptables -t raw -A PREROUTING -j TRACE
$ sudo iptables -t raw -A OUTPUT -m limit --limit 2/m --limit-burst 5 -j TRACE
$ sudo iptables -t raw -A PREROUTING -m limit --limit 2/m --limit-burst 5 -j TRACE
$ sudo iptables -t raw -A OUTPUT -m limit --limit 2/m --limit-burst 10 -j LOG
$ sudo iptables -t raw -A PREROUTING -m limit --limit 2/m --limit-burst 10 -j LOG
$ sudo iptables -t raw -A OUTPUT -o flannel0 -m limit --limit 10/m --limit-burst 10 -j TRACE
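The TRACE target writes one kernel-log line per rule a matched packet traverses, prefixed with "TRACE:", so the trace can be followed in the kernel log while reproducing the registration, and the raw-table rules flushed once done (dmesg -w follows the log; on older util-linux, re-run plain dmesg instead):

$ dmesg -w | grep 'TRACE:'
$ sudo iptables -t raw -F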
Chain PREROUTING (policy ACCEPT 6 packets, 360 bytes)
num pkts bytes target prot opt in out source destination
1 974 87618 KUBE-PROXY all -- * * 0.0.0.0/0 0.0.0.0/0
2 826 49671 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 6 packets, 360 bytes)
num pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 6 packets, 360 bytes)
num pkts bytes target prot opt in out source destination
1 6960 423K KUBE-PROXY all -- * * 0.0.0.0/0 0.0.0.0/0
2 3029 182K DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT 6 packets, 360 bytes)
num pkts bytes target prot opt in out source destination
1 24 1717 FLANNEL all -- * * 10.244.0.0/16 0.0.0.0/0
Chain DOCKER (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:50010 to:10.244.93.2:50010
Chain FLANNEL (1 references)
num pkts bytes target prot opt in out source destination
1 5 300 ACCEPT all -- * * 0.0.0.0/0 10.244.0.0/16
2 0 0 ACCEPT all -- * * 0.0.0.0/0 224.0.0.0/4
3 19 1417 MASQUERADE all -- * !flannel0 0.0.0.0/0 0.0.0.0/0
Chain KUBE-PROXY (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 REDIRECT tcp -- * * 0.0.0.0/0 10.1.192.92 /* kubernetes */ tcp dpt:443 redir ports 55849
2 0 0 REDIRECT tcp -- * * 0.0.0.0/0 10.1.190.79 /* kubernetes-ro */ tcp dpt:80 redir ports 49995
3 5 300 REDIRECT tcp -- * * 0.0.0.0/0 10.1.53.12 /* hadoop-namenode */ tcp dpt:9000 redir ports 55430
@Luqman, I encountered the same issue as you. But even after moving to the 1st solution, it still doesn't work: in my case the datanode takes the actual address of the Namenode container to be "k8s_POD-2fdae8b2_namenode-controller-keptk_default_55b8147c-881f-11e5-abad-02d07c9f6649_e41f815f.bridge", and it fails to start because of that. Do you happen to know why? Also, does option 2 work now?
On Monday, January 19, 2015 at 3:06:35 PM UTC+8, Luqman wrote:

@prateek, @eugene: I have moved to the 1st solution for now, but I don't think this is a permanent solution. A user might want to use services, which in this use case they cannot.
@zhenglin, I have long been using DNS to solve problems like these. For Hadoop and HBase, the only solution that seemed plausible was DNS. I used SkyDNS, with scripts that upload the IP of every container into etcd when the container starts (SkyDNS uses etcd as its backend). A Namenode pod, say "k8s_POD-f23f_namenode-f2ff444-4f4fsd", saves its IP into etcd under a name like k8s_POD-f23f_namenode-f2ff444-4f4fsd.domain.com. That name is then injected into the Datanode containers as an env variable, as sketched below. I hope this answers the question.
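A minimal sketch of such a registration hook, assuming SkyDNS2's etcd v2 backend and its reversed-domain key layout; the container name, domain, and key path below are illustrative, not copied from the original setup:

#!/bin/bash
# Register a container's IP in etcd so SkyDNS can resolve <name>.domain.com.
# Assumes the etcd v2 etcdctl and SkyDNS2's key scheme: name.domain.com is
# stored under /skydns/com/domain/name as {"host": "<ip>"}.
NAME="$1"    # e.g. k8s_POD-f23f_namenode-f2ff444-4f4fsd
IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' "$NAME")
etcdctl set "/skydns/com/domain/${NAME}" "{\"host\": \"${IP}\"}"

The datanode containers can then be started with an env variable such as NAMENODE_HOST=${NAME}.domain.com and dial that name instead of an IP.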
To stop Docker itself from masquerading container traffic, the daemon can be restarted with IP masquerading disabled:

$ sudo service docker stop
$ sudo /usr/bin/dockerd --ip-masq=false
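On CoreOS this can be made persistent with a systemd drop-in instead of running the daemon by hand; a sketch, with the unit name and binary path assumed to match the lines above (check your docker.service):

$ sudo mkdir -p /etc/systemd/system/docker.service.d
$ printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd --ip-masq=false\n' | sudo tee /etc/systemd/system/docker.service.d/50-ip-masq.conf
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker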