Question on external access to nodePort Service

Daniel Suen

Sep 3, 2017, 11:29:35 AM
to Kubernetes user discussion and Q&A
I have set up a k8s two-node cluster (with flannel CIDR 10.244.0.0/16) on Ubuntu for experimental purposes at home. The setup is pretty basic.

Docker is at version 17.06-ce. I installed k8s and the two nodes are reported as Ready.

kube@sage ~/k8s $ kubectl get nodes
NAME      STATUS    AGE       VERSION
parsley   Ready     3d        v1.7.4
sage      Ready     3d        v1.7.5


I know the versions have a small discrepancy because I did an apt-get update on sage. Then I deployed a simple helloworld webapp, which is a default Mojolicious app listening on port 3000...
This is the service definition:

apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
  labels:
    app: helloworld
spec:
  type: NodePort
  ports:
    - port: 3000
      nodePort: 30000
      protocol: TCP
  selector:
    app: helloworld
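
(Side note: since targetPort is omitted, my understanding is that it defaults to the value of port, so traffic should flow node port 30000 -> service port 3000 -> container port 3000, which is what the app listens on. Spelled out explicitly, I believe the same service would look roughly like this:)

apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
  labels:
    app: helloworld
spec:
  type: NodePort
  ports:
    - port: 3000        # the port exposed on the service's cluster IP
      targetPort: 3000  # the container port the pods listen on (defaults to port)
      nodePort: 30000   # the port opened on every node
      protocol: TCP
  selector:
    app: helloworld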

And,

kube@sage ~/k8s $ kubectl describe service
Name: helloworld-service
Namespace: default
Labels: app=helloworld
Annotations: <none>
Selector: app=helloworld
Type: NodePort
IP: 10.100.172.154
Port: <unset> 3000/TCP
NodePort: <unset> 30000/TCP
Session Affinity: None
Events: <none>
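
(I guess I could also double-check that the service actually has endpoints behind it. Something like the following should list the pod IPs on port 3000 if the selector matches the pod labels:)

kube@sage ~/k8s $ kubectl get svc helloworld-service
kube@sage ~/k8s $ kubectl get endpoints helloworld-service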

Then, I fired up a deployment:

kube@sage ~/k8s $ kubectl describe deployments
Name: helloworld-deployment
Namespace: default
CreationTimestamp: Sat, 02 Sep 2017 09:37:51 -0400
Labels: app=helloworld
Selector: app=helloworld
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
  Labels: app=helloworld
  Containers:
   helloworld:
    Image: helloworld:1.0.0
    Port: 3000/TCP
    Environment: <none>
    Mounts: <none>
  Volumes: <none>
Conditions:
  Type Status Reason
  ---- ------ ------
  Available True MinimumReplicasAvailable
  Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: helloworld-deployment-1481581006 (3/3 replicas created)
Events: <none>
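
(For reference, I'm not pasting the exact deployment file here, but a minimal manifest consistent with the describe output above would be roughly the following; on k8s 1.7 the Deployment kind lives under apps/v1beta1:)

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: helloworld-deployment
  labels:
    app: helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - name: helloworld
          image: helloworld:1.0.0   # locally built image, assumed present on the node
          ports:
            - containerPort: 3000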

And sure enough, all the pods are running on parsley (sage is the k8s master, but also acts as a node):

kube@sage ~/k8s $ kubectl get pods --output=wide
NAME                                     READY     STATUS    RESTARTS   AGE       IP           NODE
helloworld-deployment-1481581006-6dgpp   1/1       Running   0          1d        10.244.1.7   parsley
helloworld-deployment-1481581006-j47cj   1/1       Running   0          1d        10.244.1.9   parsley
helloworld-deployment-1481581006-tmbxl   1/1       Running   0          1d        10.244.1.8   parsley

If I am on parsley, any of these work properly:


But, these do not work:

curl http://[ip of sage]:30000/

On sage, none of the curl commands listed above work; each one just hangs after I issue it and eventually times out.

I don't know if I understand the documentation correctly. My understanding is that a service is handled by kube-proxy, which runs on each node, and that the nodePort I specified (30000) should be open for connections on every node. I checked this with netstat -tupln | grep "30000", and it is indeed listening on both sage and parsley:

kube@sage ~/k8s $ netstat -tupln | grep "30000"
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp6       0      0 :::30000                :::*                    LISTEN      -     

And, as I understand it, hitting this port on either node should direct me to a pod (though I don't quite understand how that can work in this case, since all the pods are on one node and none are running on sage). How can I troubleshoot this? Or am I misunderstanding the documentation?
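
(If it is an iptables issue, I suppose something like this would show whether kube-proxy has programmed the NodePort rules on sage, assuming it is running in the default iptables mode:)

kube@sage ~/k8s $ sudo iptables-save -t nat | grep 30000
kube@sage ~/k8s $ sudo iptables-save -t nat | grep helloworld-service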

I also do not quite understand what the IP of the service means: 10.100.172.154 in the service description.

I'd appreciate it if someone could help me with this. Thanks!

Daniel.





Ian Lewis

Sep 4, 2017, 5:27:06 AM
to Kubernetes user discussion and Q&A
Daniel,

Accessing 30000 from sage should work. I suspect this is an issue with flannel and/or kube-proxy. Do you have kube-proxy running correctly on each machine? kube-proxy is necessary to set up the iptables rules that allow traffic forwarding for services.

Please see this doc for more info on debugging services in general. In particular, the "Is the kube-proxy working?" section should be relevant.
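
For a quick check, something along these lines should tell you whether kube-proxy is up on each node (assuming it runs as a pod in the kube-system namespace; adjust if yours runs as a systemd unit):

kubectl get pods -n kube-system -o wide | grep kube-proxy
kubectl logs -n kube-system <kube-proxy-pod-name>
# or, if kube-proxy runs directly on the host:
ps aux | grep kube-proxy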

Ian
