How to expose a Kubernetes service to the public


Turgos

Jun 24, 2016, 11:24:52 AM
to CoreOS Dev

I have a Kubernetes cluster (Vagrant & CoreOS) running with 2 workers locally. 


I can deploy a Docker Image on this Kubernetes cluster with:

$ kubectl run api4docker --image=myhost:5000/api4docker:latest --replicas=2 --port=8080 --env="SPRING_PROFILES_ACTIVE=production"


When I get the pods, I see them running fine:

$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
api4docker-2839975483-9muv5   1/1       Running   0          8s
api4docker-2839975483-lbiny   1/1       Running   0          8s


I expose this deployment as a service with:

$ kubectl expose deployment api4docker --port=8080 --type=LoadBalancer  


Here is more information about the exposed service:

$ kubectl get svc api4docker
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
api4docker   10.3.0.95                  8090/TCP   20m

$ kubectl describe services api4docker
Name:              api4docker
Namespace:         default
Labels:            run=api4docker
Selector:          run=api4docker
Type:              LoadBalancer
IP:                10.3.0.95
Port:              <unset> 8090/TCP
NodePort:          <unset> 30959/TCP
Endpoints:         10.2.46.2:8080,10.2.97.3:8080
Session Affinity:  None
No events.

After that, I can access this service only from the worker nodes. How can I make my service accessible from outside?

What is the suggested practice for exposing service to the public?

Thank you,

Turgos

Rob Szumski

Jun 24, 2016, 12:59:12 PM
to coreo...@googlegroups.com
It doesn’t look like you have cloud credentials set up to use Type=LoadBalancer. If it had worked, you’d see a “loadBalancerIP” field.

You could also expose this service as a NodePort, which is just a port in the 30000-32767 range that works on every machine in the cluster. You can then hook this up to a load balancer yourself, or just use the port directly. I find that NodePorts are great for testing since they work in all environments pretty easily.
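
For illustration, a rough sketch of that flow, using the deployment name from earlier in the thread (the node IP and assigned port are placeholders you'd replace with your own values):

$ kubectl expose deployment api4docker --port=8080 --type=NodePort
# find out which port in that range Kubernetes assigned
$ kubectl describe service api4docker | grep NodePort
# then hit that port on any node's IP
$ curl http://<node-ip>:<assigned-node-port>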

 - Rob

Gokhan Sevik

Jun 24, 2016, 1:53:04 PM
to coreo...@googlegroups.com
Hi Rob, 
Is the cloud credentials setup required for (or does it even work with) a local setup? Is there a link on how to set it up for my Kubernetes cluster on local Vagrant & CoreOS?

Thank you, Turgos,

Rob Szumski

Jun 24, 2016, 1:56:39 PM
to coreo...@googlegroups.com
Nope, it only works for VMs set up on the cloud. The NodePort should work for you though.

Gokhan Sevik

Jun 24, 2016, 2:24:43 PM
to coreo...@googlegroups.com
I still cannot access the service after setting the type to NodePort.

$ kubectl expose deployment api4docker --type=NodePort

$ kubectl describe services api4docker
Name:              api4docker
Namespace:         default
Labels:            run=api4docker
Selector:          run=api4docker
Type:              NodePort
IP:                10.3.0.88
Port:              <unset> 8080/TCP
NodePort:          <unset> 31713/TCP
Endpoints:         10.2.46.2:8080,10.2.97.3:8080
Session Affinity:  None
No events.

$ curl http://10.3.0.88:31713
curl: (7) Failed to connect to 10.3.0.88 port 31713: Operation timed out



By the way, I can ping 10.3.0.88 and get a reply.


Rob Szumski

Jun 24, 2016, 3:04:21 PM
to coreo...@googlegroups.com
Are you using the coreos-kubernetes Vagrant boxes? Those should be set up with 172.17.4.x IP addresses, which are the nodes’ IP addresses. That box should have the networking set up such that you can access it from your laptop/host machine.

Brandon Philips

Jun 24, 2016, 8:31:09 PM
to coreo...@googlegroups.com
Right, the IP listed is the service’s cluster IP (the Endpoints are the pod IPs), not the IP of the virtual machine. You need to hit the 172.17.4.x address, as Rob mentions.
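
To make that concrete with the values from this thread (the exact worker address depends on your Vagrant setup, so 172.17.4.101 below is just an assumed example):

# list the nodes and their addresses to find the 172.17.4.x IPs
$ kubectl get nodes -o wide
# then curl the NodePort from the describe output above on a worker's IP
$ curl http://172.17.4.101:31713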

Gokhan Sevik

Jun 27, 2016, 10:15:12 AM
to coreo...@googlegroups.com
Thank you Brandon and Rob.

Yes, after trying with the 172.17.4.x address, it all worked fine.

We are planning to use Kubernetes in a local Vagrant & CoreOS environment. Do you have any recommendations/best practices for how to expose our APIs to users on our intranet?

Thank you again,
Turgos.

Rob Szumski

Jun 27, 2016, 1:36:15 PM
to coreos-dev
Glad you got it working.

Is your goal to run specific services locally on developers’ laptops, but also share them so other developers can access them?

Vagrant has some facilities for attaching your VMs to a “public” network. You’ll have to modify the Vagrantfile with those options:
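
As a rough sketch (the exact options depend on your host and on the coreos-kubernetes Vagrantfile you are using), a bridged/public network entry looks something like:

  config.vm.network "public_network"
  # or pin the VM to a static address on your intranet, e.g.:
  config.vm.network "public_network", ip: "192.168.1.50"

The host interface Vagrant bridges to is chosen interactively unless you also pass a bridge: option.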


From the Kubernetes side, using NodePorts for only the services that will be “public” is a good idea, with the rest remaining internal to the cluster.

Another method would be to host your yaml definitions in a single repo, which would be a single source for all of your developers to run a copy of other teams’ services locally. This would save you from having to do the more complex networking.

An example workflow might be to have a single file that contains the latest service, deployment, sample secrets, etc. Someone wanting to run a dev install of this API would just have to do `kubectl create -f api-stack.yml` and access the API over the node port. Keep in mind that all users will need to be able to pull the containers required, or you can include a pull secret as part of your file.
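
A minimal sketch of what such an api-stack.yml could look like, using the names from this thread (the image, labels, port, and env var are taken from the earlier kubectl run command; adjust the API versions to your cluster):

apiVersion: extensions/v1beta1   # Deployment API group in Kubernetes ~1.2/1.3; newer clusters use apps/v1
kind: Deployment
metadata:
  name: api4docker
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: api4docker
    spec:
      containers:
      - name: api4docker
        image: myhost:5000/api4docker:latest   # registry/image from the original kubectl run
        ports:
        - containerPort: 8080
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: production
---
apiVersion: v1
kind: Service
metadata:
  name: api4docker
spec:
  type: NodePort
  selector:
    run: api4docker
  ports:
  - port: 8080
    targetPort: 8080

Developers would then run `kubectl create -f api-stack.yml` and look up the assigned NodePort with `kubectl describe service api4docker`.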

 - Rob